We began by discussing possible interactions between higher-order and lower-order justification. One question is: can higher-order evidence (regarding how much justification you have to believe p) have downward effects, that is, contribute to your having less (or more) justification for the first-order belief p? The stock example here is that you've done some first-order, mathematical reasoning that seems to (and let's suppose, really does) support p, but then you get evidence that you've been drugged in a way that makes you bad at math, in ways that are hard for you to detect while drugged.
Williamson's clock is a case where you can know certain imprecise facts about the position of a pointer on an unmarked clock face. We suppose you can't know the precise position, because your belief is not sufficiently "safe" to count as knowledge (you'd still have that belief were the pointer in an ever-so-slightly different position). But you can know that the pointer is somewhere in region M, since when the pointer is in the middle of that region, the nearby worlds where the pointer's position differs but you still have that belief would be ones where it's elsewhere in the same region, and so the belief is still true. However, can you know that you know the pointer is somewhere in region M? If we suppose that M is the smallest region where it's possible for you to have the first-order perceptual knowledge that the pointer is in M, then that knowledge would only be possessed in the case where the pointer is exactly in the middle of M. So to know that you know the pointer is in M, arguably you'd have to know that the pointer is exactly in the middle of M. (At least, you'd be brought to this conclusion if you reasoned in the way we're doing here, and accepted the premise that you did know the pointer was in M.) Since you can't know precise facts like that about the pointer's position, it seems you can't know that you know the pointer is in M, though the result still stands that you can have the mere first-order knowledge, that the pointer is in M. Williamson goes on to argue that, not only is Kp consistent with ¬KKp, it's consistent with your knowing that it's very unlikely (indeed, more or less arbitrarily unlikely) that you Kp. So even a large amount of J(¬Kp) does not entail ¬Kp. (Here is the Williamson paper, and here is a follow-up.)
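As a supplement (not something we discussed, and not Williamson's own presentation), here is a minimal sketch of the sort of margin-for-error model that drives this reasoning, with parameters chosen purely for illustration: positions are discretized to 60 integers, the margin is 3, wrap-around is ignored, and "knowing" a proposition is modeled as its being true throughout the margin around the actual position.

```python
# Toy, discretized margin-for-error model (illustrative parameters only).
MARGIN = 3
POSITIONS = range(60)  # pointer positions; wrap-around ignored for simplicity

def knows(prop, x):
    """At actual position x, you know prop iff prop holds at every position
    within MARGIN of x (so the belief couldn't easily have been false)."""
    return all(prop(y) for y in POSITIONS if abs(y - x) <= MARGIN)

actual = 30
# M: the smallest region such that "the pointer is in M" is knowable at 30.
M = set(range(actual - MARGIN, actual + MARGIN + 1))

def in_M(y):
    return y in M

# First-order knowledge: Kp holds at the actual position.
print(knows(in_M, actual))  # True

# KKp would require knowing "the pointer is in M" at every position within
# the margin of 30; but at position 27, say, the margin reaches 24, which
# lies outside M, so the second-order knowledge fails.
print(all(knows(in_M, y) for y in POSITIONS if abs(y - actual) <= MARGIN))  # False
```

In this toy model Kp holds while KKp fails, which is the structural point the clock case is meant to bring out.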
Williamson's discussion focuses on knowledge, but similar reflections might tempt one to insist that Jp is also consistent with your knowing (or being justified in believing) that it's unlikely that you Jp. If correct, that would show that J(¬Jp) does not entail ¬Jp, either. Other strong bridge principles between higher-order and first-order justification might be similarly challenged. The only principle I'm sympathetic to in this neighborhood is the weak claim that J(¬Jp) constitutes some negative justification towards p. It tends to disconfirm p to some extent. Gaining that higher-order justification contributes towards your being less justified in believing p than you'd be if you lacked it.
We might also have sympathy for the idea that higher-order justification can have positive downward effects. For example, JJp may give you some positive justification towards p. David Barnett has an interesting case that challenges this idea, or at least demands that the idea be refined. (We didn't get to talk about this in seminar.) David's case is this: You have a body of first-order evidence about a murder, and are not certain about what conclusions it supports. Some oracle (Sherlock Holmes, or another source of trustworthy information about your epistemic status) surveys your evidence and tells you, "Actually, some of your evidence has the property of decisively supporting conclusion q, but as a matter of fact I know independently that the evidence in question is false." That is, the oracle is telling you that you have some false but justified belief (the belief is not specifically identified) that confirms the conclusion q. The oracle does not tell you whether q is in fact true, or whether q is supported by the oracle's larger body of evidence, nor even what conclusions you should draw after receiving the information he gave you. Many people David has told this case to (myself included) judge that the agent is not in this case more justified in believing q than she was before talking to the oracle. Yet it appears she did acquire further justification, that she earlier lacked, to believe she was justified in believing q. So here it seems like JJq does not straightforwardly entail getting (even some more) justification for q.
We've been discussing the question whether higher-order justification can have downward impacts (whether positive or negative), and thus affect how much justification you have for your first-order beliefs. A different question is whether the first-order justification can have upward impacts. Later, we'll see many participants in the disagreement literature effectively saying "no," that would license objectionably question-begging demotions of your peers (when you happen to be the one who gets things right). But Kelly will argue that the answer should be "yes," and my sympathies are aligned with his.
A question in the vicinity of these is: what is the epistemic situation of an agent who merely seems to be in the position of understanding a mathematical proof of p? Perhaps it's not really a proof, or perhaps it is but the agent's understanding of it is flawed in ways she's not aware of. Yet in some sense, it feels to the agent just like she's seen p proved. In many such cases the agent will have some inductive grounds for thinking that p probably has been proved. (Or depending on her track record, maybe she'll have inductive grounds for believing the opposite!) Those inductive grounds aside, though, does anything else about the agent's situation contribute justification towards the first-order question p, or the second-order question whether she's justified in believing p? I'd resist attempts to conflate her epistemic position with that of a subject who really has proved p, or seen and understood a proof of p. At the same time, I'm reluctant to think that for the agent we're considering, p has nothing justificatorily to be said for it (beyond the inductive grounds). These are hard issues that I don't have a worked-out view about.
Next we turned to a discussion of epistemic conflicts/dilemmas/tragedies. I proposed to use the term "incoherent" as a broad, not-precisely-defined label covering cases like inconsistent beliefs, or a belief in p together with a deliberate refraining from believing an obvious consequence of p, or attitudes that are epistemically "akratic," such as believing p while believing you lack justification to do so, or refraining from believing p while believing your justification obliges you to believe p. I also use this label to cover cases like probabilistically incoherent credences. We discussed some ways in which the label "incoherent" may mislead for this usage, and some different ways that others use the term. It's not stipulated to be part of my usage that incoherent attitudes must always be unjustified; that's a further substantive claim (one that I happen to reject).
I advocated the view that sometimes the "least bad" (or "most permissible") doxastic response to a body of evidence may be to have attitudes that are incoherent in this sense. For example, the least bad response to Preface cases may be to have inconsistent beliefs. The least bad response to knowing that some claim is either tautologous or contradictory, but not knowing which, may be to have an intermediate credence, though then your credences would be incoherent. And so on. I say "least bad" to leave it an open, unsettled question whether we should say that (in at least some such cases) the incoherent set of attitudes can be outright justified: whether they can be positively reasonable ones for you to have. Or, on the other hand, whether they (in at least some such cases) exhibit a normative dilemma/tragedy: that is, a case where any doxastic choice you make is bound to be objectionable or justificatorily defective, and so no response is outright justified. (This is not the same as skepticism; the skeptic will say that the response of suspending judgment is outright justified.)
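To make the incoherence in the tautology-or-contradiction case explicit (a small worked note of my own, not something from the seminar): if X is the claim you know to be either tautologous or contradictory, the probability axioms require

```latex
\[
\mathrm{cr}(X) = 1 \ \text{if $X$ is a tautology},
\qquad
\mathrm{cr}(X) = 0 \ \text{if $X$ is a contradiction},
\]
```

so an intermediate assignment like cr(X) = 0.5 violates the axioms whichever way the logical facts turn out, even though it may still be the least bad response to your uncertainty about which way they do.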
One reaction to this proposal might be: "You shouldn't be satisfied with the incoherent set of attitudes, you should reflect harder and figure out which attitudes are least supported by your evidence and change them." Perhaps you should! But what review processes you should add to your mental to-do list is one question; and what doxastic responses you're justified in forming here-and-now, on the basis of the insights you've already had (or should have had) is a different question; and I'm only intending to theorize about the latter.
Another reaction might be: "In such cases, why don't you just suspend all the beliefs, then you won't any longer have incoherent attitudes?" Well, it's not clear on some probabilistic models what such suspending amounts to, and in particular not clear whether suspending would preserve coherence. But even bracketing those concerns, it's just wishful thinking to think that suspension will always be the way out of these difficulties. Consider this case. Mathematicians have good but inconclusive reasons to believe that P < NP.
This means that the problems in complexity class P (solvable by a deterministic machine in time bounded by a polynomial in the input size) form a proper subset of the problems in complexity class NP (solvable by a non-deterministic machine in polynomial time, or equivalently, such that proposed solutions can be verified by a deterministic machine in polynomial time).
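As a concrete, hedged illustration of the "verified in polynomial time" gloss (my own sketch, using the NP-complete subset-sum problem rather than anything from the discussion above): checking a proposed solution is quick, while finding one by brute force may require examining exponentially many subsets.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, candidate):
    """Check a proposed certificate in polynomial time: is candidate a
    sub-collection of numbers whose elements sum to target?"""
    pool = list(numbers)
    for x in candidate:
        if x in pool:
            pool.remove(x)
        else:
            return False
    return sum(candidate) == target

def find_subset_sum(numbers, target):
    """Search for a certificate by trying every subset: up to 2^n of them."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, (4, 5)))  # True: verification is fast
print(find_subset_sum(nums, 9))            # (4, 5): search can be slow for large inputs
```

The P < NP conjecture says that for some such problems no polynomial-time search procedure exists at all, however clever.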
Now suppose you produce an apparent proof that some of the arguments mathematicians have for believing P < NP are flawed, seemingly supporting the all-things-considered response that we should suspend judgment about whether P < NP. But then you get evidence that you were under the influence of a "bad at math" drug when you did that work. In this case, perhaps you should suspend judgment about some questions (such as whether your proof was correct), but it seems that the question at issue --- whether P < NP --- is one you're no longer justified in suspending judgment about.
Dilemmas/tragedies don't mean that anything goes: some options may be decisively off the table. Neither do they mean that all of the unexcluded candidates are equally bad. Some option(s) may be "least bad"; it's just that all options, including the "least bad" ones, still include some "unexcused" normative defect. (I don't like using the word "unexcused" here, since the issue isn't primarily about blameworthiness. But I don't know what word to use in its place.)
If some option is least bad, then isn't it guaranteed to be justified? Isn't it always rational to take the best option that's available? I don't know. That's a substantive normative claim. Many are sympathetic to it, but it's intelligibly deniable. If you did accept that claim, then yeah, you wouldn't think that dilemmas/tragedies of the sort I'm envisaging are possible.
I don't think that incomparability by itself generates a dilemma/tragedy of the sort I'm envisaging. It may just entail that the justificatory facts are indeterminate. (Or perhaps that justificatory statuses aren't linearly orderable.)
Arguably, dilemmas may include cases where your total evidence jointly recommends not only some attitudes that can't coherently be simultaneously adopted, but also stances that can't in fact be simultaneously adopted. (For example: {Believe p!, Believe q!, Don't both believe p and believe q!})
During the discussion, I identified a couple of thoughts that tend to go unquestioned (and usually aren't even explicitly articulated, though they are relied on) in epistemology, but that I think are in fact quite substantive and intelligibly disputable. Thought 1 is about the Guaranteed Existence of some appropriate doxastic response. (There is an explicit debate about the Guaranteed Uniqueness of such a response.) That is, for any body of evidence, will there always be at least one doxastic response which that evidence justifies? The answer to this question will presumably depend on what menu of doxastic responses you acknowledge.
A thought related to this (call it Thought 1*) is that if some given doxastic response (such as suspending judgment) has its justification defeated, then some other response(s) must get their justification strengthened. If Thought 1 sometimes fails, presumably Thought 1* can also fail.
A different idea is Thought 2: that if some attitudes are "incoherent" relative to each other, then your evidence doesn't justify the joint response of holding all those attitudes simultaneously. Depending on what view we take about reasonable incoherence, and/or about dilemmas/tragedies, we may want to reject or revise this principle.
Imagine a case where justification for p (or if you prefer, a justified belief that p) would ground/give you justification for some other, "downstream" belief q. (One kind of case is where q is an obvious consequence of p, but you can also think of cases where the connection between them is not an entailment.) In such cases, my view is that the mere belief in p, perhaps without justification, will also stand in some kind of normative relation to q, albeit not the relation of justifying belief in q. As discussed last week, I call this relation "hypothetically supporting," and it largely coincides with the notion Fogal calls "structural" or "attitudinal" normative pressure, and (also largely, but somewhat less perfectly) coincides with the notion Broome and others try to capture with wide-scope "ought"s.
What I'm mostly interested in isn't the relation of hypothetical support but rather the relation of hypothetical defeat, especially hypothetical undermining defeat. If your justification for (or justified belief in) p gives you justification for q, but then justification for some undermining hypothesis u (if you had it) would undermine that justification for q, then a mere belief in u will have the interesting normative relation I describe as being a hypothetical underminer. That's different than merely being a possible underminer: we're saying more than just that u is capable of undermining, in the ordinary sense, when you acquire justification for it. In order for u to hypothetically undermine, you need to have some definite doxastic attitude towards u, such as believing it. Suspending judgment in u may also have some undermining effect, though presumably not as severe as outright belief in u would have. Having a doubt that u may also have some negative effects, but these are even weaker (and more elusive).
Calling these hypothetical defeaters and underminers is just to put a label on them, and to affirm that they exhibit some interesting normative property. What is the cash value? As I said last week, I think hypothetical support and defeat have no impact on prospective (aka "propositional") justification. But they do have an effect on "doxastic" justification (aka well-founded belief). You can't have a well-founded belief in q on the basis of p so long as you have a belief in some u that hypothetically undermines the support p gives you for q. Even if you do have a justified belief in p, and no justification for your belief in u, and thus do have propositional justification for q.
I admitted that the label "hypothetical" can be misleading, and the label I use to emphasize that I'm talking about familiar, non-hypothetical justification, namely "categorical," can be even more misleading. I would welcome good substitutes. All the terminology I've seen in this space has different shortcomings.
The view I have about the contribution that hypothetical justification makes to well-founded belief means that the simple picture of well-founded belief in q as: (1) your having prospective justification for q, (2) your believing q, and (3) your belief in q being based on whatever gives you that prospective justification
is inadequate. I'm not envisaging that your belief in u has to make it impossible for your belief in q to be based on your justified belief in p, which does in fact prospectively justify it. So we need a more complex story about the relation between well-founded belief and prospective justification. (I'd hope that a more satisfying story could be told than just adding a fourth condition to the simple picture, but I don't know what that more satisfying story is.)
The view I have about hypothetical justification also bears on the question from Willenken and Kolodny that I presented in the summary of Week 1 Discussion: if you have prospective justification to believe q, must there always be some epistemically permissible way to form that belief? I think perhaps not, because you may also have (unjustified) beliefs in u that get in the way. But it might not be permissible for you to refrain from believing q, either, since after all that is what your evidence does support. Well, wouldn't it then always be permissible for you to give up your unjustified belief in u, and upon doing so, then form a justified belief in q? I don't know. Just because your belief in u is unjustified, it's not obvious that you'd have any other justified attitudes upon which you could base your change of mind about u. If you're lucky, you'd come to realize that u is not supported by your evidence, and then you could base a withdrawal from u upon that realization. But I'm trying to theorize about what the reasonable doxastic responses are for imperfect epistemic agents like us, who haven't always recognized all the facts about their epistemic situations, and thus need to make doxastic choices even in advance of having such realizations. If that insight about u hasn't yet occurred to you, it's not obvious whether any change of your mind about u could be well-founded. Of course, neither is your belief in u well-founded. If you had never believed u in the first place, but suspended judgment about it all along, that could presumably have been well-founded. It would be good to have a better understanding of how well-foundedness works for suspended judgment, and what the relation is between well-founded changes of mind and the availability of well-founded attitudes at the end of a change of mind. I'm sympathetic to the idea that in many cases like this, you'd be in a dynamic dilemma: one where no change of mind could be well-founded, but some dynamically unjustified changes of mind may end with you having well-founded attitudes afterwards. (Compare Buridan's Ass.)
Another view I have about hypothetical and categorical normative relations, that I mentioned last week, is that they can combine to generate additional hypothetical relations. So if you merely believe p (unjustifiedly), and merely have prospective justification to believe that if p then q (but don't in fact believe this), I'm prepared to think that in at least some such cases you also have hypothetical support for q. This is different from Fogal's picture. For him, it's only if you believed that if p then q that you'd have the hypothetical/attitudinal/structural support for q.
Other interesting kinds of hypothetical defeat are higher-order hypothetical underminers: if higher-order evidence that your grounds G don't support belief in p can (categorically) undermine the justification that G (let's say in fact) gives you for p, then the mere (unjustified) higher-order belief that your grounds G don't support belief in p will hypothetically undermine belief in p. Mere (unjustified) suspended judgment about whether your grounds G support belief in p will have a similar, but weaker, effect.
In fact, we didn't discuss disagreement until week 5, but I'm inserting the summary of it here, because this is where it better fits the narrative.
The core scenario in the disagreement literature is where you have the same evidence as some other agent(s) you antecedently took to be your epistemic peer in assessing that kind of evidence, but it turns out that you and they respond to the evidence differently. Perhaps you believe p and they believe something incompatible with p, or perhaps one of you believes p and the other suspends judgment, or perhaps you form different credences towards p. One background question, that theorists disagree about, is whether there could be such cases where (before learning of their apparent peer's judgment) both sides have reasonably responded to the shared evidence. That is, can evidence ever justify alternative, incompatible doxastic responses? (Saying that it does so doesn't commit you to saying that it can justify a single agent taking all of those responses simultaneously, if that would even be possible to do.) This is the debate about Uniqueness versus Permissivism. Your answer to that debate will naturally bear on the main disagreement debate, though not in a straightforward uncontentious way.
The main disagreement debate concerns the question how the subjects should respond once they learn that their apparent peer, whom they know to share their evidence, has responded differently than they have themselves. Should the subjects "conciliate", that is, adopt some doxastic response that is in some way a concession to their apparent peer's different view? The claim that they should conciliate is not yet to say that they should both end up at the same place. Perhaps A should concede some ground to B, and B should concede some ground to A, but the right place for them to end up is not at the same single intermediate view. If there always is a single intermediate attitude that they should both have, that is what used to be called the "Equal Weight" view. The view that they should conciliate, but not that far, is what used to be called the "Some Weight" view. The view that they should not conciliate is what used to be called the "No Weight" view. (There are other views, not yet included, for example, that the agent if any who assessed the evidence properly should not conciliate, but the other agent should.) These "_ Weight" labels can be misleading, but the current terminology of "Conciliatory" versus "Steadfast" views is also unhappy, as it will obscure the difference between the Some Weight view and the other two.
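Purely to fix ideas (my own toy numbers, and only for the simple case where the responses are credences and "conceding ground" is modeled as a weighted average, a simplification the next paragraph complicates), here is how the three labels carve up the options:

```python
def equal_weight(mine: float, theirs: float) -> float:
    """Equal Weight: both parties move to the same intermediate credence."""
    return (mine + theirs) / 2

def some_weight(mine: float, theirs: float, w: float = 0.25) -> float:
    """Some Weight: conciliate toward the peer, but give their view less
    than equal weight (w = 0.25 is an arbitrary illustrative choice)."""
    return (1 - w) * mine + w * theirs

def no_weight(mine: float, theirs: float) -> float:
    """No Weight: keep your original credence."""
    return mine

# Suppose A's initial credence in p is 0.8 and B's is 0.2.
for policy in (equal_weight, some_weight, no_weight):
    print(policy.__name__, policy(0.8, 0.2), policy(0.2, 0.8))
# equal_weight 0.5  0.5   -> both end at the same intermediate view
# some_weight  0.65 0.35  -> both concede some ground, but don't converge
# no_weight    0.8  0.2   -> neither budges
```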
Depending on what doxastic responses the agents in fact initially formed, it may not be easy to say what constitutes an "intermediate" response. It's easy if A took the evidence to support believing p, and B took it to support believing ¬p; then the intermediate response would be for them to suspend judgment in p. But what if A took the evidence to support believing p, and B took it to support suspending judgment? Or what if A took the evidence to support having a credence of .2 in p, and B took the evidence to support having a credence of .4 in p? It's not obvious that the correct intermediate response is for them to have a credence of .3 (and in fact it's known that we can't have any general averaging principle of the sort that would suggest). This is a real problem, and I don't have a general solution. I will just continue to talk of "intermediate" responses, supposing that we're talking about cases where some such notion does ultimately make sense.
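One standard illustration of the trouble with straight averaging (it may or may not be the particular result meant above) is that averaging two coherent credence functions need not preserve judgments of independence that both parties share:

```python
# Two agents each treat events A and B as probabilistically independent,
# but the straight average of their credence functions does not.
# (Toy numbers chosen for illustration.)
agent1 = {"A": 0.2, "B": 0.2, "A&B": 0.2 * 0.2}  # independent for agent 1
agent2 = {"A": 0.8, "B": 0.8, "A&B": 0.8 * 0.8}  # independent for agent 2

avg = {k: (agent1[k] + agent2[k]) / 2 for k in agent1}

print(avg["A"] * avg["B"])  # 0.25 -- what independence would require
print(avg["A&B"])           # 0.34 -- what straight averaging delivers
```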
We've been discussing the question whether subjects should "conciliate" once they learn that their apparent peer disagrees with them. A second option would be for a subject to "demote" their estimate of their apparent peer: maybe this other agent isn't as good a judge of the evidence as you antecedently expected them to be. In some cases, this can seem objectionably question-begging (if your only grounds for the demotion is that they formed the wrong view about the issue you're disagreeing on). In other cases, though, it seems more reasonable: for example, if your apparent peer's verdict is completely outrageous, as when you split a $95 bill and they say each person owes $160. Somehow we have to balance our judgment of how well our shared evidence supports p versus our antecedent assessment of our apparent peer's epistemic credentials. Perhaps in some cases we should conciliate (a bit) and demote our assessment of our apparent peer (a bit).
A difficult further question in this area is whether we can ever coherently and/or reasonably judge that the apparent peer is reasonable in taking the different attitude in response to the shared evidence that they do, while still retaining our own contrary response to that evidence. This is connected to the debates about Permissivism/Uniqueness, and also the debates about the possibility of Reasonable Epistemic Akrasia, which we'll be discussing at different times.
All of that is by way of general orientation. I wanted to call attention to some at-first-blush puzzling cases discussed in the disagreement literature, and then explain how bringing in the notion of hypothetical normative relations can help resolve some of the puzzlement. (The puzzling cases are cases 4 and 5 in Kelly 2010; the names of the characters are taken from Christensen 2011.) In the first case, we have two characters Right and Wrong, who assess each other as epistemic peers and know themselves to have the same evidence, but in this instance it is only Right who (initially) responds to the evidence correctly. Familiarity with the disagreement literature may dispose us to say that Right ought to conciliate to some kind of "intermediate" attitude in the direction of Wrong's, but what should we say about Wrong? If he conciliates to the same intermediate position, does he then have "the attitude he should have"? Saying No is uncomfortable, because Right and Wrong would at the end have the same attitude on the basis of the same evidence. (As brought up in class, it needn't be that all of their attitudes are the same.) But saying that after he conciliates, then Yes, Wrong does have the attitude he should have --- that is also uncomfortable. What if he had met not Right but Wronger, whose initial response diverges from the evidence even more than his own does? There's a natural temptation to say that Wrong should conciliate towards Wronger, just as he should in the other case conciliate towards Right. (Some theorists would resist this temptation, but many would not.) But it seems odd to say that Wrong started out with a wrong attitude, and then by conciliating formed an attitude even less aligned with his evidence, and now has a justified, correct attitude. (Granted, it would be a correct attitude in response to a larger body of evidence, which includes information about the peer; but still, this isn't a happy position.)
Someone in the meeting (was it Nathan?) suggested that perhaps (1) the original evidence recommends Wrong having a view in the other direction from Wronger: for example, say the evidence supports neutrality about p, but Wrong takes it to support a somewhat firm conviction in p, and Wronger takes it to support unshakable confidence that p. Then the original evidence recommends that Wrong have a lower degree of confidence than he in fact has. But (2) learning about Wronger recommends some increase in confidence, and so (3) the net result might be to have a level of confidence a bit warmer than neutrality --- though perhaps still cooler than Wrong's initial view. This verdict doesn't seem so crazy. But then (I think) Jake replied that this reasoning would also suggest that Wronger should have a level of confidence a bit cooler than neutrality, which seems very surprising. As we agreed in class, this is a good challenge, though I've thought of one reasonable reply for an advocate of the original suggestion; and there may well be more. (I won't pursue it here; these notes are already too long.)
My own suggestion for how to think of all of this is that when we think that Wrong ought to conciliate towards Wronger, or towards Right (but not all the way to the initial view Right correctly adopted in response to the evidence), we are expressing intuitions about the hypothetical pressures the characters are subject to. Some of the disagreement debate also concerns issues about categorical defeat, which have direct application only to Right.
Can't we have justified false beliefs about our normative status? Many of us think so. But some combinations of views threaten to make this impossible. This kind of threat has been discussed many times. I think of it as "Ewing's Problem," following Christian Piller. More recently it has been discussed by Mike Titelbaum. Here is a paper by Amanda MacAskill, a grad in this department, responding to Titelbaum.
A case where we have a justified false belief about our normative status seems like it'd involve: we O1BO2X, while at the same time it's false that O2X. (We'll discuss the subscripts in a moment. I'm not insisting these be different "ought"s, but just trying to avoid assuming prematurely that they are the same "ought".) If our O1 operator obeys the modal K principle (OA ∧ O(A⊃B) ⊃ OB), and we embrace some Enkratic Principle of the form O1(BO2X ⊃ X), then we may have trouble. Because those together with O1BO2X entail O1X. Is that compatible with it being false that O2X? Not if O2 = O1.
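Spelling the derivation out (using the abbreviations above, with B for belief):

```latex
\begin{align*}
1.\quad & O_1 B O_2 X && \text{the justified (false) normative belief} \\
2.\quad & O_1 (B O_2 X \supset X) && \text{Enkratic Principle} \\
3.\quad & O_1 B O_2 X \wedge O_1 (B O_2 X \supset X) \supset O_1 X && \text{instance of K} \\
4.\quad & O_1 X && \text{from 1, 2, 3}
\end{align*}
```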
So for there to be a puzzle here, we have to assume that the sense in which we ought1 to believe that we have a normative status, that of ought2ing to X, has to be the same as the sense in which we ought2 to X. It's most natural to think of the O1 as epistemic, so that would force the puzzle to be specifically about our beliefs about our epistemic status (that is, O2 would also have to be epistemic). But perhaps you could run a practical version of the puzzle where you O1 in a practical sense to have some belief about what you practically O2 do.
Another requirement of the puzzle is that the enkratic principle have the form O(BOX ⊃ X), rather than some more complex form like Broome's preferred O(BOX ∧ ... ⊃ intend X). I suggested in class that if X described some mental activity, such as forming an intention or a belief (as we'd have if we were discussing the possibility of justified false beliefs about your epistemic status), then Broome's main reason for adding the "intend" into this principle would not apply. But when we read Broome we'll see that he doesn't think it's straightforward to accept O(BOX ⊃ X) even in the case where X is restricted to mental activities. For present purposes, though, let us assume that there is a correct Enkratic Principle of the form this puzzle requires.
How in that case should we respond to the puzzle? Some theorists accept the conclusion that justified false normative beliefs are impossible, or at least that a fully rational subject could not have false normative beliefs. You could of course stipulate that that's what you mean by "fully rational," but without such a stipulation, many of us find this hard to accept.
Another way you could respond is to deny that the relevant O satisfies the K principle (or its rule version, RM). As we discussed last week, a number of puzzles in the deontic logic literature raise challenges to those principles. As we'll see later, Broome does reject these principles for the deontic notion he expresses as "rationality requires."
But Broome also has a different basis for rejecting the terms of this puzzle. By "you ought to X" he means "all your reasons taken together on balance require you to X." And he hasn't committed himself to any Enkratic Principle with that notion taking the wide scope. He's only committed himself to Enkratic Principles of the form "rationality requires that (BOX ∧ ... ⊃ ...)", not to ones of the form "you ought to (BOX ∧ ... ⊃ ...)". So this makes it difficult to have O2 = O1, as the puzzle requires. Objection: couldn't we make both of these Os express the idea of "rationality requires"? Reply: No, because neither does Broome commit himself to any Enkratic Principle of the form "...(B(rationality requires you to X) ∧ ... ⊃ ...)." The Enkratic Principle only kicks in when you believe that all your reasons taken together require something, not when you merely believe that some one putative source of reasons, Rationality, requires it. (Thanks to Alex for help sorting this out.)
Another way to respond to the puzzle is to deny that the wide-scope normative relation in the Enkratic Principle is a strict one. Perhaps it's not outright required of you that if you believe you ought to X, you X. Perhaps you merely have some reason or pro tanto recommendation to be that way. I am sympathetic to this idea, but Broome is not.
There was no time to discuss this further in week 4 or 5, but I have started a separate page on it, and will expand the notes there when I can.