The kinds of argument we'll be reading about in the last week of class are supposed to target beliefs like these: shared political or academic ideologies; moral beliefs (sometimes coming from evolutionary pressures, in which case they may be human-wide, other times coming just from one's local community); religious beliefs; ...
What is the problem? We observe that if you had a different education/upbringing, you'd accept different such beliefs than you actually do. Should this realization undermine your confidence in those beliefs? (As White observes, sometimes these challenges focus on the inevitability of your arriving at these beliefs, given your situation, regardless of their truth. Other times they focus on the contingency or accidentalness of what beliefs you'd end up with.)
These arguments go under a variety of names. Sometimes they are called etiological challenges, something's "etiology" being its history. Sometimes they are called debunking arguments, because they threaten to explain our beliefs in a way that has nothing to do with their likely truth. Sometimes they are called arguments about evidentially irrelevant causes of belief.
Some philosophers argue there are no distinctive new epistemological challenges here. Just some mix of: (a) whatever epistemic impact should be had by encountering disagreeing peers, (b) challenges that one may not be living up to one's own epistemic standards, (c) general skeptical doubts prompted by our inability to independently justify our basic standards. (Elga and White are examples of such philosophers. Also Sher thinks that disagreement considerations are needed to make etiological arguments fully compelling.)
Other philosophers argue that there is a distinctive new challenge here, and thus that e.g. our moral beliefs are on epistemically worse footing than our beliefs about climate change. (Schechter argues this, and so do DiPaolo and Simpson.)
"But I Could Be Wrong," Social Philosophy and Policy 18 (2001), 64--78
Sher only discusses etiological/debunking challenges to moral beliefs.
Mill emphasizes contingent influences on our basic religious convictions; Rawls emphasizes influences on our inferences and "framework-internal" judgments.
"My awareness that I would now have different moral convictions if I had had a different upbringing or different experiences... [makes it harder] for me to act on my moral convictions when these conflict with the moral convictions of others. There is an obvious tension between my belief that my moral assessment of a situation is right while yours is wrong and my further belief that it is only an accident of fate that I assess the situation in my way rather than yours." (p. 65)
"My awareness that I might well have taken a position like yours if my history had been sufficiently different will not sit well with my belief that I have more reason to act on my moral beliefs than I have to act on yours.* Why, exactly, do these beliefs not sit well together? The answer, I think, is that my belief that I have more reason to act on my own moral beliefs than on yours appears to rest on a further belief that my own moral beliefs are somehow better --- that they are truer, more defensible, more reasonable, or something similar. However, if I believe that it is only an accident of history that I hold my own moral beliefs rather than yours, then I must also believe that which of us has the better moral beliefs is also an accident of history. This of course does not mean that my belief that my own moral beliefs are better is wrong or baseless, but it does mean that I would have that same belief even if it were wrong or baseless. However, once I realize that I would have this belief whether or not it were true, I no longer seem entitled to use it in my practical deliberations." (p. 66)
* A note says this threat isn't limited to cases of actual disagreement, but extends also to some cases of merely possible disagreement (cases where "I know there is (or could be?) someone who would disagree if given the chance").
In section 3, Sher argues that this is not just a standard skeptical challenge. (a) We don't actually have evidence that we are brains in vats, etc.; but we do have evidence that our beliefs were affected by our upbringing. (b) Even though we can't defend our ordinary perceptual beliefs in a non-question-begging way against the skeptic, we still take it that they are justified, and that moreover, so too are our background theoretical beliefs that tell us those perceptual beliefs are in fact reliable. In the moral case, on the other hand, we don't have any justified theoretical beliefs about why our own moral beliefs (as opposed to others') should be specially reliable. (c) Skeptical doubts about perception can't translate in a serious way into action, perhaps because of our "animal nature". But skeptical doubts about morality can.
Section 4: Sher refers to the combination of your moral beliefs and your ways of assessing evidence and weighing competing values as your "moral outlook" (p. 69).
He argues: Reasoning from moral disagreement (A1) and from the contingent origins of our moral outlooks (A2) --- both of these arguments face challenges when attempted alone, and work better when combined.
What is the challenge to (A1)? We can say that the grounds for favoring our judgments aren't limited to evidence about our opponents being misinformed, stupid, biased, etc., but can also include the first-order evidence and arguments which (as we see but they don't) support the claim that P. (So Sher's view is that in disagreement cases, the right-reasoning subject has more reason to remain steadfast.) That challenge is undermined if we supplement (A1) with considerations about contingent origins, since they cast doubt not only on our judgment that P but also on "whatever evidence or arguments I take to support it" (p. 70). (Sher seems to be assuming here that contingent origin considerations do have some of their own negative epistemic effect.)
What is the challenge to (A2)? That argument starts with the fact that my moral outlook is contingent on a particular upbringing, and moves to the conclusion that my having that outlook is independent of its justifiability or truth. However, this can be challenged if my having that upbringing is itself dependent on there being good reasons for the truth of the judgments it produces. (See note 9 on children being taught to memorize arithmetic.) That challenge is undermined if we supplement (A2) with considerations about disagreement, since "the social determinants of at least one of our moral outlooks cannot be indirectly traceable to the justifiability or truth of all of its operative elements... As soon as we disagree, I am forced to conclude that at least one of us must have been caused to acquire some operative element of his moral outlook by some aspect of his upbringing or experience that did not owe its existence to that element's truth or justifiability; and the problem, once again, is that I have no special reason to believe that that someone is you rather than me." (pp. 70--71)
Section 5: points out that, even if a person's moral outlook has a contingent/nonrational origin, it usually evolves over time in response to critical reflection.
"There is, of course, no guarantee that this hope will be realized. Despite my best efforts, it remains possible that my moral outlook has from the start been hopelessly compromised by some massive error, and that my lack of access to the source of error has systematically subverted all my ameliorative endeavors. However, this hypothesis, if backed by no positive argument, is no less speculative than is the hypothesis that all my experiences are caused by a scientist stimulating a brain in a vat..." (p. 73). In absence of special reason to think otherwise, it's reasonable for me to think my critical efforts have improved my outlook.
Sher argues: (a) This doesn't show that any two moral outlooks can be expected to converge within a human lifetime. (b) But it does make it "more palatable" that my moral outlook was inherited from my contingent origins. (c) But since our society in fact generally prizes reflection, when I actually encounter disagreement, I can't assume that I've overcome my origins more successfully than my opponents have.
Section 6: Can we acknowledge that our moral judgments are no more likely to be justified or true than our opponents', but deny that this makes it irrational for us to rely on them? Sher sketches a story about how this would go. It turns on the claim that rational action requires you to give your own beliefs and values some special priority. (The second half of the section discusses why this argumentative strategy helps save our moral judgments, in particular, as opposed to other practical judgments like ones of self-interest.)
Whatever the merits of such a story in the case of moral beliefs, its prospects seem dimmer for defending other kinds of beliefs that are targeted by etiological/debunking challenges, such as religious beliefs.
"Lucky To Be Rational" (unpublished, 2008)
Political, religious, and philosophical views "reflect the influence" of arbitrary or evidentially irrelevant factors (which we can regard as equivalent to "coin tosses").
"In each case, the subject learns that an irrelevant factor made a crucial difference to what he ends up believing...[and] to how he evaluates evidence. Should that realization reduce his confidence in his original judgment?" (pp. 1-2)
Elga's answer: it depends. When the irrelevant factor makes it likely that one isn't living up to one's own "standards" (or higher-order beliefs about what's reasonable), then yes, one should reduce one's confidence. (His hypoxia case is supposed to illustrate this --- and Elga thinks mere evidence of hypoxia already has this epistemic effect, even if it's in fact false that one is suffering from hypoxia.)
But when the irrelevant factor only affects what standards one has, then no, one shouldn't reduce one's confidence --- at any rate, there's no special reason here to reduce confidence, not already generated by general skeptical pressures.
Elga admits that learning one's belief had an evidentially irrelevant cause can be "unsettling," even if one knows one is living up to one's own standards. He diagnoses this as just being due to the fact that learning about those causes makes salient general skeptical threats about the "mere possibility" of global error. Assuming there is some good answer to those general skeptical threats, no new epistemological challenge is really raised by arguments from the irrelevant causes of one's belief.
In these cases, Elga thinks it is reasonable to think "I'm lucky that the irrelevant causes ended up giving me the right/reasonable standards." Of course I know if I had ended up with other standards, I'd have thought the same thing about them --- but I should now think I would have been wrong to think that about other standards.
Why is it reasonable to take my basic epistemic standards for granted in this way? Because refusing to do so would mean I couldn't learn anything, not even that my beliefs were produced by irrelevant causes. On pp. 10--11, Elga argues that a parallel move isn't plausible in the case of the epistemology of perception. Refusing to take my sense faculties for granted is not paralyzing in the same way that refusing to take my basic epistemic standards for granted would be. It would be more feasible to actually do it.
(There's some connection here to section 6 of Sher's article, though Sher focuses on action and Elga focuses on beliefs, and not only on beliefs about morality or what one should do.)
"Luck, Rationality, and Explanation: a Reply to Elga's Lucky To Be Rational" (unpublished)
"Suppose that you have a relatively high degree of belief in some proposition. Suppose that you then come to learn that your belief was (in part) caused by an irrelevant factor, a factor that does not bear on the truth of the proposition or on your possession of evidence for it. Should you lower your degree of belief in the proposition? One might think that the answer is clearly yes. If one of your beliefs is based on an irrelevant factor, it does not solely reflect the impact of evidence. And so, the thought goes, you ought not to believe it, or at least, you ought not to believe it as strongly." (p. 1)
Schechter makes three points about Elga's discussion of this.
First (section II) he observes that if Elga is right, that gives a controversial answer to some recent debates in the philosophy of mathematics and in metaethics. (Elga would have known this.)
Second (section III), he discusses Elga's claim that discovering an irrelevant cause should only cause us to reduce our confidence if it makes it likely that our reasoning violated our own standards. Schechter says there's another case worth thinking about, where we know our reasoning has been fully in accord with our standards, but those standards say we should reduce our confidence when we discover that our beliefs have irrelevant causes. Schechter thinks some of our actual standards are of this sort.
To make this case, he talks about phenomena that are "striking" or "call out for explanation." It counts against a theory if the theory says such phenomena are accidental or inexplicable. Schechter thinks that our reliability about certain subject matters would be "striking" in this way. But if we discover that our beliefs have evidentially irrelevant causes, then there's a conflict between the claims that (a) we are reliable, (b) that reliability is striking, and (c) our reliability would be an accident. Hence, there is pressure for us to reduce our confidence in one or more of these claims, including claim (a). But if we become less confident that we are reliable, then we should lower our confidence in the target claims. All this is true, Schechter says, even if we know that we have accorded with our own standards.
Pp. 7--8 discuss a complication, concerning whether it's possible to have reasonable doubts about one's most fundamental standards. Schechter doesn't agree that this is impossible, but he doesn't want to rely on its being possible either. So he will only talk about less fundamental standards. (There's some connection here to the end of Elga's article.)
Third (section IV), Schechter challenges Elga's diagnosis of why learning about irrelevant causes can be "unsettling" even when we know we conformed to our own standards. (He reminds us that Elga says this "does not generate any new pressure to give up the standards. It merely makes salient a familiar sort of skeptical worry.")
Schechter describes two scenarios. In both, God gave me a coherent set of standards about some subject matter (for example, math or morality). He could have given me others. In the first scenario, God gave me the standards he did for some reason. (Perhaps because he's "epistemologically benevolent," or perhaps for some other reason.) In the second scenario, God chose what standards to give me randomly.
Schechter thinks it's "a lot less unsettling to be in the first scenario." (Hmm... Even if we acknowledge that God's reason might not have been benevolent?) Schechter says "This is so despite the fact that in both scenarios, I am aware that there are multiple coherent standards of belief. Moreover, in neither scenario do I possess any independent reason to believe my standards are correct. Elga's diagnosis cannot explain this difference in attitude." (pp. 9--10)
Schechter thinks the first scenario is better because it's one where we have an explanation of the fact that (as we assume) we have reliable standards. Even if that explanation isn't "independent" of our standards. (As acknowledged, we don't have an "independent" defense of our standards in either case.)
"The pressure is not to refute a skeptic or to find some fully independent reason to accept one of our fundamental beliefs. Rather, the pressure is the much more mundane pressure to possess explanations of striking phenomena that are good by our own lights." (p. 10)
(Compare here point (b) in my summary of section 3 of Sher's article, which makes a similar point.)
"Indoctrination Anxiety and the Etiology of Belief," Synthese forthcoming
DiPaolo and Simpson (D&S) review etiological/debunking challenges. They say many authors portray the problem as one of general doxastic contingency: "the fact that it is, in some fundamental sense, an accident that any of us come to hold the beliefs we actually hold." D&S on the other hand, think there is a special worry when we get reason to think our beliefs don't have just any cause, but when they are specifically the result of indoctrination.
They won't try to rigorously define indoctrination, but they give some rough criteria for it on pp. 8--10. The kind of teaching methods we use when we get youngsters to memorize arithmetic seem to count, but doing that to youngsters is unobjectionable, because we're not bypassing or inhibiting their critical faculties. (They don't yet have developed critical faculties, and what we're doing doesn't inhibit their development.)
In sections 2--3, D&S discuss whether etiological/debunking challenges only serve as "indirect pointers" to epistemological worries we were already familiar with, such as (a) whatever epistemic impact should be had by encountering disagreeing peers, or (b) challenges that one may not be living up to one's own epistemic standards. Later they will distinguish them from (c) general skeptical doubts prompted by our inability to independently justify our basic standards.
D&S concede that etiological challenges do tend to point to worries of kind (a) and (b); but when the irrelevant causes in question are a kind of indoctrination, they think there is also a special new worry. This is because we have reason to believe that in our world, indoctrinated beliefs are usually false. (That is, they're "anti-reliable," pp. 11ff; see also note 30.) Thinking of etiological challenges in this way has the advantage (the "third" point on p. 7) that it encourages subjects to review and revise their beliefs in ways that they otherwise probably wouldn't bother to.
D&S will argue that their view is distinct from general skeptical arguments (p. 12--13), and from other criticisms of "dogmatic thinking" (section 5). Here again they emphasize (p. 14) that thinking of one's own belief-system as "not merely contingent or accidental, but rather as a product of social forces in the service of a political agenda," is likely to be more effective at "dislodging" epistemologically negative effects.
In section 6, D&S close by discussing connections between these epistemological questions and some political issues.
"You Just Believe That Because...," Philosophical Perspectives 24 (2010), 573--615
This is the long, complex article I talked about in class. It's optional rather than required reading.