There are three sorts of counter-examples standardly offered as objections to reliabilist accounts of justification:
Brains in vats seem to be justified in believing things about the external world on the basis of their experiences. (At the very least, we'd want to say that some brains in vats have beliefs that are more reasonable and more justified than other brains' beliefs.) But brains in vats form their beliefs about the external world in a way which is very unreliable. Most of their beliefs about the external world are false. This suggests that it's not necessary, for a belief to be justified, that it be formed in a reliable way.
Imagine a rare kind of brain tumor which produces in its subject various unfounded hypochondriac beliefs, including the belief that the subject has a brain tumor. Now, the subject's belief that he has a brain tumor was formed in a very reliable way. (Whenever anyone forms the belief that he has a brain tumor as a causal result of having a brain tumor, his belief will be true.) But absent any further evidence, the subject's belief that he has a brain tumor would seem to be as unjustified as the rest of the hypochondriac beliefs the tumor causes him to have. This suggests that being formed in a reliable way does not suffice to make a belief justified.
We can imagine a subject who has reliable "hunches" at the race track, or who has a reliable faculty of clairvoyance or telepathy or something else of that sort. Thinking about such cases, many people have the intuition that unless the subject has evidence that his extra-sensory faculty is reliable, it would be irresponsible and unreasonable for him to accept any of the claims that faculty presents to him. Like the previous case, these cases suggest that even if a belief is formed in a reliable way, that's not enough by itself to make the belief justified. (We will discuss these cases further when we look at BonJour's objections to reliabilism.)
The Generality Problem is the problem of specifying exactly which process it is whose reliability determines how justified your belief is. Any given belief you form was produced by a whole range of processes, of varying degrees of specificity. For instance, if I look out the window and form the belief that it's raining, the processes responsible for the formation of that belief include, at one extreme of generality, vision, and, at the other extreme, forming the belief that it's raining on the basis of seeing droplets splashing on the pavement just like that while looking through a window at exactly this angle.
These processes differ in how reliable they are. Which of them should we look at when we're assessing my belief that it's raining?
In "Reliability and Justified Belief" (an optional reading on reserve in Robbins Library), Richard Feldman argues that the reliabilist faces two dangers here: one danger threatens if he chooses too general a process, and the other danger threatens if he chooses too specific a process.
If the reliabilist says that the justification of my belief depends on the reliability of some very general process, like vision, then he confronts Feldman's "No Distinction" worry.
The problem here is that the set of beliefs formed on the basis of vision includes beliefs of obviously different epistemic status. For instance, my visually-based belief about the gender of a distant figure seen through a dirty window-pane is obviously less justified than my visually-based belief about the shape of a coin I scrutinize closely in good light. Any good account of justification should distinguish between these beliefs. It should not make them all come out to be equally justified. So we don't want to go along with the reliabilist and say that all beliefs formed by the process of vision are justified to the same extent.
If the reliabilist says that the justification of my belief depends on the reliability of some very specific process, like the process of forming a belief that it's raining on the basis of seeing droplets splashing on the pavement just like that while looking through a window at exactly this angle, etc., then the reliabilist confronts Feldman's "Single Case" worry.
The problem here is that if the process is extremely specific, then in all the history of the world there might have been only one belief formed by it--namely, my current belief that it's raining. Now, when we ask the question "Is this process reliable?", we're asking whether it tends to produce true beliefs. If the process is so specific that it has only ever produced a single belief, then whether or not it tends to produce true beliefs will just depend on whether or not this single belief is true. If the belief is true, then the process tends to produce true beliefs, and so it's reliable. If the belief is false, then the process tends to produce false beliefs, and so it's unreliable. Hence, whether or not the process is reliable seems just to depend on whether or not this single belief is true.
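One way to make this vivid: suppose, purely for illustration, that we measure a process's reliability as the proportion of true beliefs among the beliefs it has actually produced. (This simple ratio is our own gloss on "tends to produce true beliefs," not anything Feldman commits himself to.)

\[
\text{reliability}(P) \;=\; \frac{\text{number of true beliefs } P \text{ has produced}}{\text{total number of beliefs } P \text{ has produced}}
\]

With only a single belief in the denominator, this ratio can only take the value 1 (if that one belief is true) or 0 (if it is false). It cannot register anything like a tendency of the process that is independent of that single outcome.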
The reliabilist tells us that a belief is justified iff the process by which it was produced was reliable. We've just seen an argument that, since the process we're considering is so very specific, whether or not that process is reliable depends on whether or not my current belief that it's raining is true. Hence, whether or not my belief is justified depends on whether or not it's true. If my belief is true, it's justified. If my belief is false, then it's unjustified. This seems an unacceptable result. Clearly there's a difference between being justified and being true. We think that it ought to be possible for a belief to be justified but nonetheless false. So this reliabilist strategy for selecting processes doesn't seem to work, either.
The Range Problem is the problem of specifying where a process has to be reliable--in what range of possible environments?--in order for beliefs produced by it to count as justified.
So far, we've been assuming that for a subject S's belief to count as justified, it has to be produced by a process which reliably produces true beliefs in S's environment. This is why the reliabilist seems committed to saying that brains in vats can't have justified beliefs: most of the processes by which the brains form beliefs tend to produce false beliefs in their environment.
But perhaps the reliabilist can say instead that for S's belief to count as justified, it has to be produced by a process which reliably produces true beliefs in our environment, the environment we actually occupy. Then we can say that the brains in vats have justified beliefs, after all: for the processes they use to form beliefs tend to produce true beliefs when used in our environment.
Or so we believe. However, what if it turns out that we are brains in vats? Then the processes by which we form beliefs are unreliable even in our own environment. So our beliefs wouldn't count as justified. (What's more, if we make our environment the place where a process has to be reliable in order for the beliefs it produces to count as justified, then no belief produced by those processes will count as justified--not even beliefs formed in environments where those processes are reliable.) This doesn't seem a satisfactory result.
Here's another proposal: the reliabilist can say that for S's belief to count as justified, it has to be produced by a process which reliably produces true beliefs in worlds that work the way we think our world generally works. In his book Epistemology and Cognition, Goldman calls these "normal worlds." (This is not a very illuminating choice of terminology!) One of the general beliefs we have about the world is that we're not brains in vats, so the "normal worlds" will be worlds in which we're not brains in vats. (That might include our actual world, or it might not. It depends on whether we turn out to be brains in vats.) On the present proposal, beliefs formed by perception will count as justified iff they're produced by processes which reliably produce true beliefs in those "normal worlds." It's plausible that in any world which works the way we think our world generally works, perception will be reliable. Hence, any beliefs we form by perception count as justified, on this proposal--even if we turn out to be brains in vats.
Unfortunately, there are problems for this proposal, too.
Suppose some person thinks, without any good reason, that a process P is reliable. Now, the mere fact that he believes this ought not to make beliefs produced by P count as justified. Hence, the "normal worlds" should not be required to be worlds in which P is reliable, just because someone somewhere believes that P is reliable. Now, the "normal worlds" are defined to be worlds where our general beliefs about the world are true. So this shows that we ought not to count the belief that P is reliable among those general beliefs, when we're determining which worlds are the "normal" ones. What exactly are our general beliefs, then?
More importantly, where do our general beliefs come from? Surely we ought not to take into consideration any old general belief we have about the world, when we're determining which worlds are the "normal" ones. If some of our general beliefs are mere unjustified fancies, then reliability in worlds where those beliefs are true ought not to have any special epistemic value. That suggests that when we're determining which worlds are the "normal worlds," we should restrict our attention to those of our general beliefs which are justified. But the reliabilist can't make that move. The reliabilist needs the notion of a "normal world" in order to define the notion of a justified belief. He's not in any position to say which beliefs are justified before we've settled the question of which worlds are the "normal worlds."
Consider people in a possible world W who have some extra sixth sense that works extremely well in their world, but which doesn't work reliably in our world or in the worlds which we count as "normal." According to the present reliabilist proposal, the beliefs that the inhabitants of W base on their sixth sense would count as unjustified. But that doesn't seem the right thing to say. If their extra sense works well in their environment, then why shouldn't the beliefs they base on it be as justified as the beliefs we base on our senses?
Perhaps the reliabilist can overcome these difficulties. Or perhaps he can abandon the notion of "normal worlds" and offer some different answer to the Range Problem. In any case, it's clear that there are no easy and straightforward answers to this problem.
Another problem for the reliabilist concerns the regulative role we think justification ought to play in our epistemic inquiries.
You don't always have control over what you believe. But sometimes you do. And you have some control over what your epistemic habits are--and this indirectly affects which beliefs you end up with.
Now we want to have true beliefs. But we can't directly ensure that all our beliefs are true. (If we already knew what the truth was, then the question of what to believe would have already been settled!) What we can directly ensure is that our beliefs are justified or reasonable. This seems to us to be a good way to get true beliefs. If we make sure our beliefs are justified, then those beliefs are likely to be true.
On this picture, then, when we're deciding what to believe, or what sorts of epistemic habits to adopt, we aim to form beliefs which are reasonable, or epistemically likely to be true. In other words, how justified a belief is (or how justified it seems to us to be) plays a certain role in guiding and regulating our epistemic activities. The recipes we follow when deciding what to believe tell us to accept those beliefs which are justified, and to reject those beliefs which are unjustified.
But can justification play this regulative or belief-guiding role if an externalist account of justification is right? It's hard to see how it could.
Suppose you take on a new job at the nuclear power plant and I instruct you to press a certain button if the temperature of the reactor core goes above a certain point. You see a dial which is labeled "Reactor Core Temperature." You ask me, "So what you mean is, I should press this button whenever the indicator on that dial goes above that line?" Now suppose I respond, "No, that's not what I mean. That dial might not be working properly. I want you to press the button whenever the reactor core is above the danger point, regardless of what that dial says." You wouldn't know how to follow my instructions. I'm asking you to regulate your activities by a guide-post which you don't have access to, when performing those activities. It doesn't seem possible to do that.
The same lesson seems to apply in the epistemic case. When you're trying to decide what to believe, facts about the reliability or causal history of your beliefs don't seem like things you'd have access to. You would already have to rely on some beliefs about the external world, before you'd be entitled to an opinion about those matters. And you're trying to decide what beliefs to rely on. It seems that, while you're doing that, you can't guide your efforts by facts about the reliability or causal history of your beliefs. You can't regulate your choice of beliefs by any external guide-posts.
This seems to show that what justifies your belief has to be "internally available," if justification is going to play the regulative role we've described.
How might an externalist respond to this criticism?