This first premise--that knowledge requires certainty--will come up many times in our discussion. Let's give it a name: the "Certainty Principle."
"Certainty" can mean different things. To say that you're certain that p might mean that you're especially confident, that you have no lingering doubts about p running through your mind. Call this the psychological sense of "certainty." Alternatively, to say that you're certain that p might mean that you have really good evidence for p, evidence which is so good that there's no chance of your being wrong. It's not possible to believe that p on the basis of that kind of evidence and be mistaken. Call this the evidential sense of "certainty."
Unger intends to be using the psychological sense of "certainty" in his argument. He does this just to keep the argument simpler. He says that the evidential sense of "certainty" is a normative notion--it has to do with how good your evidence is, and so with how confident you should be that p, not with how confident you actually are. Talk about shoulds is controversial, and Unger wants to keep his discussion as straightforward as he can. He does, though, think both:
Why does Unger think that we are, and should be, certain of hardly anything?
The answer is that he thinks that "being certain" is an absolute term, like "empty" and "flat." He thinks that emptiness requires a thing to have nothing in it whatsoever--however small. And he thinks that, in order to be flat, a thing must have no bumps or curves whatsoever--however small. If "certain" were an absolute term, too, then being certain would require having no doubts whatsoever.
Unger argues that if "flat" is an absolute term, then:
Necessarily, if x is flatter (or nearer to being flat) than y, then that must mean that x has fewer bumps or curves than y, so y must have some bumps or curves; so strictly speaking, y is not really flat.

Similarly, if "being certain" is an absolute term, then:
Necessarily, if you are (or should be) more certain of p than you are of q, then that must mean that you have (or should have) fewer doubts about p than about q, so you must have (or should have) some doubts about q; so strictly speaking, you're not really certain of q.

Unger is of course willing to allow that y might be close enough to being flat for all practical purposes. Likewise, you might be close enough to certain of q for all practical purposes. But there's a big difference between what's strictly speaking true and what it's acceptable to say, or what's near enough to the truth for practical purposes. Here we're just concerned with what's strictly speaking true.
Unger thinks that for most propositions q, the proposition that you exist is, and should be, more certain for you than q. Hence, if he's right that "being certain" is an absolute term, then--since there is something which is more certain for you than q--it follows that, strictly speaking, you're not certain of q. And if knowledge requires absolute certainty, then you can't know that q.
Do you think it's true that knowledge requires psychological certainty? What if you believe that p, but you have some doubts running through your mind--doubts you recognize to be irrational and baseless? Would that prevent you from knowing that p? This is not clear to me.
A related notion is the notion of defeasibility: the evidence you have for believing that p is defeasible just in case it can be overturned or defeated as more evidence comes in. An example of indefeasible evidence might be a mathematical proof. Most other kinds of evidence are defeasible. For example, we have plenty of evidence that Mars is not made of coffee. But one can imagine a sequence of discoveries that would turn the tables, and make it reasonable to think that perhaps Mars is made of coffee, after all. I'm not saying we're going to get that evidence. It's extremely unlikely that that will happen. But it's still possible. So our evidence that Mars is not made of coffee is defeasible. It could be defeated or overturned by more evidence.
As we've seen, some people think that knowledge requires absolute certainty. These people will say that you can never know that p if your evidence for p is less than fully certain. If there's any chance that your evidence might later be defeated, then it won't be good enough to give you knowledge that p.
The Lottery Argument seems to confirm this claim that defeasible evidence can never be good enough for knowledge. It seems to show that no matter how good your evidence is, so long as it leaves open some possibility of your being wrong, you won't know. You may be very highly justified in believing that your ticket will lose, but you don't know it.
Other people think that it is sometimes possible to know things on the basis of defeasible evidence. These people are called fallibilists.
They think it can be enough if your evidence is pretty damn good, but not so good as to make you infallible. For instance, you have pretty good evidence that Mars is not made of coffee. You might be wrong. Your evidence is defeasible. But suppose you're not wrong. Your evidence is pretty damn good. The fallibilist will say that in this kind of situation you can count as knowing.
Sometimes people think that the debate about skepticism is just a debate about whether or not the Certainty Principle is true. But it's not that simple. If you accept the Certainty Principle, then it does look like skepticism will follow, at least about a great many topics. Maybe there are some things that you're infallible about (e.g., whether 1+1=2, or whether you're thinking about monkeys). But not many. So the Certainty Principle does seem to support skepticism.
It's the reverse direction that's trickier.
Let's distinguish three kinds of epistemically desirable state:
And in particular, couldn't it still at least be reasonable for us to act on the assumption that p is true, even if we don't have what the skeptic calls "knowledge that p"? What would be really interesting--and really troubling--is if the skeptic had arguments that threatened our possession of such less-demanding states, too: arguments that don't just fuss about our not having absolutely certain evidence.
The best, most interesting kinds of skeptical argument are of that sort. Some of them threaten to show that we can't even have justified beliefs about the world outside our heads. If they're right, then it's no more reasonable to believe that you're sitting down right now than it is to believe you're a brain in a vat.
This shows that we shouldn't think that the debate about skepticism is just a debate about whether the Certainty Principle is true. Even if we concede that we can't be certain of much about the outside world, there remain weaker--but still epistemically desirable--positions for us to aspire to. As we'll see during the course, some of them might arguably also deserve the name "knowledge." But as I said, who cares what we call them? On the other hand, even if we decided that certainty isn't a requirement for knowledge, we might not yet be in the clear. The most powerful skeptical arguments seem not to need anything as strong as the Certainty Principle. They purport to raise difficulties about our possessing even the weaker epistemic positions.