Here is some help understanding the logical notation that Lewis uses in his paper.
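Lewis writes our shared common-sense theory of the mind as one long sentence, T[t₁, …, tₙ], where t₁, …, tₙ are the theory’s mental terms (“pain”, “hunger”, and so on). Replacing each mental term with a variable and prefixing existential quantifiers gives the theory’s Ramsey sentence:

    ∃x₁ … ∃xₙ T[x₁, …, xₙ]

This says that there are some states or other, whatever they turn out to be, that occupy the causal roles the theory describes. Assuming the theory is uniquely realized, Lewis then defines each mental term as naming the occupant of its role; in simplified form:

    pain = the x₁ such that ∃x₂ … ∃xₙ T[x₁, x₂, …, xₙ]

That is, pain is whatever state in fact occupies the pain role that T describes. (This is only a simplified reconstruction of Lewis’s notation; the paper itself handles some subtleties, like what to say if the theory has no realizer or more than one, that are glossed over here.)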
We said that realizer functionalism can be thought of as a form of identity theory. But Lewis’s argument for his view is different from Smart’s argument for identity theory. Recall that Smart appealed to scientific evidence, and also to considerations of simplicity, to show that sensations are identical to brain processes. For Smart, the identity claim is a reasonable hypothesis or posit. Lewis gives a different argument for identity theory, one which does not appeal to considerations of simplicity.
Lewis’s argument goes as follows:

1. Pain = the state that occupies causal role R (where R is the causal role our common-sense understanding associates with pain: being caused by tissue damage, causing wincing and avoidance behavior, and so on).
2. The state that occupies causal role R in us = neural state N.
3. Therefore, pain (in us) = neural state N.
This argument is valid, and we can suppose that our neurophysiological evidence supports some claim like premise 2. So everything turns on premise 1.
Lewis claims that premise 1 is implicit in our understanding of the concept pain, and hence that we can know premise 1 to be true a priori. So Lewis is a common-sense functionalist, not a scientific functionalist. Note that premise 1 identifies pain with the realizer state for causal role R. So only someone who holds a common-sense form of realizer functionalism, like Lewis, will grant that premise 1 is true and knowable a priori.
If we agree with Lewis about the status of 1, then 3 will not merely be a reasonable hypothesis. It will be a logical consequence of our concept of pain, together with what we’ve learned from science.
Some common-sense functionalists agree with Lewis that there is some a priori connection between our concept of pain and causal role R. But they are role functionalists instead; and in their view, what we can know a priori about pain is not premise 1, but rather:
1*. Pain = what is had in common by all the people who have some state or other occupying causal role R.
This premise identifies pain with the role state for causal role R. If we replace premise 1 in Lewis’s argument with premise 1*, then the argument is no longer valid. (Pain wouldn’t be identical to neural state N, but only realized by that state in us.) So Lewis has to tell us why the a priori connection between our concept of pain and causal role R takes the form specified in premise 1, rather than the form specified in premise 1*: why should we believe that it’s premise 1, rather than premise 1*, which is true and knowable a priori?
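In the notation sketched at the top of these notes, the contrast between the two candidate definitions can be put roughly like this:

    1.  pain = the state x₁ such that ∃x₂ … ∃xₙ T[x₁, …, xₙ]
    1*. pain = the property of having some state x₁ or other such that ∃x₂ … ∃xₙ T[x₁, …, xₙ]

The first identifies pain with the first-order occupant of the role (in us, neural state N); the second identifies it with the higher-order role state, which different occupants can realize in different creatures.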
Lewis prefers to identify pain with a realizer state, rather than a role state, because in his view only realizer states have any causal efficacy. (Lewis argues for this elsewhere, not in the selection we read.) He thinks role states are unable to do any causal work. An analogy: when you strike a glass and it breaks, does the glass’s fragility play any role in causing the glass to break? Or is all the causal work done at the microphysical level? In Lewis’s view, only the microphysical states that realize the glass’s fragility do any causal work. The role state of fragility is causally inert. Similarly, Lewis believes that when a person is in pain, the role state associated with the pain is also causally inert. All the causal work is done by the person’s being in the neurophysiological states that realize the pain role.
When you burn your hand on a hot stove, it does seem intuitively correct to say that the pain in your hand causes you to pull your hand back. Indeed, the functionalist trumpeted it as one of the advances of her view over behaviorism that mental states are real inner causes of our behavior. So if a theory of the mind says that certain mental states are causally inefficacious, that is a mark against that theory. If Lewis’s views about mental causation are correct, then identifying pain with the neurophysiological states that realize the pain role is the only way to make pain causally efficacious. This would be a big advantage for Lewis’s form of identity theory.
Lewis’s views about mental causation are very controversial, however. Other philosophers argue that both the neurophysiological states that realize pain and the role state for pain are causally efficacious; they just produce the same effect. That may mean the effect is “overdetermined” in some sense, but these philosophers argue that this is not the problematic kind of overdetermination that substance dualists would be committed to when they say mental states have physical effects.
We said already that there are going to be challenges to whether role functionalism can really capture the identity theorist’s picture that mental states are inner states that cause our behavior. It’s not clear that the complex roles or dispositions that these functionalists identify our mental states with can really claim to be doing the causal work. We’ll hear more about this in coming weeks.
Remember that when we were talking about behaviorism, we said it’s not clear mental states always lead to the same physically-described behavior. The functionalist has more resources to draw on than the behaviorist, because they can say behavior is generated by large combinations of mental states. But they might also have trouble identifying physically-described behavior that these combinations always produce, especially if this is supposed to be known by everybody who understands the mental concept in question. It seems more likely that what we know when we understand a mental concept is how it’s related to mentally-loaded behavior. So the functionalist would be disappointed if they wanted to define or explain all our mental concepts in non-mental terms. (Kim discusses these issues at pp. 152-4 and 156.)
If two creatures are running radically different programs, perhaps their psychologies could end up being so different that there are no mental states they share. But if they’re running programs with only very slight differences — as you and I might be, or as we and whales might be — then you’d think it’d be possible for them to have some of their mental states in common. Maybe not their total mental state, but perhaps they could have the same feelings of hunger, or frustration, or beliefs about the weather.
A difficulty with this is that neither of the functionalist strategies we looked at for defining mental states (in terms of machine tables, or in terms of the Ramsey/Lewis method) lets us make sense of the idea of systems that are running different programs having states in common. States are only defined in terms of their role inside a program. State ZERO in my Coke machine is defined in terms of the entire machine table. If your Coke machine doesn’t implement exactly the same table, then nothing will count as its being in my machine’s state ZERO. We have no idea what it would be for state X in program A to be the same as state Y in program B. (Machine tables don’t even let us talk about partial mental states like hunger and so on, but this issue also comes up with the Ramsey/Lewis method.)
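To make the difficulty vivid, here is a toy sketch in Python. The table, states, and coin inputs are invented for illustration; they aren’t from Kim or Lewis.

    # A made-up machine table for a 10-cent Coke machine.
    # Each entry maps (current state, coin inserted) to (next state, output).
    MY_TABLE = {
        ("ZERO", "nickel"): ("FIVE", None),
        ("ZERO", "dime"):   ("ZERO", "coke"),
        ("FIVE", "nickel"): ("ZERO", "coke"),
        ("FIVE", "dime"):   ("ZERO", "coke + nickel change"),
    }

    # Your machine differs in a single entry: given a dime when it already
    # has five cents of credit, it keeps the extra nickel.
    YOUR_TABLE = dict(MY_TABLE)
    YOUR_TABLE[("FIVE", "dime")] = ("ZERO", "coke")

    # On the machine-table account, a state is defined by the entire table
    # it figures in. My ZERO and your ZERO even have matching rows; they
    # differ only in where they lead, because my FIVE and your FIVE behave
    # differently. So the only standard of sameness the account provides is
    # whole-table identity:
    def same_state(table_a, state_a, table_b, state_b):
        """States count as the same only if the whole tables match."""
        return state_a == state_b and table_a == table_b

    print(same_state(MY_TABLE, "ZERO", MY_TABLE, "ZERO"))    # True
    print(same_state(MY_TABLE, "ZERO", YOUR_TABLE, "ZERO"))  # False

Any less demanding standard would have to say which differences between whole tables matter and which don’t.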
Perhaps the functionalist can come up with a story about this, but it will take some work and won’t be straightforward. (Kim discusses this issue at pp. 151-2 and 154-5.)
Some philosophers object that for some (perhaps all) of our mental states, their causal roles don’t seem essential to them. Kim discusses whether we could swap the causal roles of our feelings of pain and tickle, or swap how different colors look to us (pp. 179-182).
If that really is coherently imaginable, then these causal roles can’t be what defines our mental states; and if it really is possible, then these causal roles can’t be part of our mental states’ nature either.
One response for a functionalist might be to say that for aspects of our mental states that we can attach names to and talk about, functionalism is correct. But there may be additional aspects of our minds that it doesn’t capture. That’s what we imagine swapping in these thought-experiments.
We’ll discuss these ideas later in the course.