One awkward aspect of the way we’ve been thinking about functionalism so far is that our formal automata are only in a single state at a time. For our minds, that would correspond to your total psychological state at a given moment. Any change in what you experience or are thinking about means you’re in a different total psychological state.
Mental states like pains and beliefs about Charlotte aren’t like that. They can stick around even though other mental states come and go. So those are only partial psychological states. My total state may consist in my being in these three partial states… and not being in these other five… Our formal automata don’t yet give us a model for thinking about partial states. (See Kim pp. 147-148.)
An easy way to improve this is to use the Ramsey/Lewis method for defining theoretical terms. Kim introduces this method in his Chapter 6.
Let’s say we want to explain what the different parts of a car are. Suppose we have a theory that says how the different parts of a car interact with each other, and with things our audience already understands, like air and gasoline. The theory might look something like this:
Car Theory: …and the carburetor mixes gasoline and air and sends the mixture to the ignition chamber, which in turn… and that makes the wheels turn.
The terms "carburetor" and "ignition chamber" are names for parts of the car, with which our audience may not be familiar. Terms like "gasoline", "air", and "wheels" name things and phenomena we'll suppose our audience already understands.
Now, given this Car Theory, how might we go about explaining to people what a carburetor and an ignition chamber and the rest are? We can’t just define a carburetor as something that interacts with the ignition chamber in such-and-such ways, because our audience doesn’t yet know what an ignition chamber is.
What we can do is the following. First, we replace each of the car-part terms with a variable:
…and xc mixes gasoline and air and sends the mixture to xi, which in turn… and that makes the wheels turn.
Next we embed the expression just displayed inside a larger frame, like this:
There are an xc and an xi where ….
Understand the … to be filled in with the expression displayed above.
This is called the Ramsey Sentence for our Car Theory (after the philosopher/mathematician/economist Frank Ramsey).
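In logical notation, the construction can be sketched as follows (abbreviating the Ramsified Car Theory as an open sentence T(xc, xi) with two free variables):

```latex
% The Ramsey Sentence binds both car-part variables with existential
% quantifiers in front of the Ramsified theory:
\exists x_c \, \exists x_i \; T(x_c, x_i)
% Read: there are some things x_c and x_i that interact with gasoline,
% air, and the wheels in just the way the Car Theory describes.
```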
Next we can define what it is to be a carburetor and an ignition chamber as follows:
A carburetor = an xc such that there is an xi where … and xc mixes gasoline and air and sends the mixture to xi, which in turn… and that makes the wheels turn.
An ignition chamber = an xi such that there is an xc where … and xc mixes gasoline and air and sends the mixture to xi, which in turn… and that makes the wheels turn.
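Schematically (again abbreviating the Ramsified Car Theory as T(xc, xi)), these two definitions come out as:

```latex
% Something y is a carburetor iff it occupies the x_c role:
y \text{ is a carburetor} \iff \exists x_i \; T(y, x_i)
% Something y is an ignition chamber iff it occupies the x_i role:
y \text{ is an ignition chamber} \iff \exists x_c \; T(x_c, y)
```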
In this way, we explain what a carburetor is, in terms of how it interacts with ignition chambers and with other things, without presupposing that our audience already knows what an ignition chamber is. In the same way, we explain what an ignition chamber is, in terms of how it interacts with carburetors and with other things, without presupposing that our audience already knows what a carburetor is.
In addition, we’ve explained what a carburetor and an ignition chamber are in terms of the causal roles they play, as specified in our Car Theory. Any pair of things which play the appropriate causal roles count as a carburetor and an ignition chamber. The details of their physical construction are not important. In other words, carburetors are multiply realizable. To be a carburetor, it doesn’t matter what you’re made out of; only that you do the right job. (The same goes for ignition chambers.)
So this method of defining terms gives us two benefits that functionalists value. It lets us:
define things like carburetors in terms of how they interact with other things, like ignition chambers, without presupposing that the notion of an ignition chamber is already understood
define carburetors in such a way that they can be realized by different physical mechanisms
How might we apply this method of defining terms to the mind?
Suppose we have some theory such that our mental concepts should be defined by the roles they play in it. Kim's (very simplified) example is:
Whenever agent A suffers tissue damage and is normally alert, A is in pain; and whenever A is awake, A tends to be normally alert; and whenever A is in pain, A winces and groans and goes into a state of distress; and when A is not normally alert or A is in distress, A tends to make more typing errors.
The terms "normally alert", "pain", and "distress" are the three mental concepts we think this theory should define. They are the concepts that we have to explain to our audience. As functionalists, we want these concepts to be defined by the patterns this theory describes in how they causally interact with each other and with inputs and outputs. Expressions like "tissue damage", "awake", "winces", "groans", and "typing errors" pick out input and output conditions: various sorts of sensory stimulation and behavioral output, which we'll suppose our audience already understands.
Where does the theory that the functionalist uses to define our mental states come from? There are different views one can have here:
The analytic/common-sense functionalist says that the theory is an a priori theory, made up of platitudes about our mental states that everyone who has the concepts of pain, belief, and so on, tacitly knows, or at least, is in a position to recognize as true.
The scientific/psycho-functionalist says that the theory is an a posteriori theory, which we only learn as a result of scientific investigation of how our minds work (the sort of investigation they do in cognitive science labs). (This is also sometimes called empirical functionalism.)
Note that not every causal fact about a mental state has to enter into the functionalist’s definition of pain. Different carburetors can have causal properties which play no role in making them carburetors: my carburetor might shimmy a bit, while yours whistles. These facts are irrelevant to their being carburetors. In just the same way, our mental states might have some causal properties which play no role in making them the mental states they are.
It is a very difficult matter to know which of a mental state’s causal properties ought to enter into the definition of that mental state, and which are merely accidental.
But let’s assume we’ve got the theory.
We begin by replacing the mental terms in our theory with variables:
Whenever agent A suffers tissue damage and is Ma, A is in Mp; and whenever A is awake, A tends to be Ma; and whenever A is in Mp, A winces and groans and goes into a state of Md; and when A is not Ma or A is in Md, A tends to make more typing errors.
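Abbreviating this Ramsified theory as an open sentence T(A, Ma, Mp, Md), its Ramsey Sentence can be sketched in logical notation as:

```latex
% The Ramsey Sentence for Kim's simplified psychological theory:
\exists M_a \, \exists M_p \, \exists M_d \; T(A, M_a, M_p, M_d)
% Read: there are some states M_a, M_p, and M_d that are causally related
% to tissue damage, wakefulness, wincing, groaning, and typing errors in
% just the way the theory describes.
```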
Next we embed that expression inside a larger frame, either like this:
being normally alert = being such that there are some states Ma, Mp, and Md where the agent is in Ma and …
being in pain = being such that there are some states Ma, Mp, and Md where the agent is in Mp and …
being in distress = being such that there are some states Ma, Mp, and Md where the agent is in Md and …
Fill in the … with the expression displayed above.
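Using the same abbreviation T(A, Ma, Mp, Md) for the Ramsified theory, the definition of pain can be sketched as follows (the definitions of alertness and distress are parallel):

```latex
% Role-functionalist definition of pain: being in some state or other
% that occupies the M_p role in the theory.
A \text{ is in pain} \iff \exists M_a \, \exists M_p \, \exists M_d
  \left[ T(A, M_a, M_p, M_d) \wedge A \text{ is in } M_p \right]
```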
The functionalist thinks that all of our mental states can be defined in this way. Anything which has states which play those causal roles counts as having a mind, and whenever it’s in the first of those states, it’s normally alert; and when it’s in the second of those states, it’s in pain; and so on. It does not matter what the intrinsic make-up of those states is. In humans, they are certain kinds of brain states. In Martians, they would likely be different sorts of states. In an appropriately-programmed computer, they would be electronic states. These would be different physical realizations of the same causal roles. The functionalist identifies our mental states using their causal roles. How those roles are realized is not important.
Alternatively, we could embed the expression like this:
being in pain = being in the state Mp such that there are some states Ma and Md where …
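Formally, this second strategy can be sketched like this (with T abbreviating the Ramsified theory as before; the iota is a definite-description operator, read "the state such that…"):

```latex
% Realizer-functionalist definition: pain is identified with the very
% state that occupies the M_p role. In humans, that occupant is a
% particular brain state; in Martians, presumably something else.
\mathrm{pain} = (\iota M_p) \, \exists M_a \, \exists M_d \; T(A, M_a, M_p, M_d)
```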
Kim does it the first way. That’s what you’d say if you wanted to be a role functionalist about alertness, pain, and distress. The second strategy is what you’d use if you wanted to be a realizer functionalist.
Either way, note that the Ramsey/Lewis method for defining terms doesn’t restrict the states we’re dealing with to be total states. A system can be in pain and distress at the same time.