There are several problems with Searle’s objection to the proposal that the whole System understands Chinese, even if Jack doesn’t.
In the first place, Searle’s claim “he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him” relies on a dubious form of inference. This is not a valid inference:
He doesn’t weigh five pounds, and a fortiori neither does his heart, because there isn’t anything in his heart that isn’t in him.
Nor is this:
Jack wasn’t designed by Chinese Rooms by Google™, and a fortiori neither was his Chinese room system, because there isn’t anything in the system that isn’t in him.
So why should the inference work any better when we’re talking about whether the system understands Chinese?
A second, and related, problem is Searle’s focus on the spatial location of the Chinese room system. This misdirects attention from the important facts about the relationship between Jack and the Chinese room system. Let me explain.
I invited you to think about programs like an Android emulator that runs on a Mac:
Some computing systems run software that enables them to emulate other operating systems, and software written for those other operating systems. For instance, you can get software that lets your Macintosh emulate an Android device. Suppose you do this. Now consider two groups of software running on your Macintosh: (i) the combination of the Macintosh OS and all the programs it’s currently running (including the emulator program), and (ii) the combination of the Android OS and the activities of some program it’s currently running. We can note some important facts about the relationship between these two pieces of software:
The Android software is in some sense included or incorporated in that Mac software. It is causally subordinate to, and dependent on, the Mac software. The activities of the Mac software bring it about that the Android software gets implemented.
Nonetheless, the “incorporated” software can be in certain states without the “outer” software thereby being in those states, too. For example: the Android software may crash and become unresponsive, while the Mac software (including the emulator) keeps running. It’s just that the emulator’s window would display a crashed Android program. Another example: the Android software might be treating YouTube as its frontmost, active program; but, if the emulator window isn’t frontmost on your Mac, the Mac software could be treating Chrome as its frontmost, active program.
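To make this state-independence point concrete, here is a minimal, purely illustrative sketch of a “host” program that incorporates and runs a “guest” program. Nothing here is any real emulator’s API; all of the names (GuestState, run_guest, host_main) are invented for this example. The guest’s states live inside the host’s process, yet they are not states of the host: the guest can crash, or treat YouTube as frontmost, while the host keeps running and treats Chrome as frontmost.

```python
# Illustrative sketch only: a "host" that incorporates and runs a "guest",
# loosely analogous to Mac software running emulated Android software.
# Every name here (GuestState, run_guest, host_main) is invented for this example.

from dataclasses import dataclass, field


@dataclass
class GuestState:
    """State belonging to the guest, not to the host."""
    frontmost_app: str = "Launcher"
    crashed: bool = False
    memory: dict = field(default_factory=dict)


def run_guest(state: GuestState, instructions):
    """The host steps through the guest's instructions one at a time."""
    for op, arg in instructions:
        if op == "open":
            state.frontmost_app = arg
        elif op == "store":
            key, value = arg
            state.memory[key] = value
        elif op == "crash":
            state.crashed = True
            raise RuntimeError("guest crashed")


def host_main():
    host_frontmost_app = "Chrome"   # a state of the host
    guest = GuestState()            # states of the incorporated guest
    program = [("open", "YouTube"), ("store", ("greeting", "你好")), ("crash", None)]
    try:
        run_guest(guest, program)
    except RuntimeError:
        pass  # the guest crashed; the host carries on running
    # The guest's states are not thereby the host's states:
    print("host frontmost:", host_frontmost_app)    # Chrome
    print("guest frontmost:", guest.frontmost_app)  # YouTube
    print("guest crashed:", guest.crashed)          # True, yet the host is still running


if __name__ == "__main__":
    host_main()
```

The host’s activities bring it about that the guest’s program gets executed (causal dependence), but nothing forces the two to share states, and that is the relationship the functionalist claims holds between Jack and the Chinese room software.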
It’s this notion of one piece of software incorporating another piece of software which is important in thinking about the relation between Jack and the Chinese room software. According to the functionalist, when Jack memorizes all the instructions in the Chinese book, he becomes like the Mac software, and the Chinese room software becomes like the emulated Android software. Jack fully incorporates the Chinese room software. That does not mean that Jack shares all the states of the Chinese room software, nor that it shares all of his states. If the Chinese room software crashes, Jack may keep going fine. If the Chinese room software is in a state of believing that China was at its cultural peak during the Han dynasty, that does not mean that Jack is also in that state. And so on. In particular, for the Chinese room software to understand some Chinese symbol, it is not required that Jack also understand that symbol.
The fact that the Chinese room software, once Jack has “internalized” it, is spatially internal to Jack is irrelevant. This just means that the Chinese room software and Jack’s own software are being run on the same hardware (Jack’s brain). It does not mean that any states of the one are thereby states of the other.
In the functionalist’s view, what goes on when Jack “internalizes” the Chinese room software is this. Jack’s body then houses two distinct intelligent systems — similar to people with multiple personalities. The Chinese room system is intelligent. Jack implements its thinking (like the Mac emulation software implements the activities of some Android software). But Jack does not thereby think the Chinese room system’s thoughts, nor need Jack even be aware of those thoughts. Neither of the intelligent systems in Jack’s body is able to directly communicate with the other (by “reading each other’s mind,” or anything like that). And the Chinese room system has the peculiar feature that its continued existence, and the execution of its intentions, depends on the other system’s activities and work schedule.
This would be an odd set-up, were it to occur. (Imagine Jack trying to carry on a discussion with the Chinese room software, with the help of a friend who does the translation!) But it’s not conceptually incoherent.
Searle is not a dualist. He believes that thinking is an entirely physical process. He’s just arguing that the mere manipulation of formal symbols cannot by itself suffice to produce any genuine thought (that is, any non-derivative, unborrowed, “original” intentionality). Whether thinking takes place depends importantly on what sort of hardware is doing the symbol manipulation.
Some sorts of hardware, like human brains, clearly are of the right sort to produce thought. Searle thinks that other sorts of hardware, like the Chinese room, or beer cans tied together with string and powered by windmills, clearly are not of the right sort to produce thought — no matter how sophisticated the software they implement.
Are silicon-based computers made of the right kind of stuff to have thoughts? Perhaps, perhaps not. In Searle’s view, we do not know the answer to this. Maybe we will never know. Searle just insists that, if silicon-based computers are capable of thought, this will be in part due to special causal powers possessed by silicon chips. It will not merely be because they are implementing certain pieces of software. For any software the silicon-based computers implement can also be implemented by the Chinese room, which Searle says has no intentional states of its own (other than Jack’s intentional states).
Searle writes:
It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something with those causal powers could have that intentionality. Perhaps other physical and chemical processes could produce exactly these effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll.
But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. (1980, p. 516)