We’re discussing thought experiments from:
We also looked at another Hofstadter dialogue illustrating some of the ideas in these debates:
Here’s an optional second selection from Searle discussing the same issues:
One thing we reviewed was the notion of an intentional state (see these notes from earlier in the term).
We also discussed what it means to have states that non-derivatively have content or are about things — where symbols in a newspaper exemplify having content only derivatively (because its authors and readers associate those symbols with meanings). Humans and probably some animals too, on the other hand, have states that are non-derivatively about things. True, we’ll say things like “The newspaper says that Nixon resigned.” But the newspaper doesn’t itself really talk or have opinions. (I mean the printed newspaper, not the company that publishes it.) It gets its intentional properties from its authors and readers; from their beliefs and intentions and expectations. The newspaper only has what we can call borrowed or derived intentionality.
Similarly, we sometimes ascribe intentional states to computers and other man-made devices. For instance, you might say that your chess-playing computer wants to castle king-side. But your chess-playing computer probably doesn’t have any non-derivative, unborrowed intentionality. It is too simple a device. Such intentionality as it has, it gets from the beliefs and intentions of its programmers. (Or perhaps we’re just “reading” intentionality into the program, in the way we read emotions into baby dolls and stuffed animals.)
The question whether a being has states that are non-derivatively about things is different from the question of whether the being freely chose to have those states. Even if we manipulated Andrew into having certain beliefs, or genetically engineered him so as to ensure that he has those beliefs, still if Andrew ends up with an ordinary human brain (just one with an unusual history), surely he’ll have beliefs and other attitudes with their own content. We may have “put those thoughts into his head”; but the sense in which we did that is different from the sense in which we put the meanings into the symbols in the newspaper. Andrew really does have his own thoughts, with real content, in a way that the newspaper doesn’t. This is separate from the question of how much choice Andrew had, or how much free will he exercised, in acquiring those thoughts.
Indeed, it’s not even clear how much free will ordinary humans exercise in acquiring the thoughts they have. Some philosophers think we don’t have any free will at all; when they say this, they don’t think they’re saying we lack beliefs, or that our minds have no more contentful thoughts in them than newspapers do.
Unfortunately, there’s a way of formulating the question whether a being has states with non-derivative content that can obscure all this. Philosophers sometimes phrase this as the question whether the being has original intentionality. When it’s put that way, you may look at the manipulated humans and think, obviously their thoughts aren’t “original,” someone else put them there. Now consider a sophisticated AI that humans programmed, much more flexible and seemingly “intelligent” than your chess-playing computer. You may want to say in this case too: Obviously someone else chose how to program the AI. (Or at least, chose how to program the programs that controlled how the AI evolved.) So its thoughts aren’t “original,” either.
But these responses would be too fast. The question is not supposed to be about how much free choice these beings had in choosing their thoughts, or how much of a role other people had in choosing them. It’s supposed instead to be a question of whether they have more contentful thoughts than newspapers do. Surely humans do, even if they were manipulated into having those thoughts. And perhaps sophisticated AIs will too. Or perhaps they won’t. But the mere fact that humans played some role in programming them doesn’t straightforwardly prove that they don’t.
So that’s one thing I emphasized in our discussion: even if AIs are programmed by humans, that doesn’t yet answer the question whether they’re more like manipulated people, or whether they’re more like newspapers.
Instead of “original intentionality,” another label that’s sometimes used here is “intrinsic intentionality.” But that can be misleading too. It makes it sound like what we’re talking about here is incompatible with the externalist views of content we’ll see Putnam defending in a few weeks. But it’s not supposed to be incompatible with that.
The functionalist (what Searle calls “the advocate of strong AI”) believes that if we have a computer running a sophisticated enough program, then the computer will have its own, “original” intentional states. This is the view Searle wants to argue against.
Functionalism says any way of implementing the right program gives you everything that’s needed for there to be a mind: real beliefs, understanding, intelligence, and so on. Well, here’s one way that the program could be implemented:
Jack does not understand any Chinese. However, he inhabits a room which contains a book with detailed instructions about how to manipulate Chinese symbols. He does not know what the symbols mean, but he can distinguish them by their shape. If you pass a series of Chinese symbols into the room, Jack will manipulate them according to the instructions in the book, writing down some notes on a scratchpad, and eventually will pass back a different set of Chinese symbols. This results in what appears to be an intelligible conversation in Chinese. (In fact, we can suppose that “the room” containing Jack and the book of instructions passes a Turing Test for understanding Chinese.)
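To make vivid how the room’s procedure works purely on shapes, here is a toy sketch (my own illustration, not anything from Searle): a small lookup table stands in for the book of instructions, and the particular “symbols” and “rules” are invented. A program that could really sustain a conversation would of course be vastly more complex, but the point carries over: the procedure consults only shapes, never meanings.

```python
# Toy stand-in for the rule book: a lookup table from incoming symbol
# strings to outgoing symbol strings. Everything here is invented for
# illustration; meaning is not represented anywhere in the program.
RULE_BOOK = {
    "你好吗": "我很好",          # matched purely by shape
    "你会说中文吗": "会一点",
}

def follow_the_book(incoming: str) -> str:
    """Do what Jack does: compare the incoming shapes against the
    book's patterns and copy out the prescribed reply. At no point
    does the procedure consult what any symbol means."""
    # If no rule matches, the book prescribes a stock reply
    # ("please say that again") -- still chosen by shape alone.
    return RULE_BOOK.get(incoming, "请再说一遍")
```

From the outside, `follow_the_book("你好吗")` looks like an answer to “How are you?”; from the inside, it is only pattern-matching on shapes, which is just the position Jack is in.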
According to Searle, Jack does not understand Chinese, even though he is manipulating symbols according to the rules in the book. So manipulating symbols according to those rules is not enough, by itself, to enable one to understand Chinese. It would not be enough, by itself, to enable any and every system implementing those rules to understand Chinese. Some extra ingredient would be needed. And there’s nothing special about the mental state of “understanding” here. Searle would say that implementing the Chinese room software does not, by itself, suffice to give a system any intentional states: no genuine beliefs, or preferences/desires, or intentions, or hopes or fears, or anything. It does not matter how detailed and sophisticated that software is. Searle writes:
Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man’s ability to understand Chinese. (1980, p. 368)
Note the difference between Searle’s use of this example, and Block’s use of the Homunculi-head and China-brain examples. Searle denies that the Chinese room has any of its own thoughts or intentional states (other than Jack’s intentional states). Block, on the other hand, might grant that the Homunculi-head and China-brain may have some intentional states (especially if we consider psychofunctional versions of those systems, which Block goes on to discuss later in his article). Block’s main doubt is whether there is “anything it’s like” to be one of those systems, that is, whether those systems have any qualitative states like pain or perceptual experience.
We asked how the functionalist might respond to Searle’s Chinese Room argument. One functionalist response says that even though the person inside the room may not have the relevant mental states (understanding Chinese, having beliefs about the Han dynasty, liking the taste of shrimp), the whole system does have these states.
According to this response, Jack does not himself implement the Chinese room software. He is only part of the machinery. The system as a whole — which includes Jack, the book of instructions, Jack’s scratchpad, and so on — is what implements the Chinese room software. The functionalist is only committed to saying that this system as a whole understands Chinese. It is compatible with this that Jack does not understand Chinese.
One objection Searle has made to this response (in the 1980 selection we read, not in the optional 1983 one) says: let’s suppose the guy in the room memorizes the whole book and scratchpad and does everything in his head. He doesn’t then have to have any kind of “Aha! Now I understand Chinese” experience. From his perspective, Chinese can seem as opaque as it always did. At the same time, he would still be running the program; and now he would be the whole system. So how can the system understand Chinese, if he is the whole system, but he still doesn’t understand Chinese?
Here is Searle presenting this objection:
My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way that the system could understand because the system is just a part of him. (1980, p. 359)
You guys started thinking through ways the functionalist could respond to Searle’s argument. We’ll continue developing these next class.
One issue that sometimes comes up in class discussion is what people in the scenario Searle describes would be able to know. I think we can bracket those issues, and settle what to think about this debate without needing to settle them. Here is what I hope will be a helpful way to explain this:
Say I ask you to consider a hypothetical story, where Beatrice dislikes Allen but doesn’t have any plan to hurt him. But one night they’re arguing while walking across a bridge, and he grabs for her cell phone, but she pushes him away hard. He stumbles backward, and then falls off the bridge and dies. Beatrice isn’t sorry that he’s dead, but she didn’t mean for this to happen, and she’s afraid she’ll get in trouble. Luckily no one else was around to see what happened, and Allen’s body washes away in the river. Now I want us to discuss whether what happened counts as Beatrice having murdered Allen. We start to discuss that question. One of us argues that it’s not murder; the other argues that it is murder but that Beatrice shouldn’t go to jail for it. What if I now object to your arguments and proposals: “How do you know what happened? You weren’t there, and there’s no evidence left that Beatrice did it.”
Wouldn’t that objection seem off-base? I just told you the story about what happened. Sure, no one inside the story may know what happened. (Maybe even Beatrice doesn’t know anymore, because she later lost her memory.) But we outsiders can think about the story as stipulated, and decide whether if that’s what happened, it counts as murder. (It could be that the story isn’t complete enough to give a definite answer. Maybe it depends on whether Beatrice did this or thought that at such-and-such a point in the story, and that’s something I left unsettled. Then we’d have to split the story and talk about what’s true in either case.)
One kind of question asks what people inside the story would be able to know. A different kind of question asks: if the story is as described (and it doesn’t covertly contradict itself), what would be true? Was Allen murdered or not? As I described things, I was inviting you to engage with the second kind of question, not the first. That’s why my complaint about there being no evidence left is off-base. In the same way, Searle is inviting us to engage with the second kind of question about his thought-experiment.