When re-reading Turing's article this morning, I noticed this passage:

> It is not possible to produce a set of rules purporting to describe what a
> man should do in every conceivable set of circumstances... To attempt to
> provide rules of conduct to cover every eventuality, even those arising from
> traffic lights, appears to be impossible. With all this I agree. From this it
> is argued that we cannot be machines. I shall try to reproduce the argument,
> but I fear I shall hardly do it justice. It seems to run something like this,
> "If each man had a definite set of rules of conduct by which he regulated his
> life he would be no better than a machine. But there are no such rules, so
> men cannot be machines." The undistributed middle is glaring.

Now what the heck does that mean? I can't expect you to know. I hope when you come across passages like this you will at least be able to work out from context what the author must in general be getting at. I hope it was clear that Turing doesn't approve of the argument he's reporting here, and that the passages that come next in his article---where he distinguishes between "rules of conduct" and "laws of behavior"---are meant to be part of a reply to the argument.

Some of you may have been industrious enough to google the term "undistributed middle" to try to figure out more specifically what Turing was saying. (If so, great. That disposition will serve you well.) What you will find is that this is a term from an older logical system. We don't use the expression so much anymore---in fact I myself had to look up specifically which fallacy this is.

An example of the fallacy of undistributed middle would be the argument "All newts are gross. Harry is gross. So Harry is a newt." I hope that even without the benefit of any formal training in logic, you'll be able to see that this is not a good form of argument. (Of course there can be *instances* of this form whose premises and conclusion are all true, but that doesn't make this a good *form* of argument.)

Now I have to scratch my head and speculate a bit to figure out why Turing thought the argument he was discussing displayed this form. I don't think it's fair for him to say that the presence of this fallacy in the argument he reports is "glaring." Here's my best guess at what Turing is thinking. We begin with the claim:

1. If you had a definite set of rules of conduct by which you regulated your life, you would be a machine.

In general, claims of the form "If D, then M" are equivalent to claims of the form "If not-M, then not-D." (Compare: if Fido is a dog, then Fido is mortal. Equivalent to: if Fido is immortal, then Fido is not a dog.) So 1 is equivalent to:

2. If you are not a machine (or as Turing puts it, if you are "better than" a machine), then you don't have a definite set of rules of conduct by which you regulate your life.

Now Turing is imagining that his opponents continue the argument like this:

3. I don't have a definite set of rules of conduct by which I regulate my life.

4. Therefore, I am not (or: I am "better than") a machine.

The argument from 2 and 3 to 4 does display the fallacy of undistributed middle that we described above. Turing's text doesn't make this as explicit as he might have, though, since he writes the opening premise in form 1 rather than the (equivalent) form 2, and he doesn't explicitly state premise 3, but leaves it implicit.

Turing is imagining that even if some computers will have definite rules of conduct governing every situation, some will not.
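For those of you who'd like to see this diagnosis made fully explicit, here's a small sketch in the Lean proof assistant. (The letters `R` and `M` and the theorem names are just my own labels for the claims above, not anything Turing wrote.) The point of the sketch: the contraposition step from 1 to 2 checks out as a theorem, while the step from 2 and 3 to 4 admits a countermodel.

```lean
-- R abbreviates "you have a definite set of rules of conduct by which
-- you regulate your life"; M abbreviates "you are a machine".

-- The step from claim 1 to claim 2 is contraposition, and it is valid:
-- from "if R then M" we may infer "if not-M then not-R".
theorem step_one_to_two (R M : Prop) (h : R → M) : ¬M → ¬R :=
  fun hnM hR => hnM (h hR)

-- The step from 2 and 3 to 4 is not valid. If it were, the inference
-- "from ¬M → ¬R and ¬R, conclude ¬M" would hold for every R and M.
-- It doesn't: taking R false and M true makes both premises true and
-- the conclusion false.
example : ¬ (∀ (R M : Prop), (¬M → ¬R) → ¬R → ¬M) :=
  fun h =>
    h False True (fun hnM => (hnM True.intro).elim) (fun hR => hR) True.intro
```

Notice what the countermodel is: something that *is* a machine (`M` true) and yet *doesn't* have a definite set of rules of conduct (`R` false). That is exactly the possibility Turing just gestured at, and it's why the inference from 2 and 3 to 4 fails.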
The point of the passages that come next in his article is to distinguish between the idea of having such "rules of conduct" and there being "laws of behavior" that settle in advance how the machine (or the human being) will respond to any given stimulus. Turing would agree that there are laws of behavior strictly governing what the computer does, but there may be such laws for us too. He'd agree that we don't have rules of conduct telling us what to do in every situation, but he'd say computers won't necessarily have that either. Both we and computers might have to *figure out* what to do, rather than follow some recipe already written out in advance.

I *think* I understand the distinction he's trying to make, but I'm not entirely sure that I do. How about you? Can you make sense of the idea that there may be some laws of behavior (say your genes, and everything that's happened to you up until this point in your life) that govern how you will act, even though you don't have rules of conduct by which you regulate your life? What more would you say to better explain this distinction? Can you make sense of the idea that some computer might *also* lack such rules of conduct by which it regulates its life?

There's a lot here for us to wrestle with later. Hopefully, though, this will help you better track how the words Turing actually wrote here are supposed to fit into his larger argument.