Culture & Ethics

Can a Computer Think?


Alan Turing asked that question in 1950 and proposed a test to determine whether a computer could think. Turing, a mathematician and a pioneer in computer science, argued that it would someday be possible for a sufficiently advanced computer to think and to have some form of consciousness. How would we know if a computer was conscious? Turing suggested that if a computer and a human being were hidden behind a screen, and another human being were given the task of interrogating each of them, it would be reasonable to conclude that the computer was conscious if the interrogator could not distinguish the computer from the human being.
Many variations of the Turing test have been proposed, some by Turing himself, and there are annual contests based on it. Thus far, no computer has passed the Turing test (by general consensus), although some have come close.
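For readers who want the setup concrete, here is a toy sketch of the imitation game in Python. The reply functions are hypothetical stand-ins, not anything from an actual contest; the point is only the structure of the test.

```python
import random

# A toy sketch of Turing's imitation game (all names hypothetical). The
# interrogator sees only two anonymous text channels and must guess which
# one is the machine.

def human_reply(prompt: str) -> str:
    return "I'd rather talk about the weather."  # stand-in for a person typing

def machine_reply(prompt: str) -> str:
    return "I'd rather talk about the weather."  # stand-in for a chat program

channels = {"A": human_reply, "B": machine_reply}
order = list(channels)
random.shuffle(order)  # hide which label is the human and which the machine

for label in order:
    print(label, "->", channels[label]("Do you enjoy poetry?"))

# The machine "passes" if, over many such exchanges, the interrogator's
# guesses are no better than chance.
```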
Is the Turing test meaningful and valid? Is it possible for a computer to think?
To answer, we must first ask: what do we mean by “think”? We mean a mental act. What are the characteristics of a mental act? Several plausible characteristics have been proposed — free will, restricted access (only the thinker experiences his thoughts), incorrigibility (only the thinker knows with certainty the content of his thought), qualia (raw sensory experience), etc. But philosophers agree that one unambiguous characteristic is essential to mental acts: intentionality.
Intentionality is the other-directedness of a mental act. Intentionality is the “aboutness” of a thought. When I think about the weather, or about my boss, I’m thinking about something or someone other than my mental act itself. Things without minds don’t have primary intentionality. A rock or a tree isn’t intrinsically “about” something. A mental act can impart secondary intentionality to an object (that tree reminds me of spring), but the intentionality is imparted, not intrinsic. Only mental acts have intrinsic primary intentionality.
Do computers have intentionality?
Computers certainly have secondary intentionality imparted to them by programmers and users. But to have a mind a computer would have to have primary intentionality. How would we know if a computer had primary intentionality? A computer’s output would be intentional (in a primary sense) if the output were other-referential in a way that was not part of the program. Intentionality that was part of the program (the computer “talking about sports” because the programmer put “talk about sports” into the program) would of course merely be secondary intentionality — the intentionality of the programmer imparted to the machine. A “thinking” computer would have to talk about sports (or some other topic) in a way that was not part of its program. So primary intentionality would necessarily not be an algorithm of the computer. But an output that was not part of the computer’s program wouldn’t be computation. Computation is by definition bounded by an algorithm; mental acts are not. If a computer were to manifest acts that were not algorithmic, it would be (in that respect) no longer a computer. No amount of programming ingenuity can enable a computer to think. Mental acts are intrinsically non-computational. A mind transcends itself and refers-to-other.
Consider an electronic calculator. When you push the buttons to multiply 3 times 8, 24 flashes on the screen. But the calculator doesn’t know anything about multiplication. It doesn’t know anything about 3, or 8, or 24. The only thing going on in the calculator is electrons hitting electrons, voltages and currents changing, etc. There is no multiplication and no understanding. There is mere computation — input transformed to output in accordance with a program. There isn’t a shred of meaning or understanding in the calculator. The circuit parameters that yielded 24 when you pressed 3 times 8 could have been written to yield your mother’s phone number when you pressed “Mom.” The circuit parameters don’t intrinsically mean anything. All of the meaning and understanding that the calculator appears to have is from you and from the human programmers who made the thing.
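To make the point concrete, here is a minimal sketch of the calculator's situation in Python (the table and names are hypothetical, not from the article): the very same lookup machinery serves equally well as a "multiplier" or as a phone book, which is why the meaning cannot reside in the machinery itself.

```python
# The mapping below carries no intrinsic meaning; the meaning lives entirely
# in the humans who wrote the table and who read the display.

lookup_table = {
    "3 x 8": "24",        # a human reads this pair as multiplication
    "Mom": "555-0123",    # a human reads this pair as a phone number
}

def press(buttons: str) -> str:
    """Transform input to output according to the table; nothing more."""
    return lookup_table[buttons]

print(press("3 x 8"))  # 24
print(press("Mom"))    # 555-0123
```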
The classic argument that computation inherently lacks intentionality (meaning) is philosopher John Searle’s Chinese Room analogy. Searle asked us to imagine a person who speaks only English employed in a booth in China. He has at his disposal a book, written entirely in Chinese, with a list of all possible questions and corresponding answers, in Chinese. Chinese people write questions in Chinese and pass the paper to the English speaker in the room. He matches the Chinese characters in the question to the Chinese characters in the corresponding answer in the book, copies the answer, and returns it to the Chinese person who asked the question.
Of course, what the English-only speaker is doing is computation — an algorithmic matching of input to output. And certainly the “program” — the book with Chinese questions and answers — was written by people (programmers) who do speak Chinese and who do understand the questions and answers. But the English-only speaker in the room understands none of it. He is merely doing computation, matching input to output, without any understanding.
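The room reduces to a few lines of code. A toy sketch in Python (the two entries are illustrative, not Searle's own examples):

```python
# The clerk's whole job is to match an incoming slip against the rule book
# and copy out the paired answer. No step requires him to understand either
# string.

rule_book = {
    "你好吗？": "我很好。",        # "How are you?" -> "I am well."
    "天气好吗？": "今天晴朗。",    # "Is the weather good?" -> "It is sunny today."
}

def clerk(slip: str) -> str:
    """Match symbols to symbols and copy the paired answer; understand neither."""
    return rule_book.get(slip, "？")  # no matching rule in the book

print(clerk("你好吗？"))  # prints "我很好。" with zero understanding
```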
The Chinese people asking the “computer” questions naturally think that the “computer” understands the questions and gives answers. That is, they believe that the “computer” has intentionality — “it” can convey meaning. But of course the “computer,” the English speaker, has no “Chinese” intentionality. He can’t think about the questions or the answers. He is merely carrying out a mechanical act, matching symbols according to an algorithm. This is exactly what a computer does. The questioners attribute understanding and intentionality to the “computer.” They are wrong.
Searle’s conclusion: computation has no intrinsic intentionality, but only secondary intentionality imparted by the programmers. Computation is not thinking. Computation is a mechanical process, and nothing more. A computer cannot have a mind, because computation is not a mental act.
I’ll make it stronger: to the extent that something is computation, it is not mental, and to the extent that it is mental, it is not computation. What a computer does is precisely the negation of thinking. Computation is an act that has no intrinsic meaning pointing to anything other than itself. Without intrinsic meaning, computation cannot be a mental act. (There is one other characteristic of mental acts, qualia, that lacks intrinsic meaning, but raw sensory experience is not generally thought to be part of computation.)
The “Turing test” is nonsense. It is a measure of the ability of programmers to fool examiners into thinking that the purely mechanical process of computation is thought. The only thinking in a computer is the thinking imparted by the engineers and programmers who built and programmed the computer.
A computer qua computer cannot have a mind.
Computers are artifacts. They aren’t alive, and they don’t have souls. By soul I don’t mean a spooky mist that evaporates when we die. By soul I mean simply the classical meaning — the intelligible principle (the form) of a living thing. Mental acts are powers of souls, and rational mental acts (acts of the intellect) are powers of human souls. Souls can of course carry out computation of a sort; the soul is the form of a living thing, and many of the vegetative powers of living things, such as physiological feedback loops, are akin to input-output computation. But the intellect is precisely the power of the soul that is not computation. The intellect is intentional; it has meaning, which is intrinsic reference to other. Computation is defined by constraint to its algorithm: what is not programmed is not computation, and computation intrinsically lacks reference to other. The intellectual powers of the soul are intentional, and so they are not computational.
Humans are very clever. We make astonishing artifacts — planes that defy gravity and nuclear reactions that harness the energy of the sun and computers that seem eerily human (I still get spooked when Microsoft Word suggests that I use the active voice rather than the passive voice). But they are all artifacts. The only meaning they have is the meaning that the engineers and the users have assigned to them.
A soul is not a material artifact, and the rational soul is not material at all. It cannot be “assembled.” Creation of a rational soul is creation of an entirely different order. It is a power to create from nothing. Perhaps this is the reason that many otherwise thoughtful philosophers cling to absurd materialist theories of the mind. If the mind is material, then we could create it from matter. We could create a soul.
The Turing test isn’t a test of a computer. Computers can’t take tests, because computers can’t think. The Turing test is a test of us. If a computer “passes” it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls.
It is a fine irony that the first personal computer was an Apple.

Michael Egnor

Professor of Neurosurgery and Pediatrics, State University of New York, Stony Brook
Michael R. Egnor, MD, is a Professor of Neurosurgery and Pediatrics at State University of New York, Stony Brook, has served as the Director of Pediatric Neurosurgery, and is an award-winning brain surgeon. He was named one of New York’s best doctors by New York Magazine in 2005. He received his medical education at Columbia University College of Physicians and Surgeons and completed his residency at Jackson Memorial Hospital. His research on hydrocephalus has been published in journals including Journal of Neurosurgery, Pediatrics, and Cerebrospinal Fluid Research. He is on the Scientific Advisory Board of the Hydrocephalus Association in the United States and has lectured extensively throughout the United States and Europe.
