It’s Not Easy Being a Materialist

P.Z. Myers and I have been discussing this question for a while: is the brain sufficient for the mind? In everyday experience, the brain is clearly necessary for the mind; strokes and ethanol affect the brain and alter the mind. But necessity is not sufficiency. Is the brain alone, just matter, entirely sufficient for the mind? I think the mind needs an immaterial cause, like the soul. Myers doesn't.
How, from a scientific standpoint, could we resolve our disagreement? We would have to show, empirically, whether matter alone could, under the right circumstances, give rise to a mind. This is an experimental question, and it turns on the ability to create artificial intelligence (A.I.). If we could build machines with a first-person ontology, that is, genuine self-awareness, we could show conclusively that matter alone is sufficient to cause the mind. A conscious computer would have a mind that emerged from matter, and Myers would be vindicated. If we can't create A.I., my viewpoint would seem more credible.
How would we know that a computer had a conscious mind?
Alan Turing, in 1950, proposed a test for whether a machine can think. In the Turing test, an investigator would interact with a person and a machine, but would be blinded as to which was which. If the investigator couldn't tell which one was the person and which was the machine, it would be reasonable to conclude that the machine had a mind like the person's, and therefore that it was conscious.
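To make the protocol concrete, here is a minimal sketch of one blinded trial, in Python. Everything in it is a hypothetical stand-in: the `interrogator`, `human_reply`, and `machine_reply` callables are placeholders, not real systems.

```python
import random

def turing_test_trial(interrogator, human_reply, machine_reply, questions):
    """Run one blinded trial: the interrogator reads two anonymous
    transcripts and must guess which respondent is the machine."""
    pair = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(pair)  # blind the assignment of the A/B labels
    labeled = dict(zip("AB", pair))  # {"A": (kind, reply_fn), "B": ...}
    transcripts = {
        label: [(q, reply_fn(q)) for q in questions]
        for label, (kind, reply_fn) in labeled.items()
    }
    guess = interrogator(transcripts)  # interrogator returns "A" or "B"
    return labeled[guess][0] == "machine"  # True if the machine was identified
```

A real test would run many such trials: if the interrogator's success rate stays near 50 percent (chance), the machine passes; a reliably better-than-chance identification rate means it fails.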
Advocates of A.I. are passionate about their science. The transhumanist Ray Kurzweil is probably the most prominent proponent of the view that A.I. is possible, and even inevitable. He has written extensively on the scientific, philosophical, and cultural implications of A.I. His three most recent books are The Age of Spiritual Machines: When Computers Exceed Human Intelligence (1999), Fantastic Voyage: Live Long Enough to Live Forever (2004), and The Singularity Is Near: When Humans Transcend Biology (2005). A.I. is, for many, a scientific eschatology.
Yet things have not gone well for A.I. After a half-century of remarkable advances in computer technology, no computer has passed the Turing test. No computer has, by general consensus, a mind. Not even close.
Many scientists and philosophers suggest that A.I. is not even theoretically possible. John Searle, a leading philosopher of mind, has proposed a now-famous thought experiment called the Chinese Room. Here's my version:
Imagine that P.Z. Myers went to China and got a job. His job is this: he sits in a room, and Chinese people pass questions, written on paper in Chinese, through a slot into the room. Myers, of course, doesn’t speak Chinese. Not a word. But he has a huge book, written entirely in Chinese, that contains every conceivable question, in Chinese, and a corresponding answer to each question, in Chinese. P.Z. just matches the characters in the submitted questions to the answers in the book, and passes the answers back through the slot.
In a very real sense, Myers would be just like a computer. He's the processor, the Chinese book is the program, and the questions and answers are the input and output. And he'd pass the Turing test: a Chinese speaker outside the room would conclude that Myers understood the questions, because he always gave appropriate answers. But Myers understands nothing of the questions or the answers. They're in Chinese. Myers (the processor) has syntax, but he doesn't have semantics; he doesn't know the meaning of what he's doing. There's no reason to think that syntax (a computer program) can give rise to semantics (meaning), and yet insight into meaning is a prerequisite for consciousness. The Chinese Room analogy is a serious problem for the view that A.I. is possible.
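The processor-program analogy can be made literal. Below is a toy sketch in Python: the dictionary stands in for the rule book, and the two entries are invented placeholders, not anything from Searle's paper. The point is that the program manipulates character strings it never interprets.

```python
# A toy Chinese Room: a pure lookup table maps each question string to a
# canned answer string. The entries are invented placeholders.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天星期二。",  # "What day is it?" -> "It's Tuesday."
}

def chinese_room(question: str) -> str:
    """Match the incoming characters against the book and pass the
    corresponding answer back through the slot: pure syntax."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."
```

The function answers "appropriately" without any representation of meaning anywhere in it, which is exactly the gap between syntax and semantics that the thought experiment is meant to expose.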
But imagine that Searle is wrong, and artificial intelligence could be created after all. Imagine that teams of the best computer scientists, working day and night for decades, finally produced a computer that was aware of itself. A conscious computer, with a mind! Then, finally, P.Z. Myers and I could agree on something. Myers would be right. If a computer had a mind, we could infer two things:
1) Matter is sufficient, as well as necessary, for the mind. The mind is an emergent property of matter.
2) The emergence of mind from matter requires intelligent design.
It’s not easy being a materialist.

Michael Egnor

Senior Fellow, Center for Natural & Artificial Intelligence
Michael R. Egnor, MD, is a Professor of Neurosurgery and Pediatrics at the State University of New York, Stony Brook, has served as Director of Pediatric Neurosurgery, and is an award-winning brain surgeon. He was named one of New York's best doctors by New York Magazine in 2005. He received his medical education at Columbia University College of Physicians and Surgeons and completed his residency at Jackson Memorial Hospital. His research on hydrocephalus has been published in journals including the Journal of Neurosurgery, Pediatrics, and Cerebrospinal Fluid Research. He is on the Scientific Advisory Board of the Hydrocephalus Association in the United States and has lectured extensively throughout the United States and Europe.
