Reading David Chalmers on the Coming "Singularity"

Image: David Chalmers delivering a talk at De La Salle University-Manila, March 27, 2012.

New York University philosopher David Chalmers is well known for critiquing the scientific materialist’s vision of explaining, someday and somehow, the phenomenon of conscious experience in purely material terms. Chalmers has argued (now famously) that a philosophical thought experiment is fatal to materialist science: “zombies” are conceivable, and hence logically possible. By “zombie” he means a human whose outward behavior perfectly mimics that of a conscious person, yet for whom the “lights are not on”: the zombie has no conscious experience.

In simple terms:

If there is a possible world which is just like this one except that it contains zombies, then that seems to imply that the existence of consciousness is a further, nonphysical fact about our world. To put it metaphorically, even after determining the physical facts about our world, God had to “do more work” to ensure that we weren’t zombies.

The Australian-born Chalmers made the zombie argument precise, using the machinery of modal logic, an abstruse formal field. The implication is that consciousness is a further fact apart from any conceivable set of material facts, and so consciousness can’t be reduced.

His position on the historic mind-body problem is known as property dualism — the dualism in this case being between the material body and the conscious mind. Property dualism is distinct from substance dualism, typically traced to Descartes though it goes back much further in the history of thought. On property dualism, the conscious mind need not be causally efficacious in producing action or behavior; the “further fact” of conscious experience may turn out to be epiphenomenal, that is, causally inert.

Chalmers’s argument thus managed to engage (if only partially) both traditional dualists in the tradition of Descartes and scientists willing to admit a continuing mystery about the conscious mind, even while resisting his dualist conclusions. The upshot of his critique was that materialism is too simple.

I studied with David Chalmers at the University of Arizona in the late 1990s. I reviewed his book The Conscious Mind: In Search of a Fundamental Theory. I’ve always found his position on consciousness compelling; at the very least I’ve never been convinced by any of the scores of criticisms and attacks it’s inspired over the years (typically by materialistic-minded neuroscientists).

And so I’ve been more than a little surprised that, of late, Chalmers has taken to speculating about the philosophical implications of a coming Singularity. His 2010 paper “The Singularity: A Philosophical Analysis” is a bold attempt to grant Singularitarians and “superintelligence” believers their contentious assumption that Strong AI is possible — and coming relatively soon.

His argument, in a nutshell, is that we’ll eventually have general machine intelligence, and that this licenses the belief that we’ll have greater than human intelligence (because a smart enough AI will engineer better copies of itself), and that we’ll then have superintelligence. From here, human nature ceases to be the most important intelligent feature of the universe, and we get a point of no return in the affairs of man and machine: the Singularity.

Chalmers, prudentially, puts the date of this fantastic state of affairs at around one hundred years from now (prudentially, because he and the rest of us won’t be around for embarrassing follow-ups).

I think his first premise, that we’ll eventually have artificial general intelligence (AGI) or, more precisely, something slightly beyond it to kick off the intelligence explosion toward the Singularity, is debatable. Without it, no Singularity. But what’s the principled argument for that premise?

Let me start with what I feel is obvious: I don’t think that speculating that Strong AI (or so-called “AGI”) will be here “in a hundred years or so” represents much of a philosophical position worth arguing over. Philosophy that starts with a premise aimed at predicting the future world we’ll be living in, generations from now, seems pointless. Imagine if someone of the stripe of, say, Paul Churchland did the same for the problem of consciousness: “Sure, these objections seem compelling, but give neuroscience a hundred years and they won’t.” Who would accept that?

Philosophy is more properly aimed at conceptual issues we can currently characterize; it does not move forward by helping itself to “future progress,” a notion that is immediately suspect for the simple reason that speculation about the future of technology is almost invariably wrong, often in ways that make the earlier predictions look positively silly.

As Popper noted, predicting technological innovation is impossible — if we could predict an innovation, we’d already understand how to build it, and so there would be no logical space between the prediction and the actual technology. Whenever we get such a prediction right, it’s either sheer luck (no intervening innovations happened to change our course) or it’s essentially already possible to make the thing happen (no true novelty). The AI discussion fits this pattern perfectly. So on what basis is the hundred-years prediction made? Chalmers himself concedes that it’s predicated on pragmatic considerations, i.e., keeping readers interested. That’s science fiction, not philosophy or reasoned debate.

Even worse than the folly of prediction is that ignoring principled objections does a great disservice to philosophy — and it hurts computer science, too. Possible “defeaters,” as Chalmers calls them, are (shall we say) exogenous. Who cares about considerations like politics or a nuclear war? This is like Kierkegaard’s famous retort that he’ll marry so-and-so, unless, when he leaves the house, a tile falls from the roof and strikes him dead.

But what about philosophical analysis? Why is it obvious that AI will succeed, eventually? What’s the argument? Since Chalmers doesn’t develop this part of his paper much, I’ll take his likely rejoinder to be that the field has already made progress toward Strong AI, and so we can extrapolate. Yet a different interpretation of the facts is that it has made progress only on problems that admit of computational representation and processing, problems that importantly presuppose a particular, simplistic set of epistemological conditions.

This admittedly cursory statement of a principled objection explains the facts quite well: for one, why progress on non-topic-constrained dialogue (open conversation) has been almost nil. The Loebner Prize tests the conversational abilities of programs against the Turing Test; look at the latest Loebner Prize winner and compare it, if you will, to ELIZA, the 1960s Rogerian-therapist simulation that was all the rage in the early days of AI. Where’s the progress?
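To see how little machinery this style of program involves, here is a minimal ELIZA-style responder in Python. It is an illustrative sketch, not Weizenbaum’s original script: the rules, reflection templates, and the respond function below are hypothetical stand-ins for the general technique of surface pattern matching with canned replies.

```python
import random
import re

# A handful of regex patterns mapped to canned "reflections": the heart of
# the ELIZA technique. There is no model of meaning here, only surface
# pattern matching on the user's words.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]

# Fallbacks when nothing matches: content-free prompts to keep the user talking.
DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Return a canned reflection of the user's utterance."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I am worried about the Singularity"))
    # e.g. "Why do you think you are worried about the Singularity?"
```

The sketch makes the point concrete: nothing in a program of this kind models meaning; it matches surface patterns and echoes templates back at the user.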

Kurzweil-like arguments that we’re making progress are undercut by such cases — the only cases that matter! Who cares about chess, voice recognition, face recognition, and other data-driven successes that show increases in F-measure performance? Most of the problem (it turns out) can be brute-forced by providing more training data (even the differences among training algorithms don’t matter much compared to hardware and data). General intelligence doesn’t appear to be so reducible (as evidenced by all of our attempts to reduce it), so those cases are irrelevant to the current discussion. This point can be made as forcefully and as compellingly as one cares to take the time to write it out.
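For readers unfamiliar with the metric: the F-measure, in its standard balanced form, is just the harmonic mean of precision and recall on a benchmark task,

$$
F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}},
$$

a perfectly good score for a narrowly defined classification problem, and nothing more.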

AI has always been too ready to dismiss philosophical objections. I worked on supervised machine learning approaches to natural language processing for over a decade (SVMs primarily, i.e., classification techniques, but I also worked on logistic regression models and a host of graphical models like HMMs and CRFs — which algorithm you end up using depends on the problem you’re trying to solve). And it’s mostly common knowledge that “learning” approaches (that is, induction) don’t carry us very far toward general intelligence — conversational ability, say, or empowering a robot to move around on a busy street corner.
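For concreteness, here is a minimal sketch of the kind of supervised pipeline just described, using scikit-learn (assumed available). The toy texts and labels are hypothetical; real systems train on far larger annotated corpora, but the shape of the method is the same.

```python
# A minimal supervised text-classification pipeline: bag-of-words features
# feeding a linear SVM. The tiny dataset below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "the team won the championship game",
    "the striker scored in the final minute",
    "the senate passed the budget bill",
    "the president signed the new legislation",
]
train_labels = ["sports", "sports", "politics", "politics"]

# The classifier can only generalize over regularities present in its
# training data; that is induction, nothing more.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["the striker scored a stunning goal"]))  # likely ['sports']
```

Everything such a model will ever “know” is whatever regularities sit in its labeled training data. Nothing in the pipeline even gestures at open-ended conversation or at getting a robot across a busy street corner.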

Representing past experience is utterly inadequate; something like “abduction” is clearly required, yet abduction is not formalizable as far as we know. At any rate there is no blueprint for converting dataset analysis (induction) into true abduction, and no one who is honest has the faintest clue what such a conversion would even look like. It’s simply a mystery.

Hence the “hundred years” prediction Chalmers makes is philosophically troubling, both because it ignores the treachery of technological prediction and because it ignores known problems in AI that currently have no solution and that, a fortiori, researchers have no clue how to go about solving. The case that general intelligence is not a computable function is quite strong.

The key here is the continuing mystery hovering around what was once called the Frame Problem — a difficulty Daniel Dennett himself has recently conceded is as alive and well today as it ever was — yet Singularity enthusiasts apparently presuppose that some vague notion of technological progress sweeps all this under the carpet. But how? Surely just helping oneself to “progress” can’t prop up the argument.

The matter sticks in my craw, specifically because rigorous discussion about the possible inherent limitations of Turing machines in replicating (I/O-replicating, that is) general human intelligence — say, passing the Turing Test — is treated as dispensable, without bothering to address the very real challenges the field continues to face, and the very real possibility that there are principled reasons not to be so sanguine about predicting an eventual success.

Finally, the distinction between simulating intelligence after the fact, as it were, and generating it afresh is relevant to the debate. I don’t see much of the latter in AI’s actual performance (i.e., generating interesting, dynamically changing conversation rather than simulating one already envisioned).

A very reasonable position is that something other than Turing machines undergirds our ability to generate natural language in social situations like everyday conversation. A corollary of this reasonable position is that the Singularity is full of hot air. My guess is that in a while all of this debate will go away — right around the time AI finds itself in another self-inflicted winter, brought on by indulging in excessive hype.

Image: David Chalmers, by James Flux [GFDL or CC BY-SA 3.0], via Wikimedia Commons.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
