Transhumanist Claims Aside, Enhancing Human Intelligence Isn’t on the Horizon


Following on what I wrote here Friday, “A Transhumanist Asks, ‘Why Not Be Superheroes?’,” I should say more about the point I made there: while intelligence enhancement is what gets everyone excited and riled up about Transhumanism, it is, for better or worse, the least likely thing technology can actually bestow.

In a recent (and lousy) book I’m reading, The Artificial Intelligence Revolution: Will Artificial Intelligence Serve Us Or Replace Us?, author Louis A. Del Monte claims that AI research is producing implants that will enhance “memory, learning speed, and overall intelligence.” Really? If we help ourselves to that much rhetoric, it’s no wonder the debate has gone off the rails!

First of all, human memory is not computer storage. In fact, people should stop calling computer storage “memory,” since that just confuses everybody. Michael Egnor has written about this at ENV: any simplistic equation of memory with “access” to “knowledge” is just drawing on buzzwords in place of science (yet doing so in the name of science), and fails to make contact with actual biological memory.

The salient discussion is about the differences between human memory and computer storage. Once we ignore these differences in favor of the storage metaphor, we’ve oversimplified the problem and are wasting our time. The implication is obvious: we can’t enhance human memory if we don’t understand how it works, and a fortiori we can’t enhance it with storage if it isn’t storage. This is a major point to grasp. The research here, like much of the Transhumanist agenda, is mostly hype.

Second, “learning speed” is a useless term. Again, the computational metaphors are infecting a much deeper scientific discussion. Learning what? Human learning is epistemologically complicated. We don’t just learn simple patterns but huge, complex, interconnected sets of ideas that help us make sense of the world.

Hence, like “storage” for “memory,” “machine learning” for actual learning simplifies too much. Think of it this way: how would we speed up learning in a healthy human? How do we parametrize that problem? What’s the target? Learning about the recent history of Western civilization, say, by reading Paul Johnson’s Modern Times?

Yet if a computer can’t interpret natural language, which it can’t, how can such learning occur? So we have to ask, prior to “speeding up” some unknown quantity of “learning,” what exactly do we mean when we say someone “learns”? It’s not merely induction. In fact, there is no computational framework for epistemologically complex learning at all — if we could do that, we could solve the frame problem.
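To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from Del Monte’s book) of what “learning” actually amounts to in machine learning: fitting numeric parameters to data by gradient descent. Note that the “learning speed” in this setting is literally a tuning knob, the learning rate, which has nothing to do with the epistemically rich learning a human does when reading a history of Western civilization.

```python
# A toy instance of machine "learning": fitting a line y = w*x + b to data
# by gradient descent on squared error. The model "learns" only in the
# sense that two numbers are nudged toward a statistical fit.

def fit_line(xs, ys, lr=0.01, steps=2000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # "learning speed" here is just the step size lr, a bare number
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # generated by y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges near w ≈ 2.0, b ≈ 1.0
```

Doubling `lr` would “speed up learning” in the only sense the field can parametrize: faster convergence of a numeric fit. That is the gap between induction over patterns and learning in the human sense.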

In fact, almost anything we care about regarding our minds and their action in the world has little to do with computation. The ultimate reductio in the trio mentioned in The Artificial Intelligence Revolution is “overall intelligence.” That, surely, is the largest placeholder for an unknown one can put into print with a straight (or even crooked) face.

“Overall intelligence” enhancement presupposes that we have “hooks” into overall intelligence that allow us to reduce it to some computational representation. Then presumably we can swap in a faster CPU, say.

But the reduction to computation, if it were accurate, would enable us to pass the Turing test, because surely overall intelligence, if that phrase means anything at all in the human case, must involve understanding natural language. But computers can’t do that.

Running this backward, we’re left with not even the foggiest notion of how to enhance overall intelligence. So the entire project surreptitiously (or not) simply assumes the blithe confidence of AI enthusiasts like Kurzweil and his Transhumanist followers. We’re off and running, speeding up learning, bumping up our paltry memories (which we don’t even understand yet), and enhancing our overall intelligence.

It’s all rubbish. We have to start with basic, clear-headed research in neuroscience and cognate fields. What we’ll find — what we already know — is that everywhere there’s evidence of a mind, there’s some buried mystery that doesn’t fit well into discussions of microprocessors.

That’s why I don’t think we’ll be enhancing intelligence anytime soon. Ironically, the Transhumanists are making the most smoke here but casting the least light.

Image: The Chess Game, by Sofonisba Anguissola [Public domain], via Wikimedia Commons.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.


Tags

Continuing Series, Mind and Technology, Science, technology, Views