Irony Alert: Stephen Hawking Once Again Predicts AI Could Spell Doom for Humanity


Stephen Hawking’s news-making prophecies about Artificial Intelligence, a form of technology of which he is a notable beneficiary, are almost entirely devoid of scientific substance. Via Fox News:

The eminent British physicist Stephen Hawking warns that the development of intelligent machines could pose a major threat to humanity.

"The development of full artificial intelligence (AI) could spell the end of the human race," Hawking told the BBC.

This latest warning came on the heels of an interviewer’s question about his voice synthesis system, developed by Intel and a smaller company called Swiftkey. Swiftkey is known mainly for keyboard software for smartphones. The British company uses AI — in the sense of statistical models of character and word sequences — to help users avoid typos and to complete commonly used phrases on smartphones.

"Auto-complete" features on Android and Apple smartphones are old hat to most of us; Swiftkey further customizes smartphone experience by allowing users to streamline typing — eliminating "tapping" in favor of a typing "flow" that fits user preferences on mobile keyboards — as well as offering short-cuts to include emoticons and other commonly used short-hands to communicate on space-limited devices.

Hawking, who has Lou Gehrig’s disease (ALS), uses a Swiftkey-inspired system to help him produce synthetic speech more quickly. Hawking’s system uses tiny sensors on his cheek to convert twitches into spoken or typed words. His distinctive synthesized speech has become part of his trademark. Thanks to technology, the debilitating effects of ALS, formerly thought a death sentence, have not sidelined Hawking as they once would have done.

Producing text or speech guided by human intent, however, is not particularly groundbreaking work in AI. Most of us are already well aware of the manifest limitations of statistical models predicting what we "want" to say based on what we commonly say. The models are most likely to be wrong precisely when we’re making a new, perhaps important point. Swiftkey’s system of shortcuts and "Did you mean this?" predictions based on prior word sequences is no different. Helpful, yes. And very helpful indeed if you suffer from ALS. But evidence of a coming age of smart robots? Please.

It’s difficult to comment sometimes on these types of stories, because the emotional appeal and the narrative about human culture and technology far overshadow the actual scientific merit. You almost feel like a techno-Scrooge.

But science, in the end, is about getting at the truth. And the truth is, there’s no scientific, evidence-based path from simple statistical models like those used by Swiftkey to true, human-like AI. If there were, all the Big Data in the world and Moore’s Law would have ferreted it out by now. A major, unpredictable, scientific innovation (one is tempted to say "revolution") seems necessary. Perhaps Hawking is being coy, but given his starting point — the gee-whiz auto-complete of his Swiftkey application — his prognostications about the age of smart robots are fanciful, to put it mildly.

Few reporters would pay much attention if a famous AI scientist — say, a Marvin Minsky at MIT, or a Peter Norvig (now at Google) — suddenly began warning us all about a coming age of cold fusion, or "quantum gravity" based on a documentary he’d seen on the BBC, or what have you. Yet a famous astrophysicist like Hawking can weigh in on the future of AI, and we’re all ears.

It is interesting, I’ll admit. But it has nothing to do with science, and nothing to do with AI, in the end, either.

Image credit: Swiftkey.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
