I, Robot — You "Pet Labrador"? More Wisdom from Elon Musk

[Image: An AIBO ERS-7 robot dog following a pink ball held by a child.]

The Daily Mail reports:

Robots will use humans as pets once they achieve a subset of artificial intelligence known as ‘superintelligence’.

This is according to SpaceX founder Elon Musk, who claims that when computers become smarter than people, they will treat them like ‘pet Labradors’.

His comments were made in a recent interview with scientist Neil deGrasse Tyson, who added that computers could choose to breed docile humans and eradicate the violent ones.

What’s interesting about Musk’s worry over a world run by superintelligent machines is how little he’s actually doing about it. I mean, taking him at his word. Sure, he donates to wacky projects like the Future of Life Institute, but billionaires in Silicon Valley are expected to do that. It’s like Hollywood stars getting enraged about human rights issues in Africa, or beating the drum about global warming.

If Musk really thinks that in the supposed coming age, smart machines “may conclude that all unhappy humans should be terminated,” and they’ll “get rid of the slow humans” and treat us like pets and so on, then he’s halfway to the Unabomber’s mission. I’m not suggesting that Elon Musk shows any sign of moving to a cabin in Montana and becoming murderous over his fears of techno-dystopia, but when I rub my eyes and read his actual words, the logical consequences of his view seem to demand a fairly radical response.

Futurist Ray Kurzweil, for instance, the ringleader in this circus, puts true AI at 2029 and the singularity at 2045. For those of us with young children, that means the world we’re sending them into will contain superintelligent and quite possibly malevolent machines. On a scale of parental concerns, McDonald’s for lunch is nothing compared to getting exterminated by the cool logic of a future computer — and one that Daddy helped to build!

The Unabomber reference isn’t incidental. Bill Joy, former Chief Scientist of Sun Microsystems and one of the original worrywarts about the rise of smart machines, reportedly bumped into Ray Kurzweil at a George Gilder Telecosm conference in Lake Tahoe in the fall of 1998. Kurzweil’s book The Age of Spiritual Machines had just come out, and over a drink at the bar, he apparently put the fear of — what? — the fear of Superintelligence into Joy, who later penned one of the most famous apocalyptic rants ever, published (where else?) in Wired in 2000 (“Why the future doesn’t need us”). Seemingly everyone read it, Joy was the talk of the town, offers for book deals piled up, and the future, in spite of not needing us, turned out to be exceedingly bright for Joy’s retirement. That is, until he packed up and left for a remote part of Colorado to live in a log cabin.

Exit Bill Joy (he probably likes to ski, anyway). My point here is that Joy quotes from the Unabomber manifesto about the dangers and evils of technology, and the threat to humanity from increasingly intelligent machines. Kevin Kelly, co-founder of Wired, also quotes the Unabomber (a.k.a. Ted Kaczynski) in his 2010 book What Technology Wants, a lengthy discussion of how technology is getting smarter and smarter, but in a feel-good, evolutionary kind of way. Kelly’s not the fear-mongering type, though he espouses many of the same confusions.

Back in the present, the lunatics are running the asylum. This is after a brief hiatus between roughly 2001 and 2007, when a researcher was reluctant even to mention “Artificial Intelligence” out of fear of embarrassment. Seriously — in my days with a company funded by DARPA, we always described our technology as “machine learning” or “information extraction,” since any reference to AI was tantamount to declaring that it didn’t work. Concerns about global warming were then at their height. All the fear got funneled into Florida disappearing into the Atlantic because we couldn’t stop driving SUVs.

But AI suddenly came back into the spotlight, certainly by 2009, and by 2015 the seemingly unthinkable has happened — people not living in log cabins are starting to sound an awful lot like the old cranks. Musk? Bill Gates? Hawking?

But the trend here is to shoot off a ten-million-dollar check to an institute working on “friendly AI,” like the Machine Intelligence Research Institute (MIRI) in Berkeley, where computer science folks with a philosophical bent and a love of science fiction can draw reasonable salaries. That isn’t such a bad idea, maybe, but the actions don’t really match the staggering seriousness of the concerns, taken at face value.

Kelly, after distancing himself (as he should) from the Unabomber’s actions, nonetheless had the courage to point out that if one really believed in the apocalyptic vision of our future, then drastic actions must follow, morally and logically. Throwing dollars at feel-good think tanks is not enough. What’s needed is to stop the robots, or the Dr. Frankensteins hard at work building them. But this view is the reductio ad absurdum of the whole game, of course. We have real things to worry about today. Elon Musk’s concern that our computers will turn us into pet Labradors is hardly one of them.

Let’s state the obvious: We are terrible at predicting the future. If you listened to futurists of not long ago, by now we’d have nuclear-powered toasters, and ballistic missiles would deliver our mail overseas. Yet when it comes to prognosticating, we can’t resist. Our brains themselves are wired to perpetually predict the next sequence in the stream of events known as life.

If we could, by presidential fiat maybe, or by a shame campaign on the Internet, make predicting the future past reasonable limits seem like the alchemy it is, the superintelligence balloon would pop, and rather abruptly.

All the hot air from Musk and others is based on the faulty premise that we’ve got a handle on what the world will look like, “if we don’t do something,” a decade or more from now. Some long-range predictions we have to make — like, say, those pertaining to a nuclear Iran — but many have the speculative and self-stimulating feel of nuclear toasters. The difference between geopolitical “What ifs” based on sound evidence and speculation about superintelligence should be obvious.

I went home recently to visit my parents, who are retired and would presumably have lots of time to fret about Superintelligence. I asked my mother if she worried about computers becoming too smart. She smiled and made a silly face (she knows her son works on this type of thing — whatever it is). Finally she said, “No, not at all. But I do worry that people spend too much time on the Internet. They should get out and take a walk in the fresh air, too.” I couldn’t have said it better myself.

Image by Stuart Caie (Flickr) [CC BY 2.0], via Wikimedia Commons.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.

Tags

Mind and Technology, Science, technology, Views