
Watson’s Goof — What We Should Really Fear from AI


It was a telling response, revealing the idiot inside the savant. But the game went on. As the final round ended, Ken Jennings, one of the champions IBM’s Watson machine bested, jokingly welcomed “our new computer overlords.” But will a new supreme intelligence arise from the likes of Watson, overtaking and perhaps even subjugating humans? Or, in fearing that one thing, have we missed something more important?

IBM created Watson for one purpose: win at Jeopardy. IBM had conquered chess in an earlier “Grand Challenge” when its Deep Blue machine defeated Garry Kasparov in 1997. But Jeopardy is not chess: Questions are answers and answers are questions. The categories only loosely limit the answers. And, especially when playing champions, speed with the buzzer matters more than almost anything else. Watson had to respond quickly.

Ken Jennings and Brad Rutter, in February 2011, joined the Jeopardy team at the Thomas J. Watson Research Center in Yorktown Heights, NY, to meet Watson’s challenge. As the match began, Alex Trebek, his hair a salt-and-pepper weave after 25 years of hosting Jeopardy, stood in characteristic position with his back to the glowing blue game board of TVs. Ken and Brad patiently waited behind their podiums. Watson, it could be said, stood as well: A flat-screen TV, hung vertically just above the center podium, swirled with a glowing, multi-color animation mimicking IBM’s “smarter planet” logo. Alex read the categories to begin.

Over the three days, Watson performed much like a human expert, answering correctly about 90 percent of the time. But it was Watson’s mistakes that were most revealing. For the Final Jeopardy question on the second day, Watson suggested that Toronto, Canada, was a U.S. city. On the final day, Watson similarly stumbled by suggesting “Chemise” (a type of dress) for a clue from the “Also on Your Computer Keys” category. When Watson was right, it was right. But when Watson was wrong, it was not even in the ballpark. It is these small, but thoroughly inhuman, mistakes that reveal the real concerns we should have about Artificial Intelligence (AI).

I’ve suggested previously that AI computers work by following a map. The analogy breaks down if applied too literally, but it remains conceptually correct: Engineers create and carefully tune existing AI systems to do well in one specific area or domain. Watson played Jeopardy. AlphaGo, from Google’s DeepMind AI company, plays Go. Other projects can find a cat in a photograph or fraud in a list of transactions. Each appears to be an idiot savant: skilled beyond us at one thing and lacking even a child’s ability at anything else. Why is that? Is it a limit of AI? Or, as some worry, will a truly general, aware machine arise from these beginnings?

AI systems fall into two broad categories: those relying mostly on statistics and those relying mostly on models that carry meaning. (Neither approach is pure.) AI research oscillates between the two. The first AI machines were computationally weaker than a cheap, modern cell phone, and statistical AI requires massive computing power. So those early machines relied on models, computerized versions of what people thought might be taking place inside our heads. Early fears and hype aside, the lack of results led to AI’s dark years, decades when studying AI was not cool. While researchers, notably those at Paul Allen’s AI research institute in Seattle, continue to chip away at those hard problems, nearly all the recent successes in AI, including Watson and AlphaGo, have come because enough cheap computing power now exists to enable the statistical methods. DeepMind tuned AlphaGo, which relies mostly on statistically driven neural networks, with millions of games played across thousands of computers. The original Watson required so much computing power that Jeopardy had to come to New York because Watson could not come to Los Angeles. The same advances enabling AlphaGo have since reduced Watson to the size of a “pizza box.”
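To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The toy task (guessing whether a clue is about geography), the word list, and the training examples are all invented for illustration; real systems such as Watson are vastly more elaborate. The first approach encodes a hand-built model of what matters; the second writes no rules at all and simply counts what it has seen.

```python
from collections import Counter

# --- Approach 1: a model with "meaning" ---
# A person encodes, by hand, what they believe matters: explicit rules.
GEOGRAPHY_WORDS = {"city", "country", "river", "capital", "border"}

def rule_based_guess(clue: str) -> bool:
    """Return True if any hand-picked geography word appears in the clue."""
    return any(word in GEOGRAPHY_WORDS for word in clue.lower().split())

# --- Approach 2: a statistical model ---
# No rules are written; the machine counts how often each word appeared
# in labeled examples and scores new clues by those counts.
training = [
    ("this city on the St. Lawrence is Canada's largest", True),
    ("this river forms part of ten U.S. state borders", True),
    ("this playwright penned Hamlet", False),
    ("this key sits between Ctrl and Alt", False),
]

counts = {True: Counter(), False: Counter()}
for clue, label in training:
    counts[label].update(clue.lower().split())

def statistical_guess(clue: str) -> bool:
    """Score a clue by how often its words were seen under each label."""
    words = clue.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return scores[True] > scores[False]

print(rule_based_guess("this capital straddles two continents"))    # True
print(statistical_guess("this city hosted the 2010 Winter Games"))  # True
```

The statistical guesser gets better only by being fed more labeled clues and more computing power, which is why cheap hardware, not new insight into thinking, powered the recent wave of successes.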

Idiot savants, such as Watson or AlphaGo, cannot handle the real world. AlphaGo is especially limited, even if the engineering behind it is useful in other areas. (IBM is reusing Watson to answer medical questions, for example.) These machines recognize that which has been, but fail with the unexpected. (Lee Sedol defeated AlphaGo in one game by playing in ways the machine did not expect.) They will not evolve into a self-aware machine; they are not the first step along some progressive ladder toward general intelligence.

IBM’s researchers call Watson an artificially intelligent computer, a “thinking” machine. Trouble lurks within that definition and in how Watson works. No one agrees on what intelligence means, much less what it means for a machine to be intelligent. When asked how our brains work, Dr. John Medina, Professor of Bioengineering at the University of Washington’s School of Medicine, has responded bluntly: “We have no idea…most of it is spooky.” Artificially intelligent computing machines are just machines — machines someone could assemble from light switches, paper, and wire, given enough time, patience, and space. But, unlike that contraption, these machines are quiet. No gears whir. No levers clank. No switches snap. Their silence and speed, as if they were minds made from silicon and metal, create an aura of intelligence.

Is there nothing then to fear? If there is no impending computer overlord, should we be concerned?

On May 6, 2010, the Dow Jones Industrial Average fell nearly 1,000 points, erasing roughly $1 trillion in market value, in about twenty minutes. This past April, authorities finally arrested Navinder Singh Sarao, the man they claim was responsible for the crash. What had Sarao done? He electronically flooded the market with orders that triggered the high-frequency trading computers — essentially, AI-like machines used by nearly all financial firms — leading to the huge drop in prices, from which he then rapidly profited. The machines, much like Watson proclaiming Toronto, Canada, to be a U.S. city, had wandered into territory uncharted by their algorithms and wildly failed. Thankfully, failsafes kicked in, humans recognized that something funny was going on, and the market recovered. The full underlying causes of the 2010 crash include a toxic mix of human greed, computers, and complex regulations. Regardless, the crash could not have occurred without the speed and algorithms of the high-frequency trading machines.
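As a toy illustration of that feedback loop (every number and rule here is invented; no real trading system works this simply), imagine automated traders that all read the same signal of selling pressure and respond by selling, which deepens the very signal everyone else is reacting to:

```python
# Hypothetical sketch: a burst of (spoofed) sell orders appears, the
# algorithms pile on, and the price slides faster with every tick.
price = 100.0
pending_sells = 5_000  # the initial flood of spoofed orders

for tick in range(8):
    if pending_sells > 1_000:
        pending_sells += 2_000              # algorithms add their own sells
    price -= price * pending_sells / 1_000_000  # price sinks under the pressure
    print(f"tick {tick}: pending sells {pending_sells:,}, price {price:.2f}")
```

Nothing in the loop is malicious or broken; each step follows its instructions exactly, yet the combined behavior races somewhere no one intended, and at machine speed.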

The Flash Crash displayed the characteristic machine behavior: speed. We use computers because they do what we ask many times faster than we could do it ourselves. We hope that the instructions we implanted, that slice of our own mind, are sufficient to get the results we’re after. But we can never know. Software complexity is such that, except for the smallest, most trivial programs, we can never know definitively whether a program will work or not. By the time we’ve tested the software enough to know, we’ll have spent far more time and money than if we had just solved the problem ourselves. Actually, the problem is worse than that: Some software, including recent advances in AI, can never be proven correct no matter how much time or money we spend.
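A quick back-of-the-envelope sketch (the figures are assumed purely for illustration) shows why exhaustive testing is hopeless for anything beyond toy programs:

```python
# How long would it take to brute-force test every possible input?
def years_to_test_exhaustively(input_bits: int, tests_per_second: float) -> float:
    """Years needed to try all 2**input_bits inputs at the given test rate."""
    total_inputs = 2 ** input_bits
    seconds = total_inputs / tests_per_second
    return seconds / (60 * 60 * 24 * 365)

# Even a trivial function of two 64-bit numbers, tested a billion times per
# second, would take on the order of 10**22 years to check completely.
print(f"{years_to_test_exhaustively(128, 1e9):.3e} years")
```

And that is only the testing problem; for some programs, including much of modern AI, no amount of checking can settle the question of correctness at all.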

These two challenges — that computers will go wrong, wildly and quickly, and that we can never know definitively when or how they’ll go wrong — suggest that we need proper controls on their use. It matters little, really, if Watson confuses a city or two while playing Jeopardy. It matters a lot if a machine mishandles our power grid or a fleet of self-driving cars.

AI has the potential to provide significant benefits: Machines can scan mammograms for signs of cancer that overworked doctors might miss. They can spot complex fraud attacks. They can sift hundreds of thousands of documents to answer a question. What they cannot do is replace the human mind. In these uses and more, we will continue to require humans, the only beings capable of truly creative insight and of handling the unexpected, to stand guard over the machines. What we should fear, then, is not some AI overlord evolving out of a game-playing machine, but the misuse and abuse of computers by people who are either too trusting of their own creations or too greedy to care.

Photo: Ken Jennings, Watson, and Brad Rutter, via Wikipedia.

Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he has worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he has remained engaged with and interested in Artificial Intelligence.


Tags

Computational Sciences, Science, technology, Views