
Artificial Intelligence: Good Aim, Wrong Target

[Image: Harrows bristle board bullseye]

Good aim, at the wrong target, is always a miss. This describes much of the current work in Artificial Intelligence: brilliant minds, clever programmers, amazing algorithms, all pointed at the wrong target with stupefying aim. Despite all that brilliance, cleverness, and code, someone will get hurt if we continue pursuing the type of AI now in vogue.

A few weeks ago, Google put out guidance for research on preventing harm from AI. This past week, the federal government did the same. And a new research center just opened at Cambridge to ponder the issues: How will we prevent harm from AI? How will we prevent self-driving cars from killing people (or, as I mentioned recently, too many people)? How will we care for those losing their jobs to increased automation and robots? How will we ensure fair distribution of the money generated? How can we prevent an advanced AI from going rogue, and what should we do if one does? Can we build in an “off” switch? And so on. The perils pile upon perils.

These are real issues. Even if Elon Musk and a few others have overstated the situation, AI can create far-reaching problems. We can build machines that harm humans. We’ve done it before. Nearly every technology humanity adopts can cause harm when misused or left unmonitored. This is why we have anti-lock brakes in our cars, kill switches on trains, and fuses in our homes, and why we require medical prescriptions for many drugs. But AI raises the stakes through speed and complexity: computers execute millions of instructions per second, far faster than we can follow. Nor can we predict the full effects of any software, let alone the hyper-complex, self-adjusting algorithms modern AI systems use. And, thanks to its ever-shrinking size and cost, AI will appear more and more in cars, toys, appliances, and even unexpected places. Its impact will surround and, maybe, overtake us. We will soon live in an AI-encased world. Some of those devices will likely cause little or no harm, but others — as the sad self-driving car death in Florida demonstrates — will.

So I agree with Google and the U.S. government and other researchers that we must monitor AI progress and work toward guidelines that protect humans from what these machines can, or could, do. But I also believe much of the problem comes from aiming at the wrong target. If we corrected our aim, many of the concerns would diminish while we still enjoyed AI’s benefits.

AI theorists consider what they call Artificial General Intelligence (AGI) the ultimate goal: the intelligence of an AGI would match or beat — if you believe Musk, Kurzweil, and the other true believers — human intelligence. For these theorists, AI’s recent successes, including Google’s DeepMind, IBM’s Watson, and Tesla’s self-driving cars, are no more than steps toward that end. Like all goals, however, the pursuit of AGI rests not just on a desire to see what we can accomplish, but on beliefs about what is. That is, the hope for AGI begins by failing to appreciate human intelligence, assuming it to be the accidental by-product — an emergent condition with the illusion of free will — of random changes locked in a struggle for survival. If human intelligence is the epiphenomenon of an ever-changing collection of complexly arranged chemicals, then, by all means, let’s see if we can do better. But if intelligence is the designed result of an engineered system, an exquisite multilayered composite exceeding any human-created artifact, then pursuing its replacement is not only a fool’s errand; it is likely dangerous and ill-conceived.

The misguided goals, the bad aim, of so much AI (though not all) arise from dismissing human uniqueness. Such AI becomes not a tool to assist humans, but one to replace them. Whether it replaces uniquely human abilities, such as making moral judgments, or squeezes humans out altogether, as some robotics proposals tend to assume, someone will get hurt. Re-aiming AI toward “Assisted Intelligence,” rather than replacement-directed “Artificial Intelligence,” would bring more benefit and remove the scariest scenarios. Our tools do not cause our problems; how we use them does.

Photo credit: Christian Gidlöf [GFDL, CC-BY-SA-3.0, CC BY 2.5, or Public domain], via Wikimedia Commons.

Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked both as a Principal Engineer and Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged and interested in Artificial Intelligence.


Tags: Computational Sciences, Research, Science, Views