Artificial Intelligence Has a Morality Problem


If you ever get hit by a self-driving car, it may be my fault. It will not be only my fault, but I may have contributed to your demise. I’m sorry about that.

My contribution to your unfortunate death came from playing a round or two of MIT's Moral Machine. MIT is using the machine to "crowdsource" opinions on how self-driving cars should respond to possible moral dilemmas.

The Moral Machine is an interactive version of the Trolley Problem, a thought experiment first introduced in 1967 that poses an artificial moral conundrum meant to tease out our values. The Trolley Problem begins with a runaway trolley racing toward you. Five people lie tied to the tracks. You have access to a lever that will route the trolley onto another track, but, just before you flip the lever, you notice someone is tied to the side track as well. This creates the conundrum: Should you do nothing and let five people die, or act and intentionally kill one person? Either way, someone dies through your action or inaction.

The Moral Machine expands the variables of the Trolley Problem but keeps the basic binary choice and outcome. It presents to you, as a disengaged observer, 13 scenarios a self-driving car may face. In each, someone, or several someones, will get hurt, possibly killed. The victims may be in the car or crossing the road in front of it. They may be old or young, honest or criminal, crossing with the right-of-way or jaywalking, human or animal. The only guarantee is that someone will get hurt, probably killed. And each scenario presents just two choices.
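
Stripped of its artwork, each of those scenarios amounts to nothing more than a record like the sketch below. This is an illustrative Python mock-up, not MIT's actual data format; the field names are my own invention. The point is how little survives: two lists of victims and a single choice between them.

```python
from dataclasses import dataclass

@dataclass
class Victim:
    """One potential casualty in a Moral Machine-style scenario (hypothetical fields)."""
    age_group: str   # e.g., "child", "adult", "elderly"
    species: str     # "human" or "animal"
    lawful: bool     # crossing with the right-of-way, or jaywalking
    in_car: bool     # passenger in the car, or pedestrian in the road

@dataclass
class Scenario:
    """A single dilemma: the car will harm one group or the other."""
    outcome_a: list[Victim]   # who gets hurt if the car holds its course
    outcome_b: list[Victim]   # who gets hurt if the car swerves

def record_choice(scenario: Scenario, pick: str) -> str:
    """All the Moral Machine collects from the player: the letter 'A' or 'B'."""
    assert pick in ("A", "B"), "there is no third option"
    return pick
```

However the scenario is dressed up, the only data that comes out of it is that single letter.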

The problems the Moral Machine presents are caricatures of real moral problems. The program reduces everything to a question of who gets hurt. There are no shades of gray or degrees of hurt. It is, as is so often the case with computers, simply black or white, on or off. None of the details that make true moral decisions hard and interesting remain: Can the car safely swerve to the side of the road? Could the car drag along the side barrier in lieu of brakes? Can the car make a hard turn, creating a spin, to lose momentum? Could the car sound its horn so everyone flees? Is there an emergency brake? Can the engine be used to brake the car? Can the car be forcibly put into reverse? And so on. Real moral decisions, even difficult ones, contain a bundle of conditions, most unique to that moment, that complicate the choice. Whatever it is the Moral Machine presents, it is not representative of the moral choices we make each day. It is not even close to those we make while driving. As Russell Brandom comments:

The test is premised on indifference to death. You’re driving the car and slowing down is clearly not an option, so from the outset we know that someone’s going to get it. The question is just how technology can allocate that indifference as efficiently as possible. That’s a bad deal, and it has nothing to do with the way moral choices actually work. I am not generally concerned about the moral agency of self-driving cars — just avoiding collisions gets you pretty far — but this test creeped me out. If this is our best approximation of moral logic, maybe we’re not ready to automate these decisions at all.

Solving a problem with a computer requires encoding the problem in terms the computer can manage. That means reducing the shades of gray to a selection of discrete choices. MIT's researchers replaced those shades of gray with a binary choice between gruesome outcomes. If we're encoding music or video, summing a column of numbers, tracking our heart rates, or trying to detect cats in photographs, that loss of information, the shades of gray not captured by the choices, often does not matter. But in moral choices, it is the immediate circumstances, the thousand little details we handle holistically with our minds, that do matter. Only a strict utilitarian, someone who believes we can weigh and measure each life, placing them in neat order of who should die first and last, would think otherwise.
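
The difference is easy to see in code. Below is a purely illustrative Python sketch (my own hypothetical functions, not anything MIT runs): quantizing an audio sample discards detail nobody will miss, while "quantizing" a driving dilemma down to a single bit discards everything that would actually shape the decision.

```python
def encode_audio_sample(value: float) -> int:
    """Round a continuous sample in [-1.0, 1.0] to a 16-bit integer.
    The precision lost here sits below the threshold of hearing."""
    return max(-32768, min(32767, round(value * 32767)))

def encode_dilemma(situation: dict) -> bool:
    """Reduce a rich driving situation to one bit: swerve or not.
    The brakes, the horn, the barrier, the emergency lane, the spin,
    the thousand details of the moment are all simply dropped."""
    return bool(situation.get("swerve", False))  # hypothetical field; everything else is ignored
```

In the first case the discarded information does not matter; in the second, the discarded information was the moral problem.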

But even if we accept those limitations in exchange for the benefits increased automation may bring, the Moral Machine has other problems: Whose judgement matters? Crowdsourcing provides no guarantee of good judgement. We've arranged our legal system in a hierarchy that places those who, through practice and training, have better judgement higher in the ranks to handle the more difficult cases. We accept and believe that not everyone can make good judgements and that only a few of us should be trusted to make difficult choices. It is, at least in part, why we do not license children to drive: It is not that they lack the skills; they lack the life experience to make good judgements.

Chipping the interesting bits off our moral choices, so they'll satisfy the constraints of computing, leaves us solving, by a process foreign to how we make moral choices, problems that no longer bear any relation to those they supposedly represent. How can we expect to get good, moral results? If we must eliminate everything interesting about those choices to squeeze them into a computer, then what choices are being made? If we must trade sound life experience for either the crowdsourced opinions on faux moral dilemmas or the hard-coded values of a utilitarian (and I do not know which is worse), how can we trust the values behind those judgements? In some sense, those choices contain nothing more than the values of the programmers encoding the solutions or the data they gathered and used.

Artificially Intelligent machines, including self-driving cars, lack intelligence. They also lack any sense of morality. It’s on or off. Do or don’t. Hit or miss. Live or die. And whatever they “choose,” the machines will not — with all due respect to Hollywood’s ill-informed depictions — feel bad or good or anything at all. Machines do not “choose,” they do not feel, they do not care, and they cannot make moral choices. We need to stop pretending that humans are computers or that computers can replace humans. Humans make moral judgements; computers do what they’re told, even if it means someone dies.

Photo: Vintage Portland trolley, by Steve Morgan [CC BY-SA 3.0 or GFDL], via Wikimedia Commons.

Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he's worked both as a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he has spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.


Tags

Computational Sciences, Science, Technology, Views