[Perspective] Our driverless dilemma
Joshua D. Greene
Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger. On page 1573 of this issue, Bonnefon et al. (1) explore this social dilemma in a series of clever survey experiments. They show that people generally approve of cars programmed to minimize the total amount of harm, even at the expense of their passengers, but are not enthusiastic about riding in such "utilitarian" cars—that is, autonomous vehicles that are, in certain emergency situations, programmed to sacrifice their passengers for the greater good.

Such dilemmas may arise infrequently, but once millions of autonomous vehicles are on the road, the improbable becomes probable, perhaps even inevitable. And even if such cases never arise, autonomous vehicles must be programmed to handle them. How should they be programmed? And who should decide?