There’s a new dilemma in the rulebook of driverless car ethics today: whom should the car kill? Don’t be alarmed just yet, however. While it may sound like a panicky subject, the bottom line is that this is a hypothetical scenario, presented in a new study which appeared in Science this week. The research did not focus only on the driverless cars, however, but also on the humans boarding them.
The question: whom would the car choose to kill in a scenario where it unavoidably had to kill someone? Better yet: what do the people who would be aboard these cars think of this choice? Surprisingly, the study found that a majority of people agree with the philosophy set out by Jeremy Bentham a couple of centuries ago: choose the course that does the least damage. However, when it came to actually owning a car built on that principle, things got a little complicated.
The scenario involves a person riding along in a driverless car, watching a video, without a care in the world. The car approaches a crossroads where the light is red and five pedestrians are slowly crossing the street. The car suddenly malfunctions and can no longer brake. The choice is whether to kill the people on the street or to swerve sharply and kill the passenger in the process. Note that the experiment did not account for the possibility of the passenger surviving (which may have been an opinion changer).
The results of this test were simple: most people agreed that the car should follow Bentham’s philosophy and kill as few people as possible. However, when subsequently asked whether they would themselves like to own such a “passenger-sacrificing” car, the answers tended toward the negative. This may have a lot to do with humans’ instinct for self-preservation.
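The “least damage” rule the respondents endorsed in principle can be sketched as a toy utilitarian choice function. To be clear, this is a hypothetical illustration of Bentham-style casualty minimization, not anything taken from the study or from an actual vehicle’s software; the option names and numbers are invented.

```python
# Toy sketch of a purely utilitarian decision rule: pick whichever
# outcome is projected to harm the fewest people. This is an
# illustration only, not the study's (or any car's) real algorithm.

def choose_outcome(options):
    """Return the action with the fewest projected casualties.

    `options` maps an action name to its casualty count.
    """
    return min(options, key=options.get)

# The article's crossroads scenario: swerve (kill the 1 passenger)
# versus continue straight (kill the 5 pedestrians).
decision = choose_outcome({"swerve": 1, "continue": 5})
print(decision)  # swerve
```

The tension the study uncovers is precisely that people endorse this rule for cars in general while preferring their own car not to run it.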
Should Your Driverless Car Discriminate between the People in Front of It?
The study went even further and presented respondents with subtler choices. One scenario had the car crashing into the sidewalk, killing a different group of pedestrians who were accompanied by animals. Another pitted the choice to kill a homeless person, a criminal, and a baby against the choice to kill a large man, a large woman, and an elderly person.
Needless to say, the ethics of these scenarios are beyond complicated, and Kant’s option of doing harm to no one is simply not possible. So, what do you think about the new entry in the book of driverless car ethics: whom should the car kill? More importantly, though, would you actually buy a driverless car with a sacrifice algorithm installed?
Image source: depositphotos.com.