Self-driving cars are among the first products
employing artificial intelligence that will be available for mass consumption. Whether
as our own cars driving themselves, or as Uber or Google pods coming to our
doors to pick us up and drop us off wherever we tell them to, the future is
presented without a steering wheel.
All this raises one very important question: how
will cars behave in an accident? What will happen when the artificial
intelligence faces a certain human death?
This difficult issue is now being tackled by
automakers, and the sad answer is that there is no right answer to this
ethical debate.
Let’s consider a typical risk
situation a self-driving car is going to face. Two children suddenly jump into
the road. The car is carrying one
passenger, and its AI module can choose between two options: hitting the
children or swerving and crashing into an oncoming car. What criteria will it use
to decide?
If the car’s brain is guided only by
quantitative data, swerving would be the right decision, because killing two
people is worse than killing just one (2 > 1).
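That purely quantitative criterion can be sketched in a few lines of code. This is a deliberately naive illustration of the "minimize casualties" rule described above; the action names and casualty counts are hypothetical, not any automaker's actual logic.

```python
def choose_action(outcomes):
    """Pick the action whose outcome harms the fewest people.

    `outcomes` maps an action name to the expected number of casualties.
    This is the bare utilitarian rule: nothing else (passenger loyalty,
    legal liability, uncertainty) is taken into account.
    """
    return min(outcomes, key=outcomes.get)

# The scenario from the text: staying on course hits the two children,
# swerving crashes into the oncoming car, harming one person.
decision = choose_action({"stay_course": 2, "swerve": 1})
print(decision)  # → swerve
```

The sketch makes the ethical problem concrete: the rule happily sacrifices the car's own passenger whenever the arithmetic says so, which is exactly the objection raised next.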
Problem solved? If cars are to be programmed
this way, who is going to buy a car that is programmed not to protect its passengers,
but to sacrifice them if necessary?
And if self-driving cars fail to catch on
because of this ethical debate, will societies stick to our
old-fashioned, human-controlled cars?
Some experts argue that the ethical
debate should not be about how we program autonomous vehicles, but about how much
longer we keep using our dangerous, distraction-prone, manually driven
cars.
As you can see, this question doesn’t seem to have
a proper answer just yet. Please leave a comment down below on what you
think an autonomous car should do in the event of an accident.
What we can be sure about is that philosophers
will soon be working alongside computer engineers around Silicon Valley.
We have to look for the possibility of stopping a car suddenly by arresting a wheel, i.e. performing a drift.
Of course, whenever there is a possibility of not causing casualties, the AI will take it; but here we are assuming a no-win scenario, and the question is how the computer will make its decision.
AI will come up at some point with a new infallible braking technology ;)