A strange idea to solve the ethical problem of self-driving cars
What happens if a car has to choose between the life of the driver and the life of somebody else? I have a strange idea.
Friday, March 8, 2024
I was talking with a friend about the good old ethical problems of fully self-driving cars.
Let’s pretend we have built a wonderful self-driving car, and it works so well that people everywhere in the world are legally allowed to let their car drive for them.
I want to emphasize full self-driving cars: cars that can drive from your front door to work without any intervention from the driver. The driver is really almost a passenger; they could sleep for the whole trip without breaking any law.
Now, imagine that one of these cars gets into trouble. An emergency situation arises where the car must choose between two options:
- Sacrifice your life to save the life of another person.
  Example: a person crosses the road without looking. The speed of the car, the environment, and the position of the person leave the car with only two choices: turn the steering wheel hard to the left or right, saving the person’s life but probably killing the driver, since the car will crash into a wall; or brake, keeping the driver safe but probably killing the person.
- Choose between two people.
  Example: two people are crossing the road, and the car must choose whether to turn right and save the person on the left, or the opposite.
I doubt that this kind of situation could really happen in reality or, if it can, that it has more than an extremely low probability of happening. I think of this more as a nice mental and theoretical exercise.
Then we talked about this choice, and I tried to explain my point of view. In my opinion, self-driving cars have some really strong points in their favor.
A drastic drop in deaths from road accidents
The computer that drives the car is WAY faster than you are. The average human reaction time is about 250 milliseconds (sidenote: I tried some online tests and my reaction time is rarely 250 ms 😅), which may sound quite good; a quarter of a second is a really short time. The problem is that a computer can react in microseconds, and that is a REALLY short time.
I presume that the computations the car needs to perform in order to drive are huge, so the processor may not react at its theoretical best, but its reaction time is still WAY shorter than ours.
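To get a feel for the difference, here is a back-of-the-envelope sketch of how far a car travels during the reaction time alone, before braking even begins. The speed and the computer’s reaction time are assumptions I picked just for illustration:

```python
# Back-of-the-envelope: distance a car travels during the reaction time
# alone, before braking even starts. The speed and the computer's
# reaction time are illustrative assumptions.

SPEED_KMH = 50                  # a typical urban speed limit
speed_ms = SPEED_KMH / 3.6      # meters per second

reaction_times = {
    "human (250 ms)":    0.250,
    "computer (~50 µs)": 50e-6,
}

for who, t in reaction_times.items():
    print(f"{who}: {speed_ms * t:.6f} m traveled before reacting")

# human (250 ms):    3.472222 m traveled before reacting
# computer (~50 µs): 0.000694 m traveled before reacting
```

At city speed, the human covers about three and a half meters before even touching the brake; the computer covers less than a millimeter.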
On average, 1.19 million people die in road accidents every year, and they are the leading cause of death among children and young adults. More than half of the deaths are pedestrians, cyclists, or motorcyclists. (sidenote: Of course, the WHO has this kind of information.)
Imagine if ALL cars were self-driving. Thanks to the much shorter reaction time in an emergency and, let’s face it, full respect of the rules, we could save millions of people, especially children. Accidents themselves would become very rare. Consequently, a situation where the car has no choice but to pick between the life of the driver and the life of other people becomes extremely rare.
Of course, it’s worth remembering that this is a mere exercise for our minds, and we are simplifying a lot. For this exercise, we assume that the cars are incredibly accurate and can drive safely in all conditions, without any problems.
Fewer traffic jams
Moreover, even if this is not directly related to deaths and accidents, let’s think about the wave pattern a traffic jam has. There is a queue at a traffic light. The light turns green, and the first car starts. After a little delay, the second one starts. After another little delay, the third one. You get it. By the time the light turns red again, the fourth car has only managed to start and stop again.
Now imagine the cars could communicate with the traffic lights AND with each other. They could all start at the very same moment the light turns green, and all stop at the same moment it turns red. The typical traffic shockwave would be eliminated, and all the traffic would move in a constant, linear way.
Not strictly related to the topic, but I find this theory really interesting and fascinating, and it totally makes sense to me.
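To make the shockwave argument concrete, here is a toy queue-discharge model. All the numbers (green-phase length, headway, per-driver start-up delay) are made-up assumptions, just to show the shape of the effect:

```python
# Toy model of queue discharge at a green light. All numbers are
# illustrative assumptions, not measurements.

GREEN_S   = 20.0   # how long the light stays green, in seconds
HEADWAY_S = 1.0    # seconds for one queued car to pass the stop line
REACTION  = 1.5    # extra start-up delay per human driver

def cars_cleared(start_delay: float, queue_len: int = 50) -> int:
    """Count cars that cross the stop line before the light turns red.

    Car i begins moving at i * start_delay and crosses the line
    (i + 1) * HEADWAY_S seconds after it starts moving.
    """
    return sum(
        1 for i in range(queue_len)
        if i * start_delay + (i + 1) * HEADWAY_S <= GREEN_S
    )

print("staggered (human drivers):", cars_cleared(REACTION))  # -> 8
print("coordinated (connected)  :", cars_cleared(0.0))       # -> 20
```

With these made-up numbers, removing the per-driver start-up delay more than doubles the number of cars that clear the light per green phase.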
The Pharmaceutical Example
Now, let’s do another example.
When you take a pill, even the most commonly used one, the leaflet clearly states that the pill treats your condition. BUT, in an incredibly small number of people, the same pill can cause brutal side effects, death included.
The pharmaceutical company is clearly saying that the pill works, and works very well, but you must know that in extremely rare cases it can also kill you. Then it’s your choice whether the risk is worth it. If you take the pill and you are so unlucky as to die, well… we told you that could happen!
No one raises an ethical problem about pills and their side effects. And in my opinion, this happens for one main reason: the only person who gets hurt if you take that pill is you. Nobody cares if you decide to take the risk and die; well… sorry for you.
I am not criticizing; it is totally normal and it’s okay. But… why can’t we apply the same way of thinking to self-driving cars?
Kill the Driver!
And here comes the idea. Follow me for a moment.
We saw the pharmaceutical example. The pill saves many lives, but in extremely rare cases, you could die from taking it.
Is the risk acceptable?
We all get vaccines and take pills, so yes.
Go back to self-driving cars.
What if, in the same way, self-driving cars saved millions of lives, but in extremely rare cases the car could kill you, because it is trained to do so in some exceptional situations?
If the car is in the unlucky position of having to choose between you, the driver, and somebody outside the car, the car will always choose to save the people outside the car. And to kill you.
Exactly like the pills, the cars save millions of lives, but in very rare cases, they can also cause your death.
Is the risk acceptable?
Well… if this very same risk is acceptable for pills, why not for cars?
So, the idea is this: to solve the ethical problem of the car choosing between the lives of two (or more) people, it is enough to program the car to always choose the lives of the other people, even at the cost of killing the driver or leaving them seriously injured. And to communicate this behavior (sidenote: maybe with statistical data on the probability of this event) to the driver when they buy the car. A sort of leaflet for the car.
In this way, the driver takes responsibility for the choice to use a self-driving car and, by doing so, also accepts the risks: they agree to be killed in the extremely rare situation where the car must choose between the life of a pedestrian and the life of the driver, always saving the pedestrian.
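Expressed as code, the rule itself is almost trivially simple. Here is a minimal sketch with hypothetical names and a deliberately naive model of the situation; a real system would obviously reason over probabilities, not booleans:

```python
# Minimal sketch of the proposed decision rule (hypothetical types and
# names). The car always prioritizes people outside the vehicle over
# its occupants.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrians_at_risk: int   # people outside the car likely to be harmed
    endangers_driver: bool     # would this maneuver likely harm the driver?

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Minimize harm to people outside the car first; only when outsiders
    # are equally safe does the driver's safety enter the decision.
    return min(options, key=lambda m: (m.pedestrians_at_risk, m.endangers_driver))

options = [
    Maneuver("brake hard", pedestrians_at_risk=1, endangers_driver=False),
    Maneuver("swerve into the wall", pedestrians_at_risk=0, endangers_driver=True),
]
print(choose_maneuver(options).name)  # -> "swerve into the wall"
```

The hard part, of course, is not this ranking but everything upstream of it: estimating who is at risk from each maneuver. The point of the sketch is only that, once those estimates exist, the “leaflet” policy is a one-line rule.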
If we think it’s worth accepting the risks of some pill or some vaccine because of the millions of lives saved, why not apply the same mentality to self-driving cars?
Yes, self-driving cars are not ready yet, and it seems we are still far from that point. But somehow I think this is inevitable, and one day we will automate the act of driving too. To me, it makes total sense (sidenote: Even if I love to drive. But I would like to choose whether to drive the car and enjoy it, or just let the car drive for me.), because in theory it is a really simple thing to automate. Reality presents some challenges, though.
I am not sure whether the whole thing makes sense to anybody, or whether I am missing some big consequence of this approach, but… I thought it was worth writing down.
I hope I left you with something to think about 🙂
Ciao 👋