A Self-Driving Car’s Choice: Who Lives and Who Dies?

When faced with the choice of crashing into a single young person or a group of adults, what will an autonomous vehicle choose? And who is ultimately responsible for that choice: the driver or the car manufacturer? Who provides the ethical framework on which the vehicle's decisions are based, and on what basis do they prepare it? With self-driving cars becoming more prevalent, these are a few of the questions this article tries to answer.

More than 30,000 people are killed in traffic accidents each year in the U.S. alone, and more than 2 million are injured. Almost 95 percent of those accidents are attributed to driver error.

Alternatives

Suppose you were driving a car and were forced to crash into one object to avoid a worse alternative. That would be judged a split-second reaction, which it was, not a deliberate act of malice. But if a car manufacturer programs a self-driving car in advance to make that same choice under similar circumstances, the argument goes, it would amount to premeditated murder. For example: "In this situation, crash into this vehicle."

Self-driving cars are designed to reduce accidents by removing unpredictable human error from the equation. That is the main premise behind developing this kind of technology; the luxury aspect is secondary. Self-driving cars do not get drunk, tired, or angry.

Principles

When programmers code a basic law into self-driving vehicles, such as "minimize harm," that law alone is open to many interpretations depending on the situation, the people involved, and how a life is valued. Take a choice between a truck and a bike. If you program the car to crash into the truck because that will most likely result in fewer injuries, what if there is a baby inside? Or several babies?

Who decides the moral principles of a self-driving car? (Image: pixabay / CC0 1.0)

If you program the car to crash into the biker instead, the consequences for the biker will, predictably, be far more severe. So do you let the car's program make such a decision from the single principle "minimize harm," or do you try to feed a million specific scenarios into it?
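To see why such a rule is underspecified, here is a minimal, purely hypothetical sketch of what "minimize harm" might look like once reduced to code. Every name, weight, and scenario below is invented for illustration; no production vehicle works this way.

```python
# Hypothetical sketch of a naive "minimize harm" rule.
# All values and class names are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str               # e.g. "truck" or "cyclist"
    occupants: int           # how many people the crash would involve
    injury_severity: float   # assumed severity per person, 0.0 (none) to 1.0 (fatal)

def expected_harm(obstacle: Obstacle) -> float:
    """Score an outcome as occupants multiplied by assumed severity."""
    return obstacle.occupants * obstacle.injury_severity

def choose_crash_target(options: list[Obstacle]) -> Obstacle:
    """Apply the 'minimize harm' rule: pick the option with the lowest score."""
    return min(options, key=expected_harm)

# The truck scores lower than the cyclist, so the rule picks the truck.
# But the score says nothing about who is inside the truck: a lone driver
# and a vehicle full of infants produce the same number unless someone
# decides in advance how to weigh them, which is exactly the open question.
truck = Obstacle("truck", occupants=1, injury_severity=0.2)
cyclist = Obstacle("cyclist", occupants=1, injury_severity=0.9)
print(choose_crash_target([truck, cyclist]).label)  # prints "truck"
```

The point of the sketch is that someone has to pick the numbers, and picking the numbers is the ethical decision.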

Saving your own life first is the instinctive reaction of any human being. So which life does the car consider more important: yours or the lives of the people outside it? Would you buy a car that prefers saving other people's lives over your own?

As it stands, there is no regulatory body to decide the principles on which to base the programming of a self-driving car. If a regulatory body were set up, who decides — the policymakers in the government, the manufacturer, or a consensus from the population? The fact is, no one knows.

Ethics

The manufacturer wants to put as many vehicles on the road as possible, and the government wants revenue in the form of taxes. That is where each of their obligations lies. So what happens to Joe Public?

Come to think of it — what is right and what is wrong? This has been the burning question since time immemorial. In Star Trek, Commander Spock says: “Logic dictates that the needs of the many outweigh the needs of the few.”

MIT built a "Moral Machine" that presents moral dilemmas and asks you to choose what you consider the lesser of two evils. It works like a game, and you can try it out online.

Ethics and morality have never been identical across cultures. So which culture's standards do we choose? It all comes down to your beliefs and your perception of what is right.

As it stands, there is no regulatory body to decide the principles on which to base the programming of a self-driving car. (Image: pixabay / CC0 1.0)

Lab rats

The argument in favor of launching self-driving vehicles now stems from the fact that human drivers, left to themselves, do not have a good track record. So does that mean we hand control over our fates entirely to machines? Where does that leave us?

With fewer responsibilities and many more dependencies, does that make for a better human society? The counter-argument is that we should not give up control over things we can still do ourselves, because doing so gradually erodes our ability to accomplish them, until, in the end, we are left with a population that needs help getting out of bed.

The history of mankind is replete with mistake after mistake. It seems we never learn and are eager to repeat our follies. Do we stop deciding once and for all and hand that job to AI machines? Will that make things better, or will it be our biggest mistake yet?
