
How Will Self-Driving Cars Handle Moral Issues?

Posted by Roger Ryall


The title question is one that comes up frequently on discussion forums, and it’s a valid one. The installed base of self-driving cars is expected to reach the tens of millions by 2020, and people want to know what to expect from their new smart vehicles.

We touched on this topic briefly in a past post, but it deserves a more in-depth revisit. There are a lot of misconceptions and details related to the title question that require further exploration.

Let’s begin by answering the question posed in the title.

The Answer

Bluntly put, automated cars won’t because they can’t. Machines don’t handle moral issues. People do.
Morality is normative. It’s built on what a person or people think proper behaviour should be. Machines can only follow instructions and execute commands.

For example, an elevator doesn’t think about the morality of closing its doors and moving to another floor. Once a button is pressed, an internal timer counts down and then it performs an action. To prevent injury, a sensor can override the closing function if it detects something trying to pass through while the doors are closing. Thus, injury is prevented and no thought from the elevator is needed.
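To make the point concrete, here is a minimal sketch of that kind of door logic. The function name and inputs are hypothetical, not any real elevator firmware; the point is that a sensor reading simply overrides the timer-driven action, with no moral reasoning anywhere.

```python
# Hypothetical door controller sketch: the "decision" is pure rule-following.

def door_step(timer_expired: bool, obstruction_detected: bool) -> str:
    """Return the next door action based only on sensor inputs."""
    if obstruction_detected:
        return "reopen"          # safety override: something is in the doorway
    if timer_expired:
        return "close_and_move"  # countdown finished: close doors and travel
    return "hold_open"           # otherwise keep waiting

# Example: something passes through just as the timer expires.
print(door_step(timer_expired=True, obstruction_detected=True))  # -> reopen
```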

What to Expect From Self-Driving Cars

Obviously, people and lawmakers are expecting a lot more from self-driving cars than they do from an elevator.

A self-driving car needs to be able to recognize different road signs, follow traffic rules, adapt to varying weather conditions and handle road emergencies. What’s more, the car will need to deal with these situations better than a human driver. Thankfully, they already can.

Consciousness is a beautiful thing but it often leads to distraction. How many of us have tried to solve an existential crisis on the ride to work? In this respect, the onboard artificial intelligence (AI) of a self-driving car has the advantage. It can’t be distracted, unlike 90% of Canadians.

Does this mean the AI will always make the best decision? Well, no. One of the issues programmers are trying to sort out is, ironically, how the AI should deal with bad human drivers. There are two ways to tackle the issue:
A. Have the AI learn more aggressive, ‘more human’ driving
B. Have only autonomous vehicles on the road

Which direction is taken will depend on how quickly the vehicles are adopted and on lawmakers.

What Occupants Need to Know About Their Self-Driving Car

The onboard AI in a self-driving car, like all software, follows a set of encoded rules. Any decision-making on its part comes down to comparing data sets and acting on the result. How much weight one data set carries over another is up to the programmers and lawmakers.
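As a toy illustration of what that weighting means in practice (the factor names and numbers below are invented for the example, not taken from any real vehicle software), the car’s “choice” is just the option with the best score under weights that people decided in advance.

```python
# Hypothetical example: the outcome follows directly from human-chosen weights.

WEIGHTS = {"occupant_risk": 0.5, "pedestrian_risk": 0.4, "property_damage": 0.1}

def score(option: dict) -> float:
    """Lower is better: weighted sum of the estimated harms for one manoeuvre."""
    return sum(WEIGHTS[k] * option[k] for k in WEIGHTS)

options = {
    "brake_hard":  {"occupant_risk": 0.2, "pedestrian_risk": 0.1, "property_damage": 0.0},
    "swerve_left": {"occupant_risk": 0.4, "pedestrian_risk": 0.0, "property_damage": 0.6},
}

best = min(options, key=lambda name: score(options[name]))
print(best)  # the selected manoeuvre is determined entirely by the encoded weights
```

Change the weights and the same situation produces a different manoeuvre; the values themselves, not the car, carry the moral judgement.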

The moral question for manufacturers is how to communicate the car’s decision-making process to occupants. Occupants need to know what the car will do in a trolley dilemma, for instance, in order to give informed consent.

Self-driving cars are only a few years from being commercially available. There are still a lot of fears and uncertainties about the new technology. People will be trusting their lives to it after all. How the vehicles will react in different scenarios will depend on lawmakers, manufacturers and society. The cars themselves won’t be making any moral decisions. The big moral question related to the technology is how to ensure its users understand how it works.
