All you need to know about the ethics of self-driving cars

Self-driving vehicles are already here, and we share our roads and freeways with them. They are among the most technologically advanced machines humans have built to date.

As with any new technology, they raise sensitive ethical questions. In this article, we dig into some of the most pressing questions surrounding the ethics of self-driving cars.

What ethical problems do driverless vehicles raise that we need to worry about?

Much of the controversy and ethical theorizing around self-driving cars has concentrated on tragic dilemmas: hypotheticals in which a car must decide whether to run over a group of schoolchildren or plunge off a cliff, killing its own occupants. Yet such extreme cases are only one narrow kind of situation.

As the collision in which a self-driving car killed a pedestrian in Tempe, Arizona, shows, harder and more pervasive ethical dilemmas arise in the ordinary, everyday situations at every crosswalk, turn, and intersection.

Suppose a driver slams on the brakes to avoid hitting a pedestrian who is crossing the road illegally. In doing so, she makes a moral decision: she shifts risk from the pedestrian to the people inside the vehicle. Self-driving cars will soon have to make such ethical decisions on their own.
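To make that trade-off concrete, here is a minimal sketch in Python of how a driving policy might weigh the harm each maneuver imposes on each party. The maneuvers, harm numbers, and the `choose_maneuver` helper are all invented for illustration; they are not any manufacturer's actual logic.

```python
# Illustrative only: hypothetical harm estimates on a 0-1 scale.
# A real system would derive these from perception and physics models.
MANEUVERS = {
    # maneuver: (estimated harm to pedestrian, estimated harm to occupants)
    "maintain_speed": (0.9, 0.0),
    "brake_hard":     (0.1, 0.2),  # risk shifts onto the occupants
    "swerve":         (0.0, 0.4),  # risk shifts even further onto the occupants
}

def choose_maneuver(weight_pedestrian=1.0, weight_occupants=1.0):
    """Pick the maneuver with the lowest weighted total harm.

    The weights encode an ethical stance: equal weights treat everyone
    alike, while a large weight_occupants encodes "protect the passengers first".
    """
    def total(harms):
        ped, occ = harms
        return weight_pedestrian * ped + weight_occupants * occ
    return min(MANEUVERS, key=lambda m: total(MANEUVERS[m]))

print(choose_maneuver())                      # equal concern -> 'brake_hard'
print(choose_maneuver(weight_occupants=5.0))  # passenger-first -> 'maintain_speed'
```

Which maneuver "wins" depends entirely on those weights, and choosing the weights is precisely the ethical question a human driver currently answers by instinct.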

Still, as a survey of 2.3 million people from around the world suggests, agreeing on a universal moral code for vehicles may be a tough job.

The survey, called the Moral Machine, set out 13 scenarios in which someone’s death was inevitable. Respondents were asked whom to spare in situations that combined different factors: young or old, rich or poor, more people or fewer.

The survey, the largest on machine ethics ever conducted and published in Nature, found that many of the moral principles that guide a driver’s decisions vary by country.

For example, in scenarios where some mix of pedestrians and passengers would die in a crash, people from relatively wealthy countries with strong institutions were less likely to spare a pedestrian who had stepped into traffic illegally.
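As a rough illustration of how such country-level preferences could be tallied from raw responses, here is a small sketch in Python. The records and field names (`country`, `spared_jaywalker`) are invented for the example and do not reflect the Moral Machine's actual data schema or analysis, which used far more sophisticated statistical modeling.

```python
from collections import defaultdict

# Invented example records: one per answered dilemma, noting whether the
# respondent chose to spare a jaywalking pedestrian.
responses = [
    {"country": "JP", "spared_jaywalker": False},
    {"country": "JP", "spared_jaywalker": False},
    {"country": "FI", "spared_jaywalker": False},
    {"country": "NG", "spared_jaywalker": True},
    {"country": "NG", "spared_jaywalker": True},
    {"country": "FI", "spared_jaywalker": True},
]

def spare_rate_by_country(records):
    """Fraction of dilemmas in which respondents spared the jaywalker, per country."""
    spared = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["country"]] += 1
        spared[r["country"]] += r["spared_jaywalker"]
    return {c: spared[c] / total[c] for c in total}

print(spare_rate_by_country(responses))
# e.g. {'JP': 0.0, 'FI': 0.5, 'NG': 1.0} -- differences like these are what the
# study aggregated (far more carefully) to compare countries.
```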

When the authors analyzed the responses from the 130 countries with at least 100 respondents, they found that the countries could be split into three groups.

One includes North America and many European nations where Christianity has traditionally been the dominant religion; the second contains countries with strong Confucian or Islamic traditions, such as Japan, Indonesia, and Pakistan.

The third group comprises Central and South America, along with France and the former French colonies. The first group showed a stronger preference than the second, for example, for sacrificing older lives to save younger ones.
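To give a feel for how countries might fall into clusters based on their preference profiles, here is a hedged sketch. The toy preference vectors below are invented, and hierarchical (Ward) clustering is just one reasonable choice; the published study's actual methodology may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented per-country preference vectors, e.g.
# [preference for sparing the young, the many, the lawful].
countries = ["US", "DE", "JP", "PK", "FR", "BR"]
prefs = np.array([
    [0.80, 0.75, 0.60],   # US
    [0.78, 0.80, 0.65],   # DE
    [0.55, 0.70, 0.85],   # JP
    [0.50, 0.65, 0.80],   # PK
    [0.85, 0.60, 0.55],   # FR
    [0.88, 0.58, 0.50],   # BR
])

# Ward hierarchical clustering, cut into three groups.
labels = fcluster(linkage(prefs, method="ward"), t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```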

Who should take the lead in working through the ethics of self-driving cars: philosophers, politicians, or the automotive industry?

The truth is, much of it will come down to what the industry decides to do. But at some point policymakers will have to step in, and liability issues will inevitably arise.

Questions about breaking the law also remain. Researchers at the Stanford Center for Automotive Research have found that ordinary drivers routinely do technically illegal things that actually make us safer.

You merge onto the highway and travel at the speed of traffic, which is above the speed limit. Someone cuts into your lane, and you swerve momentarily into the oncoming lane.

In an autonomous vehicle, is the “driver” legally liable for that kind of thing? Is the automaker? How do you handle all of that? It will need to be sorted out, and honestly, we don’t yet know how it will work out, only that it has to.

The Ethics of Self-Driving Car Accident Algorithms: An Applied Trolley Problem

Self-driving vehicles promise to be safer than manually driven cars. Yet they cannot be 100% safe, and some collisions are inevitable.

So it is vital to program how self-driving cars should react in situations where a crash is highly likely or unavoidable.

Recently, the accident scenarios that self-driving vehicles may face have been compared to the classic examples and dilemmas associated with the trolley problem.

The trolley problem is a much-discussed series of ethical thought experiments in which a runaway trolley is barreling down the tracks, and sacrificing one person is the only way to save five people in its path.

Variants of these trolley cases differ in how the one is sacrificed to save the five. The simplest versions are said to prefigure the question of how autonomous vehicles should be programmed.

We examine this tempting comparison critically in this post. We describe three fundamental ways in which the ethics of self-driving car accident algorithms and the philosophy of the trolley problem differ from each other.

These concern: (i) the specific decision-making situation faced by those who determine how self-driving cars should be configured to handle accidents; (ii) moral and legal responsibility; and (iii) decision-making under risk and uncertainty.

In examining these three areas of disanalogy, we isolate and describe several fundamental issues and complexities in the ethics of self-driving car programming.
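The third disanalogy, decision-making under risk and uncertainty, can be illustrated with a short sketch that extends the earlier weighted-harm example by adding outcome probabilities. Unlike the trolley case, where each option leads to a known outcome, a car's planner only has estimates. The scenario, probabilities, and harm scores below are invented for illustration.

```python
# Each candidate maneuver maps to a list of (probability, harm) outcomes.
# In the trolley problem the probabilities would all be 1.0; on the road they
# are estimates, and that difference changes the ethics of the choice.
maneuvers = {
    "brake_hard":  [(0.70, 0.0), (0.25, 2.0), (0.05, 8.0)],
    "swerve_left": [(0.50, 0.0), (0.40, 1.0), (0.10, 10.0)],
    "maintain":    [(0.20, 0.0), (0.30, 3.0), (0.50, 9.0)],
}

def expected_harm(outcomes):
    """Probability-weighted harm across the possible outcomes of a maneuver."""
    return sum(p * harm for p, harm in outcomes)

for m, outcomes in maneuvers.items():
    print(f"{m}: expected harm = {expected_harm(outcomes):.2f}")

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print("lowest expected harm:", best)   # 'brake_hard' with these made-up numbers
```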

What kind of regulatory thickets are driverless cars headed for?

When people talk about self-driving vehicles, much of the emphasis falls on the fully autonomous car. But in practice this is a progression of automation, arriving little by little.

Stability control and anti-lock braking are already self-driving-type functions, and we keep getting more and more of them. Fully autonomous vehicles get a lot of coverage in Silicon Valley, but that is not the route conventional automakers are taking.

So you can imagine various forums and standards forming around all of this. Will it happen as a series of gradual steps, for instance, or as one massive leap to a Google-style self-driving car?

Different regulatory regimes would favor one of those approaches over the other. Whether it would be the right policy I’m not sure, but a regime that favors gradual steps might well be the one that succeeds.

That would also suit the established carmakers nicely, and be less useful from Google’s point of view.

And it could, in theory, be to a company’s advantage to influence how the requirements evolve in a way that favors its own technology.

Beyond the ethical questions, this is something companies entering the sector have to think about strategically.

Is it ethical to create self-driving cars whose decisions affect the lives of the driver and everyone around the vehicle?

Self-driving cars stand to improve everyone’s quality of life. Research supported by the National Science Foundation suggests that putting a single self-driving car on the road can reduce congestion by smoothing the traffic flow of at least 20 human-driven vehicles around it.

Instead of causing the delays that arise when human drivers brake hard and accelerate abruptly, autonomous cars help keep traffic flowing steadily. Since congestion is a major problem for cities worldwide, this is one of the many arguments in favor of self-driving cars.

Self-driving cars can also reduce the time lost to unproductive, stressful driving. Anyone who is not prone to motion sickness could use the time spent commuting for other activities.

According to the U.S. Census Bureau, the typical American commute is 25.4 minutes each way (WNYC), but in major cities such as New York, Los Angeles, and San Francisco it can reach 30 minutes or more each way.

Moreover, the number of extreme commuters, those who spend more than an hour each way between home and work, is growing faster than ever. Productivity will improve with autonomous cars, since people can work, sleep, eat, or handle other tasks while the car drives.
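A back-of-the-envelope calculation gives a sense of how much time an automated commute could hand back. It uses the 25.4-minute figure cited above and assumes roughly 250 working days a year; it is a rough estimate, not a study result.

```python
# Rough estimate of commuting time an autonomous car could hand back per year.
avg_commute_minutes_each_way = 25.4   # U.S. Census Bureau figure cited above
working_days_per_year = 250           # assumption: ~5 days/week, 50 weeks

minutes_per_year = avg_commute_minutes_each_way * 2 * working_days_per_year
hours_per_year = minutes_per_year / 60
print(f"{hours_per_year:.0f} hours per year")  # ~212 hours, roughly 26 eight-hour workdays
```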

What other relevant ethical problems do we see coming down the road?

Right now, as humans, we make these decisions instinctively, based on our psychology. And some of the time we get them wrong.

We make mistakes; we mishandle the wheel. And the gut decisions we make in the moment may be more selfish than what we would choose if we were calmly programming our own vehicle in advance.

In class discussions, it is often debated whether autonomous cars should be taught selfishness. That is, should the car save the passengers and the driver rather than the pedestrians outside?

Frankly, my answer would be very different if I were programming the car for driving alone versus having my 5-year-old son in the vehicle.

What if driverless cars were fed data from previous road collisions to decide how much space to give pedestrians? To the system, that could seem perfectly sensible.

If a particular spot in a neighborhood has historically seen a high rate of crashes, why not leave some extra space for pedestrians there?

But it may also mean that driverless cars would be less vigilant in poorer areas, where accident payouts have historically been smaller.

By giving pedestrians there smaller buffers, the algorithm would unwittingly penalize the poor, slightly increasing their chances of being struck while out for a walk.
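Here is a deliberately simplified sketch of how that bias could creep in: imagine a naive system that sized its pedestrian buffer from historical claim payouts in each neighborhood. The payout figures, scaling rule, and neighborhood names are invented; real planners do not work this way, which is exactly the point of the thought experiment.

```python
# Hypothetical: buffer width scaled by historical accident payouts per neighborhood.
# A deliberately naive rule, shown only to expose the bias it would create.
historical_payouts = {       # average past settlement, in arbitrary currency units
    "wealthy_district": 90_000,
    "middle_income":    45_000,
    "low_income":       15_000,
}

BASE_BUFFER_M = 1.0          # minimum clearance in meters
MAX_EXTRA_M = 1.0            # extra clearance granted at the highest payout level

max_payout = max(historical_payouts.values())

def pedestrian_buffer(neighborhood):
    """Naive rule: more past payout -> bigger buffer. Encodes wealth, not risk."""
    payout = historical_payouts[neighborhood]
    return BASE_BUFFER_M + MAX_EXTRA_M * (payout / max_payout)

for n in historical_payouts:
    print(f"{n}: {pedestrian_buffer(n):.2f} m")
# wealthy_district gets ~2.00 m of clearance, low_income only ~1.17 m:
# pedestrians in poorer areas would face slightly higher risk.
```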

As self-driving technology progresses, it’s almost a sure bet that machines will become better drivers than humans. By one estimate, crashes could be cut by 90 percent, saving many thousands of lives and billions in medical costs.

Keeping your own hands on the steering wheel may eventually become the unethical choice, even if you don’t trust autonomous cars.
