All you need to know about the ethics of self-driving cars

Self-driving vehicles are already here, and we share our roads and freeways with them. There are certain things we all need to know about the ethics of self-driving cars. We at Tech Insider 360 dug deep to find answers to some sensitive ethical questions surrounding them.

With driverless vehicles on our roads, what ethical problems do we need to worry about?

A lot of the controversy and ethical theorizing around self-driving cars has concentrated on tragic dilemmas: hypotheticals in which a car must decide whether to drive over a group of schoolchildren or plunge off a cliff, killing its passengers.

But the recent crash in which a self-driving car killed a pedestrian in Tempe, Arizona, showed us that ethical dilemmas also arise in everyday situations at crosswalks, intersections, and turns.

Suppose a driver hits the brakes to avoid hitting a pedestrian crossing the road illegally. The driver is making a moral decision that shifts risk from the pedestrian to the people in the vehicle. Self-driving cars will soon have to make such ethical decisions on their own.

Still, agreeing on a universal moral code for vehicles may be a tough job, as suggested by a survey of 2.3 million people from around the world.

The survey, called the Moral Machine, set out 13 scenarios in which someone's death was unavoidable. Respondents were asked whom to spare in situations that mixed factors such as young or old, wealthy or poor, more people or fewer.

Who needs to take the lead in working out the ethics of self-driving cars: philosophers, politicians, or the automotive industry?

The truth is, a lot of it is going to be what the industry decides to do. But at some point, policymakers will have to step in, and liability issues will have to be settled.

Questions about violating the law remain. The people at the Stanford Center for Automotive Research have found that regular drivers routinely do all kinds of technically illegal things that actually make everyone safer.

For instance, you merge onto the highway and travel at the speed of traffic, which is higher than the speed limit. Or someone drifts into your lane and you swerve momentarily into the oncoming lane.

Is the “driver” of an autonomous vehicle legally liable for such maneuvers? Is the automaker? How do you handle all that? It will need to be sorted out; honestly, we don't yet know how, only that it has to be.

The largest machine ethics survey ever conducted, published in the journal Nature, found that many of the moral principles that govern a driver's decisions differ by country.

For example, in scenarios where some mix of pedestrians and passengers had to die in a crash, people from relatively wealthy countries were less likely to spare a pedestrian who had stepped into traffic illegally.

When the authors analyzed the 130 nations with at least 100 respondents each, the countries split into three clusters.

One includes North America and some European nations where Christianity has traditionally been the dominant religion. The second contains countries with strong Confucian or Islamic traditions, such as Japan, Indonesia, and Pakistan.

The third group comprises Central and South America, along with France and its former colonies. The first group showed a stronger preference than the second for sacrificing older lives to save younger ones.

The Ethics of Self-Driving Car Accident Algorithms: An Applied Trolley Problem


Self-driving vehicles promise to be safer than manually driven cars. Yet they cannot be 100% safe; some collisions will be unavoidable.

So it is vital to decide how self-driving cars should be programmed to react in situations where a crash is highly likely or unavoidable.
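
To make that concrete, here is a deliberately oversimplified sketch in Python of what such a pre-programmed rule could look like. The maneuver options, harm scores, and the choose_maneuver function are our own illustrative inventions, not anything an automaker has published.

```python
# Purely illustrative: a toy rule for an unavoidable-crash situation.
# Each candidate maneuver carries a hand-assigned "estimated harm" score,
# and the car simply picks the option with the lowest score. Real systems
# are far more complex, and the scores themselves are exactly what the
# ethics debate is about.

def choose_maneuver(options):
    """Return the maneuver with the lowest estimated harm."""
    return min(options, key=lambda m: m["estimated_harm"])

options = [
    {"name": "brake_straight", "estimated_harm": 5},  # likely hits the pedestrians ahead
    {"name": "swerve_left", "estimated_harm": 2},     # risks the passengers
    {"name": "swerve_right", "estimated_harm": 8},    # risks oncoming traffic
]

print(choose_maneuver(options)["name"])  # -> swerve_left
```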

Recently, the accident scenarios that self-driving vehicles could face have been compared to the classic examples and dilemmas associated with the trolley problem.

The trolley problem is a much-discussed series of ethical thought experiments. In the basic case, a runaway trolley will kill five people on the tracks, and sacrificing one person is the only way to save them.

Variants of the trolley case differ in how the one is sacrificed to save the five. The simplest versions are said to foreshadow the question of how autonomous vehicles should be programmed.

In this post we examine this enticing comparison critically, describing three fundamental ways in which the ethics of self-driving car accident algorithms and the philosophy of the trolley problem differ from each other.

These concern: (i) the specific decision-making situation faced by those who determine how to configure self-driving cars for accident management; (ii) moral and legal responsibility; and (iii) risk and uncertainty in decision-making.

In addressing these three areas of disanalogy, we isolate and describe several fundamental issues and complexities within the ethics of programming self-driving cars.
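
The third disanalogy, risk and uncertainty, is worth making concrete. In the trolley cases the outcome of each choice is stipulated as certain, whereas a car only ever has probability estimates. The probabilities and harm values in the sketch below are invented purely for illustration.

```python
# Illustrative only: trolley-style reasoning deals in certain outcomes,
# while a self-driving car must weigh probabilistic ones.

def expected_harm(outcomes):
    """Sum of probability * harm over the possible results of one maneuver."""
    return sum(p * harm for p, harm in outcomes)

# In a trolley case, the outcome of each choice is certain (probability 1.0).
trolley_divert = expected_harm([(1.0, 1)])      # one person dies, for sure
trolley_do_nothing = expected_harm([(1.0, 5)])  # five people die, for sure

# A car faces the same structure, but every outcome is uncertain.
brake_hard = expected_harm([(0.7, 0), (0.3, 4)])  # 30% chance of a serious collision
swerve = expected_harm([(0.9, 0), (0.1, 10)])     # small chance of a far worse one

print(trolley_divert, trolley_do_nothing)  # 1.0 5.0
print(brake_hard, swerve)                  # 1.2 1.0
```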

What kind of regulatory thickets are driverless cars headed for?

When people talk about self-driving vehicles, much of the emphasis falls on the fully autonomous car. But automation is arriving as a progression, little by little.

Stability control and anti-lock braking are already self-driving-type functions, and cars keep gaining more of them. Fully autonomous vehicles get a lot of coverage in Silicon Valley, but that is not how conventional automakers are putting the technology into practice.

So, around all this, you can imagine various forums and standards emerging. Will this be a series of gradual steps, or will it be one massive leap to a Google-style self-driving car?

Different regulatory regimes would favor one of those approaches over the other. I'm not sure which is the right policy, but gradual steps might be the more successful one.

A gradual path would also suit the carmakers' point of view and be less helpful from Google's.

And if a company can influence how the requirements are written in a way that favors its own technology, that could work to its advantage.

So in addition to worrying about the ethics, this is something that businesses entering this sector have to think about strategically.

Is it ethical to create self-driving cars whose decisions affect the lives of the driver and everyone around the vehicle?

Self-driving cars will improve everyone's lifestyle. Getting a single self-driving car on the road can smooth the traffic flow of at least 20 human-controlled vehicles around it, reducing congestion.

Rather than causing the delays humans frequently create by braking hard and slowing down abruptly, autonomous cars help keep traffic flowing steadily. As traffic is a big problem for cities worldwide, this is one of the many arguments in favor of self-driving cars.

Self-driving cars can also cut the time wasted on unproductive and stressful driving. Anyone who is not susceptible to motion sickness can use the time spent commuting to and from their destination for other activities.

According to the U.S. Census Bureau, the typical one-way commute is 25.4 minutes, but in major cities such as New York, Los Angeles, and San Francisco it can reach 30 minutes each way.

The number of extreme commuters, those who spend more than an hour each way between home and work, is also rising faster than ever. Productivity will improve with autonomous cars, as people can work, sleep, eat, or complete other tasks while the car drives.
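
To put that 25.4-minute figure in perspective, here is a rough back-of-the-envelope calculation. The two trips per day and roughly 250 working days per year are our own assumptions for illustration.

```python
# Rough estimate of commuting time that could be reclaimed each year,
# using the 25.4-minute average cited above. Trips per day and working
# days per year are assumptions, not Census Bureau figures.

avg_commute_minutes = 25.4
trips_per_day = 2
working_days_per_year = 250

hours_per_year = avg_commute_minutes * trips_per_day * working_days_per_year / 60
print(f"{hours_per_year:.0f} hours per year")  # roughly 212 hours
```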

What other relevant ethical problems do we see coming down the road?

Right now, as humans, we make these decisions instinctively, based on our psychology. And some of the time, we make them wrongly.

We make mistakes; we mishandle the wheel. But the gut decisions we make in the moment may be less selfish than the choices we would make if we were programming our vehicle in advance.

In class discussions, there is debate over whether autonomous cars should be taught selfishness: that is, whether the car should save the passengers and the driver rather than the pedestrians outside.

Frankly, my answer would be very different if I were programming the car for driving alone versus driving with my 5-year-old son in the vehicle.
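
If such a selfishness setting ever existed, it would amount to little more than a weighting knob in the car's harm calculation. The sketch below is entirely hypothetical; no automaker exposes a parameter like this, and whether anyone should is precisely the debate.

```python
# Hypothetical "passenger priority" weighting, shown only to illustrate how a
# single parameter could flip a car's choice between two maneuvers.

def weighted_harm(occupant_harm, outsider_harm, passenger_priority):
    """Harm score in which harm to occupants counts passenger_priority times as much."""
    return passenger_priority * occupant_harm + outsider_harm

# Maneuver A risks the occupants; maneuver B risks a pedestrian.
maneuvers = {
    "A (risk occupants)": {"occupant_harm": 4, "outsider_harm": 0},
    "B (risk pedestrian)": {"occupant_harm": 0, "outsider_harm": 5},
}

for priority in (1.0, 2.0):
    choice = min(
        maneuvers,
        key=lambda name: weighted_harm(
            maneuvers[name]["occupant_harm"],
            maneuvers[name]["outsider_harm"],
            priority,
        ),
    )
    print(f"passenger_priority={priority}: choose {choice}")

# priority 1.0 -> maneuver A (impartial: 4 < 5)
# priority 2.0 -> maneuver B (selfish: 8 > 5, so the risk shifts to the pedestrian)
```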

What if driverless cars were fed data from previous road collisions so they could decide how much space to give pedestrians? It could make sense to the system.

Suppose a particular spot in a neighborhood has historically seen a high incidence of crashes. Why not leave some extra space for pedestrians there?

The flip side is that pedestrians in areas with fewer recorded crashes would get smaller buffers. The algorithm would unwittingly penalize them, slightly increasing their chance of being struck while out for a stroll.
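
Here is a hypothetical sketch of what such a data-driven buffer could look like, and why it shortchanges pedestrians in areas with fewer recorded crashes. The function, constants, and numbers are all invented for illustration.

```python
# Invented illustration: lateral buffer scaled by local crash history.
# Pedestrians in locations with few recorded crashes end up with the
# smallest buffers, which is exactly the unintended penalty described above.

BASE_BUFFER_M = 1.0       # minimum lateral clearance in meters (made-up value)
EXTRA_PER_CRASH_M = 0.1   # extra clearance per recorded crash (made-up value)

def pedestrian_buffer(crashes_last_5_years):
    """Lateral clearance the car tries to keep, in meters."""
    return BASE_BUFFER_M + EXTRA_PER_CRASH_M * crashes_last_5_years

print(f"{pedestrian_buffer(12):.1f} m")  # high-crash intersection -> 2.2 m
print(f"{pedestrian_buffer(0):.1f} m")   # quiet street -> only the 1.0 m minimum
```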

It's almost a sure bet that machines will become better drivers than humans as self-driving technology progresses. One estimate says crashes could go down by 90 percent, saving many thousands of lives and billions in medical costs.

Keeping your hands on the steering wheel may eventually become the unethical choice, even if you don't trust autonomous cars.
