<img height="1" width="1" src="https://www.facebook.com/tr?id=1599948400306155&amp;ev=PageView &amp;noscript=1">
(EU) +49-711-2195-420

(US) 1-866-468-5210

   FAQ

  

Blog

Morals of AI: The Hidden Issues of Vehicle Autonomy


Autonomous driving promises to make mobility safer, faster, and cleaner for all of us. Cars using V2X (vehicle-to-everything) communication will be able to connect not just to each other, but also to road infrastructure and practically any other relevant entity. Experts believe this network of connected vehicles will manage traffic more efficiently, leading to reduced driving times and far fewer accidents.

Yet some downsides of autonomous technology are often overlooked. In fact, the use of AI in self-driving cars could introduce unintended consequences and entirely new types of problems that we’re just starting to learn about.

Related reading: Challenges of Developing Autonomous Vehicles

The moral conundrum of the trolley problem

A better-known issue in teaching self-driving algorithms is the trolley problem (in any of its variations). In essence, this thought experiment explores a very important moral decision with practical implications: choosing whom a car should try to prioritize in an accident.

More specifically, should the car swerve to save the lives of two pedestrians crossing the road, killing another one on the other side? And what if those two pedestrians are an elderly couple, while the pedestrian who would be killed is a 10-year-old? Should the algorithm favour law-abiding citizens over jaywalkers? And should it consider socioeconomic factors when making the decision?

According to the Moral Machine experiment, whose findings were published by MIT researchers, the problem is further complicated by cultural differences. The researchers found, for instance, that respondents from Asia and the Middle East were more inclined to save elderly people than younger pedestrians, and wealthy individuals over their less fortunate counterparts.

Developers of autonomous driving algorithms already have to face (and answer) these questions, essentially determining not only how safe self-driving cars will be, but also from whose point of view that safety will be guaranteed.
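To make the dilemma concrete: a motion planner forced to choose between bad outcomes is, in effect, minimizing a cost function, and the weights in that function are exactly where the moral judgment hides. The toy sketch below is purely illustrative; the Outcome fields and weight values are invented for this example and do not come from any real system.

```python
# Hypothetical sketch: any swerve/no-swerve policy implicitly encodes a
# value judgment. The weights below are illustrative placeholders, not
# parameters from any production system.
from dataclasses import dataclass

@dataclass
class Outcome:
    pedestrians_harmed: int
    occupants_harmed: int
    breaks_traffic_law: bool  # e.g. swerving across a solid line

def outcome_cost(o: Outcome,
                 w_pedestrian: float = 1.0,
                 w_occupant: float = 1.0,
                 w_law: float = 0.1) -> float:
    """Lower cost = preferred outcome. Choosing these weights *is*
    the moral decision the trolley problem asks about."""
    return (w_pedestrian * o.pedestrians_harmed
            + w_occupant * o.occupants_harmed
            + w_law * float(o.breaks_traffic_law))

# Two candidate maneuvers in an unavoidable-collision scenario:
stay = Outcome(pedestrians_harmed=2, occupants_harmed=0, breaks_traffic_law=False)
swerve = Outcome(pedestrians_harmed=1, occupants_harmed=0, breaks_traffic_law=True)

best = min([stay, swerve], key=outcome_cost)
print(best)  # with these weights, the planner chooses to swerve
```

Whoever sets w_pedestrian, w_occupant, and w_law is answering the trolley problem, whether they intend to or not.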

AI becoming too smart for its own good

If self-driving cars give tomorrow's passengers effortless mobility, more and more people may decide to move to more affordable real estate in the suburbs. Why not move further away from work when you can simply work or even sleep during your morning commute?

This phenomenon, however, could have unplanned consequences. More vehicles would mean more driving, contributing to the already severe climate catastrophe we are facing. Also, those self-driving cars would need to park somewhere while waiting for their passengers to get off work in the afternoon.

Due to the limited number of spaces, parking in large cities tends to be a difficult and costly affair. A recent paper suggests that autonomous cars programmed to optimize for economical operation would therefore just cruise around empty rather than park, or worse: coordinate with each other to create congestion.

From the perspective of an AI algorithm, that seems perfectly rational: driving at slow speeds is far cheaper for an autonomous vehicle, so why not create the circumstances where hundreds of cars can optimize for slow travel as they wait?
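A rough back-of-the-envelope comparison shows why cruising can beat parking from the algorithm's point of view. All figures below (the parking rate and per-kilometre energy cost) are invented placeholders for illustration; real numbers vary widely by city and vehicle.

```python
# Back-of-the-envelope comparison of parking vs. empty cruising, in the
# spirit of the paper cited above. All costs are invented placeholders.
def parking_cost(hours: float, rate_per_hour: float = 4.00) -> float:
    return hours * rate_per_hour

def cruising_cost(hours: float, speed_kmh: float,
                  energy_cost_per_km: float = 0.05) -> float:
    # Slower cruising covers less distance per hour, so it costs less.
    return hours * speed_kmh * energy_cost_per_km

workday = 8.0  # hours spent waiting for the passenger
print(parking_cost(workday))                 # 32.0
print(cruising_cost(workday, speed_kmh=40))  # 16.0
print(cruising_cost(workday, speed_kmh=10))  #  4.0  <- slow traffic "pays"
```

With these placeholder numbers, eight hours of slow cruising costs a fraction of eight hours of parking, so a cost-minimizing AI would happily keep the car moving, and moving slowly.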

Related reading: The Unseen Tech Powering Self-driving Cars

Both automotive developers and policymakers will need to take this “ruthless rationality” of artificial intelligence into consideration, adjusting the operating rules of tomorrow’s vehicles to steer AI toward better decisions.

Racist bias in self-driving algorithms

Machine learning algorithms are trained on vast amounts of input data. Simulation is a widely used technique for teaching self-driving algorithms, enabling them to confidently handle millions of traffic situations. The quality of that input data is therefore crucial.

And herein lies another problem that has so far been overlooked: according to a 2018 article, the engineers developing these algorithms, themselves mostly Asian or Caucasian, are creating simulated environments in their own image. That is, with no pedestrians of colour roaming the virtual streets. The result? Most of today’s self-driving software doesn’t fare very well at detecting dark-skinned persons in traffic, making the technology far more dangerous for minority groups.
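One way to catch this kind of bias before it reaches the road is to break detection metrics down by demographic group rather than reporting a single aggregate score. The sketch below is a hypothetical fairness check; the sample format, group labels, and predict() function are assumptions made for illustration, not part of any particular self-driving stack.

```python
# Minimal sketch of a fairness check for a pedestrian detector:
# compare detection recall across demographic groups in a labeled
# test set. The data format and predict() function are hypothetical.
from collections import defaultdict

def recall_by_group(samples, predict):
    """samples: iterable of (image, group_label, has_pedestrian).
    predict(image) -> bool. Returns detection recall per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group, has_pedestrian in samples:
        if not has_pedestrian:
            continue  # recall only counts images containing pedestrians
        totals[group] += 1
        if predict(image):
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# A large gap between groups, e.g. {'lighter': 0.95, 'darker': 0.88},
# would flag exactly the kind of bias the 2018 article describes.
```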

This underrepresentation of racial diversity in simulation software is just an example of how developers of autonomous vehicles need to constantly find new ways to think outside the box. They need to question their assumptions and decisions, because when built into self-driving cars, any overlooked detail or bias may lead to unforeseen consequences once the technology hits the road.

Smart tooling may not be the silver bullet solution to these challenges, but it can definitely help carmakers develop high-quality automotive technology faster. Find out how codeBeamer ALM can help your automotive development efforts, and reach out to us with your questions!
