Germany's rules for autonomous vehicles can be read about here.
These rules are a dangerous fantasy, created by well-intentioned ethicists but impossible to implement. They show clearly why autonomous vehicles face a non-trivial, perhaps insurmountable, implementation problem in the real world.
That means that when an accident is unavoidable, the software must choose whichever action will hurt people the least, even if that means destroying property or hitting animals in the road, a transport ministry statement showed.
The software may not decide on its course of action based on the age, sex or physical condition of any people involved.
First, choosing whichever action will hurt people the least is something even humans cannot do. It’s the easy out of AI Utilitarianism I have written about before. Glass and steel flying apart at speed is the definition of a chaotic system. Humans cannot predict the outcome of an accident, nor do they have perfect knowledge of the environment. Accidents, by definition, happen faster than such decisions can possibly be made consciously; conscious awareness lags experience by a good second. We might pretend afterward that we made decisions during the accident, but in a real one your automatic reflexes are not weighing anyone’s gender identity.
Second, are you telling me that a car, made by fallible engineers, is going to be able to determine the age, gender or physical condition of the humans around it while traveling down the highway at 65 mph?
To say a car won’t make a decision based on a subject’s age, for instance, means you need to train your algorithms on age to ensure they ignore it. Otherwise, and this is the point, with a neural network you will never know why its decisions are made unless you control the training sets. And you cannot control the world, which is where people and cars exist.
Let’s say autonomous cars learn that driving near older people is more or less dangerous, even though they don’t know the people are older. Did they violate this rule? What if they learn, through real-world experience, that certain races drive better or worse? Is that an unethical conclusion? How would we know? Machine-learning algorithms don’t talk, explain their actions, or follow rules like this.
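The proxy problem can be made concrete with a minimal sketch. Everything below is invented for illustration (the feature names, the numbers, the toy "model"): the point is that even when age is deliberately excluded from a model's inputs, a correlated feature lets it behave exactly as if it knew age.

```python
# Sketch: a model never sees "age", yet its decisions split by age anyway,
# because it is trained on a feature that correlates with age.
import random

random.seed(0)

# Synthetic pedestrians: age is hidden from the model, but walking speed
# (something a perception system *can* measure) correlates with age.
people = []
for _ in range(1000):
    age = random.randint(18, 90)
    speed = max(0.5, 2.0 - 0.015 * age + random.gauss(0, 0.1))  # m/s
    people.append((age, speed))

# The simplest possible "trained model": a threshold on walking speed
# that predicts whether a pedestrian will clear the crosswalk in time.
threshold = sum(s for _, s in people) / len(people)

def clears_in_time(speed):
    return speed > threshold  # note: age never appears here

# Audit the model's decisions against the hidden attribute.
young = [clears_in_time(s) for a, s in people if a < 40]
old = [clears_in_time(s) for a, s in people if a >= 70]
young_rate = sum(young) / len(young)
old_rate = sum(old) / len(old)
print(f"predicted-to-clear rate, under 40: {young_rate:.2f}")
print(f"predicted-to-clear rate, over 70:  {old_rate:.2f}")
# The two rates diverge sharply: the model "decides based on age"
# without ever being shown age, because speed is a proxy for it.
```

Auditing for this kind of proxy requires knowing the hidden attribute at evaluation time, which is exactly what the German rules assume away: you cannot certify that a network ignores age unless you can measure age in its training and test data.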
There is no example of an entity besides a Divine one that would have a shot at knowing the world well enough to follow rules like this, and there is no way a Chevy is going to make validated ethical choices.
It’s why the answer to the trolley problem for humans is to not allow people to stand on train tracks. And the solution to autonomous vehicles is to pass laws that require a human being to be behind the wheel at all times.
Now, on the other hand, if this list of rules is a way for German ethicists to troll German auto manufacturers with impossible rules so they can keep driving in the fast lane on the Autobahn: bravo.
Painting by Simon Stålenhag