At first glance, what the Partnership on AI is trying to do is entirely noble. They have many important players as partners, including the Google DeepMind team, the ACLU, academics, Accenture, Salesforce, NGOs, the works. They clearly want to facilitate deep discussion and consideration around AI and understand what its social disruption might entail. But… I am going to poke a little at the first tenet of the organization:
1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
My issue here is the clause “as many people as possible.” I think this is just a modern PR rephrasing of Utilitarianism.
Utilitarianism is an ethical and philosophical theory that states that the best action is the one that maximizes utility, which is usually defined as that which produces the greatest well-being of the greatest number of people, and in some cases, sentient animals.
The problem here is that Bentham and others think the sum is the important thing, that we can just “add up” experience to determine the morality of actions or inactions. The fact that we can’t do that accurately in the real world usually isn’t important to Utilitarians, because the whole point is the utility of our actions, and “the greatest good for the greatest number” is just a simple way to point our moral compass. It doesn’t need to be an exact sum.
So, full disclosure: I considered myself a Utilitarian for many years. It seemed simple, effective, and practical. Then I realized that I was handwaving on the whole “sum” issue, that I didn’t have any way to collect or consider the actual data around the impact of my actions, and that the false assumptions I made could potentially lead me to bad choices. At best, to unknown, amoral outcomes.
Utilitarianism is hard to argue against, since it sounds so logical and reasonable. Even Spock in Star Trek espoused it with his “The needs of the many outweigh the needs of the few.” It makes us feel good. It’s an easy out.
But should we base AI morality on the concept of the greatest good for the most people?
See, a human can’t truly know the long-term impact of their decisions, because there are too many variables. But an AI built with Utilitarian intent will try to build models that allow these connections to be made. And while it may seem fair and democratic to try to have AI benefit the most people, I think this tenet is more of a way to justify making easy calculations about large groups of people, and placing people in buckets for classification.
Here’s why: data about biological entities is notoriously messy. What you had for lunch can impact your heart rate. Seeing an old friend might make you both happy and sad.
But to build models of our internal states (which I assume must be a goal, if your goal is to “benefit and empower as many people as possible”), you are going to have to pick and choose what you record of our states, and try to make guesses about their meaning. And reducing a complex human to a model with even millions of variables is a gross simplification that will lead to massive mistakes in predicting what people want or need. Your models will be bad, your guesses worse, and when you try to “sum” those values you are going to make even bigger mistakes.
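To make the summing problem concrete, here is a minimal sketch (every number in it, the population size, the bias, the noise level, is invented purely for illustration). Even when a model of each individual is only slightly and systematically wrong, the per-person error stays small while the error in the summed “total utility” grows in proportion to the size of the group:

```python
import random

random.seed(0)

N = 100_000  # hypothetical population size (an assumption for illustration)
true_utility = [random.gauss(0.0, 1.0) for _ in range(N)]

# A model of each person that is only slightly wrong: a small systematic
# bias plus per-person noise (both values are arbitrary assumptions).
bias, noise = 0.05, 0.5
modeled = [u + bias + random.gauss(0.0, noise) for u in true_utility]

# Per-person, the model looks fine: the average error is a fraction of
# one standard deviation of true utility...
avg_error = sum(abs(m - u) for m, u in zip(modeled, true_utility)) / N

# ...but the summed total the Utilitarian calculus relies on drifts by
# roughly bias * N, because the shared bias never cancels out.
total_error = sum(modeled) - sum(true_utility)

print(f"average per-person error: {avg_error:.3f}")
print(f"error in the summed total: {total_error:,.0f}")
```

If the per-person errors were unbiased they would partially cancel in the sum, but any shared bias in how the model buckets people scales straight up with the number of people being summed over.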
If, instead, what they mean is to empower people economically, I would say we have a far bigger problem. First, the economic system is terribly unfair to start with, so protecting the status quo is a non-starter. Right now our hyper-capitalism is the opposite of a practical system that protects its members for their utility. We let people die if they get something more complicated than a hangnail and haven’t paid a private company for health insurance.
So you can’t with a straight face tell me these companies are trying to solve world inequality. Accenture? Please.
The Myth of AI Utilitarianism is that all this will somehow get worked out with sheer processing power or better neural network design. My guess is that this is entirely the wrong direction for AI.
I certainly get where they are coming from; they are trying to signal that AI should not just benefit the rich, and so on. But instead, what if their first tenet was:
1. We will seek to ensure that AI technologies benefit and empower every individual equally.
Now there is no magic sum to consider. We focus only on the health and well-being of each person, and we avoid making bad choices based on bad data. Because even if an AI tried to tell us that we should attack China with a nuke in order to save the most lives, we should never listen to that advice. Because we are not people; we are persons. We are Singletons. And if an AI isn’t designed to cater to us as individuals, and is instead trying to fit us into a bigger commercial model, then its base motives will always be suspect.
The trolley problem for AI is solved by never letting an AI make that decision. Behind all decisions of our tools must stand a single human. No autonomous vehicles, for instance, without a single responsible human behind the wheel.
Trust No Robot.
Painting by Simon Stålenhag