The Easy Out of AI Utilitarianism

At first glance what the Partnership on AI is trying to do is entirely noble. They have many important players as partners, including the Google DeepMind team, the ACLU, academics, Accenture, Salesforce, NGOs, the works. They clearly want to facilitate deep discussion and consideration around AI and understand what its social disruption might entail. But…I am going to poke a little at the first tenet of the organization:

https://www.partnershiponai.org/tenets/

1. We will seek to ensure that AI technologies benefit and empower as many people as possible.

My issue here is the clause “as many people as possible.” I think this is just a modern PR rephrasing of Utilitarianism.

Utilitarianism is an ethical and philosophical theory that states that the best action is the one that maximizes utility, which is usually defined as that which produces the greatest well-being of the greatest number of people, and in some cases, sentient animals.

The problem here is that Bentham and others think the sum is the important thing, that we can just “add up” experience to determine the morality of actions or inactions. The fact that we can’t do that accurately in the real world usually isn’t important to Utilitarians, because the whole point is that this is about the utility of our actions, and the greatest good for the greatest number is just a simple way to point our moral compass. It doesn’t need to be exact summing.
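To make the handwave concrete, here is roughly the decision rule Utilitarianism implies, sketched in Python. This is my illustration, not anyone's actual system, and `utility_for` is a hypothetical function that nobody knows how to write honestly, which is exactly the problem.

```python
# A minimal sketch of the utilitarian calculus. It assumes (and this is
# the assumption I'm questioning) that each person's utility under an
# action can be measured as a single number by some utility_for function.

def total_utility(action, population, utility_for):
    # Bentham's move: the moral worth of an action is just the sum
    # of its effects on everyone it touches.
    return sum(utility_for(person, action) for person in population)

def best_action(actions, population, utility_for):
    # "The greatest good for the greatest number": whichever action
    # maximizes the sum wins. Nothing else enters into it.
    return max(actions, key=lambda a: total_utility(a, population, utility_for))
```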

So, full disclosure: I considered myself a Utilitarian for many years. It seemed simple, effective and practical. Then I realized that I was handwaving on the whole “sum” issue. I didn’t have any way to collect or consider the actual data around the impact of my actions, and the false assumptions I made could lead me to bad choices, or at best to unknown, amoral outcomes.

Utilitarianism is hard to argue against since it sounds so logical and reasonable. Even Spock in Star Trek espoused it with his “The needs of the many outweigh the needs of the few.” It makes us feel good. It’s an easy out.

But should we base AI morality on the concept of the greatest good for the most people?

See, a human can’t truly know the long-term impact of their decisions, because there are too many variables. But an AI built with Utilitarian intent will try to build models that allow these connections to be made. And while it may seem fair and democratic to try to have AI benefit the most people, I think this tenet is more of a way to justify making easy calculations about large groups of people, and placing people in buckets for classification.

Here’s why: data about biological entities is notoriously messy. What you had for lunch can impact your heart rate. Seeing an old friend might make you both happy and sad.

But suppose you want to build models of our internal states, which I assume must be a goal if your goal is to

benefit and empower as many people as possible.

Then you are going to have to pick and choose what you record of our states, and try to make guesses about their meaning. And reducing a complex human to a model with even millions of variables is a gross simplification that will lead to massive mistakes in predicting what people want or need. Your models will be bad, your guesses worse, and when you try to “sum” those values you are going to make even bigger mistakes.
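A toy simulation shows why the summing step makes things worse rather than better. Assume (my assumption, purely for illustration) that a model's per-person utility estimates carry a tiny systematic bias, say from recording the wrong features. Random noise partially cancels when you sum; bias does not. It scales with the size of the population.

```python
import random

random.seed(0)
population = 1_000_000
bias = 0.05   # tiny systematic error per person, from a bad model
noise = 1.0   # random per-person measurement error

true_total = 0.0
estimated_total = 0.0
for _ in range(population):
    true_utility = random.gauss(0, 1)
    estimate = true_utility + bias + random.gauss(0, noise)
    true_total += true_utility
    estimated_total += estimate

# The gap is roughly bias * population: about 50,000 units of phantom
# "greatest good" conjured from a 0.05-per-person modeling error.
print(f"error in the summed utility: {estimated_total - true_total:,.0f}")
```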

If, instead, what they mean is to empower people economically, I would say we have a far bigger problem. First, the economic system is terribly unfair to start with, so protecting the status quo is a non-starter. Right now our hyper-capitalism is the opposite of a practical system that protects its members for utility. We let people die if they don’t pay private companies for health insurance and then get something more complicated than a hangnail.

So you can’t with a straight face tell me these companies are trying to solve world inequality. Accenture? Please.

The Myth of AI Utilitarianism is that all this will somehow get worked out with sheer processing power or better neural network design. My guess is that this is entirely the wrong direction to go with AI.

I certainly get where they are coming from: they are trying to signal that AI should not just benefit the rich, and so on. But instead, what if their first tenet was:

1. We will seek to ensure that AI technologies benefit and empower every individual equally.

Now there is no magic sum to consider. We only focus on the health and well-being of each person. And we avoid making bad choices based on bad data. Because even if an AI tried to tell us that we should attack China with a nuke in order to save the most lives, we should not ever listen to that advice. Because we are not people, we are persons. We are Singletons. And if an AI isn’t designed to cater to us as individuals and is instead trying to fit us into a bigger commercial model, then its base motives will always be suspect.
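Here is one possible reading of that tenet as a decision rule, sketched below. This is my own hypothetical, borrowing a maximin rule from Rawls rather than anything the Partnership specifies: no individual may be sacrificed to the sum, and ties are judged by the worst-treated person, not the total.

```python
def admissible(action, population, utility_for, floor=0.0):
    # No individual may be pushed below the floor "for the greater good".
    return all(utility_for(p, action) >= floor for p in population)

def choose(actions, population, utility_for):
    candidates = [a for a in actions if admissible(a, population, utility_for)]
    if not candidates:
        return None  # no machine decision: hand it back to a human
    # Judge each candidate by its worst-treated individual, not its sum.
    return max(candidates,
               key=lambda a: min(utility_for(p, a) for p in population))
```

Note the `None` branch: when every option hurts someone, the machine has no business choosing.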

The trolley problem for AI is solved by never letting an AI make that decision. Behind all decisions of our tools must stand a single human. No autonomous vehicles, for instance, without a single responsible human behind the wheel.

Trust No Robot.

Painting by Simon Stålenhag

3 comments

  1. Hardly any technology has ever satisfied your equal benefit requirement. Not antibiotics, nor semiconductors, nor the internet.

    There is a terrible asymmetry between evident and possible consequences.

    When high speed internet became generally available, its benefits were impossible to predict whereas its consequences, lost livelihoods and lost privacy, were foreseen. How much poorer would the world be today if we had allowed our fears to trump our curiosity?

    When Trump imposed steel tariffs, the few thousand jobs he saved were reasonably evident. But the millions of jobs that were imperilled due to increased costs and lost competitiveness were abstract, thus enabling measures that had small evident benefits despite massive, merely possible, costs.

    The forces that push humanity forward tend to be massive UNforeseeable consequences that derive from technologies and policies that increase human freedom, such as free trade, freedom of movement, and the internet. They have propelled us into an era of unparalleled prosperity for all. Twenty years ago the extreme poverty rate was between 28 and 29%; today it is between 8 and 9%. One in five humans (arguably more) no longer lives in extreme poverty thanks to the unforeseen consequences of the possible.

    This partnership’s tenets aren’t meant to be prescriptive to the point of implying a model of each and every living thing. They are meant as an acknowledgement that things can (and surely will) go wrong as we unleash the power of this new possibility: a world in which everything is linked, information is shared, and humans have the _choice_ to participate, and to use autonomous agents to make their lives better. It is because we have this choice, and because we have thought in advance about what _can_ go wrong, that we can be supremely confident that this new world of possibility will be at least as enriching as the previous one was.

    • Wow, this should be a blog post on its own. I think thinking in advance about what can go wrong is exactly what we should be doing, and my overall point is that trusting capitalism on its own, without considering the singular importance of our humanity, is repeating the mistakes of the past.

      Seriously Jason, if you’d like to write for the blog I would love to have your insights shared!
