We don’t need arguments about right or wrong, we need systems that solve problems.
The difficulty of creating a calculus of social order is twofold: we don’t know how to turn our interactions into math, and we don’t know how to weigh the value of various outcomes of said calculus. This is true both in small ways, such as where people should sit in an office, and in big ways, like what our immigration policy should be.
So how do we proceed? With another ten thousand years of ass and grass and beer? Or do we try to build something a little more sophisticated?
One thing we will need is a way to value an individual’s needs and joys against the needs and joys of others. Despite thousands of years of philosophical musing on this question, we are no closer to a simple equation of human happiness. Maybe that’s because nobody has tried to build one, instead opting to sway others toward general principles of utility or deontology.
But with machine learning, it seems to me there is a path to a different kind of approach. Let’s imagine I train a model of me, so that it can not only identify me but also serve as my API: if you want to know what my needs are, you can query that model.
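To make the idea concrete, here is a minimal sketch of what a queryable "model of self" might look like. Everything here is invented for illustration: in practice the preferences would come from a trained ML model, not a hand-written dictionary, and the `SelfModel` class and its `query` method are hypothetical names, not an existing API.

```python
# Hypothetical sketch of a queryable "model of self".
# A dictionary-backed stub stands in for learned preferences.

class SelfModel:
    def __init__(self, name, preferences):
        self.name = name
        self._preferences = preferences  # stand-in for learned weights

    def query(self, topic):
        """Return a preference score in [-1, 1] for a topic,
        where -1 is strong aversion and 1 is strong desire."""
        return self._preferences.get(topic, 0.0)  # 0.0 = indifferent


# Anyone (or any system) can query my model instead of asking me:
me = SelfModel("alice", {"quiet_workspace": 0.9, "window_seat": 0.2})
print(me.query("quiet_workspace"))  # → 0.9
```

The point of the sketch is the interface, not the internals: once my needs are queryable, other systems can weigh them without ever interrupting me.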
I think with models of self like this, we can look at both “where you sit in an office” and “immigration policy” and see that both dilemmas could be solved.
So the thing about an office is, some people really don’t care where they sit, and some people really do. A seating algorithm should be able to query employee models to determine these needs, while also weighing group dynamics and the impact of certain individuals. For instance, some folks are just loud, and other folks can’t stand loud people. While humans making a seating chart might take a few of these characteristics into account, an ML system could weigh thousands or millions of factors to determine optimal seating.
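A toy version of that seating algorithm might look like the sketch below. It is deliberately reduced to two traits (loudness and noise sensitivity) and a brute-force search over a single row of seats; a real system would weigh far more factors and need a proper solver. The trait names and the `conflict` scoring function are my own illustrative assumptions.

```python
# Hypothetical seating sketch: query each employee's model for two
# traits, then pick the row ordering that minimizes total conflict
# between adjacent neighbors. Brute force is fine for three people.
from itertools import permutations

def conflict(a, b):
    # Conflict arises when a loud person sits next to a noise-sensitive one.
    return (a["loudness"] * b["noise_sensitivity"]
            + b["loudness"] * a["noise_sensitivity"])

def best_seating(models):
    # Seats form a row; neighbors are adjacent pairs in the ordering.
    names = list(models)
    return list(min(
        permutations(names),
        key=lambda order: sum(conflict(models[a], models[b])
                              for a, b in zip(order, order[1:]))))

# Stand-ins for queried employee models:
models = {
    "ana":  {"loudness": 0.9, "noise_sensitivity": 0.1},
    "ben":  {"loudness": 0.1, "noise_sensitivity": 0.9},
    "cruz": {"loudness": 0.5, "noise_sensitivity": 0.5},
}
print(best_seating(models))  # cruz ends up buffering ana and ben
```

Even this tiny example finds the non-obvious arrangement (the tolerant person in the middle), which is the kind of trade-off a human chart-maker juggles by intuition.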
Some people don’t like immigrants or people from other cultures, and some people love them. I am in the latter category, which is one of the main reasons I chose NYC as my home. But being wary of immigrants isn’t (necessarily) an indication of an underlying sociopathy. It may just be a desire to maintain an understandable comfort zone around you, your kin, and your culture.
Instead of strict or lax national immigration laws, if we had models of self we could query, perhaps we could have town-level immigration policies.
I know this sounds dystopian: maybe someday there will be towns or neighborhoods that reject the other because of their culture or place of origin. But if it opens up the world to greater and fairer interactions overall, maybe it’s a better world. There are many towns in the US that need immigrants to survive. With models of self, your AI could guide you to the right kind of community.
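That guidance step could be sketched as a simple compatibility ranking: compare a person’s queried preferences against each community’s aggregate profile and sort by similarity. The trait names, community profiles, and scoring formula below are all invented for illustration; a real system would derive them from actual models of self.

```python
# Hypothetical sketch of an AI matching a person to a community
# by comparing preference profiles. All traits and values are
# illustrative assumptions, not real data.

def similarity(person, community):
    # Compatibility score: 1 minus the mean absolute difference
    # across shared traits (1.0 = perfect match, 0.0 = opposite).
    traits = person.keys() & community.keys()
    return 1 - sum(abs(person[t] - community[t]) for t in traits) / len(traits)

def recommend(person, communities):
    # Rank community names by compatibility, best match first.
    return sorted(communities,
                  key=lambda c: similarity(person, communities[c]),
                  reverse=True)

person = {"openness_to_newcomers": 0.9, "density": 0.8}
communities = {
    "nyc":        {"openness_to_newcomers": 0.9, "density": 1.0},
    "small_town": {"openness_to_newcomers": 0.3, "density": 0.2},
}
print(recommend(person, communities))  # → ['nyc', 'small_town']
```

The matching runs in both directions: the same scores that steer an open person toward an open community would steer them away from an insular one, which is the sorting dynamic the next paragraph describes.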
If the insular communities become more insular, and the open communities become more open, the open communities will ultimately win. It might take ten thousand years, or two hundred years, but when we get there, it will be through the evolution of our society guided by our AI-advocated needs, and not another shot in the dark with laws and presidential decrees.
[Image: the D&D Player’s Handbook, which lets people create a simple model and game with it.]