Toward a Rebranded Categorical Imperative

Yep, that’s Kant and his big head.

Many of you are familiar with the Categorical Imperative.

According to Kant, rational beings occupy a special place in creation, and morality can be summed up in an imperative, or ultimate commandment of reason, from which all duties and obligations derive. He defined an imperative as any proposition declaring a certain action (or inaction) to be necessary.

To Kant, there was only one possible Imperative that applied everywhere:

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.[1]

In effect, he took the Golden Rule, which says roughly to treat others as you would have them treat you, and turned it into a rule about making rules: rules concerning how all people are treated at all times.

Kant gets bogged down in whether the Imperative is actually true and provable through argument. But I am only interested in whether the Categorical Imperative is useful or not.

And my conclusion is, it needs a rewrite. Rephrasing. Rebranding.

Also some simplification. Since most people don't live their lives by formulating generalized rules and laws of nature, maybe we can simplify the metaphor. Most people also don't want to read impenetrable German philosophy to learn how to live better lives.

What if, instead of abstract rules, the Categorical Imperative focused on the immediate outcomes of our actions? Instead of judging actions by the impact they would have as universal laws, what if we focused on the impact they actually have?

People are terrible at predicting the outcomes of their actions. We are bad at it in both the short term and the long term. Part of the problem is that the world is so complicated that we cannot actually determine the impact of our actions.

But what if we had a person following us around all the time, watching our actions, noting our behaviors, providing us with feedback?

I know, that sounds a lot like a wife or a husband, or a therapist.

Instead of a person, maybe we have an algorithm watch us and provide feedback?

The problem is that for this to work, you would need an AI that not only understands, as well as a human does, which of our wants and needs are in our best interests, but that can also model the entire world as well as or better than we can. Otherwise you have built a dumb Oracle. The Greek kind, not the database kind.

So, for now, assuming we can't all hire people to watch us, we can't all get married, and building a pocket morality AI isn't going to happen soon (and probably never should), how do we get better at knowing what the effects of our actions are in the moral sense?

Oh, wait. We have ourselves.

Exercise: enter a contemplative state.

Consider a simple moral decision you have made in the last week. By moral I mean a decision that impacted other people.

Did it have the result you were seeking?

How do you know? Is that your gut talking or do you have evidence of any impact?

How will that answer change over time?

If we can answer these questions, I think we can start to consider ways to build better moral models. Or, I should say, if we can't answer them, then we are truly stuck.
