The Confusion

Most human challenges are caused by language category errors. These occur when we define a thing or idea in the wrong category.

Words themselves create fuzzy boundaries for our classification categories. So, as long as we use words, we are in constant danger of making classification errors.

Examples: Science classified as truth, when it is really a process used to discover truth. Religion classified as truth, when it is closer to a process for social control and for throttling change.

Robots classified as Beings.

This is the most dangerous classification error facing our species.

To be clear, for the purposes of this discussion I will consider a robot to be a physical tool controlled by a computer system. The computer system and the physical form of the robot may confuse us into thinking they are Beings.

This confusion is the heart of the threat we face.

To understand why robots (and by robots here I mean all computer programs) cannot and should not be considered Beings, we must understand what it means to be human, how machine learning works, and some of the challenges our species faces in the coming decades and centuries. I will present models, arguments, and exercises to assist in this journey.

This will take some time.

If you are in favor of robots someday being considered Beings, presumably because you are a fan of schlocky sci-fi, I will also propose a path toward that end.

Exercise: enter a contemplative state.

Imagine a simple non-computer based tool in your home. It could be a broom, a kitchen utensil, shampoo, anything without power or microchips in it.

Notice how thinking about that tool creates emotions and thoughts within you: happiness, satisfaction, frustration. Imagine using the tool in various scenarios.

Now imagine what is going on inside the tool while it is being used.

Now think of a human you are close to. Imagine interactions with them and the feelings those generate.

Does this person have a model of you inside their brain that also creates emotional responses for them?


  1. […] The UX folks at Google seem to think the goal is to not have to adapt to a machine, not realizing that our adaptation is what defines something as a tool or a person. Otherwise you create potential moral confusion. […]