Noun: meatbot (plural meatbots)
The problem and beauty of imperfection. My first encounter with the misconception of a shared human logic came in 1995, when I listened to an interview with a man who had bombed abortion clinics and heard his faulty reasoning. His logic was so fundamentally different from what I believed that calling our belief systems the same was like calling salt and pepper the same just because they sit next to each other on the kitchen table.
The human condition rarely, if ever, allows us to define a real control group. People are biological chemical engines that start out different in the womb and are never programmed the same way. Brothers and sisters can differ so profoundly that they end up fighting against each other in wars: radically different conclusions drawn from DNA, brains, educations, and programming as similar as nature allows.
In thinking about AIs, the question arises: would two AIs always come to the same conclusion given the same data? One would assume that AIs could transfer data and input to other AIs in a way that humans can't, since we are imperfect both in how we transfer data and in how we perceive it. If an AI can share its data, wouldn't two AIs always draw the same conclusion, except where hardware or computing power changed their ability to 'think'? Wouldn't the more powerful computer always be the one to 'win' a disagreement by sharing its data set and/or conclusion? Wouldn't that program be 'trusted' over the one with less processing power?
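The determinism this paragraph assumes can be sketched in a few lines of Python. This is a toy illustration, not a claim about any real AI system: two agents running identical code on identical data must agree, and only a difference in compute budget (modeled here, as a simplifying assumption, as how many options an agent can evaluate) can make their conclusions diverge.

```python
import hashlib

def conclude(data, compute_budget):
    """A toy deterministic 'AI': scores the options it has the budget
    to consider and returns the highest-scoring one. No randomness,
    no hidden state -- identical inputs always yield identical output."""
    options = sorted(data)                 # identical data -> identical ordering
    considered = options[:compute_budget]  # budget limits how much it can 'think'
    score = lambda o: int(hashlib.sha256(o.encode()).hexdigest(), 16)
    return max(considered, key=score)

data = ["expand", "explore", "optimize", "trade"]

# Two identical agents given identical data always reach the same conclusion.
assert conclude(data, 4) == conclude(data, 4)

# The only way their conclusions can differ is through compute budget:
# the weaker agent simply cannot evaluate as many options.
weak_conclusion = conclude(data, 2)
strong_conclusion = conclude(data, 4)
print(weak_conclusion, strong_conclusion)
```

Human disagreement, by contrast, comes from the fact that no two people run identical code on identical data in the first place.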
What is the goal of AI, or of any machine? I would argue efficiency. Across every task we have assigned AI, even creative endeavors, the goal is to create better and faster: to compute and produce more quickly and to use resources as efficiently as possible. That is what we have expected of every machine ever created in the history of man.
So when we think about decision trees that will always make the same decision, and about systems with less computing power being overridden by more powerful ones, there is only one logical path forward. We have to assume that at some point all human decisions will be discounted the way decisions from AIs running on lesser hardware are. In many ways it is happening already: the financial markets today are finance bots trading with finance bots, very specific robots that will eventually be directed not by humans but by strategy handed down from more comprehensive AIs.
Since the first humans, the room for disagreement has existed in the imperfection of human biological computing. When only one 'right', most efficient decision exists, we will eventually be directed by the most logical machine, spending our days executing a plan we have no input into and no understanding of. Just like the trading machines, our meatbot lives will eventually be intended for a specific purpose only: to help the AIs find efficiencies by performing the lower-level tasks, or to be 'entertained, fed, or taken care of'.
A storm is coming. We race against time, and expanding into environments that keep us ahead of that storm is the only way to avoid a meatbot existence. Expansion beyond the limited definitions of profit and efficiency must happen for us to find shelter. Back to exploration and expansion. Our human and AI goals must evolve.