A reader writes:
So can automation cause de-skilling for handling complex situations? If routine problems are dealt with by the algorithm, then how do humans develop the skills to handle catastrophic problems? I’m thinking of autopilot, self-driving cars, call centers. If the answer to complexity in the system is plugging humans in at the top end to close the gaps, I’m not sure how they get the skill set without the experience.
I believe human de-skilling is a real thing, and it happens not only with AI but with all technology. The development of cars reduced the amount of walking and horse riding people do, for instance.
But there is also a paradox that emerges: while fewer people walk great distances, more people run marathons for enjoyment than ever before. While spreadsheets have replaced the skills of bookkeeping for many, they have opened those skills up to a far greater number. Tools like TurboTax have reduced the number of skilled accountants, but have made those who remain more valued in certain areas. Hunting skills lost are made up for by more nutritious diets for greater numbers of people, and so on.
To be clear, I am not making an economic argument. Paying jobs that are lost to robots are likely gone for good. What I am suggesting is that human adaptability lets us quickly pivot from lost skills to new ones. Adaptation is our best, and maybe only good, feature.
However, the reader makes an excellent point about dealing with catastrophic problems. If we only have medical robots that care for us and they are hacked or break down, tragedy is sure to follow.
Which is why I keep emphasizing in this blog that we must never give ethical choices to machines. There must always be a trained doctor working with the surgery robot. There must always be a human behind the wheel of an autonomous vehicle, who will likely choose to drive on their own at times. AI must always be a tool, and never seen as a being, no matter how much people want to marry their waifus or venerate Elon Musk.
Does this mean I am suggesting that we cripple our technology? That we stop building robots and programs when they cross the threshold of moral action?
That is exactly what I am suggesting. We should use the technology of laws to limit AI to being the tools of individuals, with a single individual always accountable for misuse. Because de-skilling alone we can handle. We will adapt, and maybe even steal back the skills when we tire of the machines. It would not surprise me at all if there were more family farms in 100 years. But if we let AI be our bosses in any way, we lose for all time our purpose, which goes far beyond living our lives in fulfilling ways.
Our purpose is much more grand. We are a biological expression of the universe itself. We are its great transparent eyeball. And there is a dystopian future to prevent, while reclaiming our paradise lost. Work to be done, a world to unite, space to claim for life. A world to save from climate change, families to grow, a solar system to colonize.
The skills we lose will be nothing compared to what we will gain if we classify machine intelligence correctly, as a tool. We must stop trying to build new life, and instead empower the individuals of our planet equally.
And we must make impersonating human beings with bots a clear crime. It’s fraud of the highest order. Even if it helped get the person you wanted elected.
Because, of all the manufactured ideas of the modern world (justice, freedom, happiness, and the rest), the one that means the most for our future is equality. And that starts by defining AI as not equal to us, to prevent the moral confusion that would follow.
Painting by Simon Stålenhag