Life isn’t all that great, if you’re a Statutory Ape



Play the song above while you read this to reveal its secret message!

A couple of things this blog is NOT about, in case you got here via Bing.

First – this is NOT a blog about Julianne Malveaux calling President Trump an orangutan.

“At the end of the day, the goal has to be to get rid of the orange orangutan.” 

-Julianne Malveaux

Julianne – just be better. You know the story about wrestling with pigs. You just get dirty and the pig likes it.

Second – this is NOT a blog about laws regulating the ownership of apes in various states.

Go here for that


Note, if you’re really into owning an ape, you can make your way to Alabama, Delaware, Idaho, Indiana, Kansas, Michigan, Mississippi, Nebraska, Nevada, New Mexico, North Carolina, North Dakota, Pennsylvania, Rhode Island, Tennessee, Texas, West Virginia, Wisconsin or Wyoming and have a backyard full of great apes!  Asshole.

No – this is a blog about Elon Musk and his dire warnings.


As reported here (and many other places), Elon Musk has determined that AI and computers are going to be so smart that we will look like monkeys next to them. When that happens, well… let’s hear it in his own words:

“Can a chimpanzee really understand humans? Not really,” Musk said.

“We just seem like strange aliens. They mostly just care about other chimpanzees. And this will be how it is, more or less.”

“The intelligence gap [between humans and AI] will probably be much, much greater [than between apes and humans], and it could easily get to the point where things get out of humans’ control.”

“What do you do with a situation like that? I’m not sure. I hope they’re nice,”

“People underestimate the capability of AI.”

“They sort of think it’s like a smart human, but it’s going to be much more than that. It’ll be much smarter than the smartest human.”

“The first thing you should assume is that we are very dumb, and that we can definitely make things smarter than ourselves,”

-Elon Musk

In other interviews he has noted that the “unfortunate result of the growing power of digital intelligence could lead to humanity being herded into small, zoo-like swaths of the globe, an existence that would more closely resemble what has happened to monkeys, which have been stripped of their natural habitat by vastly more intelligent primates”.

So his point seems to be: AI is going to take over whether we like it or not. “Hope they are nice”. Musk has started a new company (Neuralink) whose purpose is to augment humans with AI by creating a high-bandwidth interface to the brain, so that humans can remain on top as long as possible. I won’t (in this blog) go into that. Let’s focus on the core arguments he uses to convince himself, investors and everyone else that it is inevitable that AI will become emergent, potentially vicious and an existential threat to humanity.

His argument seems to go like this:

Premise 1: “We are very dumb”

Premise 2: “We can definitely make things smarter than ourselves”

Conclusion 1: We can make a super intelligent AI

Premise 3: This super intelligent AI will surpass human intelligence – i.e., become emergent.

Premise 4: This emergent AI will either be benign or not benign.

Premise 5: If benign – we become slaves in a zoo.

Premise 6: If not benign – uh – well, I guess that’s it. End of humans.

Conclusion 2: “If we can’t beat them, we should join them”. So let’s build something that will help us keep up (i.e., Neuralink).

Let’s unpack these premises and conclusions to see if they stand up to fact and logic.

Premise 1: “We are very dumb”

This is hard to refute. Humans make a lot of mistakes. Humans have done horrible things to themselves, the environment, other animal species and, soon enough, space.

However – the underpinning of this premise is interesting. Calling a person, or an entire species, dumb implies there is a single kind of intelligence: that there is “dumb” and “not dumb”, and that it is, at best, a spectrum. When Elon says “we are dumb” he is assuming a spectrum that everyone agrees on. IQ aside, there is basically dumber, dumb and smart.

Elon’s View of Intelligence

But is this true? Is there really one thing that we call intelligence and it can somehow be measured? Is it IQ? If not what?

In 1983 the American developmental psychologist Howard Gardner of Harvard described nine types of intelligence that humans exhibit. I won’t go into any great detail here, but the nine types are: linguistic, logical-mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, naturalist and existential.


As a bottom-up observer of humans, Gardner found that there were people who were (for instance) really good at math/logic but very poor at the intrapersonal. Each type of “intelligence” was independent and separately measurable. More importantly, this suggests that the traditional notion of intelligence, based on I.Q. testing, is far too limited.

So would we expect AI to be smarter than humans at all of these types of intelligence? Doubtful. It seems reasonable to assume that AI can get better at some of them. But all of them? And is this really an exhaustive list? I personally think that nine types is underselling humans. At the very least, however, we can agree that the broad assumption Elon seems to be making – that there is only one type of intelligence – is flawed. Thus premise 1 seems to be in question. If you want to learn more about Dr. Gardner’s theory, I suggest you go here.
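To make the point concrete, here is a minimal sketch that treats intelligence as a vector of independent scores, in the spirit of Gardner’s model. The profiles, dimensions and numbers are all invented for illustration:

```python
# Treat "intelligence" as a vector of independent scores, one per
# Gardner-style type, rather than a single number.
# Profiles and numbers are invented for illustration.

def dominates(a, b):
    """True if profile `a` scores at least as high as `b` in every
    dimension and strictly higher in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# Dimensions: (logical-mathematical, linguistic, interpersonal)
profile_a = (95, 60, 40)   # strong at math/logic, weak interpersonally
profile_b = (70, 65, 90)   # strong interpersonally

# Neither profile dominates the other, so there is no single
# "smarter" ordering between them.
print(dominates(profile_a, profile_b))  # False
print(dominates(profile_b, profile_a))  # False
```

The point of the sketch: once intelligence has more than one independent dimension, “smarter than” is only a partial order, and a blanket “we are very dumb” stops being a well-defined claim.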

Premise 2: “We can definitely make things smarter than ourselves”

Humans don’t fly – but we can invent airplanes. Humans can only run so fast and so far – so we invent cars, trains and buses. Humans can only swim so far – so we invent boats and submarines. By analogy – humans can only be so smart – so we can invent something smarter than us.

But is this true? The claim depends heavily on what you mean by the word “smart”. We can certainly make computers that calculate faster than humans. We can make AI that can translate between multiple languages.

But can we make computers that know what they are calculating? Or that understand the languages that they are translating? What do we mean by “know” and “understand”?

This is precisely the issue addressed by Shalom Lappin, Professor of Computational Linguistics at the University of Gothenburg.

“The revolution in artificial intelligence that has arrived with the deep learning technology is still in its infancy, and even though it’s developing fast, I don’t think the idea of malicious super-agents is a real prospect that we have to worry about in the near future.”

-Shalom Lappin, Professor of Computational Linguistics, University of Gothenburg

Computers only reason about the task for which they are trained. They successfully find patterns and associations in millions of data points, and thus become better than humans at playing chess, translating and writing texts and, perhaps one day, driving a car. But as soon as we leave a specific, defined area and jump to another, they are lost. General intelligence eludes them. And always will. More from Dr. Lappin:

“My personal feeling is that we will never quite get to the point where machines have something that resembles general reasoning power. But I could well be wrong”

-Shalom Lappin.

It’s interesting to note his hesitation in making the blanket claim – and that hedge is exactly what the Elon apologist would seize on. That being said, his main point is that there is no evidence that man can make general AI. There is ample evidence that man can make a computer do specific tasks better than humans – although even there, human translation is still much better than AI-based translation, which tends to get only about 80% of the text right. In a similar vein, Richard Johansson, Associate Professor at the division of Data Science at Chalmers, notes:

“The computers’ weakness is that they don’t really understand anything,”

-Richard Johansson, Associate Professor at the division of Data Science at Chalmers.

Will AI improve? Of course. Will AI suddenly grow awareness? There is NO evidence of this.

Thus Elon’s first conclusion – “We can make a super intelligent AI” – seems, at best, in question. He has underestimated what intelligence itself is (premise 1) and overestimated what humans can do (premise 2).

Premise 4: This emergent AI will either be benign or not benign.

This seems pretty self-evident, huh? The AI that (at this point, questionably) becomes aware would either be hostile to its creators or not – correct? It will either care for us like a loving mother, or destroy us like the Terminator. We will have 2 choices in the former case and no choices in the latter.

But is this true? Aren’t there possible future worlds where an emergent AI cares for humans in many ways but also destroys many things precious to them? For instance, maybe a benign AI decides that all the militaries in the world need to be ‘disabled’ for the long-term health of the human race, so it sets off nuclear blasts at every military base worldwide at the same time. This of course results in significant loss of life and a horrible situation for those living in that world. But it’s certainly not simply AI-mother vs. Terminator. It’s more a stupid decision by a computer that has no perspective. And all we need is one example of an AI that is neither benign nor not-benign to invalidate this premise.
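The dichotomy can be made concrete with a toy classification. A minimal sketch – the scenarios and their effects are invented purely to illustrate the logical point, not as predictions of any kind:

```python
# Toy classification of hypothetical AI-outcome scenarios.
# Invented purely to illustrate the logical point.

def classify(helps_humans, destroys_things_humans_value):
    if helps_humans and not destroys_things_humans_value:
        return "benign"
    if not helps_humans and destroys_things_humans_value:
        return "not benign"
    return "mixed"  # the case Premise 4's either/or leaves out

print(classify(True, False))   # benign      (the loving AI-mother)
print(classify(False, True))   # not benign  (the Terminator)
print(classify(True, True))    # mixed       (e.g. 'disabling' every
                               # military with nuclear blasts)
```

One “mixed” case is all it takes: the premise that an emergent AI must be either benign or not benign is a false dichotomy.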

The other problem is with the word benign itself. Let’s examine it. There are two meanings to the word:

(Dictionary definitions of benign: 1. of a gentle or kindly disposition; 2. (medical) not harmful.)

Note that definition 1 is all about intention. Humans and animals have the capacity for intention. They can be kindly or not. Agreeable or not. Accommodating, gracious or considerate. But computers can’t really be any of those.

Definition 2 is the medical definition of benign. It is all about outcomes. A disease can be benign or not – it just means it’s not going to kill you. A computer can be benign in this sense as well. (Ever had a computer lock you out of your email after three tries at remembering your password? Not benign in that case.)

Because human languages have evolved to be flexible enough to communicate both very specific information and vast philosophical theories, they are hopelessly broken as a way to be anything other than superficially precise. If you don’t believe me, go read here and here.

When people talk about a computer program being benign, what they are REALLY saying is that it has good outcomes – the medical notion. People who say that computers can be “kind” are saying something that has no basis in the current world – and, by many people’s estimation, in any future world. It may be that what we mean by “kindly” is just something that organic, carbon-based life can do.

Premises 5 and 6 and Conclusion 2 – nope.

Premises 5 and 6 are basically sub-elements of Premise 4, and thus we do not need to address them separately.

Thus, as you can see, Conclusion 2 – that if we can’t beat them we should join them – seems a hasty conclusion. Why should we spend money trying to put computers in our heads if there is no existential threat?


It’s clear that this argument – that we need to invest in augmentation to stay ahead of a mythical “emergent” AI – is far-fetched.

So does Elon have any other arguments for his case that we need to protect ourselves against the emerging AI onslaught? Yes – in his talk with Alibaba’s Jack Ma on a stage in Shanghai to debate AI and its implications for humanity, the following exchange took place:

Ma encouraged people to have faith in humanity, our creativity, and the future. “I don’t think artificial intelligence is a threat,” he said, to which Musk replied, “I don’t know, man, that’s like, famous last words.”

So in this case he seems to be arguing a form of Pascal’s Wager. I won’t go into any great detail, because Pascal’s Wager has for the most part been debunked, but suffice it to say that pursuing false premises does not lead to truth – and can in fact lead to great harm. We would prefer to see Mr. Musk pursue goals that are more in line with a humanistic view of the world. Are there places for Neuralink? Of course. People with severe epilepsy and wounded war victims are great examples of those who could benefit greatly from this type of technology. But selling us a vision of AI supercomputers taking over – and that being the driving need for this technology – seems intellectually dishonest.
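Wager-style reasoning has a well-known arithmetic failure mode: imagine a large enough catastrophe and hedging against it “wins” at any probability. A minimal sketch with invented numbers – the point is the structure of the argument, not the values:

```python
# Pascal's-Wager-style expected-cost arithmetic.
# All probabilities and costs are invented for illustration.

def expected_cost_of_not_hedging(p_catastrophe, cost_catastrophe):
    """Expected cost of doing nothing about the imagined threat."""
    return p_catastrophe * cost_catastrophe

cost_of_hedging = 1e5           # flat cost of the hedge (say, implants)
assumed_catastrophe_cost = 1e18 # pick this big enough and...

# ...hedging "wins" no matter how improbable the threat:
for p in (1e-6, 1e-9, 1e-12):
    print(expected_cost_of_not_hedging(p, assumed_catastrophe_cost)
          > cost_of_hedging)    # True every time
```

This is the classic objection: the same arithmetic justifies hedging against ANY imagined threat, however improbable, simply by inflating the assumed stakes.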


The organization behind TrustNoRobot is now a non-profit! Our mission is to promote humans above technology. Our simple belief is that technology should work for humans and better their condition and never have a negative impact on humanity. Please find it in your heart to donate to this important cause.
