Imagine All The (DeepFake) People

Imagine the year is 1989, and I put in front of you a computer with a mouse, keyboard, and wired connection to, if not the Internet, CompuServe or one of the other graphical interfaces to remote machines and information.

Now, if I asked you to project forward twenty years, could you have imagined all of that boiled down to fit in your pocket, yet 100x more powerful, connected wirelessly to billions of remote services? Some people have that kind of imagination for sure, but most people would have struggled to see how it would be possible. Or necessary.

Now I ask you to consider these technologies:

1) Realtime face mapping to simulate another person.

2) Realtime voice cloning that matches your own voice.

3) Enough processing power to enable realtime, near-perfect simulation of a human being.

All three of these things exist today, and we are already seeing the emergence of DeepFakes as a service. Sure, it’s not anywhere near passable as a human yet, but neither was your Apple IIe as a mobile device.

And here is the kicker: it’s not going to take 20 years. It’s going to happen over the next two years. By 2021 you will not be able to trust the authenticity of any given talking head video unless it’s certified in some way.

We don’t need new laws to protect us from this forced Turing test; existing fraud and copyright laws should suffice. We need new technology. And not tech to identify a fake once it’s out in the world, already damaging our identities, but rather tech to certify the authenticity of a video from its inception.

Digital watermarking will be an important component of this new technology, but that’s like saying RAM is an important component of your mobile device. We will need to authenticate our devices and use Distributed Human Identity Tests to validate ourselves.
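To make the certify-from-inception idea concrete, here is a minimal sketch of a capture device signing video frames the moment they are recorded. Everything in it is hypothetical: it uses an HMAC with a per-device secret purely to stay self-contained, where a real system would use an asymmetric key pair in secure hardware so anyone could verify a recording without being able to forge one.

```python
import hashlib
import hmac
import os

# Hypothetical per-device secret. A real system would use an asymmetric
# key pair burned into secure hardware; HMAC stands in here to keep the
# sketch dependency-free.
DEVICE_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes, prev_tag: bytes = b"") -> bytes:
    """Chain each frame's tag to the previous one, so frames cannot be
    reordered, dropped, or replaced after capture."""
    digest = hashlib.sha256(prev_tag + frame_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_stream(frames, tags) -> bool:
    """Recompute the tag chain and compare against the recorded tags."""
    prev = b""
    for frame, tag in zip(frames, tags):
        expected = sign_frame(frame, prev)
        if not hmac.compare_digest(expected, tag):
            return False
        prev = tag
    return True

# Simulate capture: tag each frame as it is recorded.
frames = [b"frame-0", b"frame-1", b"frame-2"]
tags = []
prev = b""
for f in frames:
    prev = sign_frame(f, prev)
    tags.append(prev)

assert verify_stream(frames, tags)  # the untouched stream verifies
tampered = [frames[0], b"deepfaked frame", frames[2]]
assert not verify_stream(tampered, tags)  # any edit breaks the chain
```

Chaining the tags is the important design choice: it binds the whole sequence together, so swapping in even one fabricated frame invalidates everything that follows.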

More on some recent research into AI-based digital watermarking:

It replaces the typical photo development pipeline with a neural network—one form of AI—that introduces carefully crafted artifacts directly into the image at the moment of image acquisition. These artifacts, akin to “digital watermarks,” are extremely sensitive to manipulation.

In tests, the prototype imaging pipeline increased the chances of detecting manipulation of images and video from approximately 45 percent to over 90 percent without sacrificing image quality.
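As a toy illustration of the fragile-watermark idea (not the neural-network pipeline described above; the seed, amplitude, and blur step here are all invented for the sketch), the snippet below embeds a secret pseudorandom pattern at "capture" time and detects it by correlating the image against that pattern. Manipulating the image, simulated here with a simple blur, destroys most of the correlation:

```python
import random

AMPLITUDE = 8  # hypothetical watermark strength
SEED = 1234    # hypothetical per-camera secret

def capture(pixels):
    # Embed a secret +/-AMPLITUDE pseudorandom pattern at acquisition time.
    rng = random.Random(SEED)
    return [p + AMPLITUDE * rng.choice([-1, 1]) for p in pixels]

def watermark_score(pixels):
    # Correlate against the secret pattern: the embedded noise contributes
    # roughly +AMPLITUDE per pixel, while scene content averages out.
    rng = random.Random(SEED)
    return sum(p * rng.choice([-1, 1]) for p in pixels) / len(pixels)

def blur(pixels):
    # A 3-tap average stands in for manipulation or re-rendering.
    out = pixels[:]
    for i in range(1, len(pixels) - 1):
        out[i] = (pixels[i - 1] + pixels[i] + pixels[i + 1]) // 3
    return out

# A zero-centered random "scene" keeps the correlation noise small.
rng = random.Random(99)
scene = [rng.randrange(-128, 128) for _ in range(50_000)]
marked = capture(scene)

assert watermark_score(marked) > 5        # watermark present and intact
assert watermark_score(blur(marked)) < 5  # manipulation weakens the mark
```

The point of the exercise is the fragility: a robust watermark tries to survive editing, while a watermark used for authentication is deliberately brittle, so that any manipulation after capture announces itself.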

It’s a start, but it will need to be built into consumer tech rapidly to afford us any protection from the chaos that is about to occur.

DeepFakes are going to be a lethal new weapon dropped into the ongoing ideological, class, and epistemological battles being waged in our society. The level of disruption they will cause is hard to imagine, but start by picturing a perfect video of you, posted to social networks, saying the opposite of your actual political beliefs.

Identifying bots is almost impossible now. When they begin to hijack real identities, or fabricate them out of whole cloth, the game is over for a trusting society.

Astounding Science Fiction, October 1953, cover art
