ThoughtMail and Meet in the Middle AI


Three research teams recently made progress in turning data from electrodes surgically placed on the brain into computer-generated speech. Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners.

The excerpt above summarizes three separate experiments that show progress in using machine learning to decode thoughts about speech and turn them into understandable sounds. It’s not tech that is going to roll out tomorrow, or even five years from now, but it will certainly arrive within our lifetimes.

Decoding imagined speech will require “a huge jump,” says Gerwin Schalk, a neuroengineer at the National Center for Adaptive Neurotechnologies at the New York State Department of Health in Albany. “It’s really unclear how to do that at all.”

One approach, Herff says, might be to give feedback to the user of the brain-computer interface: If they can hear the computer’s speech interpretation in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.
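That closed loop can be sketched in code. Everything below is an invented illustration, not an actual BCI algorithm: the user's "thought" and the decoder are each reduced to a single number, and both nudge toward the other using the same error signal.

```python
# Toy sketch of the "meet in the middle" idea: a decoder and a user
# both adapt toward each other using shared feedback. All values and
# update rules here are invented for illustration; real neural
# decoding is vastly more complex.

TARGET = 10.0  # the sound the user intends, as a toy scalar

def decode(signal, gain):
    """Toy decoder: a single learned gain applied to the brain signal."""
    return signal * gain

signal, gain = 4.0, 1.0
for step in range(50):
    heard = decode(signal, gain)          # user hears the decoder's output
    error = TARGET - heard                # how far it is from the intent
    gain += 0.005 * error * signal        # decoder trains on the feedback
    signal += 0.005 * error * gain        # user adjusts their "thought" too

print(round(decode(signal, gain), 2))     # ends up close to TARGET
```

Because both sides move, neither has to close the whole gap alone — which is the appeal of the approach Herff describes.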

When you start thinking of the model of you that can be built given enough data points — all that you say, all that you do, all that you touch, etc — you begin to see how the digital you simply cannot be something we allow companies to own.

I am not only talking about the horrors of denying people insurance because of their predicted mortality. Or the fresh hell of advertising created specifically for us as individuals.

The model of you will contain all the contradictions and darkness that being human entails, because the human brain consists of many subsystems, the conscious mind being just the one with the brightest spotlight beam. But if Company X or Government Y owns that true internal state model of you, you are hosed.

The only entity who should be able to query the internal state model of you is you.

Companies and Governments should be allowed to create models of us based on our external actions and words. We couldn’t stop them even if we tried. But the real action of the brain is ineffable. Words are the magic trick that makes the economic world turn, but when I look at an epic ocean sunset I am not thinking about the words “water” or “sunset”.

Our internal states can be described with words, but they are not words themselves.

What really captured my attention, though, is the idea of “meeting in the middle” with an external neural network. I think this accurately describes the only way to build a valid model of self.

And when we have these external models, they will become our spam filters, our ThoughtMail systems, sending and receiving on our behalf.

Because we don’t need more signals, we need fewer. A phone that is smart enough to know I am watching a sunset and to not bother me with an unsolicited thought message from my boss is truly beautiful.
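A filter like that is easy to imagine in miniature. The sketch below is hypothetical — the `Message` type, the contexts, and the rule are all made up — but it captures the idea: the model of you knows which moments are worth protecting, and only lets genuine emergencies through.

```python
from dataclasses import dataclass

# Hypothetical context-aware "ThoughtMail" filter. The sender names,
# contexts, and rules below are invented for illustration.

@dataclass
class Message:
    sender: str
    urgent: bool

def should_interrupt(msg: Message, context: str) -> bool:
    """Suppress non-urgent messages during moments worth protecting."""
    protected = {"sunset", "dinner", "sleep"}
    if context in protected:
        return msg.urgent  # only true emergencies get through
    return True            # otherwise, deliver normally

print(should_interrupt(Message("boss", urgent=False), "sunset"))  # False
print(should_interrupt(Message("boss", urgent=True), "sunset"))   # True
```

The point is that the filter’s value comes from what it withholds, not what it delivers — fewer signals, not more.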

These “meet in the middle” neural networks could become the guardian angels of our Singletons, and not the daemons of the present that generate language on social media to influence our votes and purchases.

But only if we tell technology developers this is what we want, and support companies and individuals driving toward this end.
