I would never seriously propose a law. Only lawyers are allowed to do that, as I understand it. Nor do I believe the idea I am proposing below should become an actual law.
I ask only that you consider a world where such a law could exist, and what its shape would be.
Here’s the little thought experiment. What if we passed a law or set a social norm that said that any website that didn’t validate identity had to have the following disclaimer visible on the front page:
This site does not validate the identity of content creators, nor verify statements as factual.
Any content on this site, or promotion of that content, may not have been generated by humans, and may not be in your best interest to consume.
A simple warning label. Just like cigarettes, alcohol, and rock albums from the '80s that got explicit-lyrics stickers thanks to Tipper Gore.
But why? Why take the trouble to write something that probably won’t change behavior much?
Because bad bots may currently represent over 20% of all traffic on the Internet.
Because we no longer know what news or legitimate opinion is. We have no agreed-upon definition of community-shared facts, and in this unregulated space we are flooded with harmful data masquerading as fact, promoted into our limited attention space by algorithms targeting us with organized disinformation campaigns.
Science can’t put on its peer-reviewed pants before lies travel fifty times through our consciousness.
But we can define what legitimate news and legitimate opinion aren't. Because whether or not something is true, using bots to promote it should carry the same stigma as advertising cigarettes to children.
If all sites had to have the bot disclaimer until they were given accreditation, perhaps granted by a machine learning approach that gives the probability that the content on the site was generated by humans, then we would have a built-in method of detecting dangerous places.
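To make that accreditation rule concrete, here is a minimal sketch in Python. The threshold value and the function name are my own invention for illustration, not part of any real scheme; the probability itself would come from whatever machine learning model the accreditor actually runs.

```python
# The disclaimer text proposed above.
DISCLAIMER = (
    "This site does not validate the identity of content creators, "
    "nor verify statements as factual."
)

# Hypothetical policy parameter: how confident the accreditor's model
# must be that a site's content is human-generated before the warning
# label can be dropped.
ACCREDITATION_THRESHOLD = 0.95


def requires_disclaimer(p_human: float,
                        threshold: float = ACCREDITATION_THRESHOLD) -> bool:
    """A site keeps the warning label until the estimated probability
    that its content is human-generated clears the threshold."""
    return p_human < threshold
```

Under this sketch, a site scored at 0.50 would have to display the disclaimer, while one scored at 0.99 would be accredited and could drop it.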
Because 4chan and Reddit and (insert your most hated mainstream media site here) and Twitter etc. should be allowed to exist, mostly unregulated, but should not be given the same credibility as even a single confirmed human opinion. We need freedom of speech to be reserved for actual humans.
To be clear, I believe sites that allow anonymous posting and commentary should be protected and encouraged. But as part of the greater dialectic, as part of the search toward truth and understanding. Thinking these sites represent raw truth is potentially deadly for the untrained mind.
Ideas we consume may harm us, and the law should require that we be informed of this potential harm. Because if Government can't protect us, it has no legitimacy at all.
Technically speaking, the law could be enforced by browsers: by default a browser would have to show this text unless the site or post provided a valid token. Browsers already show this kind of warning for sites that don't use encryption (HTTPS). We could even build confirmed validation of legitimate human-generated content into simple content management systems like this WordPress blog.
Would trustnorobot.blog need this disclaimer? No, because content creators here are all humans, and I do not artificially inflate or promote my posts.
This is all my opinion, and I am Chas Mastin. And saying that matters. And expressing opinions and sharing news is what we need to protect on the web. But the ML generated and bot upvoted flood of lies has overwhelmed the voices of the real.
Bots also break legitimate advertising, but let’s save that for another post.
The deaths and pain caused by bot-propagated disinformation have only just begun. It is time to use our words, our laws, and machine learning to defend ourselves and fight back. Bots take away your power as a real human to have meaningful interactions on the web. Grab that power back.