There’s a new term going around: “AI slop.” AI slop is content created by AI with minimal human effort. It was the topic of a recent Last Week Tonight with John Oliver.
Somewhat uncharacteristically, Oliver offered no solutions to the problems arising from AI slop but simply lamented its continuing presence.
And there is a lot to lament. While many consider the most serious issue to be disinformation (fooling some people), the critical issue is that disinformation drives out information: people stop trusting anything to be informative at all. That is, you can’t inform any of the people any of the time.
This is not a new problem. Spam emails created the same spiral that spam phone calls create today. When a new technology lowers the cost of communication (and AI does that), it lowers the cost of both information and disinformation equally, but the impacts are asymmetric. This is because costly communication tends to select for information over disinformation by making widespread disinformation too expensive to be worth its return. Therefore, when costs fall, more potential is unlocked for disinformation than for information.
Herein lies the tension. We actually like reduced costs. In this case, being able to more cheaply create certain types of content — images, movies, music — opens up the creative process to more people. It is entry-promoting, and I have little sympathy for current creators who lament that competition.
But, at the same time, attention channels are limited. So, absent some sort of filtering mechanism, those channels get flooded with both good and bad content. And because bad content benefits disproportionately from the reduced entry barriers (the previous barriers were disproportionately keeping it out), there is nothing to stop attention being filled up more or less randomly. When it comes to information, filtering is hard precisely because it is information: you don’t know whether you want it until after you have it, and sometimes not even then.
So, how do you resolve the tension between promoting entry and avoiding adverse selection? The answer is that someone (or something) needs to stand between the content generator and the user and sort the wheat from the chaff. With email, users deployed technology (provided by email providers) to filter messages. These filters aren’t perfect, but they are now good enough that email has been cleaned up. The same thing is being tried with phone calls, with less success thus far. This is a process whereby content is moderated.
The other way is for the user to more carefully select the channels by which they receive information. This is how people dealt with the centuries-old problem of fake news, as Jill Lepore covered in her monumental work, These Truths: A History of the United States. Users identified outlets they could trust and paid attention only to what came from those outlets. Thus, they outsourced the filtering job, not to technology, but to other people who built a reputation and a business on that basis. This does not exist just for the news. Such institutions and businesses are found wherever consumers need reliable information on product quality before making a purchase.
With AI slop, the challenge is that technological filtering, parsing all content before it reaches users, is likely to be subverted by the quality of AI itself. This race has happened before, but in this case, it seems particularly perilous.
That leaves trusted institutions. However, even here, AI presents a challenge because one of the things it could easily do is mimic a trusted institution or source. That means we now face a real challenge: identifying, in a reliable and robust way, the provenance of any information we receive; that is, knowing that it originated with the trusted agent we are relying upon and not with something else.
Consider images. Is the image you are looking at coming from the source you believe it to be? What you would need is a quick way to check the provenance of an image against some database that you knew to be from the source. That is actually not that hard to do using what we now loosely term ‘blockchain technology.’ (Yes, there, I said it, but it could have an actual use, and this is it, so keep reading!) The original content provider would take a hash of any image they distributed. A hash can, for all practical purposes, uniquely determine whether an image you have received has been altered, as an altered image would have a different hash. Then you could compare hashes. Here is a simple website that allows this functionality for any digital file. You would, of course, need a way of checking the hashes of anything distributed by the original provider, but that could easily be achieved by having the provider enter the hash on an immutable database (aka a blockchain) that they have signed.
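As a rough illustration of the hashing step (not the blockchain part), here is a minimal Python sketch using only the standard library. The file name and the published hash value are hypothetical; in practice, the published hash would come from the provider’s signed, immutable record.

```python
import hashlib


def file_hash(path: str) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_provenance(image_path: str, published_hash: str) -> bool:
    """Return True if the received image's hash matches the hash the
    trusted provider published (e.g. on a signed, immutable ledger)."""
    return file_hash(image_path) == published_hash


# Hypothetical usage: compare a received image against the provider's published hash.
received = "received_image.jpg"
published = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
if verify_provenance(received, published):
    print("Hash matches: the image is the one the provider distributed.")
else:
    print("Hash mismatch: the image was altered or came from elsewhere.")
```

Any change to the file, even a single pixel, produces a different hash, which is what makes the comparison useful as a provenance check.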
To me, this seems both inevitable and doable. At the core is identity verification. Note that this doesn’t mean you can’t have AI content. Instead, it means you can find ways to allow people to build reputations for information in a world flooded with disinformation.