Perhaps the biggest immediate concern about the new generative AI models and, in particular, their widespread availability is that they will create large opportunities for bad actors to engage in misinformation. While misinformation is not new, AI lets people deepen its evidentiary trail. Rather than merely claiming that Joe Biden said something bad, a bad actor can now fabricate video or audio of him saying it. If you didn’t know AI could do that, such evidence might convince you of something, whereas previously you might have been sceptical of mere “hearsay.”
As regular readers know, I am on the optimistic side regarding what AI is going to do for us. However, I think it is pretty much inevitable that AI-generated misinformation will flood the next election cycle. The big question, however, is how much it will matter. While a ton of psychology goes into what is believed and what is not, I will speculate on the outcomes here using my economist’s lens. It’s all that I know.
Let’s start with some presumptions:
There will be AI-generated “deep fakes” of election candidates that ordinary people will be unable to distinguish from truthful content, and there will likely be a lot of them.
Some organisations will emerge that have the technical capability to identify deep fakes and, if they choose, to promote only truthful information.
The first presumption does not imply that people will just end up believing anything. More likely, they will end up believing nothing. But it is more nuanced than that. Scepticism is a function of surprise. Therefore, the further misinformation lies from our priors about what to expect, the more likely we are to dismiss it as fake. Taken to its logical conclusion, this means we won’t actually end up being surprised, because if there were the potential for surprise, we would doubt that the evidence is real.
Note that this does not mean that people won’t believe or push fake content. I would be surprised if Joe Biden came out and said he had decided to attack Mexico over drugs, but I wouldn’t be surprised if Donald Trump said that. So if the Trump thing is fake, I might still believe it. The good news is that such stuff is unlikely to be consequential. For people who have “already made up their minds,” fake surprises won’t change their minds one way or the other. More critically, this means there will be fewer opportunities for surprises to change people’s minds at all: wherever there is scope for that, the prevalence of AI-generated misinformation will nullify it.
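Through an economist’s lens, the surprise-scepticism logic above is just Bayesian updating. The sketch below is a toy illustration (all numbers are illustrative assumptions, not estimates): a surprising claim is unlikely to appear if genuine but likely to be produced by a fabricator, so once fakes are prevalent, the posterior credibility of surprising content collapses.

```python
def credibility(prior_real, p_if_real, p_if_fake):
    """Posterior probability that an observed clip is genuine (Bayes' rule).

    prior_real -- share of circulating content that is genuine
    p_if_real  -- chance genuine content would show this claim (low if surprising)
    p_if_fake  -- chance a fabricator would produce this claim (high if
                  surprising, since surprises are what misinformation aims for)
    """
    num = p_if_real * prior_real
    den = num + p_if_fake * (1 - prior_real)
    return num / den

# A surprising claim: unlikely if real, likely if fabricated (assumed values).
surprising = dict(p_if_real=0.05, p_if_fake=0.60)

# World where most content is reliable: the clip retains some credibility.
print(credibility(prior_real=0.95, **surprising))  # ≈ 0.61

# World flooded with fakes: the same clip is dismissed out of hand.
print(credibility(prior_real=0.50, **surprising))  # ≈ 0.08
```

The same arithmetic underpins the “natural immunity” point later in the piece: misinformation moves beliefs most when the base rate of fakes is low, which is precisely not the world a flood of AI-generated content produces.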
This leads to a secondary and, in my opinion, more insidious effect of AI. If I were a purveyor of misinformation trying to influence election outcomes, I would not try to create surprising content but would instead make more subtle changes. For instance, if the perception of Joe Biden is that he is old, I would create content that makes him look a little less old. I wouldn’t push it too far, just enough to cause people to downrate that dimension as a criterion for voting. For Donald Trump, I am not sure what that subtle content would look like (he is not the subtle type), but it might be changes in his emotion and expression when interacting with ordinary people, or perhaps something that makes him seem like he has a sense of humour. My prediction is that we will see this sort of stuff or, more to the point, that we won’t “see” or notice it.
What of the second presumption? This is where I have some hope, although not much. If you are an organisation that can credibly identify fake content and also certify truthful content, you have a chance to build a reputation for trustworthiness in an environment where no one knows what to believe. Every other time we have had technological disruptions to news and information, trusted organisations have emerged. This was documented by Jill Lepore in her fantastic and underrated book, These Truths. It will happen again.
The problem, of course, is that we already have polarisation in news sources, so it is likely that my trusted news organisation will not be the trusted organisation of those whose beliefs about the candidates differ from mine. That means it is not clear that the trusted organisations will be trusted by those who most need them: people whose voting preferences are capable of being swayed.
In summary, will 2024 be chaotic? Yes. Will it look like elections can be influenced? Superficially. However, in reality, the ability of bad actors to control beliefs, even amid a mess of misinformation, is limited precisely because that mess makes people more sceptical. It is like a natural immunity response to this stuff. We have to worry more about influence through misinformation in a world where most information is reliable. That just happens not to be our current world.