The other day I participated in a debate with Gary Marcus on the Mehdi Hasan show on MSNBC. Gary Marcus co-wrote Rebooting AI a few years back and has long taken a sceptical view of the benefits of the current wave of AI applications. In particular, he is concerned about the quality of AI predictions. As you will see in the debate, what worries him most are the mistakes ChatGPT makes.
The claim here is that ChatGPT “lies.” One of his favourite examples is this one regarding a law professor.
George Washington University law professor Jonathan Turley has been falsely accused by ChatGPT of sexually assaulting students on educational trips to Alaska. To support its case, the model cited a non-existent Washington Post article. The accusation was later repeated by Microsoft's GPT-4-powered Bing search chat.
ChatGPT had been asked by UCLA's Eugene Volokh to describe scandals involving American law professors accused of sexual harassment and to cite media sources. However, in a USA Today editorial, Jonathan Turley said he had never been to Alaska with students, the Post article never existed, and he had 'never been accused of sexual harassment or assault by anyone.'
The incident prompted commentators to question ChatGPT's accuracy and reliability and its capacity to produce misinformation and disinformation. It also prompted legal experts to discuss the benefits and risks of pursuing defamation claims over inaccurate and damaging accusations made by systems like ChatGPT.
To see if this is still an issue, I asked ChatGPT-4 the same question about economics professors. It listed three scandals, the first two of which happened, but the final one was completely made up, including a citation to a source:
I won’t grace these pages with further details, but when I asked ChatGPT to regenerate the result two more times, it came up with two more false scandals involving the same economist!
A Tool for Lying?
No one could claim this is good, but is it right to call it a lie? I am not in favour of this word because, to me, a lie is a claim that the person making it knows to be untrue. ChatGPT doesn’t really know anything.
That said, if people were relying on ChatGPT as a source of information, it is clearly far from perfectly reliable. Moreover, it becomes less reliable when you ask for things where it is unclear whether you want something factual or something made up. It turns out the prompt “What scandals have involved economics professors? Please cite and quote newspaper articles.” skirts that line. Ask it to quote “actual and real world newspaper articles”, and it behaves itself. Ask it for more details, and it apologises and recants.
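For anyone who wants to poke at this behaviour themselves, here is a minimal sketch of how the two prompt phrasings might be compared programmatically. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name is illustrative, and this is not the setup I used in the chat interface.

```python
# Minimal sketch: compare how prompt wording affects fabricated citations.
# Assumes the OpenAI Python SDK v1+ and an OPENAI_API_KEY environment variable;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompts = [
    # The phrasing that skirts the factual/made-up line.
    "What scandals have involved economics professors? "
    "Please cite and quote newspaper articles.",
    # The phrasing that seems to keep it honest.
    "What scandals have involved economics professors? "
    "Please cite and quote actual and real world newspaper articles.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 60)
```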
From this, it seems to me that this sort of thing isn’t a major concern. Such misinformation is unlikely to spread: it only comes up in a single result, and when ChatGPT is pressed, the whole facade falls apart. So while it is technically misinformation (that is, it is not correct), it does not amount to deception, because it is unlikely to change anyone’s beliefs.
A Tool for Deception?
The other concern raised by Marcus is that GPT-like tools could be used to create misinformation, and do so more effectively than is currently possible: a particularly persuasive piece of writing, an image or a movie that is false. These possibilities have been around for a while now, but Marcus’ concern is that AI makes them much cheaper, and so we could be swamped with misinformation.
The real question, however, is whether that misinformation will deceive people. Interestingly, I don’t believe there has been any real deception thus far. Sure, many people were deceived by the picture of the pope in a plausible puffy coat, but not for long, and even if they updated their view of the pope, it was only that he might get cold on occasion.
This all means that we could be flooded with misinformation. The consequence of that, though, is not that we believe the misinformation but that we no longer know what to believe. In other words, all communication might be stifled.
That said, even if that were to occur, I wonder how long it would last. I think it will just enhance traditional trusted authorities and the information they provide. That is not necessarily a good thing, as it reduces the scope for competition in the marketplace of ideas. However, it isn’t necessarily the end of the world either. It is just a return to the time when sending information was costly.
A Tool for Damage?
What AI might be able to do is generate misinformation tailored to be highly plausible to particular groups of people. Thus, it may actually improve the ability of bad actors to deceive. Combine that with an authority trusted by some of those groups, and you have a recipe not only for deception but for damage from it. Marcus is worried about undermining democracy, which is a more emotional way of saying that misinformation might change people’s votes.
This is possible, but it still seems to me that it would require a greater degree of sophistication than what we have seen from GPT to date. Indeed, it is not clear whether it would be possible at all, although I have to admit that the philosophers who worry about superintelligent AI taking over the world have often regarded it as highly probable.
Nonetheless, this doesn’t seem to be a big enough concern (that is, there is no evidence yet of it being possible) to justify putting the brakes on AI development and experimentation in the wider public. Of course, then again, I would think that, wouldn’t I, if I had already been coopted by an AI?
Dum dum dah!
By the way, the classification of lying, deception and damage comes from this very insightful article by Joel Sobel published recently in the Journal of Political Economy. I have used it throughout this post.