Do we like to read what AI writes?
Another in the series of "it's really unlikely this is going to destroy everything"
In yesterday’s newsletter, I ran a little experiment. Instead of writing about one of my own papers myself, I got an LLM (specifically, Claude) to do it. I’m going to guess that most of my regular readers saw through that immediately and didn’t read the post. For me, the lack of the usual reaction was a ‘tell.’
But it is interesting to think about why that is. First of all, the post was written in the style of Malcolm Gladwell and was amusing to me, at least. Second, it was actually a pretty decent summary of the quite technical paper I had written. Indeed, I don’t think I could have improved on it. So, on paper at least (see, you know, this isn’t being written by AI, given the pun quality), this should have been a post that my subscribers would want to read.
This runs counter to a point I have been making in talks about ChatGPT and the like. In those talks, I noted that when ChatGPT came out, many journalists rang me in a panic, worried they were out of a job. Similar concerns have since been raised by musicians, authors, moviemakers, and other artists. But my line was to assure them that the really valuable part of what they did was still theirs. ChatGPT couldn’t replace it.
The argument goes like this. Previously, you used to do a task called “write a letter.” Then ChatGPT came along, and you could get it to write a letter in seconds. Didn’t that mean that no one would employ you to write a letter?
The answer was no. The diagram below explains what really happened. ChatGPT could now do the “writing” part of writing a letter, and do it very quickly and very well. But you still had to tell it what to write AND you had to sign off on the letter. Those were the things that required real skill and judgment; the rest was stuff that just took time. So people were conflating “AI can do the time-consuming stuff that I hate doing, which happens to be what I’m paid to do” with “AI can do my job.”
When you think about it, writing a letter has always involved those three steps (deciding what to say, writing it, and signing off); it is just that you never had to think about them separately. Often, new technology unbundles workflows and then automates some parts. But it is quite rare that it can literally do everything. So everyone can just take a breath.
My little experiment yesterday, however, runs counter to my argument here. I did the bookends of the task: I supplied the prompt and the core idea, and I decided to send the result to my readers. But something was off. I even tried having the post written in the style of Joshua Gans but decided it wasn’t nearly as good as the Malcolm Gladwell version.
Part of this is the “Princess Leia in Rogue One” issue. Knowing it is not the real person triggers something in us that makes us suspicious. This is one of the reasons why some people think that, even when AI takes our jobs, there will always be some jobs only people can do, because other people will only value those things when humans, rather than machines, do them. A kind of artisanal approach. I have to admit, I think that argument is crazy on its face, and I also can’t imagine there are enough of those jobs to run the economy, but that is a whole other matter.
This argument also doesn’t gel with automation in general. I recall a David Letterman show in the early 1990s when Boeing came out with its 777. There’s no video I can show you, but he was commenting on the fact that it was the first aeroplane to be fully designed by computer. I remember him saying something like, “What were they doing before?” and then miming drawing a picture of a plane and saying, “Here you go, make a hundred of these.” Letterman’s point was that, even before the 1990s, the idea of a plane not being designed by computer seemed dangerous. And I am betting we would all be more assured if planes were being assembled fully automatically today, which they aren’t. It’s still artisan work.
My newsletter, however, is not plane manufacturing. Instead, it is communication. But LLMs put communication in a quite funny light. Here is one of my favourite cartoons on this point.
This suggests that for much communication, the form itself is potentially a waste of time, and LLMs now expose that for the farce it is. In this respect, what LLMs are doing is like plane manufacturing. It seems we never wanted people to spend their time crafting letters in the first place.
But not all writing is like that. Some of it is supposed to be read, presumably, like this newsletter. What is going on there?
We already have a clue. You can often be reading a post, see an image, and tell that it is AI-generated. I tend to pay no attention to those images and skip past them. Similarly, when OpenAI launched its movie-making tool and showed some examples, the technology was impressive, but the movies themselves weren’t great, and knowing they were AI-generated made me less willing to pay attention to them beyond the technical achievement.
There is a sense in which “proof of work” is a critical element of writing, one that makes people more likely to read what is written. This is an old notion in economics. When economists studied advertising, they wondered why people would be influenced by ads at all. Then Paul Milgrom and John Roberts suggested that ads were a signal that sellers thought they were going to be able to sell lots of stuff. And the more the ads looked like they cost a ton of money to create, the more strongly that signal would be sent. This is why Super Bowl ads are a big deal: they cost a fortune just to place. And if everyone learns how much an ad cost to make, as with Apple’s classic 1984 ad, that boosts the whole exercise. Say what you want about crypto.com; they certainly believed they were going to do a lot of trading.
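For readers who want the logic spelled out, here is a stylized sketch of that signalling argument. This is my own simplification for illustration, not the Milgrom and Roberts model itself, and the payoff symbols are mine:

```latex
% Stylized separating condition: advertising as a costly signal.
% Assumed notation (mine, for illustration): a high-quality seller
% expects future repeat business worth $R_H$ once buyers try the
% product; a low-quality seller expects only $R_L < R_H$ because
% disappointed buyers won't come back. An ad campaign burns $A$
% up front, visibly.
%
% Advertising is credible when only the high type can afford it:
\[
  R_L \;<\; A \;\le\; R_H
\]
% The high-quality seller advertises (repeat business $R_H$ covers
% the cost $A$); the low-quality seller does not ($A$ exceeds $R_L$).
% So seeing an expensive ad tells buyers the seller genuinely expects
% to sell a lot.
```

The same logic carries over to writing: visibly costly effort is only worth incurring if the author believes the ideas will repay the reader’s attention.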
So when I wrote a post using AI, even though it contained my ideas, was well written, and was signed off by me, the moment people suspected it was AI (a fair guess, given that Malcolm Gladwell was unlikely to be reading my papers and writing about them), they knew I hadn’t done much “work.” In so doing, I was signalling that these ideas were not important enough to read.
Given this, here is the ranking I think most people would have on this stuff:
1. Malcolm Gladwell writing an explainer of Joshua Gans’s “Market Power in Artificial Intelligence”
2. Joshua Gans writing an explainer of Joshua Gans’s “Market Power in Artificial Intelligence”
3. Random other person writing an explainer of Joshua Gans’s “Market Power in Artificial Intelligence”
4. Just reading Joshua Gans’s “Market Power in Artificial Intelligence” in the original
5. AI writing an explainer of Joshua Gans’s “Market Power in Artificial Intelligence” in the style of Malcolm Gladwell
6. AI writing an explainer of Joshua Gans’s “Market Power in Artificial Intelligence”
This isn’t a ranking based on efficiency; if it were, option 5 would top the list now that AI is available. The ranking above is based on “proof of work.” The more work, the stronger the signal that reading is worth your time.
The bottom line is that this is great news for humanity. At least insofar as communication is concerned, we have limited attention. We do not want stuff that is fully automated and produced cheaply; we need someone to actually pay a price to produce it. This will matter for all sorts of artistic endeavours. It is why AI may help create music, but ultimately someone will still need to spend many hours doing work of some kind. It is why AI can generate movie-quality videos, but someone will have to spend time actually crafting them. Now, that price will never give Avatar a plot and message of cultural significance. But it will make us happy to spend three hours watching a carefully crafted visual experience.