In my previous post, I argued that in communication, to know whether something is worth your attention, you often need to know that the creator of some content actually did some real work. “Proof of work” created a signal that they thought this would interest you, from which you might infer that it is worth your attention. If someone just puts out content cheaply using AI, is it really worth your time?
What’s going on here is that “reading” or consuming content is not automatic. Because we have limited attention, we make a pre-decision about whether we are going to allocate that attention (that is, incur an opportunity cost) and delve into some content. The youngins even do this on TikTok, which is how TikTok works out what grabs attention. It is a critical part of the process.
However, I did receive some pushback from readers on my post. They said that they liked having an AI take on things even if it took attention. Part of this was simply the time saved reading the longer piece it was summarising (as was the intent of my first post in this series). However, the other part was that the AI summary was good, and the fact that I chose to share it was useful. In other words, the proof of work wasn’t necessary.
I was thinking about this when the Internet pointed me to these interesting AI experiments that reimagine famous films in an older style. People used to create these without AI, but AI now makes them far easier to produce. And even though the result is noticeably AI-generated, as I watched, I found that it was worth my attention even if the craft wasn’t as good as it had been previously.
There is actually a ton of work in this. But it is all in the prompt and the signing off. It is not in the work of creating the images or video. That is all AI, as is the script itself.
All this is to say that these things are subtle. For the moment, however, imagine that the work involved is very, very cheap, and that even the prompt and signing off can take mere seconds, as was the case in my first post.
One imagined equilibrium is that we see a long piece of content and no longer believe any work went into it. That means that the signal value of “quantity” for “quality” is gone. If there were a mechanism to show that the work was your own and not AI-generated, then the signal could be restored.
But there is a sense in which that future heads in the wrong direction. We actually want people to use AI tools as part of the process of producing content. The above video is an example of that. But it is also an example where it is possible to see that there was work involved.
What about my post from the other day, which did not involve work? As I noted, some people would find that useful, especially if it was more my voice than Malcolm Gladwell’s. Or maybe not. Maybe they prefer Gladwell? There is some value in summarising formal pieces or in taking a very simple concept and making it more developed and accessible. The question is: why does that feel like a bad outcome?
The answer, I believe, is this: the whole point of generative AI is to easily generate things that are more salient to people personally. The current model of producing content is for the ‘author’ to create the content and publish it to the world, who can choose to consume it or not. The author’s offer is ‘take it or leave it.’ And many creative people are very protective of that single-offer way of doing things. But what you got the other day as a newsletter was ‘Joshua’s Version.’
What AI does is allow you to flip things around somewhat. Here’s what I should have done with my technical “Market Power in Artificial Intelligence” paper …
Is market power in AI inevitable? (Your Version)
I have been a bit busy of late so I didn’t have the opportunity to write about a new paper of mine, “Market Power in Artificial Intelligence.” It’s a technical survey paper so it is a bit challenging to write about.
So here is a GPT that allows you to summarise the paper and ask questions about it.
That’s it. I have told you about the paper and then created a means for you to learn more based on your own interests and preferences. I have neither done any work nor pretended to do so. Instead, I have empowered you, the reader, to do that and lowered the costs of doing that by creating a GPT.
The point here is that AI should challenge us to think differently about authorship and publishing and what we want people to do. Moreover, it can do more than just summarise the paper. You can ask it specific questions and even ask it to identify the paper’s flaws. You can also ask it to create teaching slides in case you want to talk about the paper in class. You can even ask it how the insights might apply to Google or anyone else.
In other words, the problem with my previous post is that it was ‘single offer.’ What I needed to do was provide the means for the recipient to choose the form of the message, because it is now very cheap for them to do that.