It has been a little while since my last newsletter. I have been finishing up a research book, The Microeconomics of Artificial Intelligence, which will be published by MIT Press next year. Related to that, I have been releasing some of that work as new papers. Here is a summary of each:
“Will User-Contributed Training Data for Generative AI Eat its own Tail?”: This paper explores whether AI systems trained on user-contributed data might eventually reduce the supply of such data, potentially hindering AI development. Contrary to concerns, the research finds that AI encourages users to focus on more complex and valuable contributions, expanding the overall range and quality of available training data. While the AI handles simpler queries, users shift towards providing information on more advanced topics, resulting in a more comprehensive knowledge base. The study also examines the potential of paying for contributions but suggests that such incentives may have limited effectiveness in this context.
“How will Generative AI impact communication?”: This paper examines how generative AI (GAI) might impact communication between senders and receivers of messages. It explores two scenarios: one where GAI helps make messages more salient (attention-grabbing) and another where GAI affects the ability to signal message quality through effort. In the salience model, GAI generally improves communication by making it easier to create attention-grabbing messages, although in some cases, it may reduce receiver welfare. However, in the signalling model, GAI can actually hinder effective communication by making it harder to distinguish high-quality messages, potentially leading to a decrease in meaningful information exchange. (This paper formalises an argument made in a previous newsletter.)
“Demand for Artificial Intelligence in Settlement Negotiations”: This paper examines how AI might impact settlement negotiations in legal disputes. It finds that while AI can potentially resolve uncertainties about trial outcomes and facilitate settlements, parties may choose to settle before purchasing AI predictions in order to avoid the cost. However, when parties have different expectations about trial outcomes, AI predictions become valuable because they can resolve those differences and lead to settlements. The paper illustrates this with an example in which a plaintiff and defendant disagree about the likelihood of the plaintiff winning at trial; in that case, one party may purchase an AI prediction to resolve the uncertainty, leading to a settlement and avoiding a costly trial.
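To make the logic of that last example concrete, here is a minimal worked illustration in the spirit of a standard divergent-expectations settlement story; the numbers, notation and settlement condition are my own and are not taken from the paper.

```latex
% Purely illustrative: not the paper's model or parameters.
% D = damages, c_P and c_D = trial costs, p_P and p_D = each side's
% belief that the plaintiff wins. Settlement requires p_P D - c_P <= p_D D + c_D.
\[
D = 100, \qquad c_P = c_D = 10
\]
\[
p_P = 0.8 \;\Rightarrow\; \text{plaintiff's minimum acceptable settlement} = 0.8(100) - 10 = 70
\]
\[
p_D = 0.5 \;\Rightarrow\; \text{defendant's maximum acceptable payment} = 0.5(100) + 10 = 60
\]
\[
70 > 60 \;\Rightarrow\; \text{no settlement range, so the dispute heads to trial.}
\]
\[
\text{If an AI prediction both sides accept gives } \hat{p} = 0.6:\quad
0.6(100) - 10 = 50 \;\le\; 0.6(100) + 10 = 70
\]
```

With beliefs aligned, a settlement range of [50, 70] opens up, which is why a prediction can be worth purchasing when its price is small relative to the trial costs it helps avoid.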
With that update, on to the main point of the day. A few days ago, a previously unseen talk by Steve Jobs from 1983 became available. It was called “The Objects of our Life”; you can view it here (just scroll down to the bottom for the full-hour talk).
What I want to do is highlight a few sections. Jobs is trying to explain computers to a crowd that is not very familiar with them. It resonated because we have been faced with that same challenge with respect to AI over the last 6-7 years. Here is part of Jobs's explanation of computers to the crowd.
Third thing about computers, they’re really dumb. They’re exceptionally simple, but they’re really fast. The raw instructions that we have to feed these little microprocessors, even the raw instructions that we have to feed these giant Cray-1 supercomputers, are the most trivial of instructions. They’re: Get some data from here, get a number from here, fetch a number, add two numbers together, test to see if it’s bigger than zero. Go put it over there. It’s the most mundane thing you could ever imagine.
But a key thing about it is that, let’s say I could move 100 times faster than anyone in here. In the blink of your eye, I could run out there and I could grab a bouquet of fresh spring flowers or something. And I could run back in here and I could snap my fingers, and you would all think I was a magician or something. And yet I was basically doing a series of really simple instructions: moving, running out there, grabbing some flowers, running back, snapping my fingers. But I could just do them so fast that you would think that there was something magical going on.
It’s the exact same way with the computer. It can go grab these numbers and add them together and throw them over here at the rate of about a million instructions per second. And so we tend to think there’s something magical going on when in reality there’s just a series of these simple instructions. Now, what we do is we take these very, very simple instructions, and we—by building a collection of these things—build a higher-level instruction. So instead of saying, “Turn right, left foot, right foot, left foot, right foot, extend hand, grab flowers, run back,” I can say, “Could you go get some flowers? Could you pour a cup of coffee?” And we have started in the last 20 years to deal with computers in higher and higher levels of abstraction. But ultimately, these levels of abstraction get translated down into these stupid instructions that run really fast.
You will notice that he distils computers down to their essence, basically arithmetic, to explain what they are doing. This is similar to how we can view AI as Prediction Machines. It is a description that is at once more accurate and more accessible.
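As an aside of my own (not from the talk), the abstraction point is easy to see in modern code: a single high-level instruction is ultimately carried out as a long run of the simple, fast steps Jobs describes. A minimal sketch in Python:

```python
# Illustrative sketch of Jobs's abstraction point, not from the talk:
# one high-level instruction versus the simple, repetitive steps
# it ultimately gets translated into.

numbers = list(range(1_000_000))

# High level: one readable instruction ("could you go get some flowers?").
total_high_level = sum(numbers)

# Low level: the same work as a run of trivial steps
# ("fetch a number, add two numbers together"), closer to what the machine executes.
total_low_level = 0
for n in numbers:            # fetch a number
    total_low_level += n     # add two numbers together

assert total_high_level == total_low_level
```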
Jobs then goes on to calibrate expectations about how transformation will occur.
Let’s look at the brief history of computers. Best way to understand it’s probably an analogy. Take the electric motor. The electric motor was first invented in the late 1800s. And when it was first invented, it was only possible to build a very, very large one, which meant that it could only be cost-justified for very large applications. And therefore electric motors did not proliferate very fast at all.
But the next breakthrough was when somebody took one of these large electric motors and they ran a shaft through the middle of a factory and, through a series of belts and pulleys, brought…shared the horsepower of this one large electric motor on 15 or 20 medium-size workstations, thereby allowing one electric motor to be cost-justified on some medium-scale tasks. And electric motors proliferated even further then.
But the real breakthrough was the invention of the fractional-horsepower electric motor. We could then bring the horsepower directly to where it was needed and cost-justified it on a totally individual application. And I think there’s about 55 or so fractional-horsepower motors now in every household.
If we look at the development of computers, we see a real parallel. The first computer was called the ENIAC in 1947. It was developed particularly for ballistic military calculations. It was giant. Hardly anyone got a chance to use it.
The real breakthrough…the next real breakthrough was in the sixties with the invention of what was called time sharing. And what we did was we took one of these very large computers and we shared it. Since it could execute so many instructions so quickly, we’d run some on Fred’s job over here, and then we’d run some on Sally’s job, and we’d run some on Don’s job, and we’d run some on Susie’s job. And we’d share this thing. And it was so fast that everyone would think they had the whole computer to themselves.
Time sharing was what really started to proliferate computers in the sixties. And most of you, if you’ve used computer terminals connected with some umbilical cord to some large computer somewhere else, that’s time sharing. That’s what got computers on college campuses in large numbers.
The reason Apple exists is because we stumbled onto fractional-horsepower computing five years before anybody else. That’s the reason we exist. We took these microprocessor chips, which is sort of a computer on a chip, and we surrounded it with all the other stuff you need to interact with a computer. And we made a computer that was about 13 pounds. And people would look at it, and they’d say, “Well, where’s the computer? This is just a terminal.” We’d say, “No, that is the computer.”
And after about five minutes of repeating this, they…finally a light bulb would go on in their minds. And they’d decide if they didn’t like it, they could throw it out the window or run over it with their car. But that this was the entire computer. That’s why we exist, fractional-horsepower computing.
Notice that he uses a historical example, electricity, to explain how real transformation occurred only once people came to understand its true value. Clearly, we agree with this choice: electricity is also our primary example for explaining this with respect to AI in Power and Prediction. Jobs's point was that the transformational power of computers would not be realised immediately and that there was important work still to be done.
The whole talk is worthy of your attention. In surfacing it here, I’m not trying to suggest that we conceptualised AI just as Jobs would have done [Narrator: He was], nor that our insights with respect to AI should be considered Jobsian [Narrator: He is], but rather that there is a strong sense in which the principles of technological transformation repeat themselves, and that this should be used to calibrate our expectations.