Are Acemoglu and Johnson right about AI?
Sending this out again because I confused Acemoglu's co-authors the first time.
Daron Acemoglu and Simon Johnson have a new book, Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity. To say it is a sweeping treatise on the last millennium of technological history and economics would not do justice to it and would put too much pressure on brooms. Much of the book does something rare for economists: it examines power and how it operates and evolves. In particular, they are interested in how power arises from and is determined by persuasion, a topic that has interested me lately as well. It is by far the most interesting and thought-provoking contribution of the book, and there is no way I could do it justice in this forum. They lay out a case that political institutions, rather than pure economic forces, drive the allocation of power and economic value. This is easily the most important set of issues facing the world today, and economists have paid it too little attention. This book will hopefully change that.
It is where the book ends — its views on AI — that I want to talk about here. Moreover, I am going to do something that turns out to be rare for any work by Daron Acemoglu — I’m going to argue that it is wrong. Now such a critique is a tall order for a book that lays out its argument using a thousand years of ebbs and flows of economic and political development as its groundwork.
The Acemoglu-Johnson View on AI
To outline their view on AI, it is important to consider their assessment of the impact of a previous general-purpose technology: electricity. They spend a relatively early chapter (6) on the topic. As we argued in our similarly titled recent book on AI, Power and Prediction, electricity was not simply a new way of generating energy but a technology that required systemic change before it could generate significant productivity increases. That took decades.
Acemoglu and Johnson argue (p.258) that, in the process, electrification created a large number of new tasks for engineers to do as part of their jobs.
Managers collected information, sought to increase efficiency, started improving designs, and continuously readjusted production methods by introducing novel functions and tasks. The combination of engineers’ role in production, information collected by white-collar workers, and electricity was critical in the installation of specialized electrical devices and the new tasks that accompanied them, such as welding, punching, and specialized machine operation.
In this way, the reorganization of manufacturing enabled by white-collar workers created relatively high-paying jobs for blue-collar workers. As the scale of production expanded, there was even more demand for white-collar employees. Another dimension of job growth came from the linkages that new factories created for the retail and wholesale sectors. As they churned out more output and mass production became widespread, new jobs for engineers, managers, sales clerks, and administrators appeared in these sectors as well.
This led to a virtuous cycle that increased the demand for labour across economies, especially for workers with greater skills. Nonetheless, the first four decades of the 20th century were hardly great for workers. It was only after World War II that the fruits of that growth became more widely distributed. To summarise the argument very simply: the service industries required to support large-scale production and deep supply chains were overwhelmingly labour-biased. There was just a ton more stuff for people to do, and people were hired to do it. Add to that a time when labour unions held weight, and you had the ingredients for shared prosperity.
It didn't last. In the 1970s and 1980s, political institutions swung back against labour unions, and the growth in new jobs petered out; that is, rather than creating new tasks at a rate faster than population growth, the economy fell back to roughly that rate.
Herein lies the most important claim of Acemoglu-Johnson (p.304, emphasis mine):
In some popular discussions of rising inequality, globalization is pitted against technology as competing explanations. It is often implied that technology represents the inevitable forces leading to inequality, while there is some degree of choice about how much globalization and import competition from low-wage countries the United States (and other advanced economies) should have allowed.
This is a false dichotomy. Technology does not have a preordained direction, and nothing about it is inevitable. Technology has increased inequality largely because of the choices that companies and other powerful actors have made.
That the direction of technological change is a choice has been the cornerstone of 30 years of Acemoglu's work. It is a choice influenced by things such as relative prices and regulations. But even within those forces, movers and shakers have considerable discretion over the big directions: whether to make technology that saves labour costs or technology that directly enhances the productivity of the workers you have. More critically, they argue, in places where labour had more power, the choices made were different.
This is certainly a controversial argument, although one that Andrew Leigh and I also put forward in our 2019 book, Innovation + Equality. The counter-argument comes from businesses, which often claim they have no choice but to automate in order to remain competitive in their industries. Acemoglu and Johnson point out that across countries whose institutions give labour different degrees of power, the choices made are different and, as near as can be told, those countries still have globally competitive businesses. It is a compelling case against the inevitability of these things.
Which brings us to AI. Acemoglu and Johnson see the applications of AI, like those of many technologies before it, as involving automation, but not inevitably so. And as with the choices made before, if those choices rest with actors other than labour, they will steer the direction of AI application toward automation rather than something else. When it came to earlier applications of software, the nature of computer programs led to a focus on automating tasks that were routine (p.348).
Routine tasks are performed repetitively, embedded in a predictable environment. For example, typing is routine. So are knitting and other simple production tasks that involve a significant amount of repetitive activity. Software has been combined with machinery that interacts with the physical world to automate various routine tasks, exactly as Jacquard intended, and modern numerically controlled equipment, such as printers or computer-assisted lathes, regularly accomplish this. Software is also an integral part of robotics technology used extensively for industrial automation.
They argue that routine tasks were relatively few, and that many of the things people did, and still do, are non-routine and require problem-solving skills: stuff that cannot be easily programmed.
AI, they argue, changes that. It can automate tasks that we consider less routine. It is hard to argue with that. Even prior to ChatGPT, chatbots were created to handle many customer service queries, with only the hard stuff being passed on to human agents. And because AI can be trained on what humans are doing, what counts as the "hard stuff" has been shrinking steadily. It is hard to look at that trend and not think that the large employment numbers in remote customer service are going to begin a reasonably rapid decline. I certainly have no dispute with that forecast; we explicitly made the same forecast in both of our AI books. But I think there is more going on, as I will argue shortly.
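To make that escalation mechanism concrete, here is a minimal sketch of a confidence-based triage loop, assuming a hypothetical chatbot model that reports its own confidence; the model, queries, and threshold below are all invented for illustration rather than drawn from any particular system. The bot answers the queries it is confident about and escalates the rest; as models improve, the escalated share (the "hard stuff") shrinks.

```python
# Hypothetical confidence-based triage: the bot answers queries it is
# confident about and escalates the rest to a human agent.
# The model, queries, and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Prediction:
    answer: str
    confidence: float  # model's own confidence estimate, in [0, 1]

def toy_model(query: str) -> Prediction:
    # Stand-in for a real chatbot model.
    if "refund" in query.lower():
        return Prediction("Refunds are processed within 5 days.", 0.92)
    return Prediction("I'm not sure.", 0.40)

def triage(query: str, threshold: float = 0.8) -> str:
    pred = toy_model(query)
    if pred.confidence >= threshold:
        return f"BOT: {pred.answer}"
    # Below the threshold, the query is part of the 'hard stuff'.
    return "ESCALATED to a human agent"

for q in ["How do I get a refund?", "My parcel arrived damaged"]:
    print(triage(q))
```

Nothing here is specific to customer service; the same triage pattern reappears in the medical example quoted below, where algorithms flag difficult cases for ophthalmologists.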
To summarise their argument: modern AI allows computers to watch people work and then imitate what they are doing. It does this well enough that it can potentially imitate humans even in tasks to which they apply thought, by predicting people's stimulus-responses, including for tasks we would naturally regard as more complex.
AI developments are certainly capable of automating tasks in this way. How far they can go reliably is still an open question, but the last few months have caused most observers to think that they can go further than we previously expected. The question is: should we stop this process? And if so, is there still a way to derive benefits from AI?
Those aren’t quite the questions that Acemoglu and Johnson tackle. Instead, they argue that AI will be applied to automate tasks with only limited productivity benefits.
In the age of AI, there is a fundamental reason for so-so automation. Humans are good at most of what they do, and AI-based automation is not likely to have impressive results when it simply replaces humans in tasks for which we accumulated relevant skills over centuries. So-so automation is what we get, for example, when companies rush to install self-checkout kiosks that do not work well and do not improve service quality for customers. Or when skilled customer-service representatives, IT specialists, or financial advisers are sidelined by AI algorithms, which then perform badly (pp.360-1).
Many of the productive tasks performed by humans are a mixture of routine and more complex activities that involve social communication, problem solving, flexibility, and creativity. In such activities, humans draw on tacit knowledge and expertise. Moreover, much of this expertise is highly context dependent and difficult to transfer to AI algorithms, thus likely to get lost once the relevant tasks are automated.
So here we see their concern. The problem with AI is not that it is super-productive; it is that it is not much more productive than what people do. And that is the case when you take a short-term view of what people do. Once you consider tacit knowledge and experience, taking a longer-term view, you would see that AI adoption is not really a productive substitute for all of that. Basically, AI tempts you to throw out babies with the bathwater, especially if the babies are crying. When powerful people don't really care about babies, they will take the short-term view.
Why I Think Acemoglu and Johnson Are Wrong on AI
There are two basic assumptions that drive the Acemoglu-Johnson view of AI:
That AI, right now, will not afford large increases in productivity.
That those choosing to adopt AI will, more often than not, take a short-term view of worker contributions rather than a long-term one.
Both of these assumptions, I believe, are unfounded and likely to be false.
On the first one, large language models and generative AI have demonstrated that there is potential for massive productivity increases from AI adoption right now. For example, the productivity of computer programmers has been lifted by as much as fifty percent using tools such as GitHub's Copilot. In many medical applications, AI can reduce error rates by significant amounts. Put simply, I am not at all sure the AI adoption currently taking place is "so-so." On the contrary, adoption is occurring only where the gains are significant. That is, in many respects, the central argument of our book, Power and Prediction. Adopting AI to automate or otherwise transform jobs is hard. Thus, you only do it if the AI advance is significant enough to justify the myriad system-level changes needed to deliver productivity benefits.
On the second one, I have not seen evidence that businesses are adopting AI based on a short-term view. Indeed, Acemoglu and Johnson themselves provide evidence against it. As they note, AI experts thought we should stop training radiologists five years ago (pp.364-5).
Nothing of the sort has yet happened, and demand for radiologists has increased since 2016, for a very simple reason. Full radiological diagnosis requires even more situational and social intelligence than, for example, customer service, and it is currently beyond the capabilities of machines. In fact, recent research shows that combining human expertise with new technology tends to be much more effective. For example, state-of-the-art machine-learning algorithms can improve the diagnosis of diabetic retinopathy, which results from damage to blood vessels on the retina among diabetic patients. Nevertheless, accuracy increases significantly more when algorithms are used to identify difficult cases, which are then assigned to ophthalmologists for better diagnosis.
Exactly. Even AlphaZero has been defeated (for now).
The case Acemoglu and Johnson make, that AI poses perhaps the highest risk of disengagement between the forces of power and the distribution of progress, is weak. That is not to say that it isn't possible; there is significant uncertainty here. My point is that, having spent a decade studying the development of AI, I see significant evidence already emerging against the core assumptions underlying their gloomy prognosis. There is a better case for hope than fear right now.
Summary
As powerful and interesting as Acemoglu and Johnson's core arguments in this terrific book are, it is hard to embrace their very serious concerns about where AI might take us. They also raise concerns about the impact of AI on democracy. I am not so sure about those either, but I will leave those more complex discussions to another day.
Nonetheless, even if I am not worried about AI, I do remain worried that we have an uneven distribution of power, one that gives little confidence that socially optimal, consequential choices will be made. I remain hopeful about some of the new technological improvements while also being intensely worried about the way choices are made in our political and business institutions.