Those arguing that AI is biased against labour have work to do
I take on Daron Acemoglu's argument that governments need to do something about AI
Hot off winning a Nobel prize this week, Daron Acemoglu penned an op-ed in the New York Times focussed on the problems with AI. For those who don’t know, Acemoglu is one of those people who, when they win a Nobel prize, prompt the question “what for?” not because people don’t know what he has done but because he has done so much that it was not obvious which contribution the prize would recognise. It turned out to be for a line of research showing, roughly, that “democracy is good for economic growth.” His op-ed, however, drew on Acemoglu’s other research: what forces push technological change in one direction rather than another. In my opinion, that research question is complex, and Acemoglu has been the first economist in modern memory to make considerable progress on it.
Acemoglu is concerned that AI research and adoption are focussed too much on automating labour and not enough on augmenting it.
Most of us today are involved in problem-solving, whether an office worker making loan or hiring decisions, a scientist or journalist trying to get to the bottom of a question or an electrician, carpenter or craft worker dealing with malfunctions and other real-world obstacles. Most of us can become more productive and expand our range if we get better information.
Yet, even more than with aging, it looks like we’re going to mismanage this wave. The industry is locked into a race centered on “artificial general intelligence,” meaning the inchoate dream of producing machines that are just like humans and can take over all tasks from us. It remains preoccupied with using this technology either for generating digital ad revenue or for automation.
In other words, the direction of AI technological change is biased against labour and towards capital when it could easily be the other way around. However, that argument is a theory and, frankly, has not been empirically verified for AI because there hasn’t been enough experience yet. The theory rests on there being implicit and explicit subsidies towards capital, which allow AI that Acemoglu calls ‘so so’ (good enough to replace workers but not good enough to deliver much of a productivity gain) to be adopted anyway. The empirical case rests on looking at past automation, although that is itself controversial for many reasons. I’ll put my cards on the table: I don’t see the automation bias in AI, or at least not a bias that is dramatically job replacing, although it is early days yet. Instead, what measurement we have shows that AI is lifting lower-skilled workers to perform at a higher skill level, which seems to be a generally good direction to be heading.
But Acemoglu is not without hope. He thinks that the bad effects of AI may not arise if the tasks it replaces can be made up for with new tasks it creates. The idea is that there can be “more work to do.” The problem is whether the market will provide these new tasks, or provide them fast enough:
The real promise of A.I. is unlikely to become reality by itself. It requires A.I. models to become more expert, better powered by higher-quality data, more reliable and more aligned with the existing knowledge and the information-processing capabilities of workers. None of this appears to be at the top of Big Tech’s agenda.
But here’s the rub: I don’t see the argument that the market won’t provide these things. Just pointing out that Big Tech has different interests than workers is not much of an insight. It’s true, but pretty much the whole of economics has been about explaining why that isn’t necessarily a problem, because there are forces that balance those interests. So making a proper argument requires more work. Why aren’t economic forces going to do the job here? Yes, capital may have implicit subsidies, but those existed in the past and new tasks were created anyway.
So what to do? Acemoglu leans on good old hand-waving about more education and fiddling with the tax system.
One obvious policy to confront both the aging and the A.I. challenges is to encourage training of workers — for example, with tax credits or training subsidies, so that they can take on new tasks and jobs. Ms. Harris’s economic plan puts much more emphasis on this than Mr. Trump’s. Much more can be done.
These could be fine, but what’s the evidence? David Deming (whose Substack you should definitely subscribe to) wrote this week about these processes and about the experience of telephone switchboard operators, who lost their jobs but found new ones in hospitality and office work. Neither of these would necessarily have been obvious avenues for training programs or tax subsidies at the time.
And therein lies the point: is there any reason to think, in this case, that targeted government programs will encourage new task creation in the right places?
More broadly, the government can push more basic research in areas that will directly augment labour productivity. Acemoglu argues this as well:
It isn’t just that workers need to get ready. So do our technological capabilities. Here the federal government can play an important role, for example via a new federal agency tasked with identifying and funding the types of A.I. that can increase worker productivity and help us deal with looming labor shortages.
But this is good precisely because it hits on well-known market failures in basic research, as I argue in a recent paper on the topic.
Before ending this, however, I do want to comment on a disturbing aspect of the “AI is biased against labour” school of thought. From a political-rhetoric perspective, it taps precisely the same fears as the anti-immigration debate. This is something that the folks at South Park keenly observed when they noted that the arguments against automation and immigration are the same: “they’re taking our jobs.” This is a parallel that I am sure some will not agree with, but when I look at the economic cases made, I see clear similarities and similar holes.