It isn’t every day that a new paper by two recent Nobel prize winners and one future Nobel prize winner annoys you, but I was very much annoyed by this one: “Building pro-Worker AI” by Daron Acemoglu, David Autor and Simon Johnson, written for the Hamilton Project.
Now you may ask: hang on, you aren’t pro-worker when it comes to AI? Well, the answer is: I don’t know because I don’t know what pro-Worker AI really is, and from this particular paper, which claims to tell us, I am not sure anyone knows.
To be sure, however, I share with the paper’s authors a desire for welfare improvements from technology. I want the upsides to exceed the downsides. I also prefer technologies that make outcomes more equal in income and wealth across the economy. That means, in general, technologies that increase the share of income going to labour rather than capital are somewhat preferable to those that do the opposite. So if there is such a thing as anti-Worker AI, then I don’t think that more of it is, on its own, a good direction to be moving.
But these general thoughts are not guiding things here. Instead, what is going on is an argument that governments (and others) should be actively promoting more of a thing called pro-Worker AI. At a minimum, that requires a definition of what characteristics of a technology make it pro-Worker.
Here’s what the authors say to set us on that journey:
We define pro-worker technologies—including AI—as technologies that make human skills and expertise more valuable by expanding worker capabilities.
They then immediately qualify this, describing it as a definition that seems “innocuous.” They write, “who could not want human expertise to rise in value?”
Well, it is not at all obvious that we want human expertise to rise in value. As one of the authors (Autor) wrote in a recently published paper entitled simply “Expertise,” expertise is a human capability that is also scarce. This is a slide from one of his talks on the subject:
It is not just a skill or superior capability but one that is not readily available.
What this means is that the definition of pro-Worker AI above is not innocuous but contains a paradox: a pro-Worker AI will expand worker capabilities (the first part of the definition of expertise) AND make that expertise more economically valuable, which is normally associated with it being more scarce!
This tension bears on their argument as they try to unpack when a technology will be pro-Worker or not. In particular, they list five types:
Labor-augmenting technologies make workers more effective at the tasks they already do.
Capital-augmenting technologies make machines (e.g., algorithms, processes, innovations) better, cheaper, or faster at performing their current tasks.
Automating technologies directly reshape the division of labor between workers and machines by substituting machinery or algorithms for tasks that were previously performed by workers.
Expertise-leveling technologies enable a new set of workers to perform tasks that previously demanded expertise from another domain(s).
New task-creating technologies create new human tasks.
Now, a normal person asked which of these would be good for workers and which would not would probably put 1, 4 and 5 in the pro-Worker bucket and 2 and 3 outside it. But without going into details, here is where they end up:
There is only one unambiguously pro-Worker technology: one that creates new tasks. What sort of technology does this? They give one example:
Innovations such as ethernet networks, fiber optic cabling, and occupancy-aware heating and lighting systems have vastly increased the quantity and complexity of electrical cabling required in modern buildings. Befitting this complexity, workers now require specialized expertise to plan, install, and maintain these systems—expertise that was neither relevant nor prevalent prior to the introduction of these technologies.
Basically, when a technology is so different that people have to do new things in order to make it work or adopt it, that is generating new tasks.
But what about the rest? Let’s look at one that I have always thought was unambiguously good to have: expertise-levelling technologies. Why do I think it is so great? Because that’s my job. I take people without skills and provide conditions under which they acquire skills. Education is an expertise-levelling thing. If I stopped, there would be fewer people with skills akin to those who already have them.
So you would think that the authors would also be of the same view, as they are also in the education biz. But no.
Expertise-leveling technologies typically raise productivity by reducing costs. While firms almost always benefit from lower costs, the effects on workers of expertise-leveling technologies are usually mixed. By expanding the effective capabilities of less-expert (typically less-well-paid) workers, expertise-leveling technologies frequently make these workers’ expertise more valuable and hence boost their earning power.
You can see the earlier tension rearing its ugly head. Improving worker capabilities reduces the scarcity of expertise. So if you were a worker commanding a premium in the marketplace for your scarce expertise, and others can now compete for those jobs because of expertise-levelling AI, then your income may fall even as the incomes of the newly capable workers rise. If I care about aggregate income going to workers, then, just as a monopolised industry earns more profits than a competitive one, it is entirely possible that total income going to workers falls.
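The arithmetic behind this worry can be made concrete. Here is a toy sketch of my own (not anything from the paper), assuming a simple constant-elasticity demand curve for expertise, with the wage given by w = A·L^(−1/ε). When demand for expertise is inelastic, expanding the pool of capable workers lowers the wage premium by more than the headcount rises, so aggregate income to those workers falls; when demand is elastic, the same expansion raises it.

```python
# Toy model (illustrative assumption, not from the paper): constant-elasticity
# demand for expertise, so the expert wage is w = A * L**(-1/eps).
def aggregate_expert_income(workers, eps, A=100.0):
    """Total income earned by `workers` experts under demand elasticity `eps`."""
    wage = A * workers ** (-1.0 / eps)
    return wage * workers

# Inelastic demand (eps = 0.5): doubling the expert pool halves total income.
before = aggregate_expert_income(100, eps=0.5)
after = aggregate_expert_income(200, eps=0.5)

# Elastic demand (eps = 2.0): the same expansion raises total income.
before_e = aggregate_expert_income(100, eps=2.0)
after_e = aggregate_expert_income(200, eps=2.0)
```

The point of the sketch is only that the sign of the effect on aggregate worker income depends on a parameter (the demand elasticity) that the pro-Worker label does not pin down.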
As I said, however, the same is surely true of education. Back in the day, when only a few people could read and write, those who could did very well. Then they started teaching everyone, and lo and behold, reading and writing weren’t that big a deal anymore. But I don’t recall anyone ever saying that universal education or increased access to higher education was not pro-Worker. And I don’t think these authors believe that either. The question, then, is what to make of what they write here?
The answer is surely: not much. They turn to consider examples of unambiguously pro-Worker technologies and end up with one close to my heart: an AI teaching support tool. They write:
What part of this technology is pro-worker (pro-teacher)? While numerous automated teaching offerings attempt to reduce the need for teachers, a successful rollout of an AI tool of the sort might increase teacher efficacy, enabling them to focus their scarce attention in places where it is most needed, and to engage the students at the level that is most helpful for them. The tasks that this tool will enable teachers to perform are more sophisticated and potentially novel, satisfying one part of our definition. Teachers will also have to develop new expertise for harnessing this tool and for mastering a more flexible approach to teaching, thus satisfying the second part of our definition for pro-worker AI. This form of classroom education would likely be still more effective if additional teachers, working in tandem with the tool, were available to support subgroups of students according to their learning needs. Schools using this technology also might ultimately want to hire more teachers.
They end up here saying things I like to believe will happen with these tools: that they will alleviate attention constraints, lead to more sophisticated teaching and require new expertise. But you can also see that they aren’t sure. Why? Because you can’t be sure. This sort of tool may allow more people to teach effectively, which would then be expertise-levelling, even if that is great for the places where teaching resources are scarce.
Therein lies the point. It is hard to judge technologies and predict their consequences. That means that, as economists, we need to start by thinking the way we usually do: is there a reason to think that markets are failing in a way that biases the technologies being produced and adopted so as to harm a group or, worse, everyone? That is how we decided fossil fuels were a problem: we weren’t pricing in the cost of pollution. But with respect to AI, a pro-Worker approach or something like it would have to identify what is not being priced properly by the market. To be sure, they do spend a little time trying to answer that. But at least to me, the line between that analysis and their promotion of certain technologies was really hard to draw.
Instead, we are left with a laundry list of unobjectionable policies such as building AI expertise in government, encouraging competition, encouraging excellence (why not?), giving workers a voice, tax reform, preventing expertise theft (which I think means stronger copyright enforcement), and getting rid of occupational licensing (where I can think of currently licensed workers who won’t think that is pro them at all; just ask a doctor). None of these has any obvious line back to some sort of bias preventing certain types of AI. It ends up just being things the authors would like to see, dressed up in a soup of AI fear. It isn’t the way policy advocacy should work.