Should we slow down the development and adoption of AI?
Yes, seriously, that is a question people are asking a lot these days.
My main field in economics, if I have one, is the economics of innovation. That field asks the question:
What can we do to get more innovation and have more innovations adopted?
This question has an inherent bias: it presumes that innovation is good and that more of it is even better. There is a common-sense notion to this but, in economics, it comes from the logical consequence that the only way to obtain productivity increases (that is, getting more out of what we have) is to come up with new ideas and implement them.
There are, of course, caveats. “Innovation” is a term loaded with goodness. We don’t tend to put in that bucket ideas for a drug that causes birth defects, a narcotic that is more addictive, processes that generate uncontrollable pollution, or the Juicero.
These are, technically, innovations (the result of new ideas) but they are not innovations that we want to have adopted. So we put in place procedures to ensure they cannot be adopted without being declared safe, we prohibit them from being adopted, or we simply don’t buy them.
Given all of that, these days I feel like I am in a bizarro world where many want less innovation, or at least a much slower rate of innovation adoption. This has all come about because of AI and, in particular, GPT-4 and other LLMs. I have attended numerous forums on the regulation of AI over the past few months, and the vast majority of voices are advocating some version of “shutting it down until we know the heck what is going on.” In other words, the presumption with AI is shifting from “innovation is good” to “innovation is bad.” This is all the more surprising because, as far as I can tell, AI hasn’t yet done the terrible things, or even mildly terrible things, people worry about. Instead, it is the concern that those things might happen that is driving the presumptive shift.
And they can be persuasive. Daron Acemoglu, who, in economics at least, is one of the leading voices advocating a slowdown in AI adoption, puts it this way (quote paraphrased from memory):
In 100 years, if AI adoption is delayed by five years, will anyone notice? If we deploy a harmful AI, they will notice.
This turns out to be a powerful argument and persuasive to many who might be in a position to regulate AI. But there is a lot going on under the hood in that statement, and so today, I want to lift that hood up and look at its parts.
First, however, I should say that I “get” the potential concerns about AI. It is a technology that is software-based and can be deployed at scale. It is a technology that can be deployed in an automated way and so could go off doing things without human control or even observation. And no one seems to have a good handle on how it really does what it does, especially when it is doing things very well. My point here is not to diminish the concerns. I am quite happy to accept them. The question is whether those concerns should drive us to slow development and adoption below their current pace.
Time Horizon
Let’s start with the first assumption in the Acemoglu thought experiment: the “in 100 years” part. Rhetorically, it is hard to argue when the test is placed a century from now because we like to think that what we are doing is good for our grandchildren. Those in favour of innovation use the same trick by saying that if we increase the pace of innovation by a few percentage points each year, then, via the miracle of compound interest, in 100 years we end up with multiples of the wealth we would otherwise have.
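Purely as a back-of-the-envelope illustration of that compounding arithmetic (the 2% and 3% growth rates and the 100-year horizon are arbitrary numbers chosen for this sketch, not estimates from Acemoglu or anyone else):

```python
# Back-of-the-envelope illustration of compound growth over a century.
# The growth rates below are purely illustrative assumptions.

def wealth_multiple(growth_rate: float, years: int = 100) -> float:
    """Multiple of today's wealth after compounding at growth_rate per year."""
    return (1 + growth_rate) ** years

baseline = wealth_multiple(0.02)  # 2% annual growth -> roughly 7.2x today's wealth
faster = wealth_multiple(0.03)    # 3% annual growth -> roughly 19.2x today's wealth

print(f"2% for 100 years: {baseline:.1f}x today's wealth")
print(f"3% for 100 years: {faster:.1f}x today's wealth")
print(f"The extra percentage point multiplies the century-end payoff by {faster / baseline:.1f}")
```

Under those arbitrary assumptions, a single extra percentage point of annual growth roughly triples where we end up after a century, which is why the compound-interest framing is rhetorically powerful in both directions.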
We should ask ourselves whether we should discount the next five years so much. For instance, take an AI, built by a startup, that is being deployed in pools to predict whether someone is drowning and alert people quickly. It is cold comfort to people at risk right now not to have that AI available. Or, more generally, do we want to limit AI that is being deployed to help scientists find new medical treatments for various fatal conditions? Put simply, if those efforts succeed, it is the difference between a grandchild being born and still around in 100 years versus not being there at all.
My point here is that we should recognise that a choice of 100 years’ time is somewhat arbitrary. Without a specific reason to favour one generation over another, it is something that evokes feelings more than it drives policy today in one direction or another. We have used it to promote innovation, but really I have to admit that any policy moves we have made to promote innovation have come because of their immediate impacts.
The Length of the Delay
Now let’s consider the next part: “if AI adoption is delayed by five years.” Where does the five years come from? It is not far off what we do, in normal times, to evaluate the safety of drugs and other medical interventions. Running randomised controlled trials and surfacing unintended side-effects can take years if done safely, moving from animals to patients to a broader population in the case of vaccines.
Loaded into this statement is the idea that the policy being considered for AI is some sort of pre-clearance overseen by an organisation designed to operate like the FDA. But it could also be construed as some estimate of the delay that might come because innovators are worried about the application of product liability law down the track. These things, by design, tend to delay the deployment of innovations because of a general concern regarding safety.
But we need to be careful about the “five years” assumption. Is it “just” five years, or is it more? And it can be more. My colleague Alberto Galasso and his co-author Hong Luo found that increased regulation of innovation deployment can have reverberations right down long vertical supply chains and chill innovation across many domains beyond the one that is being regulated. The effects were very large, and this was a domain where the law was relatively well-established. Imagine what might happen in a domain like AI where there is even more uncertainty surrounding these things.
The point here is that we have to be careful about accepting that it would be a five-year delay when it could be decades, or when regulation could shut down an entire line of research.
What do we do with the time?
The “delayed by five years” part of the thought experiment has another element to it, one whose rhetoric is oddly pessimistic for the regulatory cause. It invites us to think that regulation is potentially useless and that all it does is delay adoption. But in that case, why delay at all?
The point of a delay is to do something with the time you get. The idea is to conduct more research into the safety of AI before it is deployed. On the one hand, this is a call to understand more about what AI algorithms are doing and how they work before deploying them. This is surely a good idea. It is better to understand things before putting them out there, as that understanding helps us predict what is likely to be unsafe and also gives us a foundation for correcting it. It is good science. I won’t argue with that except to note that there is inherent uncertainty even in this process. We regularly deploy things that we understand “mostly” rather than with certainty. The point, however, is that we try to understand things as much as we can before using them.
But the other side of this goes beyond understanding. The whole FDA process for drug approval involves a staged move towards ever larger numbers of people who would try the drug treatment. It builds in a process of experimentation, with each successive stage allowing a larger group of people to be exposed. The experimental methodology here makes sense because our presumption is that biology does not change that much as we move from animals to sick humans to well humans. Thus, the experiment itself is believed to scale (although even this is imperfect because of well-known differences among people).
Would such a process work for AI deployed beyond the medical realm? For any AI directed at predicting physical things, the answer is probably yes, but these are also areas where there exist product safety regulations. The AIs that are unregulated are ones that impact human behaviour. That is the issue with ChatGPT, the algorithms that drive our information feeds and anything else designed to interact with humans at a cognitive level rather than a physical one. Can lab experiments scale for those AIs?
There is a sense in which they can. For instance, many experiments using LLMs are showing their productive benefits; a large study released last week is a good example.
But what about unintended consequences, bad actors or equilibrium consequences? Experiments in the lab or field are good at identifying first-order impacts, but what about impacts that only reveal themselves as uncontrolled adoption takes place? If you look at the concerns regarding AI, they are almost all in this latter category: a bad actor takes an LLM and uses it to engage in cyberattacks, or the continued use of algorithms leads to the undermining of democracy.
The point here is that there are real reasons to believe that only true “market” tests can reveal these issues with AI. If we delay deployment by five years, are we just delaying the time at which we can identify those unintended and harmful consequences? In that respect, we are just pushing problems down the road while at the same time delaying all other AI adoption by five years. My personal belief is that part of the innovation process is surfacing bad outcomes and trying to deal with them as soon as possible, and I am not confident that, with respect to the many concerns regarding AI that we have not otherwise dealt with elsewhere, ex-ante regulation can do much good.
Can we deal with “bad” AI?
I am not going to say much about this, but it is useful to note that a presumption in the Acemoglu thought experiment is that we can deal with the “bad” AI. That is, if we were able to learn that an AI algorithm is bad, could we actually prevent its deployment? I personally think that, in many cases, we can. Regulation can be effective, which is one of the reasons why I am worried about its blanket use. But many have argued that AI algorithms can be picked up by, say, bad actors anyway, and so delaying legitimate uses of AI because of that possibility is fruitless.
At the same time, it is useful to point out that ex-post regulation still exists to deal with “bad” AI. If we find that an AI is harmful, we have the opportunity to regulate it after the fact. There are issues there, but it is an option open to us.
The incentives of AI innovators
The economics approach to regulation is that the case for regulation rests on a market failure. And there are potential market failures. There are reasons to believe that firms will underweight safety in their research and products. But equally, there are market pushes towards safety. Put an unsafe AI out there in the marketplace and, when that gets discovered, it can really harm a brand. As an example, consider the early deployment of Apple Maps (not necessarily AI but software, which is close). It has been over 10 years since it was deployed alongside very public glitches relative to Google Maps. These days, those issues are gone, and yet Apple Maps is still plagued by the reputational damage from that early deployment. Firms have really strong incentives to try and make products work out of the box. AI adoption is littered with such issues, which is perhaps why hundreds of millions of dollars are being spent doing the sort of experiments that regulators might otherwise insist on before AI products are released.
Perhaps for this reason, the big players in AI have seemingly been willing to rock up to regulators and support regulation in the name of safety. They are already doing those things to protect their brands. Thus, what they get from regulation is that the same conditions will be imposed on those who are looking to deploy AI but don’t have a brand to protect. Those startups have an advantage, long studied in economics, precisely because they are unencumbered. It is one of the reasons why competition can spur innovation. And similarly, it is one of the reasons why incumbents may find themselves in favour of regulation even if it includes them.
It may well be that we don’t want startups to be able to deploy AI that proves to be unsafe. But at the same time, I have to admit to being suspicious when the big and powerful are pushing for more regulation. When they do so, it is easy to imagine that their agenda involves protecting themselves from competition in the name of security and safety. Indeed, such motivations have historically been the catch-cry of those with market power. In that regard, the rhetoric provided via the Acemoglu thought experiment can be seen as playing into the hands of the powerful, and I find it hard to believe that its proponents intend that.