Yesterday, Sam Altman, the CEO of OpenAI, testified before Congress. He called for the regulation of AI on various fronts.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”
He actually proposed something relatively specific:
Mr. Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important for “government to figure out how we want to mitigate that.” Echoing an idea suggested by Dr. Marcus, he proposed the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations and tests that A.I. models must pass before being released to the public.
This is notable not because he acknowledged that AI involves costs as well as benefits; he was clear about where that equation tipped.
“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Mr. Altman said.
What is notable is instead his proposal that models such as the ones provided by his own business be licensed. Precisely what the criteria for such a license would be is not clear. But, let's face it, it is rare for companies to call for their own regulation.
One response to this is that it is a cynical ploy to limit competition to OpenAI from other firms, particularly startups, which tend to start small and then scale up if they succeed in creating a valuable product. Startups looking for funding would then need to have their product licensed, with all the risks and delays that might involve, including satisfying criteria that incumbents had a hand in shaping. That would potentially cap their upside and have a chilling effect on entry. Some folks at Google may think there is no moat around large language models because open-source alternatives keep springing up, but a licensing requirement sure looks like a good way to dig a moat and fill it with crocodiles. This is about as classic a restriction on entry as you can get, and because it is served up by the government, there is no antitrust prohibition that is going to subvert it.
The other response is that it is an actual ploy to limit competition to OpenAI and allow it to control the evolution of the industry. Suppose some startups receive funding and end up succeeding. They will then face two options: (1) go through the licensing process or (2) get acquired by an already licensed incumbent. Option (2) doesn't look that bad for them. Moreover, thanks to (1), the price is likely to be quite favourable for OpenAI. Basically, the end result is an industry where regulation shapes the startup space and gives incumbents a superior position, just as we have seen with biotechnology, the FDA and pharmaceutical companies.
That has the potential to take what could be a very competitive market and turn it into an incumbents' club. Worse still, it could happen right at the start of an explosion in activity. Even the period during which these proposals are openly considered may have a chilling effect.
It is very important, therefore, that legislators are attuned to these competitive risks and work to ensure they don't come to pass. They need to create safe harbours and lower-cost licensing and disclosure processes that let businesses provide these models by default rather than having to clear hurdles upfront just to reach customers. Ideally, a registration regime paired with liability for actual harm caused, rather than regulation based on speculation about that harm, would be appropriate.
The point here is: beware of incumbents asking to be regulated. Their agenda may be quite different from one that serves social welfare.