Yesterday, the White House had a meeting with seven AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — and they all agreed to eight commitments on AI. A commitment is something we can credibly believe these companies will do. So let’s take a look at those commitments.
1. The companies commit to internal and external security testing of their AI systems before their release.
2. The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks.
3. The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
4. The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
5. The companies commit to developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system.
6. The companies commit to publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use.
7. The companies commit to prioritizing research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy.
8. The companies commit to developing and deploying advanced AI systems to help address society's greatest challenges.
Let’s dig in.
Commitments 1 to 3 look very much like things these companies are doing anyway. That makes them credible, for sure, but there is nothing new to see here. Similarly, Number 4 is something these companies tend to do anyway, though it does at least give third parties something to point to if it isn't being done.
Number 5 is something that they may or may not have otherwise done. But what does it mean for content to be AI-generated? Does it mean that some AI was used somewhere along the way? That seems unlikely to be the intent. If you use Grammarly, which is AI-powered, or some of the AI-powered tools in Photoshop, does that require some accounting? Moreover, it isn't these companies but their users who are really using the AI, and this commitment says nothing about how that information might be transmitted to others. The commitment is, basically, that a system for this will exist, but one suspects that people who don't want it known that their content is AI-generated will have little incentive to use it. Any economist would instead have asked for a system that allows people to certify content as "not AI created." That would have the same signalling impact, but it would not rely on the very actors who have no incentive to signal to generate the signal. But, as per usual, AI policy is economics free. Sigh.
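To make that signalling point concrete, here is a minimal toy sketch in Python. It is entirely my own illustration (nothing like it appears in the commitments), and the function names are hypothetical. It uses a shared-secret HMAC purely to keep the sketch self-contained; a real "not AI created" registry would presumably issue public-key credentials so that anyone could verify a creator's claim.

```python
import hashlib
import hmac
import secrets

# A hypothetical registry issues a secret key to a verified human creator.
def make_creator_key() -> bytes:
    return secrets.token_bytes(32)

# The creator attaches a tag asserting "a human, not an AI, made this".
def certify_not_ai(content: str, creator_key: bytes) -> str:
    return hmac.new(creator_key, content.encode(), hashlib.sha256).hexdigest()

# A verifier with access to the same key material checks the claim.
def verify_certificate(content: str, tag: str, creator_key: bytes) -> bool:
    expected = hmac.new(creator_key, content.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = make_creator_key()
    essay = "An essay I wrote myself."
    tag = certify_not_ai(essay, key)
    print(verify_certificate(essay, tag, key))        # True: certificate holds
    print(verify_certificate(essay + "!", tag, key))  # False: content was altered
```

The incentive logic sits outside the code: here the party producing the signal (the human creator) is the party who wants it to exist, whereas under the watermarking design the party who must apply the mark is exactly the party with an incentive to strip it.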
Product liability laws already deal with Number 6 to some extent. The supporting text says, “This report will cover both security risks and societal risks, such as the effects on fairness and bias.” Well, isn’t that going to thrill the general counsels of these companies! Again, I am not sure the incentives for anything real are aligned here.
Number 7 is pretty darn vacuous. At best, it is an agreement that research would be nice. But let's face it: this is a clear case where it is the government's job to promote and fund such research. It should do so.
Number 8 is, let's face it, a ROFL-type thing. Well, at least for me. It is nice that they would all like to do that, and if you read the mission statements of each one, they already say they do. Well, maybe not really when you read further: "From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all." AI is potentially great. I agree. But it seems like this is a commitment only because a prime number of commitments (that is, stopping at seven) wouldn't be good. I would like someone to commit to it, but it does not deserve the commitment label.
Finally, I want to note that a few really big companies were conspicuously missing from the table here: Apple, Tesla and NVIDIA. These three are surely the real powerhouses in terms of the actual use of AI at scale. Apple, for one, never uses the term "artificial intelligence," so I guess it would claim not to have an interest here. But come on. The White House couldn't even convene the full suite of US AI companies for this exercise. I would like these policy agenda-setting events to be serious, but right now this one looks far from that.