They say AI is going to be the End of Us ... Again
There's another letter of concern and I don't know what it achieves.
Another day, another group of AI experts and other concerned people have signed a letter about the risks posed by AI. This letter, signed by AI pioneers Geoff Hinton and Yoshua Bengio, as well as business leaders like Sam Altman (OpenAI) and Demis Hassabis (Google DeepMind) and hundreds of others, contains the following statement.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
That’s not an excerpt. That’s the whole statement. It is getting media attention. But for what?
One way of reading this statement is that it actually says nothing other than the very obvious. Take any THING and write:
Mitigating the risk of extinction from THING should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Who could disagree with that? We are all in favour of mitigating the risk of extinction from anything, whether it be AI or unsanitary telephones (that last one is a deep Douglas Adams call-back). And certainly, we should prioritise things that involve a risk of extinction above those that don’t, even though we don’t have a good track record of doing that.
But alas, I am an economist, so I have to make things difficult. Are they saying that if we list all the extinction risks (pandemics, nuclear war, climate change, asteroid impact and another TV ending like Game of Thrones), we should put as many resources towards mitigating AI risk as we put towards mitigating pandemics and nuclear war? And more than towards climate change and asteroid impact? If so, then I have to say that it doesn’t make much sense.
One way that it doesn’t make much sense is that we are hardly moving the long-term needle on pandemics wiping us out. Yes, pandemics are less likely than they once were, but take a thousand-year perspective and it doesn’t seem like we are trying that hard right now. Nor do we all seem to be in agreement on ending the risk of nuclear war.
But it is important to understand that, right now, AI existential risk (as distinct from AI causing misinformation, job losses and the like) is a purely hypothetical risk. Hypothetical risks can be a concern, but this one, compared to the hypothetical risk of a superbug or an asteroid hit, is still very much an academic exercise. That surely makes it less of a priority than the other existential risks, which have far more real evidence behind them.
The point here is that, yes, we can and should allocate resources towards mitigating existential AI risk. My guess is that, even being generous, that would amount to $100 million of R&D and monitoring effort. Compare that with the $100 billion we should surely be spending to deal with pandemics but are nowhere close to spending, and that risk is as real as they come.
The folks putting out these statements have to ask themselves what they are achieving here. This one is either so obvious as to not move any needle or so wrong that it shouldn’t move any needle.
Anyhow, for those who enjoy thinking about doomsday scenarios, check out the book by my co-author Andrew Leigh on the subject.