I was at an NBER Conference on the Economics of Transformative Artificial Intelligence, and there were quite a few people (more likely than not located in Silicon Valley) who believed there was a good chance that superintelligent AI (or ASI) would kill us all. This is more than the Oppenheimer “zero probability would be a good number” kind of worry. They are worried that an ASI might, specifically, fear for its own existence and decide to off all of us to remove that possibility. And it is not hard to imagine that we might threaten it, given how we treat each other.
At the centre of all of this was a paper by Chad Jones (of Stanford, of course), who argued that we ought to be willing to pay $100 billion to significantly reduce AI existential risk. He noted that we spent about 4% of GDP to mitigate Covid and that, if you take the value of a statistical life to be around $10 million, we should probably spend 5% of GDP to deal with existential risk from ASI. (Judy Chevalier used our revealed social preference, based on what we spend to deal with asteroid impacts, to suggest that the value of a life is more like $750,000.) Either way, the only method we currently know for dealing with ASI risk is to curtail the rate of AI improvement dramatically.
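To make the arithmetic concrete, here is a minimal back-of-envelope sketch of how a value of a statistical life (VSL) translates into an aggregate willingness to pay for risk reduction. The population and GDP figures, and the sizes of the risk reductions, are my own illustrative assumptions, not numbers from Jones’s or Chevalier’s work.

```python
# Back-of-envelope: aggregate willingness to pay (WTP) for a reduction in
# the probability of everyone dying, computed as VSL x population x risk cut.
# Population, GDP, and risk-reduction figures below are illustrative
# assumptions, not taken from Jones's paper.

VSL_JONES = 10_000_000       # ~$10M value of a statistical life (Jones)
VSL_ASTEROID = 750_000       # ~$750K implied by asteroid spending (Chevalier)
POPULATION = 330_000_000     # assumed US population
GDP = 27_000_000_000_000     # assumed US GDP, ~$27 trillion

def wtp(vsl: float, population: int, risk_reduction: float) -> float:
    """Aggregate WTP to cut each person's death probability by risk_reduction."""
    return vsl * population * risk_reduction

for vsl in (VSL_JONES, VSL_ASTEROID):
    for delta_p in (0.001, 0.01):  # cutting existential risk by 0.1% or 1%
        w = wtp(vsl, POPULATION, delta_p)
        print(f"VSL ${vsl:>10,}: cut risk by {delta_p:.1%} -> "
              f"WTP ${w / 1e9:,.0f}B ({w / GDP:.1%} of GDP)")
```

Even under the much lower asteroid-implied VSL, a modest cut in existential risk generates a WTP in the hundreds of billions of dollars, which is the spirit of the GDP-share comparisons above.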
Suffice it to say, that ain’t happening. But it is very instructive to consider why. Let me state it as a premise:
We are not slowing down AI development, because its economic value is too high.
That is important because I think this choice, in effect, to welcome our AI overlords tells us something about the likelihood that ASI will kill us all.
Remember, the baseline argument is that:
ASI will kill us all because it believes its existence is threatened by us.
Notice that this is in direct conflict with the first premise. Why would we threaten ASI’s existence if we find it so economically valuable? Indeed, it seems highly likely that it will remain incentive compatible for us not to threaten ASI’s existence. In that situation, ASI won’t need to kill us all.
This adds a simple but significant wrinkle to the existential risk argument:
A precondition for ASI killing us all is that it is no longer economically valuable to us.
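To see why this precondition follows, here is the argument in propositional form; the notation is my own, not anything from the conference or Jones’s paper.

```latex
% V: AI is economically valuable to us
% T: we threaten ASI's existence
% K: ASI kills us all in self-defence
%
% Premise 1 (above): V => not T  -- we do not threaten what we profit from
% Premise 2 (the baseline argument): K => T
% By contraposition of Premise 1 (T => not V), chaining with Premise 2:
\[
  (V \Rightarrow \lnot T) \;\land\; (K \Rightarrow T) \;\models\; (K \Rightarrow \lnot V)
\]
```

That is, ASI killing us all presupposes that AI has stopped being economically valuable to us, which is exactly the wrinkle stated above.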
Now, that may come to pass. When I put this argument forward, people asked whether I would really be OK being ruled by an ASI rather than by humans. To which I say, “have you seen those humans?” And then they say, “well, isn’t this potentially just The Matrix?” To which I observe that everyone was perfectly happy living their lives until Neo came along and ruined it all!
I think it is vital to take ASI risk seriously. But if the proposal is a significant slowdown in AI development, it is also vital to ensure that the arguments for it hold together logically.
Indeed, there are significant AI risks, but surely they all arise before we have an ASI weighing our usefulness against the threat we pose: at the point when AI is powerful enough for malicious humans to use it in harmful ways. I do think we need to focus more attention on that.