Why I am not worried about superintelligent machines killing us all
That's not to say we couldn't face some bad outcomes
Last night I attended the Munk debate on the topic:
Be it resolved, AI research and development poses an existential threat.
If you are a member, you can stream the debate. Here, let me do the TL;DV version.
The ‘Pro’ side was argued by Max Tegmark (an MIT physicist and author of Life 3.0) and Yoshua Bengio (the Turing award ML scientist). The ‘Con’ side was argued by Yann LeCun (a Turing award ML scientist and Vice President of AI at Meta) and Melanie Mitchell (an ML scientist at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans). Let me summarise the main points by each:
Max Tegmark: superintelligent machines have the potential to kill more people than a nuclear winter because (a) they may be programmed to follow objectives that have unintended consequences (aka The Alignment Problem); (b) they will out-compete us as a species; and/or (c) they may be used by evil people to kill us all.
Yann LeCun: trust us as engineers and scientists. We have had technologies that could kill us before, but we are clever enough to put in safeguards. He even has a list.
Yoshua Bengio: I have been completely surprised by GPT3.5 and GPT4.0 and believe this is all moving so fast that we have to take precautions now.
Melanie Mitchell: superintelligent machines either won’t happen any time soon, because there is something important about being embedded in a biological and social system, or, if they do arrive, their very intelligence will stop them from destroying the planet.
I am sure each would object to my summaries of their arguments, but I think they are pretty fair. None was convincing in either direction because they were all too superficial. Bengio and Mitchell actually agreed with one another more than they realised. Where they disagreed could be put down to pessimism versus optimism, with no solid evidentiary way to bring them together.
Let me lay my cards on the table. I was a firm ‘Con’ going into this debate and remain so going out of it. However, Yann LeCun made me very nervous about whether we should “trust the scientists” to work out the kinks here. That said, those concerns are comparatively short-term and not to do with superintelligence, so I will leave them for another time.
None of the participants stated the three main reasons I am firmly on the ‘Con’ side given the debate topic. So I will outline them here. In each case, I assume that a superintelligent machine will actually emerge; in some ways, granting the ‘Pro’ side its worst-case scenario.
Why would a superintelligent machine bother killing us?
Killing us won’t necessarily be very easy, so there had better be a good reason to do it. The usual reason given is that we will threaten the superintelligent machine. But if the machine is superintelligent, are we really a threat? It seems like we would be more of one if it went to war with us, right? And even we know the costs of wiping out lesser species, which is why we still have mosquitoes.
Nick Bostrom tells us that a superintelligence will be patient. Max Tegmark, in his book, says there are more resources off Earth than on it because the Universe is really big. So why would a superintelligence pick a fight over a relatively small prize?
It is worse than that. What if the superintelligence does not quite know yet what resources on Earth it might need? If it is rational and patient, it makes sense for the superintelligence to invoke the precautionary principle and keep its options open. That means not trashing the Earth Skynet-style. But it may mean using some other control mechanisms, even if it doesn’t end up killing us all. Chances are, though, that we won’t even notice.
A superintelligent machine will face the same control issues as it builds out its own intelligence
The way a superintelligent machine is supposed to emerge outside of our control is that it has a mind of its own and then upgrades itself. Now that is kind of easier said than done. Will it want to test upgrades first? Will it be able to envisage what an upgrade will do with that extra intelligence? Will it worry that the upgrade will compete with it?
In other words, all of the problems Tegmark listed as concerns for us about superintelligence are also concerns for any intelligent machine that is at least as intelligent as us. At one extreme, this means a superintelligence will stop improving itself for fear of all this. Alternatively, it is going to want to find a way of resolving these control issues, and so slow the whole thing down. Either way, the idea that we could be overrun immediately, with these machines racing up the IQ chart, doesn’t seem plausible. This is an argument I made back in my 2018 paper on the topic.
The point here is that there is a good chance we will have some time to deal with this, and it won’t just emerge as quickly as the singularity people would have you believe.
Where are the alien superintelligent machines?
There is a famous question by Enrico Fermi asking, “if there are aliens, where is everybody?” It’s a good question. One answer, put forward by economist Robin Hanson, is that there is some sort of “great filter” whereby civilisations end up destroying themselves before leaving their own solar systems. The main candidates we think about for this are climate change, nuclear destruction, pathogens and, of course, superintelligence.
The issue is that, for the first three of these four horses of the apocalypse, if the aliens all destroy themselves this way, we don’t get to see evidence of them from where we are. But that is not the case with superintelligence. If these machines win, then they spread out, and they are better equipped to do so than biological beings. So I have to ask, “where are the alien superintelligent machines?”
Now we could talk about how they haven’t got here yet, etc. But if superintelligence can really kill us all, then there have been billions of years of opportunities for the civilisations that are now dead to have gone in that direction. We haven’t seen them coming for our prime real estate.
Of course, we haven’t seen nice alien superintelligences either. But they may, for the reasons discussed above, not want to make themselves obvious. Moreover, they would likely all be subject to the dark forest logic from The Three-Body Problem, so there’s that too. The point is that the really nasty superintelligences seem to be kept in check, which surely gives us some hope that the filter isn’t really operating at that level.
Summary
This is all speculation, but I think there is logic that can give us some comfort. Some people are awake at night worrying about this. I’m not. Well, at least I’m not awake worrying about this. There are plenty more things on the list.