No ifs or bots: AI is a 21st-century nuclear threat
As technology advances at an unprecedented rate, keeping the 'black balls' safe is vital, says a prominent 'doom monger'
Like a bookie, Nick Bostrom is an odds man. But it is humanity’s form he’s interested in. And because his calculations suggest the chances of machines soon destroying our species are not impossibly long, the sentiment for which he is celebrated is disconcertingly short: “The end is nigh.”
This characterisation – of him as artificial intelligence’s leading man in the sandwich board – is one he disputes. “The perception is that I am a doom monger,” he says. “Actually, I’m in the middle between doom and utopia.”
Certainly, the Future of Humanity Institute (FHI), at the University of Oxford, which he runs, sounds upbeat enough. And looking forward this year, Bostrom is capable of ideas of jaw-dropping positivity. He says the world our children will experience by the time they are old will “boggle the mind”.
“If,” he adds without pausing, “they get old.”
For once, he’s not suggesting some cataclysm. Rather, he is casually suggesting that scientific advances may soon, one way or another, defeat the ageing process. Such radical concepts are his stock in trade. But crucially, his consideration of them is practical, not theoretical. To him, the fusing of philosophy and mathematics permits mankind to weigh the likelihood of apparently outlandish existential catastrophes or technological blessings, to list and consider them like the runners and riders at a rural racecourse, and act accordingly.
So, Bostrom does not merely evaluate the possibility that man may no longer grow old, he has taken a stake, joining the transhumanist movement that seeks to extend lifespans. Such ideas, he freely admits, used to be reserved for “sci-fi writers” and “crackpots on the internet”. Only recently, reflecting on the concept of AI-driven “superintelligent” machines, it was “a very, very fringe thing”, he notes. “Now you can have a serious conversation.”
Deadly serious. For all the uplifting sentiment, there is no doubt that, in Bostrom’s eyes, an invisible question mark hovers at the end of his institute’s name. Does man even have a future?
Discouragingly, his latest paper is called “The Vulnerable World Hypothesis”. It proposes that technological advance is like a bag in which are contained some catastrophic “black balls” that would devastate our civilisation. Every time mankind dips in a hand there is a possibility of disaster. AI is one such risky technology, which he likens to the nuclear discoveries of a century ago.
Then, like now, concerned scientists struggled to alert their fellow man to the looming threat. Bostrom tells the story of Leo Szilard, the Hungarian physicist who tried to warn the Allies of the potential of nuclear weapons, and repeatedly saw his efforts come to nothing, despite recruiting Einstein to his cause. “We were in a war against Hitler, and there was this super weapon that Hitler might have been building for all we knew, and for all that time he couldn’t get them to focus on it. It took a heroic effort.”
Is Bostrom AI’s Leo Szilard? He is coy. “Szilard is one of the few people who had not just technical foresight but also strategic foresight. One of the first things he did when he saw nuclear fission was possible, and immediately saw this could be bombs, [was try] to persuade some of his nuclear physics colleagues not to publish their research. He is a person you can feel inspired by.”
It is odd to hear an academic celebrating a figure who stemmed the tide of knowledge, even for a noble purpose. But Bostrom is not advocating something similar with AI. In any case, he thinks it impossible. “The only way it could fail to be developed that seems realistic to me at all, is if we destroy ourselves through some other means before that. Which obviously would be bad.”
“It”, in this context, is the “singularity”, the moment artificial intelligence matches, then bursts past our own mortal capacity, triggering a technological acceleration of unprecedented scope. When it might happen remains open to doubt. But not, to Bostrom’s mind, if. “If you’re middle-aged today you might see it; it could go either way. But our children, well ... ” he trails off.
The significance of the moment is daunting, even for a man used to contemplating the monumental. It is a world that is approaching rapidly. Hurdles to human-level AI are falling more easily than we thought they would. As Stuart Russell, a leading British AI expert, puts it: “In the last five years, three of AI’s ‘holy grails’ have been captured: recognising objects in images, speech recognition and machine translation. By some metrics, machines have already matched or even exceeded human performance in these areas.”
Other “grails” that stand between us and the singularity remain elusive – for now. Bostrom says machines must get better at “unsupervised learning”. He explains: “Only a very small fraction of what humans know comes through explicit instruction. Sure, we go to school and read a textbook, but most of it we just live and see what’s going on. Even as a baby, we just sit up and look while our brains build up models of interactions.”
The other big prize is teaching machines to understand the content of language, something they currently do poorly. Teaching them to read, in other words. “Then,” says Bostrom, “they would have access to a vast library of crystallised knowledge,” otherwise known as the sum of written wisdom. They would be able to digest and query it – in a matter of hours – in a way no single human has been able to do.
To Bostrom’s mind, the speed at which other AI grails have been reached suggests that these others will not remain elusive for long. “The last few years have been impressive in terms of AI capability, and that’s one of the reasons why the future of AI is taken more seriously now. Things are moving rapidly.”
Such is the enhanced public concern this momentum has generated that Bostrom can no longer justifiably claim to play a modern Cassandra (or Szilard for that matter). The world is waking up. “Now there’s almost too much focus on the apocalyptic dimensions of this,” he says, as though he had not played a key role stirring up such fears with his bestselling book, Superintelligence. “I feel that there is sufficient concern and the focus now should be to channel that in constructive directions rather than generate more.”
Hence, as the public cottons on to the potential threats from AI, so AI researchers must pursue greater control mechanisms for the day the singularity arrives. And he says that they are doing so: “There are some really good ideas out there.” So he’s reassured? “Well, reassured is maybe too big a word for it. Ultimately, the basic challenge with AI is that we have to get it right on the first shot, so there’s a limit to how confident we can ever be.”
Once superintelligence is out of the bag, he means, it can’t be put back. If it escapes our control, there will be no recovery. So rules governing it must be perfect before it emerges. All of which sounds like a Mephistophelian pact. But the world, he suggests, is coming together to deal with the devil – and win. “We see a lot of collaboration on safety. The fact is, whoever gets there first, you want them to get there safely rather than unsafely.”
With AI, like nuclear war, if one side loses, we all lose. Yet Bostrom’s own Vulnerable World Hypothesis lists the ways in which humanity is capable of subverting such logic in catastrophic ways. For example, there is no legislating for nutters. His solutions: “global governance” and a kind of Earth-wide panopticon, surveilling every human being, analysing their actions in real time, and alerting authorities to anyone tinkering with “black ball” technologies. To you and me, it seems impossible. To Bostrom, it is logical, a future whose probability is guided by man’s need to dodge the black ball and the falling costs of cameras and software to analyse what they film.
“The technical feasibility for powerful forms of surveillance is rapidly being developed – for good and ill,” he says. A totalitarian dream? “There is a very real and grave danger of that ... but in the end, fine-grained surveillance may be the one thing that could have protected us from the world being blown up.”
The panopticon might actually see us thrive, he adds. In a world where all human actions are logged, and track records are evident, we could all drop our guards and witness greater justice in human affairs. “We could all take a deep breath, let down our shoulders and suddenly the good guys are rising to the top because people who lie and cheat get found out quickly. It seems perfectly possible.”
To Bostrom’s new kinds of society will come new kinds of people – genetically edited people. For it will not just be machines that achieve some kind of superintelligence. “Genetic enhancement will be the first thing that works technologically, in terms of substantially increasing basic human capabilities like intelligence. That is rapidly becoming technically feasible with embryo selection within the next five to 10 years. That will increasingly start to change the game as you move beyond 2050.”
What a piece of work will man be then?
To the rest of us, even entertaining such thoughts can seem extraordinary, bordering on the crazy. To Bostrom, the “lunacy” is that not everyone thinks as he does. “Azerbaijan cultural studies just got a huge boost,” he says, ruefully, looking towards Oxford’s other academic departments. “Nothing wrong about that, but in terms of global priorities, maybe it wouldn’t be at the number one slot.”
– © The Sunday Telegraph