It used to be that the future was so bright you had to wear shades. This was the case for maybe a couple of decades (the final two, as it happens, of the 20th century), but then, unfortunately, the sun went in and it shows no sign of returning. The earlier, pre-shades future was very different from the current, post-shades one, but they’re both apocalyptic in nature.
Back in the Cold War era, it was the possible destruction of all known life on earth resulting from the use of nuclear weapons. Today, it is the possible elimination of all significant human agency from the bio- and techno-spheres resulting from the exponential proliferation of artificial intelligence (AI). The nuclear threat is still very real, of course, as is the threat from man-made climate change, but there’s no question that the new apocalyptic kid on the block is AI.
As always, these visions of how our future, our ultimate fate as humans, may play out come to us in two widely varied but mutually dependent forms: scientific or philosophical speculation on the one hand, and art on the other.
When it comes to nuclear weapons, I’ve only ever read one book in the first category: Jonathan Schell’s The Fate of the Earth, an angry, passionate and profound wake-up call for humanity. It was published in 1982 and covers all the bases: hard facts, moral outrage, and a thin, poetic form of hope. You really don’t need to read anything else.
As for the second category, where do you begin? From Nevil Shute’s On the Beach, J.G. Ballard’s The Drowned World, and Russell Hoban’s Riddley Walker, right up to the current vogue for post-apocalyptic dystopias brought about by other (often obliquely metaphorical) means — viruses, vampires, zombies — we have a rich, slowly unfolding picture of how we imagine humanity might fare in the wake of its own near-annihilation. These dark visions feed into and shape our anxieties, but they also make us stop and think, and may even, arguably, bring us to our senses.
And when it comes to AI, in this second category, there is quite a bit of overlap. The machines rise, they take over, we humans fight back — that’s the big story, and what all the different versions of it have in common is that something essential in us survives. Each encounter with an AI is really an investigation into what it means to be human — whether it be Maria in Metropolis, HAL in 2001: A Space Odyssey, the replicants in Blade Runner, the Series 800 in The Terminator, Samantha in Her, or Ava in Ex Machina. And if these stories aren’t about that, they tend to be about how the AIs themselves want to be like us — how they envy our chaotic bullshit, our instincts and appetites, and how they grope toward some form of human-like transcendence.
The only problem here is that these specific visions of how our future may play out owe more to Walt Disney, the 20th century’s great anthropomorphizer, than they do to any scientific or philosophical speculations on AI currently out there. And this is because the one aspect of AI that these speculations seem to have little or no interest in is robots — cute ones, menacing ones . . . domestic, militarized, it doesn’t matter. It’s simply not what the scientists and philosophers are concerned about. For nearly a century, we’ve distracted (and, frankly, flattered) ourselves with the notion that we can reproduce our species in mechanical form and that the resulting machines will one day, in turn, aspire to be real. Our robots have been imaginary friends, surrogates, slaves, sex symbols, and destroyers — from Robby the Robot to WALL-E, from Ash to Pris, from the T-1000 to the Cylon — and they are all made, to one Disneyfied degree or another, in our own image. They do make for intriguing, compelling stories, that’s for sure — but they’re not what the seers have recently started seeing.
This is because up to now, it has all been very simple. AI equals robot, or android — that’s the platform, the delivery system, the apparently artificial part. But what if the real delivery system turns out to be the much clunkier-sounding (and altogether less sexy, less fetishizable) Internet of Things? As for the intelligence part, that’s always assumed to be of the emotional variety, at least in prospect, but what if it turns out to have some version of Asperger’s Syndrome, to be just concerned with patterns and systems, to be supremely disinterested, and uninterested, when it comes to the doings and welfare of its — for now — human overlords?
If that’s the case, then Uncle Walt will have to step aside and make way for the anti-Walt, i.e., Nick Bostrom, Oxford University philosopher, founder of the Future of Humanity Institute, and author of 2014’s Superintelligence: Paths, Dangers, Strategies. Bostrom’s fascinating book is dense and very complex (for an easier-to-digest account of the AI question in general, check out Tim Urban’s brilliant posts on the subject on his website Wait But Why), but it does contain an extremely important warning: achieving human-level artificial intelligence almost certainly means going on to achieve greater-than-human-level artificial intelligence, and once that happens, all bets are off. Because it’s not just that “they” will be taking over; it’s that with the exponential growth of computational power applied to molecular nanotechnology, the very fabric of reality as we know it could be transformed out of all recognition. Dialing this back a bit from eleven, it could merely mean that our new AI overlords would be as oblivious to us and our concerns as we might currently be to those of an ant colony swarming below us on the sidewalk. There would be no evil intention in any of this, or malice, or moral awareness — just the neutral enactment of whatever algorithm set the damn thing running in the first place.
Now, by some accounts, this stuff is just around the corner — 2045, or 2060, maybe sooner, maybe later; no one can be sure. But while the danger here is real, as real as the danger of nuclear annihilation or climate catastrophe, we seem to be stumbling into it blindly, almost willingly. There is no equivalent level of fear or moral outrage. Proliferation of iPhones clearly isn’t as scary as proliferation of nuclear warheads, but the existential risk, in theory at any rate, appears to be the same.
So if this is the case, what kind of stories will we be telling ourselves moving (as they say) forward — especially if by forward we mean only 30 or 40 years? It’s not a great perspective from which to be telling stories, or even to be thinking of doing so. It used to be that our future stretched out ahead of us: you could think of a hundred years from now, or two hundred; you could see as far as the era in which Star Trek is set, to that glorious time when they had dispensed with the idea of money and dressed in chartreuse and azure tunics. You could even entertain a flicker of anxiety about the fact that in five billion years or so our sun would swell into a red giant, and where would we be then? Today, by contrast, we’re stuck with this spoil-sport technology that threatens to turn us into gray goo, conceivably within the lifespan of the youngest person currently alive on the planet. We’re facing a brick wall, so here’s a key question: Is what might happen on the other side of it even imaginable? Is it translatable into art? In some form? We imagined ourselves just about surviving the old apocalypse. But after this new one, we may not even be recognizable as us. How do we tell that story? How do we make ourselves stop for a moment and think? Is there a version of this story, or a way of telling it, that will bring us to our senses?
Needless to say, I hope that one of the more benign AI scenarios — of which there are plenty — comes to pass instead. Because frankly, this one seems like a serious challenge. But then I suppose if there’s one thing we humans like — one thing that keeps us human — it’s a serious challenge.