More on the robot apocalypse
I finally read Robopocalypse: A Novel by Daniel H. Wilson, and was surprised at how interesting it was; of all the possible apocalyptic scenarios, a robot apocalypse has always been the one I thought least likely to happen.
That’s changed, and mostly because of the comments I received from Dr. V. Scott Gordon, Professor of Computer Science at Sac State. Gordon’s research is in artificial intelligence, neural networks and evolutionary computation.
Yeah, that’s right. Computers evolve.
I asked Gordon to comment on how likely a robot apocalypse is—and explain why or why not—for my story, “13 ways Sacramento bites it.”
What he wrote in response frightened me far more than fires, floods, volcanoes, zombies and a returning Jesus. That’s why I’m going to share it with you, since I hate to be scared all by myself.
The idea of a “robot apocalypse” would have been purely the stuff of B-grade sci-fi movies twenty years ago. But it is becoming more and more interesting with each passing year. And yes, by increasingly interesting, I mean increasingly plausible, in one form or another.

At a rather simplistic level, consider our reliance on the world wide web. If the web were to suddenly shut down, the effects would be nothing short of catastrophic. Much of commerce would be halted, since a huge volume of business transfers and financial activity operates over the internet. Most financial systems have no backup plan for what to do if the web were to become unavailable.

The good news, of course, is that it is pretty unlikely for any but a small portion of the web to go down at once, since it is so highly distributed. But computer viruses are becoming increasingly sophisticated, and some have even been written to evolve and adapt like actual organic viruses. It is conceivable that an adaptive virus could someday become sophisticated enough to bring down a large portion of the web. The Code Red worm back in 2001, for example, infected some 350,000 servers and raised questions about the net’s vulnerabilities. Such a virus might be built intentionally by a terrorist group, or it could even evolve on its own (unintentionally) and become increasingly autonomous. I think that would qualify as a sort of “robot apocalypse,” or at least a “bot apocalypse.”
At a more dramatic level, there is the concept of the “singularity.” The term, coined by the science fiction writer Vernor Vinge, refers to the accelerating rate at which intelligent technology is advancing. The idea is that if machine intelligence ever exceeds human intelligence, those machines could repeat the process and build even better machines, and this intelligence explosion would quickly render us irrelevant, with unpredictable and scary possibilities. The famous inventor and futurist Ray Kurzweil has created an entire institution, Singularity University, around this idea. The shocking part is that some scientists no longer consider it solely within the realm of science fiction. Kurzweil predicts it could happen as early as the year 2045, a date he arrived at by charting the advances in computing power of our modern machines and extrapolating forward to see at what point they could exceed the computing power of the human brain.

Should the singularity actually occur, it is difficult to say what form it would take. Perhaps super-intelligent computers would enslave us and force us to build devices they could use to control us, as in the 1970 film Colossus: The Forbin Project. Or perhaps real robots would start to appear that in turn would build better robots, as in the movie The Terminator.

How likely is all this? Well, I’m just a human, so I don’t know.
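For the curious, the kind of extrapolation Gordon describes can be sketched in a few lines of code. The numbers below are illustrative assumptions only, not Kurzweil’s actual figures: a brain-capacity estimate of 10^16 operations per second, a starting machine capacity of 10^13, and a Moore’s-law-style doubling every two years.

```python
# Back-of-the-envelope sketch of a Kurzweil-style extrapolation.
# All constants are illustrative assumptions, not published figures.

BRAIN_OPS_PER_SEC = 1e16   # one rough estimate of the human brain's capacity
DOUBLING_YEARS = 2         # assumed Moore's-law doubling period

machine_ops = 1e13         # assumed machine capacity in the base year
year = 2011                # base year for the projection

# Double machine capacity until it crosses the brain estimate.
while machine_ops < BRAIN_OPS_PER_SEC:
    year += DOUBLING_YEARS
    machine_ops *= 2

print(year)  # -> 2031 under these particular assumptions
```

Change any of the assumed constants and the crossover year moves by decades, which is exactly why predictions like “2045” are so contested.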