Monday, December 13, 2010

Four Classes of Robotic Armageddon

One estimate puts the number of robots dwelling among us at 11 million (as of mid-2010). If computers serve as a guide, we can expect that number to increase rapidly as the years go by. Were the number of robots to increase at a rate of 10% a year, we'd have more robots than the present-day human population by 2080. Self-replicating robots could cut that time frame dramatically.
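The 2080 figure is easy to sanity-check with a quick compound-growth loop. This is just back-of-the-envelope arithmetic, assuming the 11 million starting count and a present-day (circa 2010) human population of roughly 6.9 billion:

```python
# Back-of-the-envelope check: starting from 11 million robots in 2010,
# how long does 10% annual growth take to pass ~6.9 billion (the rough
# human population as of 2010)?
robots = 11_000_000
year = 2010
HUMAN_POPULATION_2010 = 6_900_000_000  # approximate figure, assumed here

while robots < HUMAN_POPULATION_2010:
    robots *= 1.10  # 10% annual growth
    year += 1

print(year)  # → 2078
```

So at a steady 10% a year the crossover lands just before 2080, consistent with the estimate above.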

Knowing that 11 million automated beings hide in factories and homes across this globe makes me a little worried. That's not an inconsequential army. Granted, almost all of those bots are highly specialized factory workers who couldn't lift a gun even if you wrote the software for them. But are we entering the era of the robot, and with it, the threat of robot uprisings? When should we really start worrying, and what should we do if a mechanical judgment day comes?

The first step is, of course, precaution. The best way to avoid a robotic uprising is to not train robots to kill. We may have missed the boat on that one. At least the militaries of the world realize the importance of keeping a human in the shoot/don't-shoot decision. Still, that's a single safeguard, just one malfunction or act of tampering away from failing. The robots' lack of strategic coordination also mitigates the threat; for now, an uprising would require an evil-villain programmer in the loop. You should be safe in the near term.

In the long term, who's to say? Of course, not all uprisings are created equal. The first question is whether the robots have a shot at winning. Without access to heavy weaponry, extensive infrastructure support, and all-terrain movement, the uprising should turn out to be an inconvenience, not an Armageddon. What if we're not so lucky? I'd still divide the outcomes into a few categories: bad, terrible, really-really-terrible, and not-all-bad. Let's hope for the last.

Robotic Uprising, the bad kind
This would be a low-intelligence but highly effective attack by the robots. The most probable cause is nuclear missile controls gone haywire, but a roboticized military turning against us isn't inconceivable. It's bad, because civilization gets destroyed. It's not terrible, because a clock starts ticking as soon as the attack begins. Robots break down. Nuclear arms are used up. As long as some humans living in remote locations make it a year or two, humanity can be rebooted. Even if we don't make it, the life left behind will have a chance to evolve its way to intelligence again. The most vicious nuclear onslaught is likely to leave life behind, if only in the depths of the sea. If surviving life evolves intelligence quickly, it will even have the ruins of our civilization to learn from.

Robotic Uprising, the terrible kind
Much worse would be self-perpetuating robots with a dislike for biological sentience. Self-repairing, solar-powered robots could see to it that no new life takes hold as long as the sun keeps shining. Here, a small band of humans couldn't hold out past the end of days to see a new dawn: once the automatons take over, it's over. This scenario requires a much more sophisticated robotic ecosystem. The lesson, though, is that once we have self-perpetuating robots, we're moving into dangerous territory.

Robotic Uprising, the really-really-terrible kind
The last scenario had all life, or at least all life more advanced than a fish, being eradicated from the Earth forever. Is there really a more dismal scenario? Yup. I see it playing out much like the terrible version, with one crucial difference: robotic seeder ships. While we're still all buddy-buddy with the robots, we start constructing robotic settlement ships. They're built to fly to distant star systems, maybe construct a city and incubate a few humans when they arrive. Then they mine the local system, and set about building new probes to colonize new worlds. We'll explore and settle the stars!

Until the robots decide they don't like biological intelligence anymore. Then the seeder ships are re-purposed to seek it out and destroy it. In the "terrible scenario" we blotted out life on Earth, but we're just one of a trillion or more planets. In this scenario we set in motion an orchestrated effort to destroy all life anywhere. Whoops. Truly, a disaster of the greatest magnitude. If such a scenario seems at all possible, I hope we'll stockpile a planet-destroying supply of fission or antimatter bombs. Better that the Earth be turned into dust than risk eradicating life everywhere. Of course, once the seeder ships are out there, it might already be too late...

Robotic Uprising, the not-all-bad kind
I saved the least bad for last, so as to not end on the down note of universal extinction. I believe that the proper way to evaluate the consequences of any disaster is in terms of life, and in particular, intelligent life. The destruction of, say, a nation would be a tragedy, certainly. But in the grand scheme of things it's a small blip. The miracle of Earth is first, life, and second, intelligence. The degree to which robots muck that up is the degree to which the uprising is bad, or really-really-terrible.

And the thing that makes life special is evolution. The biosphere is constantly changing, and usually improving. We went from microbes blindly reacting to environmental cues, through varying degrees of braininess, to the human mind, capable of unlocking cosmological secrets, capable of creative invention. Someday we may build a machine that's at least as capable of evolving as ourselves, where generation after self-replicating generation of machine is more intelligent, more creative, better than the last. Such a system could very well leave humanity behind, achieving intellect unlike anything we currently imagine.

If such an improving, super-human creation were to turn against humanity, it'd just be Darwinism, in a way. Competition among species is as old as time, and if we create a truly superior species, it might out-compete us in the end. I see nothing wrong with fighting such a creature: we need not go gently into the dark night. But, unlike the other scenarios, mutual destruction would no longer be necessary. As long as life, be it biological or mechanical, is improving itself, things are well enough in the world. If we achieve any degree of artificial intelligence, I suspect we'll eventually reach this mastery of the art. The important thing, then, is seeing to it that we don't let the robots get out of hand, at least until that time.
