CALLING DR. ASIMOV!
by Robert Silverberg
Science fiction has given the world a legion of baaaad robots over the years. Back in the Gernsbackian antiquity of our genre, killer-robot stories were a pulp staple. Abner J. Gelula’s “Automaton” of 1931 portrayed a lustful robot that makes erotic overtures to its creator’s daughter and has to be destroyed. (“With an unusual alacrity, the Iron Man reached out its powerful appendages and held both Martin and the girl in vise-like grips against its metal body.”) In Harl Vincent’s “Rex” (1934), a robot seeks to take over the world. (“Reason told him that the first step to that end must be to take control of mankind and its purposeless affairs. He set the workshop humming in the construction of eleven super-robots, one to be sent to each of the North American cities to organize the lesser robots and take control of the government.”)
Though Isaac Asimov’s robots were supposedly designed to be harmless, there’s a dangerously disobedient one in his “Little Lost Robot” (1947). (“We have one Nestor that’s definitely unbalanced, eleven more that are potentially so, and sixty-two normal robots that are being subjected to an unbalanced environment.”) Clifford D. Simak’s “Skirmish” (1950) shows the Earth invaded by extraterrestrial robots that are able to awaken consciousness in our small machines, sewing machines and typewriters and the like, so that we find ourselves surrounded by metallic enemies on all sides. (“The end could be predicted, with relentless, patient machines tracking down and killing the last of humankind, wiping out the race.”) Philip K. Dick’s “The Defenders” (1953) portrays the world made uninhabitable by a Soviet-American war, with the human survivors living in subterranean sanctuaries and the surface occupied only by military robots, who refuse to let us come back up after detente is reached by the warring nations. (“Now the end is in sight,” one of the robots declares: “a world without war.”) Harry Harrison’s “The War with the Robots” (1962) offers a variation on the same theme, with the robots on the surface carrying on war against each other while confining humans to the underground refuges to which they have fled.
There’s more, much more. The monstrous, terrifying computer of Harlan Ellison’s “I Have No Mouth, and I Must Scream” (1967). (“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE.”) The immensely intelligent telepathic computer of A.E. van Vogt’s “Fulfillment” (1951). (“I shall be, not a slave, but a partner with Man.”) The aerial spy-robots of Robert Sheckley’s “Watchbird” (1953), which are so efficient that an even more powerful mechanical creature has to be sent aloft to prey on them, with ultimately dire consequences for its makers. (“The armored murder machine had learned a lot in a few days. Its sole function was to kill.”) I had a few shots at the theme myself, as in my story “The Iron Chancellor” (1958), in which a slightly overweight family brings in a robot to enforce dietary restrictions; the program malfunctions and they discover that they are slowly being starved to death.
Even when a science fictional robot is portrayed as being sympathetic, as in Eando Binder’s 1938 “I, Robot” (not to be confused with the much later Asimov book of the same name, which got that title over Isaac’s objections), the robot sometimes does unintentional damage, as when Binder’s robot tries to rescue a drowning girl. (“I managed to grasp one of her arms and pull her up. I could feel the bones of her thin little wrist crack. I had forgotten my strength.”)
Well, all that’s just science fiction. There weren’t any robots when those stories were written, and, since all stories need drama and suspense, the menace of the robot served nicely as a scary plot device. But we have lived on into an era when robots are all over the place—not the clanking two-legged mechanical critters of so many SF tales, but robots all the same, machines that perform a host of functions that once were handled by human beings. Robots sort the mail, defuse bombs, trundle down office hallways carrying packages, bustle around in houses gobbling up dust and terrifying house cats. Robot planes drop missiles on terrorists in Afghanistan and Pakistan. New uses for robots emerge every day. They are being employed increasingly in delicate manufacturing operations, in surgery, in all sorts of areas where more-than-human keenness of eye and steadiness of hand are required.
Indeed we have more robots among us already than we tend to realize, and more are to come. We also have a great many lawyers in our midst. And the two groups are shortly going to be on a collision course. A warning about the legal problems that widespread use of robots will pose has come from M. Ryan Calo, a residential fellow at the Stanford Center for Internet and Society:
“These are devices that don’t have a predetermined usage; they’re not toasters. There’s a growing concern now about robot ethics.”
Robot ethics! Oh, mother, I really have lived into the twenty-first century!
Legal cases involving robots have been turning up for more than a decade. Pacific Bell, which is now part of AT&T, used Zippy, a robot, to carry mail around in one of its Northern California buildings. In 1997 a woman sued, claiming that Zippy had run over her foot and then knocked her into a filing cabinet. She collected an undisclosed amount.
But that’s an old-fashioned kind of accident—a blundering machine slamming into someone. Consider a more complex situation that’s probably just around the corner: malicious kids hack into a house that uses a robot cleaning system and reprogram the robot to smash dishes and break furniture. If the hackers are caught and sued, but turn out not to have any assets, isn’t it likely that the lawyers will go after the programmer who designed it or the manufacturer who built it? In our society, the liability concept is upwardly mobile, searching always for the deepest pocket.
It isn’t even necessary to conjure up malicious hackers. Robots can make trouble all by themselves. “These are machines that may not be intelligent, but are increasingly autonomous,” says another Stanford scholar, Paul Saffo. “They do things without being told.” Suppose a robot designed to grade exams or term papers goes wonky and erases all the students’ work: who covers the cost of re-testing everybody? What if a robot air controller blows a circuit and decides that down is up? Or a robot surgeon loses track of which is the appendix and which is the pancreas? Et cetera. Even the best-designed robot could go bad in this imperfect world of ours, and Somebody Will Have To Pay For It. And so we stand at the threshold of a wonderful new era for the liability lawyers.
Stanford’s Calo sees potential problems for the United States because of our highly evolved litigation system. With robot lawsuits claiming millions or even billions of dollars cropping up everywhere, will vulnerable corporations here want to risk investing large sums in robot development? It may be more prudent for them to do their work overseas. “If other countries have a higher bar for litigation, they’ll leapfrog right over us,” Calo says. That may be true; but, as I will note in a moment, the threat of robot-liability litigation may not be such a terrible thing.

I’d love to know what Dr. Asimov would say about that. No science fiction writer ever gave more thought to the dangers robots might present than Isaac Asimov. Isaac, one of the gentlest of men, hated the pulp cliché of the menacing robot—what he called the “Frankenstein complex.” (Man creates robot, robot kills man.) In the very first of his many robot stories (“Robbie,” 1940) he set out to depict a robot that, he said, “was wisely used, that was not dangerous, and that did the job it was supposed to do.” Over the next few years he gradually evolved what he called the Three Laws of Robotics, first explicitly stated in a 1942 story, laws which he argued would need to be programmed into all robots in order to make them acceptable to the public:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

All well and good, and modern-day robot designers are quite aware of them. (And often name their companies or their devices after counterparts to be found in Asimov stories, or even after Isaac himself.)
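For readers who think in code: the Three Laws amount to a strict priority ordering, each law yielding only to the ones above it. Here is a minimal sketch of that ordering, not anything from Asimov or from today's robot designers; the `Action` fields are hypothetical labels invented for illustration.

```python
# Hypothetical sketch of the Three Laws as a priority-ordered rule check.
# The Action fields below are illustrative inventions, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would injure a human, or allow harm through inaction
    ordered_by_human: bool   # was commanded by a human being
    endangers_self: bool     # risks the robot's own existence

def permitted(action: Action) -> bool:
    """Return True if the action is allowed under the Three Laws."""
    # First Law: overrides everything else.
    if action.harms_human:
        return False
    # Second Law: obey human orders (any First Law conflict is already excluded).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, the lowest priority.
    return not action.endangers_self

# An order that endangers the robot must still be obeyed (Second outranks Third):
assert permitted(Action(harms_human=False, ordered_by_human=True, endangers_self=True))
# An action that harms a human is forbidden even if ordered (First outranks Second):
assert not permitted(Action(harms_human=True, ordered_by_human=True, endangers_self=False))
```

The point of the sketch is the one Asimov's stories keep making: the Laws form a fixed hierarchy, and the interesting plots arise where the neat boolean flags above would be ambiguous in practice.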
But Isaac was a shrewd storyteller, who knew that kindly robots did not make for interesting plot material, and over more than thirty years he wrote a long string of robot stories and novels that pointed out all sorts of ways that the Three Laws could be evaded or circumvented by a sufficiently advanced robot. 1947’s “Little Lost Robot” is a good example: the need arises to build a robot that is not fully governed by the First Law, and the cunning robot then manages to find ways around the other two laws until it is at last outsmarted by the formidable roboticist Susan Calvin. In “Lenny” (1957), Isaac shows how a robot that literally does not know its own strength could violate the First Law despite all its programming. He wrote many other ingenious stories that tested the robotic laws in various ways.
In a 1981 essay on his own Three Laws, he wrote, “Are safeguards sufficient? Consider the effort that is put into making the automobile safe—yet automobiles still kill fifty thousand Americans a year. Consider the effort that is put into making banks secure—yet there are still bank robberies in a steady drumroll. Consider the effort that is put into making computer programs secure—yet there is the growing danger of computer fraud.”
He assumed—correctly, so far—that when the time came to build robots, the designers would build the Three Laws into them. That has turned out to be true, more or less. But the robots we have today are relatively simple devices. As they grow more complex, as they certainly will, it may prove necessary or at least desirable to cut some corners where the Three Laws are concerned. The era that has given us vast Ponzi schemes and strange mortgage hijinks and all too many other examples of moral slippage will surely give us non-Asimovian robots, too, exposing us to all the risks that those scary old pulp stories liked to tell us about. And then the trial lawyers will pounce. I’m not fond of a lot of the legal gymnastics that go on in courtrooms today, but it may be that the threat of those lawsuits will actually be a useful governing factor as the era of robots unfolds. All machines, from thermostats on up, need a governing device to control their functions. The avid liability lawyers may well serve that purpose for the robots. Thus it may be that a Fourth Law of Robotics is needed: Never fail to make use of the Three Laws—or you’ll pay a high price for it in court.
"Calling Dr. Asimov!"
by Robert Silverberg, copyright © 2010, used with permission of the author.