Isaac Asimov, in a short story, proposed “three laws of robotics” to restrain the behavior of artificial intelligence. Today, as robots move from the realm of science fiction to reality, ethics for robots seems less like a fanciful hypothetical scenario and more like a pressing concern for the near future.

Yale ethicist Wendell Wallach would be happy with just one law, and a superficially simple one at that: “Machines must not independently make choices or initiate actions that intentionally kill humans.”

Is it too much to ask for our robots not to kill us?

For the past decade, Wallach has been campaigning for an international ban on lethal autonomous weapons systems, known colloquially as killer robots, and recently made his case to the engineering community in an article in Communications of the ACM. Wallach will speak to the Princeton chapter of the ACM/IEEE on Thursday, November 16, at 8 p.m. in Room CS105 of the computer science building at Princeton University. For more information, call 908-285-1066.

Wallach warns that if we don’t do something about autonomous weapons soon, wars could spiral beyond the control of humans.

“The worst case scenario is that you have high-powered munitions that can release weaponry autonomously and they do so either in an accidental way, or they do that in some intentional way and it escalates hostilities beyond meaningful human control or it even starts a new war that wouldn’t have started otherwise,” he says.

The consequences could be even worse should countries turn their nuclear arsenals over to autonomous systems. In pop culture, this scenario plays out in the Terminator movies, where the sentient AI system Skynet nukes humanity and then hunts down the survivors via Arnold Schwarzenegger. Computers are nowhere near achieving consciousness, but it doesn’t take Skynet-like sentience to pull a trigger. “I think science fiction, even though it could be highly speculative and about systems that may not appear within our lifetimes, nevertheless can speak to a very deep intuition that this is not a road that humanity wants to go down,” Wallach says.

The trouble is, the people planning to bring killer robots into existence may have the upper hand over those trying to stop them. For years the U.S. military has been planning its research and development around a concept known as the “third offset,” which would involve the mass production of drones of all kinds.

The idea is to use self-piloted drone technology to ensure military dominance in the decades to come. In this way of viewing military advances, the “first offset” that gave the U.S. an insurmountable edge over its opponents was building a nuclear arsenal, the ultimate trump card.

But by the 1970s, the Soviets had more nukes and more powerful conventional forces than the U.S., and planners feared the Russians could steamroll through Europe with ease. The “second offset” in the following decade, which allowed the U.S. to regain the upper hand, was the development of advanced electronics and guided missiles that were capable of annihilating the threatening Russian tank formations without resorting to nuclear bombing.

But the technologies that gave the U.S. an edge for decades, such as smart bombs and night vision gear, have spread to other countries, eroding the Americans’ advantage. The Pentagon’s strategy for staying ahead can be seen in weapons tests and PowerPoint presentations that portend the future of warfare.

An ominous example was shown on 60 Minutes earlier this year in a test at the Naval Air Weapons Station at China Lake, in which three fighter jets dropped not bombs or missiles, but 103 tiny biplane drones, each about the size of a seagull. The Perdix drones, with an eerie wailing noise, formed up and then set out to perform four different missions, some hovering over targets and others flying in a circle in the sky, all without anyone piloting them. The drones had only been given objectives by their human masters and communicated with one another to achieve them.

To see a video of this test, which is available on YouTube, is to understand why Wallach writes in Communications of the ACM: “Small low-cost drone swarms could turn battlefields into zones unfit for humans. The pace of warfare could escalate beyond meaningful human control.”

LAWS of War: Lethal autonomous weapons (LAWS) already exist and are deployed by militaries around the world. By some definitions, they have been with us for centuries. Land mines, used by militaries at least since the 1700s and mass-produced since World War I, could be considered autonomous weapons because once buried they will kill indiscriminately without any further intervention by a person. Today, 122 countries have signed the Ottawa Treaty banning land mines.

Other autonomous weapons more clearly fit the “killer robot” image. South Korea has deployed an automated sentry gun, the Samsung SGR-A1, to guard its border with North Korea. The stationary machine-gun turret has its own camera and can detect, track, and fire upon intruders.

Many countries use defensive weapons that fire without asking human permission first. For example, American warships carry Phalanx defense guns that automatically shoot down incoming missiles. These kinds of weapons have killed people before. In 2007 a South African antiaircraft gun malfunctioned and mowed down nine soldiers.

Despite such accidents, Wallach does not seek to restrict these kinds of defensive systems. “I’m less worried about individuals being killed accidentally, as tragic as that is,” Wallach says. “Unfortunately there is nothing nice about warfare, and people are killed accidentally all the time.”

Offensive weapons such as drones capable of attacking ground targets are of more concern. Drones currently in use are not true autonomous weapons because they do not act on their own, but are piloted remotely. Whenever a drone fires a missile, it is a human being making the decision to kill. In Wallach’s circles, this kind of system is called “human in the loop.” It’s when humans are no longer in the loop that things get really dangerous, he says. And taking humans out of the loop is tempting for militaries because it could provide a great advantage in any conflict: if an autonomous system gets into a shootout with a person, the automated system with its near-instantaneous reflexes is going to win.

“It’s because of high-powered munitions on the battlefield and the unpredictability of such systems that I argue that there has to be some sort of ban,” Wallach says.

Wallach has been interested in ethics for as long as he can remember. “Politics and ethics were always a conversation in our family,” he says. “My parents were world federalists in the ’50s who believed in world government. Our ancestry was Jewish, and they were forced to leave Germany, a country they loved.” Wallach was raised in Connecticut, where his father was a doctor.

For Wallach, being involved in a campaign for social change is nothing new. “I’ve been an activist in one form or another my whole life,” he says. In high school, he was active in the Civil Rights movement, and he protested against the Vietnam War while in college. He became fascinated by the ethics of technology and interested in the philosophical issues that came up in robotics. “I began thinking about whether robots could be effective moral decision makers.” He graduated from Wesleyan with a degree in social studies and studied at Harvard Divinity School before earning a master’s degree in education from the Harvard Graduate School of Education. Since 2005 he has been the chair of the Yale Interdisciplinary Center for Bioethics.

Wallach has written three highly influential books on the ethics of technology: Emerging Technologies, Moral Machines, and A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control. His arguments have gained some traction with the general public and even within the military.

“Research shows that active military personnel and even retired military officers think lethal autonomous weapons are a bad idea because they undermine robust command and control,” he says.

“Soldiers think they take any dignity out of being a soldier. But unfortunately, this argument is not about public opinion, and public opinion has not been voiced strongly enough. Strategic planners who see this as the next stage in the development of weaponry have much more force than all the active military combined. This is an argument about whether strategic planners think it is necessary, and whether politicians would go along with them. And politicians who don’t understand the arguments would go along, largely to dismiss their discomfort with lethal autonomy, pretending that the concern is largely manufactured by science fiction, or predicated on sci-fi systems that don’t exist today … it’s going to take a pretty active public campaign to get them actually banned,” Wallach says.

It’s because of this debate that Wallach prefers not to use the term “killer robots” (although he has used it himself in the past). That’s not because it is inaccurate, but because it annoys the people he is trying to persuade to refrain from building them. “It just irritates the arms control negotiators,” he says.

Even if political leaders made the decision to ban lethal autonomous weapons, the details of such a ban would be incredibly complex. For one thing, it is nearly impossible to tell an autonomous weapon from a human-in-the-loop one. As Wallach points out, the difference between the two could be nothing more than a few lines of code.

Nevertheless, Wallach hopes that some sort of treaty similar to the land mine ban could also be created to forestall the deployment of weapons that kill on their own initiative.

While Wallach believes it would be possible to ban them after the fact, especially if they were to cause an incident that drew a lot of attention to the issue, he thinks it would be better to stop the problem before it begins. Land mines were eventually banned more than 100 years after their invention — but not before they killed millions.

“The reality is if we don’t act in the next few years, it will already be too late,” he says.
