
The Need for a Preemptive Prohibition on Fully Autonomous Weapons

There is no doubt that for advanced militaries, a predominant trend in the future of warfare is the movement toward ever more autonomous weapons systems. The rapid proliferation of unmanned aerial vehicles – or drones – is the best known example of that trend. But technical experts have pointed out that today’s drones are extremely rudimentary compared to the technology that is to come.

Some military personnel, scientists, and others believe that it is both inevitable and desirable that armed forces will one day field fully autonomous weapons systems. These are weapons systems that, once initiated, would be able to select and engage targets without any further human intervention. Unlike with a drone, no human operator would decide what to fire at and when to shoot; the weapon system itself would make those decisions.

While there is also no doubt that greater autonomy can have military and even humanitarian advantages, it is the belief of Human Rights Watch and many others that full autonomy is a step too far. Fully autonomous weapons would cross a fundamental moral and ethical line by ceding life and death decisions on the battlefield to machines. It is also our assessment, based on input from technical experts, that it is highly unlikely that fully autonomous weapons would be capable of complying with the principles of International Humanitarian Law (IHL). There are also serious technical and proliferation concerns. We are convinced that these weapons would pose grave dangers to civilians – and to soldiers – in the future.

Taken together, this multitude of concerns has led to the call for a preemptive prohibition on fully autonomous weapon systems. There must always be meaningful human control over targeting and kill decisions. In fact, this would not just be a new weapon, but a new method of warfare, one that should never come into existence.

A rapidly emerging issue of great concern

International attention to the subject of fully autonomous weapons has grown rapidly since the end of 2012, as it has rocketed to the top ranks of concern in the field of humanitarian disarmament. Previously, this had been a largely unknown subject, except to a relatively small community of military personnel, scientists, ethicists, and lawyers.

In November 2012, Human Rights Watch and Harvard Law School’s International Human Rights Clinic released the report “Losing Humanity: The Case Against Killer Robots,” which called for a preemptive prohibition on the development, production, and use of fully autonomous weapons. The report received extensive media attention and spurred the first widespread public debate on the issue.

In April 2013, an international coalition of nongovernmental organizations (NGOs) launched the Campaign to Stop Killer Robots, calling for a preemptive ban on the weapons. The Campaign, coordinated by Human Rights Watch, now consists of about 50 NGOs in about two dozen countries. It is modeled on the successful campaigns that led to international bans on antipersonnel landmines, cluster munitions, and blinding lasers.

In May 2013, UN Special Rapporteur on extrajudicial killings Christof Heyns presented a report to the Human Rights Council that echoed many of the concerns of the Campaign about the dangers of fully autonomous weapons, and called on governments to adopt national moratoria on the weapons until international discussions could be held. The report prompted two dozen nations to speak about fully autonomous weapons for the first time, all expressing the importance of the issue and the need for it to be addressed multilaterally. More nations echoed this call during the UN General Assembly debates in October 2013.

Most importantly, the more than 100 States Parties to the Convention on Conventional Weapons (CCW) agreed in November 2013 to take up the issue in 2014, and the first four days of talks took place in May 2014. In the diplomatic world, this is moving at lightning speed. The CCW, a forum generally dominated by the United States, Russia, and China, is known for its deliberative (i.e., slow) pace, and it often takes years of preliminary discussion before the States Parties even agree to add an issue to the agenda.

As late as October 2012, virtually no government had made a public statement about fully autonomous weapons, other than in military planning documents. Now, some four dozen nations have made statements, all agreeing that this is an issue that must be addressed.

In February 2014, the European Parliament passed a resolution that calls for a ban on the development, production, and use of fully autonomous weapons. More than 270 prominent scientists have signed a statement calling for a ban.

In addition, the Secretary-General of the United Nations and the head of the UN Office for Disarmament Affairs, as well as the International Committee of the Red Cross (ICRC), have expressed concerns about the development of fully autonomous weapons.

Still, the technology has been advancing rapidly, and diplomacy has a lot of catching up to do.

What are fully autonomous weapons?

A range of terms has been used to label these future weapons: fully autonomous weapons, fully autonomous weapons systems, autonomous weapons systems, lethal autonomous robots, lethal autonomous weapons systems, killer robots, and more. Slightly different definitions and descriptors have been attached to each of these terms.

Distinctions have also been made between these future weapons and the automatic, automated, and semi-autonomous weapons that exist today. It is beyond the scope of this article to delineate all of these distinctions.

Fundamentally, a fully autonomous weapon would be an unmanned system in which the targeting and kill decisions are no longer the responsibility of a human operator, but rather of the weapon itself. Such weapons could be aircraft, ground systems, or sea-based and underwater systems.
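To make that distinction concrete, consider the following minimal Python sketch contrasting the two control loops. It is purely illustrative: every name in it is hypothetical, and it describes no actual system.

    # Illustrative sketch only: hypothetical, highly simplified control loops.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Contact:
        label: str  # whatever the sensors report detecting

    def human_in_the_loop(contact: Contact,
                          operator_approves: Callable[[Contact], bool]) -> str:
        # Today's remotely operated drones: the machine may propose a target,
        # but a human operator decides what to fire at and when to shoot.
        if operator_approves(contact):
            return f"engage {contact.label} (human-authorized)"
        return "hold fire"

    def fully_autonomous(contact: Contact,
                         classifier: Callable[[Contact], bool]) -> str:
        # A fully autonomous weapon: once initiated, the software alone
        # selects and engages targets, with no further human intervention.
        if classifier(contact):
            return f"engage {contact.label} (machine-decided)"
        return "hold fire"

    if __name__ == "__main__":
        c = Contact("unidentified vehicle")
        print(human_in_the_loop(c, operator_approves=lambda _: False))
        print(fully_autonomous(c, classifier=lambda _: True))

The two loops differ only in who performs the check that licenses lethal force: a human operator in the first, code in the second. “Meaningful human control” is precisely what the first loop preserves and the second removes.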

These weapons do not yet exist, but technology is moving in the direction of their development, and precursors are already in use. Among the precursors are the US’s X-47B aircraft, the UK’s Taranis aircraft, Israel’s Sentry Tech robot, and South Korea’s SGR-1 sentry robot. Those nations have other precursors as well, and other countries with advanced systems include China and Russia. Germany has developed and deployed in Afghanistan an automatic weapons defense system called the NBS MANTIS, which detects and fires at incoming rockets and other weapons; the degree of human supervision is unclear.

Such precursors, which maintain a degree of human control and in some cases are not weaponized, are not the target of the Campaign to Stop Killer Robots. But they demonstrate the move toward ever-greater autonomy, and, in the context of the effort to address fully autonomous weapons, they need to be examined carefully to determine how they maintain meaningful human control and provide adequate safeguards for civilian populations.

It is important to emphasize that the Campaign to Stop Killer Robots is not opposed to military robotics, or even necessarily to the advance of autonomy in weapons systems, as both military and humanitarian advantages could be realized if that autonomy is pursued and implemented properly. The Campaign’s call for a ban on the development of fully autonomous weapons is not intended to impede broader research into military robotics, weapons autonomy, or full autonomy in the civilian sphere. Rather, research and development activities should be banned if they are directed at technology that can only be used for fully autonomous weapons or that is explicitly intended for use in such weapons.

Some have touted the potential benefits of fully autonomous weapons, noting that they could reduce the risk to soldiers and increase the accuracy and speed of attacks. They would not be limited by pain, anger, hunger, exhaustion, or the instinct for self-defense. However, such possible advantages would be more than offset by the loss of human control. Moreover, these benefits are also possible with autonomous systems that are still under meaningful human control.

Moral and ethical objections

Perhaps the most powerful objection to fully autonomous weapons systems is moral and ethical in nature. Simply put, many feel that it is morally wrong to give machines the power to decide who lives and who dies on the battlefield. Christof Heyns, the UN Special Rapporteur on extrajudicial killings, has said, “It is an underlying assumption of most legal, moral and other codes that when the decision to take life or to subject people to other grave consequences is at stake, the decision-making power should be exercised by humans.”

Giving life and death decision-making to machines has been called the ultimate attack on human dignity, and others have noted that an action so serious in its consequences should not be left to mindless machines. The notion of allowing compassionless robots to make life and death decisions is repugnant to many. Compassion is the key check on the killing of other human beings. Fully autonomous weapons have been called unethical by their very nature, and giving machines the decision-making power to kill has been called the ultimate demoralization of war. Killer robots would constitute “losing humanity” in more ways than one.

Our experience at Human Rights Watch has shown that most people have a visceral negative reaction to the notion of fully autonomous weapons. Most find it hard to believe that such a thing would even be contemplated. There is a provision in international law that takes into account this notion of general repugnance on the part of the public: the Martens Clause, which is articulated in Additional Protocol I to the Geneva Conventions and elsewhere. Under the Martens Clause, fully autonomous weapons should comply with the “principles of humanity” and the “dictates of public conscience.” They would not appear to be able to do either.

Legal objections and accountability

Apart from the Martens Clause, it is unlikely that fully autonomous weapons could comply with basic principles of IHL, such as distinction and proportionality. Technical experts and international lawyers agree that the current state of technology would not allow for such weapons to meet the requirements of IHL. There is of course no way of predicting what technology might produce many years from now, but there are strong reasons to be skeptical about compliance with IHL in the future.

IHL requires that a belligerent distinguish between combatants and civilians. The ability to make this distinction relies not just on visual or audible signals, but also on judgment of an individual’s intentions. There seems to be little prospect that robots could be programmed to have the innately human qualities crucial to assessing an individual’s intentions. Humans can make such assessments in large part because they can relate to and thus understand other individuals as fellow humans. The robots’ inability to do so could also undermine protection for soldiers, such as those wounded or surrendering.

A robot’s lack of judgment and intuition could present even greater obstacles to compliance with the rule of proportionality, which prohibits attacks in which expected civilian harm outweighs anticipated military gain. Proportionality relies heavily on situational and contextual factors, which could change considerably with a slight alteration of the facts. The US Air Force has called it “an inherently subjective determination,” and the International Committee of the Red Cross has said it is “a question of common sense and good faith.” The judgment and intuition necessary to weigh complex facts and make subjective decisions are qualities associated with human beings, not machines.

There are serious concerns not only about fully autonomous weapons’ inability to comply with existing IHL, but also about the lack of accountability when they fail to do so. Accountability deters the commission of violations of IHL, and also dignifies victims by giving them recognition that they were wronged and satisfaction that someone was punished. Holding a human responsible for the actions of a robot that is acting autonomously could prove difficult, be it the operator, commander, programmer, or manufacturer.

Technical problems

The U.S. Department of Defense and others have cited a multitude of technical issues that would have to be overcome before fielding fully autonomous weapons. These technical obstacles, when combined with moral, ethical, legal, and proliferation concerns, give further reason to question the wisdom and appropriateness of pursuing such weapons. A November 2012 U.S. DoD directive includes a long list of possible causes of failure in autonomous weapons: human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, and unanticipated situations on the battlefield.

The DoD also writes of the need to ensure the weapons systems: “function as anticipated in realistic operational environments against adaptive adversaries;” are able to “complete engagements in a timeframe consistent with command and operator intentions and, if unable to do so, terminate engagements;” and “minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.”

Others have stressed that robot-on-robot engagements in particular are inherently unpredictable and could create unforeseeable harm to civilians.

Proliferation concerns

As militaries move toward ever-greater autonomy in weapons systems, the likelihood of advancing to full autonomy increases – unless checked now. There is a real danger that if even one nation acquires these weapons, others may feel they have to follow suit in order to defend themselves and to avoid falling behind in a robotic arms race. Even less technologically advanced nations would likely acquire the know-how once fully autonomous weapons systems were actually fielded, by obtaining a system and reverse-engineering it – a far less daunting task than development from scratch.

There is also the prospect that fully autonomous weapons could be acquired by repressive regimes or non-state armed groups with little regard for the law. These weapons could be perfect tools of repression for autocrats seeking to strengthen or retain power. An abusive leader utilizing fully autonomous weapons would be free of the fear that armed forces would resist being deployed against certain targets.

Existing policies

Although the issue of fully autonomous weapons is progressing rapidly on the international stage, thus far very few countries have developed formal national policies. The only detailed policy in writing is the U.S. Department of Defense Directive of November 2012, which, for a period of up to 10 years, requires that a human being be “in the loop” when decisions are made to use lethal force, although high-level Pentagon officials can waive the policy. The United Kingdom has stated that autonomous weapons will “always” be under human control.

While many nations have now spoken publicly on this topic, such statements have generally not constituted national policies, but rather vaguer expressions of concern or interest in the subject. Some, such as Pakistan, have spoken in favor of a prohibition on the weapons.

It is hoped that the process now underway in the Convention on Conventional Weapons will spur governments to develop their national positions rapidly.

Why a ban is the best solution

Even among those who have expressed concern about killer robots, there are some who are opposed to a preemptive and comprehensive prohibition, as called for by the Campaign to Stop Killer Robots. Some say it is too early for such a call, and that we should wait to see where the technology takes us. Some say that restrictions would be more appropriate than a ban. Some say that existing international humanitarian law will be sufficient to address the matter, perhaps with some additional guidance in the form of identifying “best practices.” Some have also argued for acquiring the weapons, but limiting their use to specific situations and missions.

The notion of a preemptive treaty is not new. The best example is the 1995 CCW Protocol IV banning blinding laser weapons. These weapons were at the prototype stage in the U.S. and China but had never been fielded. After initial opposition from the U.S. and others, states came to agree with the ICRC’s determination that the weapons would cause unnecessary suffering and superfluous injury. The Martens Clause was also widely invoked to justify the ban, with the weapons seen as counter to the dictates of public conscience. Nations also came to recognize that their militaries would be better off if no one had the weapons than if everyone had them.

More broadly, the point of a preemptive treaty is to prevent future harm. With all the dangers and concerns associated with fully autonomous weapons, it would be irresponsible to take a “wait and see” approach and only deal with the issue after the harm has already occurred.

While some rightly point out that there is no “proof” that there cannot be a technological fix to the problems of fully autonomous weapons, it is equally true that there is no proof that there can be. Given the scientific uncertainty that exists, and the potential benefits of a new legally binding instrument, the precautionary principle in international law is directly applicable. The principle holds that when there is uncertainty about whether an act will be harmful, the party committing the act bears the burden of proving that it will not be; the international community need not wait for scientific certainty, but could and should take preventive action now. Today’s scientific uncertainty, combined with the potential threat to the civilian population from fully autonomous weapons, provides ample reason to undertake preventive measures in the form of an absolute ban.

Fully autonomous weapons represent a new category of weapons that could change the way wars are fought and pose serious risks to civilians. As such, they demand new, specific law that clarifies and strengthens existing IHL. There are numerous examples of weapons treaties designed to strengthen IHL, and these generally come about because the weapons are objectionable by their very nature, not just because of misuse. This was the case with cluster munitions, antipersonnel mines, blinding lasers, chemical weapons, and biological weapons.

A specific treaty banning a weapon is also the best way to stigmatize the weapon. Experience has shown that stigmatization has a powerful effect even on those who have not formally joined the treaty, inducing them to comply with key provisions such as no use or production, or else risk international condemnation.

An absolute prohibition would maximize protection for civilians from these weapons. It would be more comprehensive than regulations, eliminate the need for case-by-case determinations of the legality of an attack, and make it easier to standardize rules across countries. If regulations merely restricted use to certain locations or specific purposes, countries would likely be tempted, once the weapons had entered national arsenals, to use them in other, possibly inappropriate, ways in the heat of battle or in dire circumstances.

A comprehensive ban treaty would also deal more effectively with proliferation concerns, by prohibiting development and production as well as use (IHL only addresses use). And if a prohibition is in place, the accountability gap largely disappears, since any development, production, or use of the weapons would itself be a clear violation.

Conclusion

Nations urgently need to develop national policies on fully autonomous weapons if they are to engage in substantive deliberations on this emerging topic of international concern. If countries are unprepared to embrace the notion of a comprehensive prohibition immediately, they should institute national moratoria while multilateral discussions are ongoing, as recommended by the UN Special Rapporteur on extrajudicial killings.

The key is to embrace the concept that there should always be meaningful human control of the targeting and kill decisions in any individual attack on other humans. The determination of the meaning and nature of “meaningful human control” should be undertaken on the national level and in multilateral discussions.

The development, production, and use of fully autonomous weapons should be prohibited in the near future, in order to protect civilians during armed combat and to preserve human dignity. If the ban is not embraced soon, it will be too late.


Stephen Goose

Steve Goose is executive director of the Arms Division of Human Rights Watch. He played an instrumental role in bringing about the international treaties banning cluster munitions (2008), antipersonnel landmines (1997), and blinding lasers (1995). He serves as the chair of both the International Campaign to Ban Landmines (co-recipient of the 1997 Nobel Peace Prize) and the Cluster Munition Coalition. Goose and Human Rights Watch are leading the new global Campaign to Stop Killer Robots, which calls for a preemptive prohibition on fully autonomous weapons.

gooses@hrw.org

