My new Fellow Soldier - Corporal Robot?

Is the Terminator ready to take action in the battles of tomorrow? In the future, will autonomous machines conduct armed conflicts against people, as portrayed in the film “The Matrix”? Is this all still science fiction, or will it soon become reality?

Lurid headlines such as “Can robots replace soldiers?” or “Robots are cheaper than soldiers” are increasingly common. Meanwhile, human rights organizations such as Human Rights Watch are calling on governments to take prompt action to ban killer robots.

It is much rarer for the public debate to focus on other – military and civilian – applications for robots, such as rescuing the wounded or using robots in medical care. Thus there is an urgent need to bring some objectivity to the discussion.

The German Bundeswehr has conducted a scientific analysis of the future of robotics technology and its possible impacts on armed forces. A study produced by the Bundeswehr planning office (Planungsamt der Bundeswehr) in Berlin examined the latest and anticipated future developments in robotics research, artificial intelligence and nanotechnology, as well as their potential impacts on aspects of security policy and on the military. The planning office primarily looked at possible developments over the next five to ten years. Within this time frame, the combat robots that have so far been discussed in such populist terms do not yet play any role.

According to the study, it is doubtful on technological grounds whether it is even feasible to develop autonomous robots with the functionalities of soldiers in combat. The study therefore recommends limiting research to robotic systems with a relatively small range of functions for supporting soldiers, rather than allocating research funding to the development of fully autonomous humanoid robots. This does not imply any change of direction for the Bundeswehr, since the Bundeswehr rules out the deployment of systems that decide autonomously, solely on the basis of computer or machine logic, to use weapons against people.

Technically feasible, questionable for military use

The deployment of soldiers does not consist solely of the use of weapons against potential opponents; it involves a significantly more extensive and varied range of tasks. These include, for example, patrolling past children at play, or assessing passing vehicles that may pose a danger. The Bundeswehr already speaks of “strategic corporals”, whose actions can have major consequences in a conflict situation.

The future analysis section at the Bundeswehr planning office expects a further increase in complexity in future crisis situations where an alliance of forces is deployed. For the Bundeswehr, these situations will continue to range from humanitarian disaster relief missions to possible shorter stabilization operations and combat scenarios. In future operations, soldiers will be faced with increasingly complex environments and missions that place them under even greater pressure. Handling complex situations and adapting flexibly to unfamiliar new situations requires human intelligence. This is why the Bundeswehr clearly focuses on the training of its soldiers – training that includes, and places particular emphasis on, ethics.

It is in precisely this field that the enduring weaknesses of artificial intelligence are apparent:

  • an inability to cope with unpredictable events or overly complex tasks,
  • very limited task-oriented flexibility, and
  • no ability to improvise.

Thus it is highly likely that combat robots would have only a very limited range of applications in operations against adversaries who were reasonably evenly matched in technological terms. On the other hand, technologically inferior adversaries tend not to engage in open combat, preferring instead to attempt ambushes or other ways of achieving their objectives.

It would also be appropriate to investigate whether combat robots should be deployed together with soldiers, and whether, for example, a squad leader can lead a squad of five soldiers and two combat robots during a military operation. The question of whether soldiers can learn to trust robots also needs to be examined. Communication between humans and machines, and between robots, must function smoothly. Communication channels must be secure, and the combat robots must be protected against hackers.

The situations in which the use of a combat robot is deemed appropriate need to be clearly identified before the operation. Another challenge is the logistical arrangements for combat robots, e.g. their power and ammunition supply in combat. Provisions also have to be made for safely recovering damaged combat robots – possibly using a recovery robot.

In summary, therefore, the value of such combat robots is highly questionable from a military point of view. Many unresolved pragmatic issues are met with technologically doubtful promises. Will future armed forces even be able to afford to acquire systems that will probably have only very limited deployment options? Here too, it becomes apparent that not everything which seems technically possible in theory is actually useful in military terms.

Robot morality

The decisive point of view from which the problem as a whole should be judged is the ethical and legal perspective. Unfortunately, not everyone shares the same legal and ethical opinions. Views differ even among allies.

When weapons are used against people, the question of responsibility always needs to be asked, since International Humanitarian Law holds that someone must be responsible. Yet with autonomous systems, it would be unclear who bears responsibility: the commanding officer, the manufacturer, or even the programmer. Even if a person threatens a machine, for example, it would be legally unjustifiable for that person to be killed by the machine. Whether to use firearms or force against people is and will remain a deeply ethical decision. A person who has killed must subsequently come to terms with the consequences of his or her actions. Cultural background and ethical principles play a decisive role in how armed forces deal with this challenge. In our society, the use of force against people is a punishable act.

According to the basic understanding of the Bundeswehr, a military operation is never a matter of killing as many of the enemy as possible, but rather of incapacitating the enemy in order to achieve one’s own objectives. Therefore, soldiers in combat aim to render their opponents unable to fight, not to kill them. However, the death of enemy combatants in battle cannot always be avoided.

The ethical principle of not using more force than absolutely necessary is at issue here. It is undisputed that soldiers in combat can make mistakes, for example because they get carried away by their emotions; this is one of the main arguments used by proponents of combat robots. Yet it is only through appropriate training – which includes instruction in ethical principles – that the Bundeswehr ensures that its soldiers achieve their objectives and carry out their tasks in combat situations in a well-considered manner and with minimal use of force. Significant doubts exist as to whether a computer algorithm can even come close to coping with this complexity.

Robots as potential enemies

Apart from impacting people and their environment, the use of weapon systems has a security policy dimension. Particularly in the drone debate – including in the U.S. – the question is increasingly raised as to whether the deployment of combat drones for targeted killings might in fact have generated more resistance, and whether strategic objectives have actually been achieved. This question would be asked even more often if one side ended up employing increasingly or exclusively machines in combat.

With the proliferation of civilian robotics research, there is a growing danger of dual use, i.e. that civilian applications will be adapted for other purposes. This needs to be monitored and assessed within the context of national risk management and security planning.

It is important for the Bundeswehr to address long-term risk management. Some potential enemies may have different legal and ethical principles, and might use robots that the Bundeswehr would refrain from using. If the international community is unable to uphold a policy of self-restraint – as demanded by Human Rights Watch – then the use of combat robots with their perhaps dramatically shorter response time and greater accuracy presents a high future risk potential for Bundeswehr soldiers. Hence the development of supporting robotic systems for Bundeswehr soldiers and strategies for dealing with threats from robotic or partially autonomous systems should be promoted.

From various perspectives, this article brings objectivity to the burgeoning debate on the use of (killer) robots by armed forces. None of the arguments from these different points of view supports the development, let alone the use, of such systems. On the contrary, there are many strong arguments against such developments.

In deciding whether to use systems that operate without human interaction, we should carefully consider whether such use is ethically and legally justifiable. One thing, however, should now be perfectly clear: the soldiers of the Bundeswehr cannot and will not be replaced by robots.

Jörg Wellbrink

Jörg Wellbrink is a Lieutenant Colonel (GS) and has been working in the Bundeswehr’s department of future research since March 2012, where he was responsible for the department’s products on an acting basis until February 2014. In 1985, he graduated in electrical engineering from the Bundeswehr University, Munich. In 1998, Wellbrink completed a Master of Science in Operations Research at the Naval Postgraduate School in Monterey, California. He gained his doctorate at the school’s MOVES Institute in 2003, with a dissertation partly on simulating human behaviour with multi-agent systems. As project initiator and leader in the Bundeswehr IT office, he designed the Bundeswehr’s simulation and test environment. In 2007, he was the first leader of the Bundeswehr Operations Research cell in the Kunduz Provincial Reconstruction Team (PRT). In 2011, he engaged in research and teaching as a visiting lecturer for Operations Research at the Bundeswehr University, Munich.

JoergWellbrink@bundeswehr.org

