Of Men and Machines. What Does the Robotization of the Military Mean from an Ethical Perspective?
It usually takes specific events to bring academic debates into public discourse. Since the beginning of the new millennium, the idea has been in circulation that we are witnessing a revolution in military affairs akin to the advent of firearms or the emergence of aerial warfare. Back in 2005, the British sociologist Martin Shaw wrote that the new Western method of waging war is the risk-transfer war, stating that one of its rules is that the number of Western casualties must be kept as small as possible. It was not surprising, therefore, that a few years later, in 2012, the German armed forces – the Bundeswehr – and the German defense ministry wanted to acquire remote-controlled weapons delivery systems to better protect their soldiers during operations, particularly the airborne sort commonly called, after male bees, “drones”. However, armed drones are associated with a practice carried out by the U.S. in its fight against groups suspected of terrorism in Afghanistan, Pakistan and Yemen, commonly known as “targeted killing” – so called because wanted persons are tracked down, watched, and then killed by a guided missile fired from the drone. For many good reasons, this practice is regarded as immoral. Certainly by the time that high-ranking representatives of the Catholic episcopacy in Germany issued statements critical of the Bundeswehr’s procurement plans, a “drone debate” had been ignited, which subsequently continued in the parliamentary sphere.
It is understandable that debates become emotional when the subject is killing and the risk of being killed. Yet particularly in ethics, one should endeavor to adopt a dispassionate and unprejudiced approach. So what is the nature of this “must” of which Martin Shaw speaks? Why must – as he says – Western wars today be conducted in such a way as to minimize the number of Western casualties? Is it merely democratic pressure on politicians that induces them to find ways and means to minimize the risks for soldiers, because otherwise they will be voted out of office? Or is this “must” an imperative that soldiers themselves present to their military and civilian masters, with the result that these leaders are obliged for purely functional reasons to listen, because otherwise they would be faced with refusal to carry out orders? Or is this “must” also an expression of moral reason, because Western soldiers are called upon, on behalf of others who were unjustly harmed – e.g. in terrorist attacks – to fight against opponents who for their part deliberately set out to hurt persons whom the West regards as “innocent” – i.e. people who have done nothing that would cause them to forfeit their own right to life and physical integrity? Why should soldiers forfeit their rights when they come to the assistance of such people – whether they be victims of terrorism or of flagrant human rights violations? Particularly from an ethical point of view, shouldn’t one exploit all available means of enhancing these soldiers’ protection – which of course also means keeping them as far away as possible from the action? Drones and the use of military robotics technology therefore seem to be the obvious method of choice, particularly since they not only protect the operators but also – according to proponents of these machines – reduce the number of civilians put at risk by military action, because they allow better reconnaissance and more accurate weapons fire.
Distance from the enemy’s reach, but closeness and hence greater precision in reconnaissance and the use of weapons by one’s own side – this seems to be the combination that makes drones and all remote-controlled systems so attractive.
Yet Martin Shaw’s thesis was not that the West today fights risk-minimization wars, but risk-transfer wars. This is where drone opponents’ voices come in. Even if we admit that the protection of soldiers is important in an armed conflict, we have to acknowledge that any such enhancement of protection cannot be achieved ceteris paribus, that is, with all other circumstances remaining the same. Thus there is the danger that fighting drone wars will render the traditional containment of combat zones obsolete, and so in principle war will be fought worldwide. Instead of better protection for civilians in war zones, civilians worldwide would now be perpetually exposed to military force. And civilians in war zones could be placed in more danger because the drone operators’ inhibition threshold for using military force is reduced when they are no longer personally in danger. Drone deployment could also increase the risks even for soldiers – namely if the supposedly “politically more palatable” alternative of remote-controlled warfare increases the willingness to take military action.
It is not yet clear how risks are actually transferred as a result of using remote-controlled military robotics technology. There is a large field for further empirical research here. But it is not at all clear in ethics, either, what degree of risk transfer can be regarded as acceptable. Should soldiers actually be allowed to shift all risks away from themselves, or are they not precisely the ones who are called upon to be professional risk-adopters today?
Opponents of the use of armed drones believe that such weapons systems should be seen in the context of a robotization and automation trend in warfare, culminating in a war of robots – which to some extent also implies the abolition of the conventional military. After all, the drones whose acquisition by the German armed forces is now a topic of controversy are not mere remote-controlled airplanes but rather the mobile part of a complex technological system, comprising ground station, communication channels and aircraft. They can take off and land by themselves; many processes are not controlled by operators (operator in the loop) and are only monitored (operator on the loop), while some are not even monitored (operator out of the loop). In the future, so this theory goes, human operators will be pushed ever further into the background in favor of fully automated control and decision processes. Opponents of armed drones point to the risk that software-controlled weapons systems could be reprogrammed and possibly even turned against the side that deploys them.
Currently, the German federal government is giving assurances that weapons fire from drones will only ever be triggered by a specific human action, never by a software program. This is intended to meet the objections of those who oppose drone deployment. But it runs counter to the logic of technological progress – the very logic to which the argument that “we can’t hold on to the stagecoach while everyone else is developing the train”1 is designed to appeal: Why should the automation of weapons firing be permanently ruled out? In the future, there will be situations where the automated firing of a weapon protects human lives more effectively, for example because it eliminates time delays or negative emotional impulses from human operators that could make the situation more dangerous. The American roboticist Ronald C. Arkin, who also appears in this journal (pp. 3-10), produced a study for the Pentagon arguing that automated weapons will respect international law better than humans do, and that the risks of possible reprogramming are technologically surmountable.
People often associate the idea that drones will lead to the automation of killing with a phenomenon known as “big data”. In urgent situations, people today are often completely unable to assess the information provided to them by machines, including drones, or even to understand the calculations and assessments produced by a computer. So even where humans are supposed to take decisions, in reality they just receive orders and are increasingly at the mercy of machines. Ultimately, as a result, it seems reasonable simply to “let the machine take the decision”.
But here the debate arrives at a crucial point. In the strict sense of the word, the weapons system never “decides” for itself. Unfortunately this anthropomorphic figure of speech diverts the debate from the real problem, which is the diffusion of responsibility for the use of certain weapons. Thus the question should not be whether we ought to let robots take life-or-death decisions – for this is impossible – but rather whether we should allow machines to be used which afterward leave such great uncertainty as to who made what decision, that we are finally willing to believe the machine took the critical decision by itself. The logic of protection, in principle, knows no end. Protection and safety are not operational terms that can be empirically tested and measured. In principle, protection and safety can always be increased. The need for this increase appears to be one of the hallmarks of our eschatology-free age.
In his famous essay, first published in 1935, called “The Work of Art in the Age of Mechanical Reproduction”, Walter Benjamin contrasts two concepts of technology: one which is based on the unique and unsubstitutable use of the self as a medium, and a second which describes the endlessly repeatable, representative instrumental use of a distant object. The difference is that
“the first technology involves the human being as much, the second as little as possible. The greatest feat of the first technology is, in a manner of speaking, the human sacrifice; that of the second is in the line of remote-controlled airplanes that do not even require a crew.”2
Thus, almost eighty years ago, Benjamin named the poles which characterize many contemporary asymmetrical military conflicts. The Western drone warrior, who keeps himself out of the danger zone as far as possible, stands in contrast to the Eastern suicide attacker, who wants to be consumed in battle. The rationales are guided by two diametrically opposed aims: total preservation and total engagement. In Albert Camus’ play “The Just Assassins”, the protagonists argue over who will have the privilege of throwing the bomb at the Grand Duke and thus of being arrested and executed. A number of authors believe that the increasing use of drones and robotics technology by Western military powers will ultimately increase the number of people from Eastern cultures who are willing to carry out suicide attacks. Yet this theory, too, awaits dispassionate scientific review.
1 In other words, we should not oppose technological progress. The “stagecoach” metaphor was used in a speech by the German defense minister Thomas de Maizière to the German Bundestag in January 2013: “Germany cannot ignore this technology of the future. We cannot say that we will hold on to the stagecoach while everyone else develops the train. That is not possible.”
2 Walter Benjamin, “Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit. Dritte Fassung”, in: Walter Benjamin: Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit, edited by Burkhardt Lindner, Kritische Gesamtausgabe vol. 16, Berlin 2012, pp. 96-163, here p. 108.
Bernhard Koch is deputy director of the Institute for Theology and Peace (ithf) in Hamburg. In 2014, he was a visiting fellow at the Institute for Ethics, Law and Armed Conflict (ELAC) at the University of Oxford. He teaches practical philosophy in Frankfurt and studied philosophy, logic and philosophy of science in Munich and Vienna. From 1999 to 2004, he worked at the Weingarten University of Education and received his doctorate from the Munich School of Philosophy with a dissertation about ancient philosophy. He was an associate lecturer at the Helmut Schmidt University of the Bundeswehr (Federal Armed Forces) in Hamburg (HSU-HH).