Controversies in Military Ethics & Security Policy
Humanity in War? The Importance of Meaningful Human Control for the Regulation of Autonomous Weapons Systems
The problem of diffused responsibility
The use of (partially) automated weapons systems has become an integral feature of warfare today. It is argued, among other things, that they offer states a cost-effective and readily available alternative to the sometimes scarce resources of soldiers and ammunition. According to some, they are also able to make faster and more rational decisions than humans. A look at the state of the art of (partially) automated weapons systems shows how rapidly they are developing. This may be partly due to recent wars, which have given a huge boost to the development and production of automated and autonomous weapons systems. Even if it is not a weapons system in the strict sense, Israel’s “Lavender” system, for example, illustrates the potential that (partially) automated military systems could have in the future. The system rapidly analyzes large amounts of data in order to identify possible human targets.[1] The Israel Defense Forces (IDF) insist that it serves only to assist analysts in identifying targets, in accordance with all requirements of international law.[2] So although the Lavender system itself does not (yet) play an active role in killing people, it can certainly have an indirect influence on soldiers’ decisions to kill. Moreover, it cannot be ruled out that similar systems will be integrated into (partially) automated weapons systems, which could lead to greater automation of the “kill chain”. Current developments therefore indicate an urgent need to discuss the regulation of such weapons systems.
Some proponents argue that learning systems could perhaps make wars “more just” and “more humane”.[3] At the same time, others vehemently oppose the use of (lethal) autonomous weapons systems (LAWS).[4] These critics fear that their use objectifies humans, who become a mere statistical variable in the calculations of the autonomous weapons system.[5] On top of this are the unknown risks of such weapons systems, which are capable of wreaking immense damage. Many of these voices favor the use of (partially) automated weapons systems, which are still subject to human review.
One of the many challenges associated with this technology is that our (criminal) laws are not designed to cope with automated or autonomous machines. This leads to complications in the application of the law[6] and could ultimately mean either that no one is held accountable, resulting in accountability gaps, or that responsibility is attributed inappropriately. The latter occurs when users are held responsible even though they themselves had little influence on the outcome. This is problematic not least because there are doubts in some cases as to whether it is even possible to comply with applicable law, and in particular international humanitarian law, when autonomous weapons systems are used without human involvement in the decision loop. To give just one example, there is a critical question as to whether they can uphold the principle of distinction.[7] This principle states that killing civilians is in most cases a war crime, while enemy soldiers may generally be killed. Given the many special circumstances and factors involved, not every conceivable situation – let alone situations that cannot be imagined in advance – can be programmed into an autonomous weapons system. At the same time, such a system must, for example, be able to reliably distinguish between an incapacitated soldier, who enjoys protection comparable to that of a civilian, and a fighting soldier.
If an autonomous weapons system violates existing law, the question of individual responsibility under (international) criminal law arises. Who is to be held accountable if civilians are unlawfully killed? This question arises not only in the case of fully autonomous weapons systems, but is also at least as relevant when a human being remains in the decision loop in (partially) automated weapons systems. Should this person be held accountable to the same extent as in the case of conventional weapons systems, even if they may be able to exert less influence? This question needs to be answered soon in order to avoid accountability gaps and establish a secure legal basis for the use of these weapons systems – both for military personnel and for the (international) community. Otherwise, as Geiß noted back in 2015, there is a risk that the provisions of international law will once again come “one war too late”.[8]
The issue centers on the autonomy of autonomous weapons systems, as this is the one key difference between weapons systems that still involve a human, and those which dispense with humans altogether. But it is not yet clear how the term “autonomy” should be interpreted, or how different levels of autonomy should be handled.[9] Nevertheless, a possible consensus is emerging, which this article follows: Autonomous means that once activated, autonomous weapons systems perform actions on their own without human intervention.[10] Some people refer to this as “human out of the loop”. Autonomous weapons systems are therefore only those that act fully autonomously; this does not include (partially) automated weapons systems. In the latter, there is still a human in the decision-making chain – referred to as “human in the loop” or “on the loop”. Here the human monitors the machine’s operational processes and makes the decision to kill.[11] The underlying human-machine interaction can be designed in various ways. Sharkey has identified five different types of autonomous weapons systems based on the increasing degree of automation.[12] At the lowest level, the human determines the target. As the level of autonomy increases, the weapons system either suggests several possible human targets, or already identifies a target itself and the human has only to give the final kill order. Finally, it is conceivable that the human does not actively order the killing, but only intervenes if they do not agree with the machine’s selection. At the highest level of autonomy, humans are not given any opportunity to intervene.
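To make these gradations easier to refer to, the following minimal Python sketch renders them as a simple enumeration. It is purely illustrative: the level names, the numbering and the helper function are our own shorthand for the spectrum described above, not Sharkey’s terminology and not the interface of any real system.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative shorthand for the spectrum of human-machine interaction
    described above (cf. endnote 12); the labels are ours, not Sharkey's."""
    HUMAN_SELECTS_TARGET = 1            # human determines the target
    SYSTEM_SUGGESTS_TARGETS = 2         # system proposes several possible targets
    SYSTEM_SELECTS_HUMAN_CONFIRMS = 3   # system identifies a target, human gives the final order
    SYSTEM_ACTS_HUMAN_CAN_VETO = 4      # system acts unless the human intervenes in time
    SYSTEM_ACTS_NO_INTERVENTION = 5     # no opportunity for human intervention

def human_in_or_on_the_loop(level: AutonomyLevel) -> bool:
    """True for the (partially) automated configurations this article focuses on:
    a human either takes the decision ("in the loop") or can still veto it ("on the loop")."""
    return level < AutonomyLevel.SYSTEM_ACTS_NO_INTERVENTION
```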
This article focuses on increasingly autonomous weapons systems in which humans are involved in the decision to kill in some way. It is here in particular that the question arises of whether the human conduct involved is still sufficient, or ought still to be regarded as sufficient, to make the human criminally accountable. However, it should also be borne in mind that even in the case of fully autonomous systems, humans are still ultimately involved in the decision insofar as they activate the system. “Human decision-makers” can always be found somewhere in the decision loop.
Realignment of criminal law?
Because some kinds of weapons systems have the ability to carry out at least some elements of the actions in the “kill chain” autonomously, they differ significantly from conventional weapons (systems). The human operator no longer has the same ability to control the respective weapons system, even if they are involved at certain points in the decision-making chain. Firstly, the human operator relies on the (partially) automated weapons system to carry out the specified action. Secondly, the human decision is based on the information – in some cases pre-filtered – received from machines.[13] Typically, human operators have neither the ability nor the time[14] to check the accuracy or completeness of the suggestions made by these (partially) automated weapons systems, or to obtain other information; the human operator has to rely on the system. A telling example that illustrates these points is the Lavender AI system mentioned earlier. It analyzes large amounts of data for possible targets, and therefore assists the human analyst in the target identification process. It can hardly be expected that an extensive human review of its findings will be carried out. It is sometimes objected that this is not a new problem, since decisions are made on the basis of pre-sorted and automatically processed information even without the use of (partially) automated weapons systems.[15] Nevertheless, issues such as automation bias – i.e. an (unjustified) excessive trust in machine-generated results – are a bigger problem with the new weapons systems.[16] Even if this is not fundamentally a new phenomenon, the increasing autonomization of weapons systems amplifies human bias, the time pressure to make a decision, the resulting psychological stress, and the difficulty of deciding against the machine. Not least, this is because the human decision-maker knows that they will have to justify themselves if they decide against a recommendation of the (partially) automated weapons system that turns out in retrospect to have been correct.
However, we should also take into consideration that (partially) automated weapons systems are intended to relieve the burden on humans, for example by assisting them in making decisions. This requires, in turn, that the human does not have to constantly monitor and comprehensively control the (partially) automated weapons system.[17] So even if we accept the use of only (partially) automated weapons systems, it is inevitable that they will influence human decisions. With increasing interaction between humans and machines, the role of humans in the decision-making chain will diminish.[18] And this means that there is at least a risk that their legal accountability will also decrease, unless countermeasures are taken to ensure effective human control in the decision-making process (for example through “meaningful human control”).
Effective control as a consequence of individual criminal responsibility
International criminal law does not differentiate between conventional and autonomous weapons (systems), and to date there is also no generally recognized adaptation of the attribution of responsibility to deal with the special characteristics of learning systems. There is a risk that the human operator who is involved in the (partially) automated weapons system’s decision to kill will be treated inappropriately. With regard to this problem, this article will consider only the Rome Statute of the International Criminal Court and the German Code of Crimes Against International Law (Völkerstrafgesetzbuch, VStGB). Both legal texts recognize the principle of individual responsibility – also in the context of armed conflicts.[19] Their understanding of individual responsibility basically means that a person can be held accountable for their own misconduct. This also applies if the person is acting within a military hierarchy. Possible misconduct includes, for example, the unlawful killing of civilians or incapacitated soldiers. As mentioned above, this usually constitutes a war crime. According to applicable law, the last human involved must normally be held criminally responsible, since it was they who authorized the kill or failed to give the order to abort.
Without going into detail at this point: this understanding of individual criminal responsibility cannot deal with the special characteristics of (partially) automated weapons systems in all scenarios. For example, neither causality nor intent convincingly makes allowance for human dependence on the machine. The causality element requires only that the human contribution is necessary for the action to be carried out by the (partially) automated weapons system.[20] The pressing of a firing button by the human operator would suffice for this. But this does not take sufficient account of the fact that the decision prepared by learning systems is made under psychological pressure, especially if the operator did not have enough time to make their decision. These considerations could possibly be accommodated under the intent requirement. Yet intent does not take into account the potential inhibitions against deciding against the machine, or the lack of time in which to consider one’s decision. This means that the human operator may be acting causally and intentionally, and is therefore criminally responsible, even though their decision-making ability is in fact severely limited.
Therefore, a further element is required in order to take these special characteristics of (partially) automated weapons systems adequately into account. This element must ensure that the human operator is held accountable only when appropriate. In particular, with regard to the rather hybrid role of learning systems, which sit between conventional weapons systems and humans, it must be possible to attribute the decisions made in cooperation with such systems to humans. The need for effective control thus arises directly from the concept of individual criminal responsibility.
Effective control as a requirement arising from the concept of individual responsibility
A large part of the international community seems to be of the opinion that autonomous weapons systems need to be regulated. For example, there are calls for every use of such weapons systems to be subject to human control – in other words, that ultimately only (partially) automated systems should be permitted.
This is supported by the guiding principles of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (2019). These state that human-machine interaction should take place “within a responsible chain of human command and control”.[21] This acknowledges that the program code of these technologies can never be complete. For example, rules of conduct can hardly be translated into program code.[22] Autonomous weapons systems are also often unable to respond appropriately to unforeseen and confusing situations (“fog of war”), because not every situation has been programmed into them.[23] The difficulty described above regarding the distinction between persons who are protected under international humanitarian law (e.g. civilians, incapacitated soldiers) and combatants also comes into play here. Therefore, the actions of autonomous weapons systems always require human review. That way, it can be ensured, among other things, that the careful balancing which is often required under international humanitarian law is carried out in accordance with the law. Moreover, ethical concerns about machines being allowed to kill people can also be dispelled through human control.
However, this requires that humans are able to exercise effective control, and that this control is not undermined by the decision-making pressure or stress associated with war or combat operations in such a way that the weapons system is, in practice, allowed to make the decisions. In that case, the human decision would be a mere cover. An approach[24] favored by many policymakers, non-governmental organizations and academic commentators is “meaningful human control”, which is currently being discussed in the research literature as well as at conferences around the world. This approach features a readily understandable term that means something to everyone, and at the same time leaves plenty of room for interpretation.[25] At its core, meaningful human control recognizes that not just any kind of human control can suffice – such as simply pressing a firing button.[26] There has to be a normative hurdle, which is expressed with the (replaceable) term “meaningful”.[27] Humans must have the ability to exert an effective influence on the behavior of the weapons system[28] by being able to monitor, control and correct it. This ensures that decisions – such as the ethically fundamental, critical decision to kill – are still made by humans.
What meaningful human control requires
So far, however, there is no agreement on exactly when such meaningful human control exists. One of the reasons for this is that a great many different perspectives have to be taken into account. For example, both legally and ethically, it is not sufficient to consider only the decision-making situation itself. All actions and suggestions of the machines are based on their program code. The (partially) automated weapons systems should therefore be programmed in such a way that they provide human operators with the necessary information, completely and comprehensibly.[29] Training is also required to ensure that the human operators are physically and mentally able to control the weapons system.[30] In this context, Trabucco talks about considering the entire “life cycle” of the machine.[31]
Besides (partially) automated weapons systems, it must also be acknowledged that there are many other learning systems to which the principles of meaningful human control can be applied. This shows that there cannot be one single concept, but that meaningful human control should be understood as existing on a scale, taking into account the systems’ level of danger and increasing levels of autonomy.[32] Thus the requirements for human control diverge; they will be different for reconnaissance drones, for example, than for loitering weapons capable of killing people. Meaningful human control must therefore be regarded as a broad concept. At the same time, there seems to be a lowest common denominator regarding the requirements that (partially) automated weapons systems in a military context typically have to meet in order to be considered as being under meaningful human control. Kwik argues for the establishment of a uniform framework to prevent the concept becoming fragmented by too many individual elements which are also described differently.[33] In order to reach a consensus, meaningful human control must have a clear and concise set of criteria. This will make it clear that the individual facets impact on and influence one another.[34] Kwik analyzed a total of five major facets, which can be broken down further: awareness, weaponeering, context control, predictability and accountability.[35] It is particularly important to have information that is as complete and accurate as possible in the circumstances at the time.[36] Only such comprehensive information makes control possible.[37] The awareness criterion relates primarily to knowledge of how the (partially) automated weapons system works, and of the target and the context in which the system will operate. The weaponeering requirement is intended to ensure that (partially) automated weapons systems are used in such a way that the desired effect is achieved.[38] Context control ensures, among other things, that the human operator can interrupt or change the actions of the (partially) automated weapons system and, by regulating the scope of the mission, limit its impact.[39] Furthermore, the actions of the machine must be predictable.[40] This means that the (partially) automated weapons system essentially behaves as the operator expects it to. In addition, the way in which the (partially) automated weapons system arrives at its suggestions must be comprehensible (“explainable AI”).[41] This allows the person making the final decision to understand the suggestions and act on that understanding. Ultimately, the aim is to empower humans to remain ethically and legally responsible for the results of the (partially) automated weapons system’s actions.
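To illustrate how such a set of criteria might be handled as a checklist, the following Python sketch represents the five facets just named as simple fields. It is a deliberately reduced illustration under our own assumptions: the boolean fields and the all-or-nothing aggregation are our shorthand, not Kwik’s operationalisation, which is considerably more fine-grained.

```python
from dataclasses import dataclass, fields

@dataclass
class MHCChecklist:
    """Illustrative checklist over the five facets named above (cf. endnote 35);
    the boolean simplification is ours, not Kwik's operationalisation."""
    awareness: bool        # knowledge of the system, the target and the operational context
    weaponeering: bool     # the system is employed so that the intended effect is achieved
    context_control: bool  # the operator can interrupt, change or limit the mission
    predictability: bool   # the system behaves essentially as the operator expects
    accountability: bool   # a human remains legally and ethically answerable for the outcome

    def satisfied(self) -> bool:
        """Treats the facets as interdependent: if any one is missing,
        meaningful human control is not established."""
        return all(getattr(self, f.name) for f in fields(self))
```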
Another approach to elucidating meaningful human control is to define the roles of the human operator in the use of (partially) automated weapons systems. Amoroso and Tamburrini highlight three main roles, which partially correspond to Kwik’s categories: First, the human must be able to prevent machine malfunctions (“fail-safe actor”). Second, they should fulfill the legal requirements for accountability (“accountability attractor”). Third, only the human should be allowed to make critical decisions, such as life-or-death decisions (“moral agency enactor”).[42]
These are just a few approaches to defining meaningful human control. The list could be extended considerably, but that would go beyond the scope of this article.
Outlook
Meaningful human control is one aspect of the understanding of individual criminal accountability and responsibility outlined here. At the same time, it is a promising approach toward enabling an appropriate attribution of responsibility. However, discussion of the concept is still in its early stages. In particular, specific and practicable requirements need to be determined in order to achieve meaningful human control.
The interdisciplinary network “Meaningful Human Control. Autonomous Weapon Systems between Regulation and Reflexion” is working on these questions.[43] Researchers and fellows from a wide range of disciplines such as robotics, law, sociology, physics, political science, gender studies and media studies are involved in this network. Their common goal is to analyze and link previously unconnected problem areas in order to develop a concept of meaningful human control. The concept is intended to ensure that interaction between humans and machines is human-centered. Among other things, the last human involved should only be held criminally accountable if this is appropriate. This requires interdisciplinary debates on the criteria of meaningful human control.
[3] Geiß, Robin (2015): Die völkerrechtliche Dimension autonomer Waffensysteme. Friedrich-Ebert-Stiftung, Berlin, Internationale Politikanalyse, p. 13.
[4] Asaro, Peter (2012): On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. In: International Review of the Red Cross 94 (886), pp. 687-709, p. 694; Misselhorn, Catrin (2019): Autonome Waffensysteme/Kriegsroboter. Wiesbaden, p. 321; Sharkey, Noel (2016): Staying in the loop: human supervisory control of weapons. In: Bhuta, Nehal et al. (eds.): Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, pp. 23-38, p. 26.
[6] Cf. only: Article 103 (2) of the German Basic Law (Grundgesetz); Article 22 (2) of the Rome Statute.
[7] Dahlmann, Anja, Hoffberger-Pippan, Elisabeth and Wachs, Lydia (2021), see endnote 5, p. 4; Ferl, Anna-Katharina (2023): Imagining Meaningful Human Control: Autonomous Weapons and the (De-) Legitimisation of Future Warfare. In: Global Society 38 (1), pp. 139-155, p. 142.
[8] (Translated from German.) Geiß, Robin (2015), see endnote 3, p. 5.
[9] Ferl, Anna-Katharina (2023), see endnote 7, p. 141.
[10] Amoroso, Daniele and Tamburrini, Guglielmo (2020): Autonomous Weapons Systems and Meaningful Human Control: Ethical and Legal Issues. In: Current Robotics Reports 1, pp. 187-194, p. 187; Asaro, Peter (2012), see endnote 4, p. 690; Sharkey, Noel (2016), see endnote 4, p. 23; Sparrow, Robert (2007): Killer Robots. In: Journal of Applied Philosophy 24 (1), pp. 62-77, p. 65.
[11] Christie, Edward Hunter et al. (2023): Regulating lethal autonomous weapon systems: exploring the challenges of explainability and traceability. In: AI and Ethics 4, pp. 229-245, p. 230; Docherty, Bonnie (2012): Losing humanity: the case against killer robots. Human Rights Watch, Amsterdam, Berlin; Misselhorn, Catrin (2019), see endnote 4, p. 321.
[12] Sharkey, Noel (2016), see endnote 4, pp. 34 ff.
[13] Beck, Susanne, Faber, Michelle and Gerndt, Simon (2023): Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege. In: Ethik in der Medizin. Official Journal of the German Academy of Ethics in Medicine 35, pp. 247-263, p. 256.
[15] Dahlmann, Anja, Hoffberger-Pippan, Elisabeth and Wachs, Lydia (2021), see endnote 5, p. 6.
[16] Beck, Susanne, Faber, Michelle and Gerndt, Simon (2023), see endnote 13, p. 256.
[17] Beck, Susanne (2020): Die Diffusion strafrechtlicher Verantwortlichkeit durch Digitalisierung und Lernende Systeme. In: Zeitschrift für Internationale Strafrechtsdogmatik 15 (2), pp. 41-50, p. 46. www.zis-online.com/dat/artikel/2020_2_1343.pdf.
[19] For the VStGB: Beck, Susanne (2020), see endnote 17, p. 47; Lohmann, Anna (2021): Strafrecht im Zeitalter von Künstlicher Intelligenz: Der Einfluss von autonomen Systemen und KI auf die tradierten strafrechtlichen Verantwortungsstrukturen. Baden-Baden, p. 86; for the Rome Statute: Art. 25 (1), (2) Rome Statute; Satzger, Helmut (2022): Internationales und Europäisches Strafrecht: Strafanwendungsrecht, Europäisches Straf- und Strafverfahrensrecht, Völkerstrafrecht. Baden-Baden, pp. 389 f.
[20] Ambos, Kai (2019): Internationales Strafrecht: Strafanwendungsrecht, Völkerstrafrecht, Europäisches Strafrecht, Rechtshilfe. Munich, chapter 7, margin no. 3; Esser, Robert and Gerson, Oliver Harry (2023): § 2 VStGB. In: Leipziger Kommentar StGB, Völkerstrafgesetzbuch. De Gruyter. Margin no. 16.
[21] Letter d) of the guiding principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, CCW/MSP/2019/9.
[27] Amoroso, Daniele and Tamburrini, Guglielmo (2020), see endnote 10, p. 189; Article 36 (2016), see endnote 24, p. 2.
[28] Article 36 (2016), see endnote 24, pp. 2 f.; UNIDIR (2014), see endnote 25, p. 3; Veluwenkamp, Herman (2022): Reasons for Meaningful Human Control. In: Ethics and Information Technology 24, p. 2. link.springer.com/article/10.1007/s10676-022-09673-8.
[29] Amoroso, Daniele and Tamburrini, Guglielmo (2021), see endnote 25, p. 264.
[32] Amoroso, Daniele and Tamburrini, Guglielmo (2020), see endnote 10, p. 190; Santoni De Sio, Filippo and Van Den Hoven, Jeroen (2018), see endnote 30, p. 10.
[33] Kwik, Jonathan (2022): A Practicable Operationalisation of Meaningful Human Control. In: LAWS 11, 43, p. 3.
[34] Kwik, Jonathan (2022), see endnote 33, p. 15.
[36] Article 36 (2016), see endnote 24, p. 4; Dahlmann, Anja, Hoffberger-Pippan, Elisabeth and Wachs, Lydia (2021), see endnote 5, p. 3; Santoni De Sio, Filippo and Van Den Hoven, Jeroen (2018), see endnote 30, p. 10.
[38] Kwik, Jonathan (2022), see endnote 33, p. 11.
[39] Kwik, Jonathan (2022), see endnote 33, p. 13.
[40] Article 36 (2016), see endnote 24, p. 4; UNIDIR (2014), see endnote 25, pp. 5 f.
[41] Amoroso, Daniele and Tamburrini, Guglielmo (2021), see endnote 25, p. 264; Article 36 (2016), see endnote 24, p. 3; UNIDIR (2014), see endnote 25, p. 6.
[42] Amoroso, Daniele and Tamburrini, Guglielmo (2020), see endnote 10, p. 189.
[43] The project is funded by the Federal Ministry of Education and Research under the funding code 01UG2206B. For more information, see meaningfulhumancontrol.de.
Susanne Beck is Professor of Criminal Law, Criminal Procedure Law, Comparative Criminal Law and Philosophy of Law at Leibniz University Hannover. She heads the legal sub-project of the MEHUCO competence network (“Meaningful Human Control. Autonomous Weapon Systems between Regulation and Reflexion”).
Schirin Barlag is a research associate and doctoral student in the Faculty of Law at Leibniz University Hannover. She is a researcher in the legal sub-project of the MEHUCO competence network (“Meaningful Human Control. Autonomous Weapon Systems between Regulation and Reflexion”).