Controversies in Military Ethics & Security Policy
AI for the Armed Forces Does Not Need a Special Morality! A brief argument concerning the regulation of autonomous weapons systems
Since 2014, discussions have taken place in Geneva, Switzerland, on lethal autonomous weapons systems (LAWS) in the context of the Convention on Certain Conventional Weapons (CCW). In 2019, eleven guiding principles were published.[1] The convention’s aim is to prohibit or restrict the use of certain conventional weapons that are deemed to be excessively injurious or to have indiscriminate effects, i.e. to fail to distinguish between combatants and civilians.
But the negotiations in Geneva have stalled, mainly but not only because of increasing geopolitical tensions and rivalries. For example, there has been disagreement over definitions and boundaries intended to keep pace with the constant, rapid progress of technologies driven by artificial intelligence (AI) and implemented in practical applications. The diplomatic and democratic struggle plays out where economic, research and security interests collide. Apart from the dangers and risks, many expect these new technologies to bring extraordinary benefits – at least to those who are able to embrace and wield them effectively and disruptively. Expectations are equally high in civilian and military quarters. The research and business communities, as well as the armed forces, fear that policymakers will apply the brakes too quickly or too hard, against the interests of the people.
On March 13, 2024, the AI Act[2] was finally adopted by the European Parliament. This piece of European Union legislation, which has been labelled “historic”, explicitly does not apply to military applications.[3] Negotiations on AI-driven autonomous systems are thus being conducted separately for military and civilian applications. On the one hand, this helps to focus public fears on economic and social applications. On the other, it contributes to an even greater overestimation of the imagined benefits, because the military applications are tactically hidden from view and taken out of the societal equation.
Google’s AI team recently published a paper that clearly describes the interplay between AI as one side of the technology and the autonomy that AI applications enable.[4] The authors point out that these two sides need to be differentiated, and that they can reinforce each other if and insofar as this is desired. The paper proposes a strategic nomenclature that can already serve today as a useful framework for the pragmatic classification of current and future military AI applications – regardless of what scientific or political impact it may yet have.
However future discussions on AI and autonomy develop, there is no doubt that assessing military and civilian applications separately primarily serves political tactics. In practice, the artificial dividing wall erected between them is extremely thin. This is evident, for example, in the converted commercial drones used in the war in Ukraine, but also in the EU’s extensive dual-use regulations, which run to hundreds of pages.[5] It is precisely its ability to enhance human autonomy – or the autonomy attributed to robots – that makes AI a typical dual-use product. To get to grips with the situation, political efforts are currently shifting to the hardware components needed to develop AI.
In light of these issues, it is worth taking a broad view of AI and its applications. UNESCO has done this, presenting the results of an inclusive global negotiation process at the end of 2021. The UNESCO recommendation was adopted by 193 countries.[6] Unlike the EU in its AI Act, UNESCO does not pursue a risk-based approach. Instead, it takes the values and principles that underpin our civilizations as its starting point. If you compare its recommendation on AI ethics with the eleven Guiding Principles of the group of governmental experts for the further development of the CCW in Geneva, you will notice that the latter are very concise and general, and fit on one page. This minimal consensus contrasts with years of ongoing discussions and negotiations on the definition of autonomous weapons systems and the required extent of human involvement in their use – without any breakthrough in regulatory efforts to date.
The basic principle
Under the UNESCO recommendation, no human being or human community should be harmed or subordinated during any phase of the life cycle of AI systems (point 14). From this principle, which derives directly from the inviolable dignity of every human being, it follows that life-and-death decisions cannot be ceded to AI systems. Whether as a job seeker, an applicant for a loan or social benefit, a patient, citizen, criminal or soldier, no human being should be subject solely to automated decision-making.[7]
In contrast, the Guiding Principles seek a balance between military necessity and humanitarian considerations (point k). The UNESCO recommendation regards AI ethics as a task for society that must be guided by human rights and international law. It points to a practicable way of keeping autonomous technology in the hands of humans as subjects. If we adopt this fundamental standpoint – which, ever since Kant, has also been the one assumed by those who have power over others – then there is no need for a special morality for autonomous weapons systems.
The German Ethics Council also emphasizes this principle, in line with its philosophical approach, in its opinion of March 2023: “According to the philosophical theory of action, machines cannot act and cannot be considered as genuine actors with responsibility. Nevertheless, they influence human action, which in modern societies increasingly takes place in a sociotechnological setting.”[8]
In contrast, the Geneva group remains abstract in formulating its Guiding Principles, merely demanding that humans ultimately retain responsibility for the use of autonomous weapons systems – which could also be directed at people (points b, d). This is where all the considerations relating to the familiar refrain of the “human in the loop” come into play.
Ultimately, the basic ethical principle on which UNESCO bases its recommendations means that autonomous weapons systems may only directly target enemy weapons systems. Such a technical and ethical shift in the discussion on autonomous weapons systems could raise the negotiations in Geneva to a more productive level. The negotiating delegations within the CCW ought to agree on this basic principle, since it has already been recognized by their countries as part of the UNESCO framework. The group could then take a new approach on this basis and regulate the use of autonomous weapons systems in the event of war and within the bounds of applicable international humanitarian law.
Defense that preserves the principle of proportionality
Automated and autonomous defense systems would always be permitted if they did not directly target people but only their means of attack. As with any attack, the principle of a proportionate response would also apply here. Just as in any war, soldiers fighting with offensive weapons would run the risk of becoming indirect victims of this kind of machine-operated defense.
The technical capability to program systems that disable weapons aimed at them faster, more precisely, more efficiently and more proportionately is within the realm of possibility, and such systems are already in use. Of course, as with all technical processes and human actions, the risk of accidents cannot be entirely ruled out.
A general ban on autonomous weapons systems is just as utopian as a ban on war. It is therefore proposed here that the general ethical values and principles of the UNESCO recommendation be applied specifically in its 193 member countries,[9] and that the development, production and use of autonomous weapons systems not be treated as a separate case.
For all geopolitical adversaries, hard-to-define autonomous weapons systems raise the specter of total war. The so-called balance of terror refers to the nuclear powers’ ability to annihilate each other: both sides are supposed to be deterred from nuclear escalation because each can be certain of a devastating counter-strike, which – as in the case of the Soviet Dead Hand system – would be triggered automatically even if the political and military leadership were no longer alive after a first strike. An arms race in AI-assisted weapons systems would once again significantly heighten the risk of a war sparked by malfunctions, unanticipated interactions and a lack of human control – a war that could end up destroying the whole world.
The best way to prevent the uncontrolled use of swarms of autonomous weapons on land, at sea and in the air is for talks to take place between potential warring parties that are prepared to exercise restraint out of self-interest. On such a basis, a technical and verifiable prohibition of autonomous weapons systems that directly target humans could become acceptable. So too could self-defending autonomous systems, provided they remain under the ultimate responsibility and control of the people conducting the war. In the same spirit, “dead man’s switches” should be ruled out in the interests of the survival of at least some humans.
Any public discussion of this highly controversial subject is useful if it encourages people to follow and engage in the debate. The UNESCO recommendations are a hopeful sign – also for military ethics.
[4] Ringel Morris, Meredith et al. (all members of Google DeepMind) (2023): Levels of AGI: Operationalizing Progress on the Path to AGI. arXiv preprint, November 4, 2023. arxiv.org/abs/2311.02462 (accessed December 26, 2023).
[7] This point has already passed into law with the EU General Data Protection Regulation (Article 22): “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Beyond the exceptions provided for in the legislation, the term “solely” in particular is currently the subject of interpretation and discussion.
[9] At the time the recommendation was adopted in November 2021, the United States had not yet rejoined UNESCO.
Erny Gillen
Dr. Erny Gillen, born in 1960, is the founder and director of the Moral Factory in Luxembourg. From 2019 to 2020, he provided professional support to the Luxembourg Army in drafting its Charter of Values and its Military Code of Conduct on behalf of the Ministry of Defense. Erny Gillen studied theology in Switzerland and Belgium, obtained a doctorate in Catholic ethics, and was, among other things, President of Caritas Europa and Vicar General of the Archdiocese of Luxembourg.