Skip to main content

Digital Escalation Potential: How Does AI Operate at the Limits of Reason?

Whom does AI protect?

There is an ethical problem that supposedly only arises when digitally controlled systems are used in military conflicts, and it boils down to this: Should machines decide over life and death? Autonomous systems that make AI-driven decisions about human lives are not the primary focus of this article. But the discussion about lethal autonomous weapons systems (LAWS) only highlights the problems of human-machine interaction. Aside from such fully autonomous weapons, the question still arises of what functions should be performed by artificial intelligence – for example when it comes to controlling drones – and how AI stands in relation to human decision-making powers. But actually, the questions we face here are very well-known ones. Questions about the ethics of technology always come to our attention as a result of new or even fictitious technologies. But in most cases their substantive core has existed for some time, only it is concealed by the fact that existing technologies no longer attract any particular attention because they have long become embedded in the collective consciousness. We should therefore assume that today’s ethical choices relating to large generative language models like ChatGPT or drones will also influence the way we deal with LAWS in the future.

Frighteningly, recent generations no longer seem to be afraid of industrialized war. We have examined its genesis around the turn of the 20th century, its martial reality in the trenches of the First World War, its escalation in the middle of the century and its constant recurrence in the proxy conflicts of the Cold War to such an extent that the link between warfare and high technology seems perfectly natural to us. Moreover, the military is a recognized driver of innovation. A wide range of technologies that were originally developed for military purposes have found their way into our everyday lives. In advanced medicine, too, it is no longer unusual for machines to play a part in the preservation and ending of life. So machines that make vital decisions have been a reality for a long time.

The normative discussion about AI in weapons systems is always influenced by ideological assumptions. The question of when a war begins and what actually constitutes peace, but also escalation[1] and de-escalation and, before that even, reason and artificiality are culturally determined concepts that influence our approach to this topic. Anyone who expects robots to humanize war[2] has already identified human weaknesses that need to be compensated for.[3] Assistance systems that can process a large quantity of sensor data from robot swarms undoubtedly have an unbeatable advantage over error-prone humans. However, they are only necessary because of these swarms. Whether a particular kind of action has to outperform human capabilities is therefore context-dependent. As long as “high-tech” is the context in which we operate, the decision as to when escalating steps or de-escalating steps should come into play is a technical as well as a strategic one.

If military restraint is interpreted as a weakness by the enemy, then peace-promoting measures will generally lead to escalation, even though this is precisely not their intention. The question we should ask, therefore, is not whether AI-controlled weapons should decide between life and death, but who is actually protected by which behavior.[4] Recent studies show that when it comes to the crunch, AI tends to “choose” escalating scenarios, i.e. it also puts its users in danger.[5] The biggest problem in wargame scenarios[6] is not the decisions themselves, but the fact that the reasons why they were taken remain unclear, or that explanations which may be given for a decision have no logical connection to it. If tendencies toward aggression are problematic in themselves, their non-correlation with comprehensible reasons is completely unacceptable. If the OpenAI GPT-4 basic version comes up with nonsensical explanations or tells fictitious stories, then this is fake news, which we have always known from human war propaganda. From the recipient’s point of view, it is irrelevant whether the fiction is created deliberately or accidentally. In both cases, the audience’s task is either to accept the message or deconstruct it. In this respect, the ethical concern is by no means that AI is too different from human behavior, but rather that it potentiates human behavior and makes critical judgment more necessary than ever.

Do-gooders without spirit?

Researchers at Stanford University found that when wars are planned using artificial intelligence, the AI regularly chooses to use nuclear weapons or next-generation weapons. This is from a study on the future of military planning.[7] Conflicts between real-world countries were simulated with the help of AI models. The AI regularly preferred military escalation, often in combination with unpredictable behavior. Its decision-making was then evaluated in various conflict scenarios. In conclusion, it was shown that AI models justify their decisions with very general statements. For example, the existence of certain weapons alone was often the only factor in deciding to use them. This observation should be reason enough to strictly separate the use of weapons of mass destruction in particular from AI decisions.[8]

However, such observations also provide us with a starting point for a possible intervention. If it is clear that AI has a tendency to escalate, then we need to describe the conditions that lead to this decision. Thus the question of what can contribute to de-escalation in conflicts is not trivial. Indeed, current research shows that AI does not decide to escalate for reasons, but simply because it is possible to do so. Human decisions, by contrast, are made on the basis of weighing up the options for action and for caution. These considerations are accompanied by intuitions, emotions, rationality and strategy – in other words, they are the result of a cluster of factors, which we will refer to as “spirit”.

Max Weber imagines an ideal capitalist economy populated by a few creative entrepreneurs along with many “specialists without spirit”. In a similar way, the ideal military apparatus functions with a few creative people combined with a high level of operational readiness and “spiritless” striking power. Whether the “specialists without spirit” have good or bad intentions is secondary. If we impose ethical, social or even environmental obligations on them and if these requirements are then ticked off a list (as we can observe in diversity management with LGBTQ-compliant HR policies, for example), then they are “do-gooders without spirit”. While they profess morality, in fact they only work through the requirements, without morals. The concrete danger is that these do-gooders without spirit will join forces with creative (armaments) entrepreneurs in a destructive way. As a result, we would get environmentally friendly tanks that roll over the enemy with a low carbon footprint, or sustainably produced munitions that explode with few emissions.

How can we endow artificial intelligence with morality without turning the soldiers involved into specialists without spirit? How can we succeed in controlling autonomous weapons systems with spirit? An essential condition is to realize that while we must digitally embed ethical principles, we cannot delegate moral decisions, because any delegation would entail spiritlessness. If, like Marvin Minsky, we define AI as “the science of making machines do things that would require intelligence if done by men,” then we attribute intelligence to a system as soon as it appears intelligent to us. However, the selection of an algorithm by humans and the selection of data for its training always remain essential to the functioning of such a system. Thus, many small decisions lead to a single decision, albeit that the decisions taken by AI are more consistent than those of humans. Because humans cannot constantly optimize, AI systems are always the better specialists without spirit! This is why algorithms are the better managers, as long as we are dealing purely with (spiritless) processes.

Against this background, we must ask how the soldier’s conscience can remain the locus of ultimate responsibility in interaction with autoregulatory systems.[9] The concept of Innere Führung (officially translated as “leadership development and civic education”) is the enduring feature of the soldierly self-image in our democratic structures. It subordinates soldierly obedience to the individual’s conscience.[10] On this principle, lethal autonomous weapons systems (LAWS) must never operate without ultimate human responsibility.

Ultimate responsibility and the soldier’s conscience

However, if we were only to transfer the established moral principles of Innere Führung into algorithms, we would have reached a state of spiritlessness. Instead, we should respond to the changes in the relationship between means and ends that result from LAWS. Morality is not simply to be seen as the outer face of the military order. Rather, a sense of morality should itself lead us to ask how normative arguments can relate to the operational level when the organization is in part shaped by decisions taken by AI. This is the only way to come to grips with the moral basis on which, in the event of conflict, risks are allowed to be taken that ultimately put even the structure of the organization itself at risk.

Any failure of one’s own to find such a reason for justification, which is always possible, constitutes the limit of the loyalty that can be expected of an individual soldier. The pressure that the military apparatus exerts in this regard is generally only justified if it is aimed at facilitating individual action. Here lies the difference between algorithmic and human control: Individuals can put the freedom in which they live in jeopardy, without losing their dignity. That is why even murderers are not without dignity. But institutions that systematically harm the freedom of others with AI-controlled weapons lose any justification for their existence; they have no right to exist. Thus, the destruction of enemy structures is a morally justified goal if it can eliminate a threat to freedom and lead to an overall increase in freedom. The concept of humanity found in Innere Führung makes it possible to address such dilemmas, because it assumes fallibility on all sides and helps to humanize violence. Since freedom is a prerequisite for the soldier’s assumption of responsibility, safely-shielded freedom is also the yardstick for the use of destructive force.

The responsibility of institutions that use AI must therefore be assessed differently than that of individuals. The decisive factor is not the use (i.e. the type of algorithm), but the different relationships between the subject and the weapon. In addition, of course, the use of digital systems becomes problematic in precisely those situations that cannot possibly be overseen by a human agent due to time constraints and the mass of information to be processed. The speed of reaction makes the use of digitalized weapons necessary. The ethical response at this point cannot be to imagine soldiers in a pre-industrial situation. Rather, we should seek a fundamental and effective ultimate human responsibility in the digital-industrial process that does justice to the subjectivity of the soldier’s conscience. In other words, we should concern ourselves with the relationship between humans and weapons systems, and such a relationship arises through symbolization.

We are familiar with processes of this kind from civilian life: People who used to take their dog for a walk are now themselves walked by robot dogs. They have to leave the house when the algorithm gives the appropriate signal. However, this is not perceived as an unreasonable encroachment on their freedom as long as the symbolization of their own experience is reserved for the subjects, i.e. as long as the process is understood as actively walking a dog. The same applies to the use of weapons. As long as the impression is maintained that the weapons are being wielded – and not that the AI is leading the soldiers – we can justifiably speak of ultimate human responsibility. The transfer of decision-making power therefore takes place covertly. However, it is not an illusion of responsibility that comes into effect here, but an interplay between an immanent assumption of responsibility (veto option) and a transcendental embedding of unobjectionability (trust in technology). We are familiar with this interplay between control and trust from religion (influencing God through prayer despite trusting in God), and politics (formulating a changeable set of policies while being guided by enduring principles). In the same way, a digital Innere Führung should combine possibilities for intervention with trust in the system. What can actually be understood by ultimate responsibility and where it is actually embedded (in programming, in maintenance, in use) always depends on the system and the changing requirements arising from potential enemy systems. If the human agents are influenced in their moral orientation by the systems, then controllability and ultimate responsibility as moral standards are also the result of the technical possibility of being able to dispense with both.

The task of digital peace ethics

Building and securing peace is therefore in any event the result of embedding peace ethics in military technical systems. To put it another way: In an age when conflicts are waged digitally, assisted by AI, peace must also be conceptualized digitally and therefore technically. Although AI tends to escalate, as shown above, there is no longer such a thing as unarmed peace, because even disarmament is only possible today through technological transformation. Indeed, the call to “beat swords into plowshares” was nothing other than a call for technological variation, not for abstinence. The arms race has been set in motion again in recent years. It too is either fueled by technology or interrupted by technology; the latter arises when the cost of a technological contest seems too great for all sides.

Because we are fundamentally tied to digital technology as our dominant form of action in war and peace, the use of AI also does not lead to the alternatives of either unarmed pacifism[11] or armament. Rather, both become a consequence of technology. The theological consequence arises from the position of the World Council of Churches, which has become ever more firmly entrenched since it was formulated in 1948, that “war shall not be according to God’s will”, and the interpretation of digital military and diplomatic action. Viewing peace as a possible consequence of the reality of digital armaments and exploring its real possibilities in view of the potential for escalation – that is the task of digital peace ethics. If AI can tempt us into war due to its embeddedness, then there must also be parameters that give it a de-escalating effect, and these must be defined. Like the robot dogs that “compel” their owners to walk them, AI systems are certainly conceivable that demonstrate “compelling” reasons to de-escalate conflicts and are not diametrically opposed to approaches of radical pacifism.[12] However, these reasons do not arise from AI, but from human options for action, diplomatic capabilities and political constellations. From an ethical point of view, reasons are only ever compelling to the extent that they are actually transparent and comprehensible. Decisions delegated to AI, however, are not made for reasons, but because of causes, with the result that here the act of delegation contains the element of morality.

From the perspective of peace ethics, we are therefore looking for an AI that pushes us to resolve conflicts. In this scenario, the decisive factor is the sanction mechanism. In the case of the dog, the owner’s health insurance premiums could rise if the dog is not walked and the owner gets less exercise as a result. In the case of AI used for military purposes, institutionalized sanctions for the acting governments or states should be considered. The nature of the world of data is such that ever more data results in ever more options for action, and therefore more and more controls are required. As a result, questions of peace ethics also shift to ever higher levels. As the amount of available data increases, we are also increasingly lagging behind when it comes to the optimization of coexistence in terms of peace ethics. As a consequence of technology, peace can only develop in a context of dealing with increasing complexity.

The irrationality of war

Can we at this point take advantage of an otherwise problematic characteristic of surveillance systems? Digital capitalism thrives on the fact that governments and corporations are equally interested in monitoring citizens and influencing their behavior. Nudging gets people to live healthy, sustainable and politically correct lives according to prescribed criteria. The more predictable their behavior, so much the better for government and business. However, the more paternalistic a state’s actions are, the less able it is to protect the privacy of the individual. Nudging therefore aims for voluntary submission, for example granting a tax bonus or a higher pension for good behavior. Homeowners who carry out a desired modernization benefit from public subsidies. The state buys the “voluntariness” of its citizens, but is in danger of losing them as citizens, i.e. as the sovereign source of political power.

If we have no idea how an escalating AI comes to a decision in a specific case, and if the so-called self-learning systems cannot be reconstructed by reverse engineering this decision-making process, then we have nothing left but trust: We have to believe the AI, but in our current situation we can’t, because any uncontrolled escalation obviously also threatens the freedom of its users. Just as the combination of enormous market power and information asymmetry leads to very justified criticism of digital monopolies, the criticism here applies to a military apparatus that – like a digitally out-of-control government – can potentially turn against anyone. Cluster bombs were banned for precisely this reason: not out of compassion for the enemy, but out of fear of self-inflicted harm. It is not morality that leads us to banning weapons, but rather the simple fact that the irrationality of their use could backfire on the attacker. The escalation potential of AI draws our attention to the irrationality of war, which we have almost forgotten in our debates about a rational balance of terror or other strategic considerations.

Conventional peace initiatives appeal to reason (including that of enemies), which is why they appear to have exhausted themselves in the face of the various forms of hybrid warfare. It has been observed that AI models resort to an early escalation regardless of the previously defined scenarios and also, at a certain point in the period under consideration, escalate without resorting to reasons. This tendency to escalate without reasons makes it improbable that appealing to the enemy’s reason will be adequate in digitally waged conflicts. Not because of a negative anthropology, but because of the incomprehensible actions of AI, it seems better to assume unreasonableness and to base both the ethical conclusions and a possible assessment of the impact of the technology on this assumption.

Black boxes can be classed as irrational, even if rational processes may take place inside them. If autonomous agents regularly choose to escalate when taking decisions in high-stakes situations, then the question is whether such agents should be treated in the same way as cluster bombs. If there is no reliably predictable pattern behind the escalation, then no counter-strategy can be formulated. Given the potentially devastating consequences, these agents would be unacceptable (to all sides).

The crucial point is that the context described here does not encourage the military to behave in a particular way. It does not induce desired behavior in anticipatory obedience – for example because you can never be sure whether you are being watched. When potentially escalating AI is used, soldierly sovereignty as part of Innere Führung also falls into dependency on algorithms. Yet the possibilities for a reverse surveillance of algorithms are very limited due to their black box character. Therefore, for every new AI, a conformity assessment is needed to test its ethical and technical robustness and to classify a risk. Nevertheless, there is a threat of social powerlessness in the face of the inescapable power of an AI-controlled government and military system. The only way out will be to put ourselves in the position of being able to justifiably override at any time all the rules that military personnel follow when interacting with an AI. Anyone who can no longer do this would have ceased to be a responsible human being. Keeping human intelligence receptive to peace initiatives is an important task, precisely because the technological paradigm is unassailable.

Enforced renunciation of violence is impossible

Finally, two normative principles can be identified to guide the development and deployment of AI-controlled systems. Firstly, human-technology interaction is determined by factors that arise from our self-image as human beings. This anthropological dimension of the soldier’s conscience should be given a particularly strong emphasis in basic, advanced and further training. Secondly, the goods that are protected or created through the use of autonomous weapons are not neutral. The moral side of warfare and peacekeeping requires thorough reflection, inasmuch as morality is now always developing in tandem with digital systems. Peace is becoming an AI scenario.

If autonomous systems were ever able to act “more rationally” than humans, as technological optimism would suggest, then the broader question would be whether they would then have to fundamentally reject war due to its irrational structure. The fact that people want peace but promote war through their actions is not a new insight. A new development would be for belligerent powers to ensure that a rationally acting AI cannot lead them into a peace that humans did not want. In particular, a digital monopoly on the use of force that gives preference to not doing harm would be self-defeating. If algorithms were able to humanize conflicts through a renunciation of violence, the result would not even be recognized by the human losers who are willing to engage in conflict.[13]

Freedom remains at the center of our concerns: freedom which is delegated to systems by the military, and whose endangerment makes everyone into a potential victim. This leads to a familiar insight: War marks the limit of reason, because it takes away our freedom of action. Therefore, the limit to the peacefulness of digital systems is not the escalation potential of AI – however enormous that may be. It is the extent to which free people themselves desire peace. Consequently, the dangers posed by intelligent weapons systems cannot be mitigated by technological advances, but only by political measures such as international regulations and collective bans. However, these measures remain subject to the technological paradigm, which means we should not entertain the illusion that our will for peace can develop beyond what is technically possible. We should not ignore the influence of technology on our morals, and equally we should avoid the temptation of expecting AI itself to provide solutions to our moral problems.

 


[1] Kahn, H. (2010): On Escalation: Metaphors and Scenarios. Abingdon, New York; cf. Patchen, M. (1987): The escalation of international conflicts. In: Sociological Focus 20, pp. 95-110.

[2] Cf. Dickow, M. (2015): Robotik – Ein Game-Changer für Militär und Sicherheitspolitik? Berlin. https://www.swp-berlin.org/publications/products/studien/2015_S14_dkw.pdf (All internet references accessed April 25, 2024).

[3] Arkin, R. (2015): The Case for Ethical Autonomy in Unmanned Systems. In: Allenby, B. R. (ed.): The Applied Ethics of Emerging Military and Security Technologies. Farnham/UK, pp. 285-294.

[4] Reuter, H.-R. (2014): Wen schützen Kampfdrohnen? In: Zeitschrift für Evangelische Ethik 58, pp. 163-167.

[5] Rivera, J. P. et al. (2024): Escalation Risks from Language Models in Military and Diplomatic Decision-Making. arXiv:2401.03408.

[6] Meta Fundamental AI Research Diplomacy Team et al. (2022): Human-level play in the game of diplomacy by combining language models with strategic reasoning. In: Science 378 (6624), pp. 1067-1074.

[7] Mukobi, G. et al. (2023): Assessing Risks of Using Autonomous Language Models in Military and Diplomatic Planning, Multi-Agent Security Workshop@NeurIPS'23. openreview.net/forum.

[8] Andersen, R. (2023): Never Give Artificial Intelligence The Nuclear Codes. https://www.theatlantic.com/magazine/archive/2023/06/ai-warfare-nuclear-weapons-strike/673780/?utm_source=copy-link&utm_medium=social&utm_campaign=share.

[9] Högl, E. and Jüngst, S. (2022): Innere Führung und Künstliche Intelligenz zusammen denken und gestalten. Konrad-Adenauer-Stiftung, Berlin. www.kas.de/documents/252038/16166715/Innere+F%C3%BChrung+und+K%C3%BCnstliche+Intelligenz+zusammen+denken+und+gestalten.pdf/ba88832f-5002-82b1-12d8-a588828a7b03.

[10] Cf. the new edition of von Baudissin’s writings with introduction and edited by Claus von Rosen: Baudissin, Wolf Graf von (2014): Grundwert. Frieden in Politik – Strategie – Führung von Streitkräften. Berlin; Dörfler-Dierken, A. (2019): „Reformation“ im Militär. Baudissin, die Innere Führung und die westdeutsche Sicherheitspolitik. In: by the same author (ed.): Reformation und Militär. Wege und Irrwege in fünf Jahrhunderten. Göttingen, pp. 267-280.

[11] Cf. Hofheinz, M. and Lienemann. W. (2019): Frieden und Pazifismus. In: Gießmann, H. and  Rinke, B. (eds.): Handbuch Frieden. Wiesbaden, pp. 571-580.

[12] Hofheinz, M. (2017): Radikaler Pazifismus. In: Werkner, I.-J. and Ebeling, K. (eds.): Handbuch Friedensethik. Wiesbaden, pp. 413-431.

[13] Schwarke, C. (2017): Ungleichheit und Freiheit. Ethische Fragen der Digitalisierung. In: Zeitschrift für Evangelische Ethik 61, pp. 210-221, p. 219.

Summary

Axel Siegemund

Axel Siegemund is an engineer and theologian whose research focuses on environmental ethics, the ethics of technology, ecumenism, digitalization and development cooperation. His book“Grenzziehungen in Industrie- und Biotechnik. Transzendenz und Sinnbehauptungen technologischer Modernisierung in Asien und Europa” (Baden-Baden: Nomos 2022) won the 2023 Hans-Lilje-Stiftung prize.


Download PDF here

All articles in this issue

Advocating for A Legally Binding Instrument on Autonomous Weapons Systems: Which Way Ahead
Catherine Connolly
Autonomous Weapons Systems – Current International Discussions
Andreas Bilgeri
Digital Escalation Potential: How Does AI Operate at the Limits of Reason?
Axel Siegemund
AI for the Armed Forces Does not Need a Special Morality! A brief argument concerning the regulation of autonomous weapons systems
Erny Gillen
Human Dignity and “Autonomous” Robotics: What is the Problem?
Bernhard Koch
Burden of Proof in the Autonomous Weapons Debate – Why Ban Advocates Have Not Met It (Yet)
Maciek Zając
Reliability Standards for (Autonomous) Weapons: The Enduring Relevance of Humans
Nathan Wood
Is There a Right to Use Military Force Below the Threshold of War? Emerging Technologies and the Debate Surrounding jus ad vim
Bernhard Koch
“Meaningful Human Control” and Complex Human-Machine Assemblages – On the Limits of Ethical AI Principles in the Context of Autonomous Weapons Systems
Jens Hälterlein, Jutta Weber
Humanity in War? The Importance of Meaningful Human Control for the Regulation of Autonomous Weapons Systems
Susanne Beck, Schirin Barlag
Meaningful Human Control of Autonomous Systems
Daniel Giffhorn, Reinhard Gerndt
AI in the Age of Distrust
Henning Lahmann

Specials

Ansgar Rieks, Wolfgang Koch ChatGPT