Editorial
Issue 2024/01
AI and Autonomy in Weapons: War and Conflict out of Control?
Dear readers, the first issue of “Ethics and Armed Forces” was published ten years ago. To mark the anniversary, we are revisiting the topic of that first issue – “Anonymous Killing by new Technologies? The Soldier between Conscience and Machine”. Besides the happy occasion, there is another reason for this choice: even if not every argument in the debate about so-called autonomous weapon systems (AWS) is new, there have been significant developments, particularly in recent years.
On the one hand, this applies to technological progress, not only in “civilian” applications such as ChatGPT, but also on the unfortunately numerous battlefields of this world. In Ukraine, (semi-)autonomous drones are now reportedly being produced and deployed. Reports about “Lavender”, an Israeli AI-supported target selection system, appear to reinforce fears about the rather careless use of algorithmic recommendations. This makes it clear that the debate is not only about weapon systems that act autonomously, but also about the fundamental “cooperation” between humans and algorithmic applications and the ethical, legal, armaments policy and security policy questions it raises. The Bundeswehr, too, is working intensively on potential uses of AI, and not just as part of the Future Combat Air System (FCAS) project.
On the other hand, there is new momentum in the efforts to reach a binding international agreement on autonomous weapons. To kick off this issue, Catherine Connolly from Stop Killer Robots and Austrian disarmament expert Andreas Bilgeri therefore discuss the latest international conferences on AWS and the first resolution on the subject at the UN General Assembly, which could break the deadlock that has held up regulatory efforts to date.
The following articles address further aspects and focal points of the AWS debate. For example, Erny Gillen from Luxembourg, where an international conference on AWS was held last year, outlines a way to regulate military AI in accordance with UNESCO principles. Bernhard Koch explores the fundamental question of whether “algorithmic killing” violates human dignity. Polish philosopher Maciek Zając, by contrast, calls for an ethics that focuses more strongly on licensing and application criteria for specific systems.
A central concept in the debate is “meaningful human control”: the idea that artificially intelligent systems must remain controllable in a way that goes beyond merely superficial or fictitious oversight. We are therefore pleased to present contributions from researchers in three sub-projects of the interdisciplinary research network of the same name, including Jutta Weber and Jens Hälterlein, who use a specific FCAS application scenario to examine the limits of autonomy and effective control in complex human-machine systems. The article by Henning Lahmann, which concludes the main section, asks whether common assumptions and strategies in the field of “cognitive warfare” are tenable.
For this issue's special section, it seemed fitting to ask an AI what the future of warfare looks like and whether humans will still matter. Our human interview partners, retired General Ansgar Rieks and Wolfgang Koch from the Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE), were unequivocal: the human soldier will always play a decisive role, whether still in the cockpit of a fighter jet or not.
This comprehensive anniversary issue does not claim to be exhaustive, but it aims to shed light on the many challenges associated with AI and autonomy. We would like to express our sincere thanks to everyone who has contributed to this and previous issues – from the founding team of “Ethics and Armed Forces” to long-standing supporters, authors, editors and advisors, translators and technical support staff.