
“If you do without superior technologies, you forgo the ability to act in an ethically responsible way”

Together with European partner countries, Germany is developing the Future Combat Air System (FCAS), a networked air combat system consisting of various components. How can we ensure human control and responsibility with such weapons of the future? Or will machines replace humans in the military sooner or later? “Ethics and Armed Forces” spoke to two experts about autonomy and automation, ethics and consequences for leadership.

General Rieks, Professor Koch, to begin with, would you be so kind as to provide our readers a few key facts about FCAS?

General (ret.) Dr. Ansgar Rieks: FCAS is a very complex system consisting of old and new aircraft and other components. At its core, it consists of what’s called a Next-Generation Weapon System: a combat aircraft (command fighter) coupled with drones (remote carriers). Additional aircraft and other systems are connected to it. This is referred to as a “system of systems”, which is controlled by an Air Combat Cloud and the associated software along with the pilot of the command fighter. Through special funding (Sondervermögen), the Bundeswehr is currently modernizing to meet the latest technological standards. I believe FCAS represents a necessary leap forward, positioning us for the future.

Prof. Dr. Wolfgang Koch: Such a complex system of systems must be able to do what it is supposed to do. In this respect, it requires support for the natural human ability to perceive, act and decide in order to actually control it.

What is the time frame for the project?

A. R.: We are aiming for 2040. We must have taken concrete development steps by then. A fully comprehensive prototype is scheduled for completion by 2030. The decision to develop FCAS was initially a joint effort between France and Germany, with Spain later joining the project. Perhaps more countries will join in the future.

Why do you think Europe needs such a weapons system? Can you provide examples from current developments or ongoing conflicts?

A. R.: We don’t need a technology like artificial intelligence just because it exists. This is about the sheer necessity of fulfilling missions in an increasingly complex environment, especially in the conduct of military operations. Our potential enemies, and we now know who they might be, use such technologies and gain clear advantages as a result. First, they are faster: decisions are made in the shortest possible time. Second, they have an excellent situational overview. Third, sensor and effector technologies are constantly evolving. Outdated aircraft cannot penetrate a modern air defense system.

To succeed in modern battles or operations, systemic thinking is essential; it’s not just about aircraft against aircraft, but the deployment of an entire integrated system. The second approach, not necessarily tied to FCAS, involves multi-domain operations encompassing land, air, sea, cyber and space; some also include the human domain. This is even more complex because it requires integrating and managing various systems of systems within a network.

W. K.: From what we know, Chinese and Russian military research is extremely strong. However, I am surprised about the Ukraine war. It seems to me that it shows everything from 1914 to the present. I believed for a long time that the use of modern algorithms would clear the fog of war in some way and goals could be achieved with surgical precision. But the transparent battlefield in Ukraine is lethal. In this respect, the question of the responsible design of modern technologies is acute. And if you want to act responsibly and ethically, you need superior technologies. If you do without them, you forgo the ability to act in an ethically responsible way.

A. R.: My main focus is also on ethics through technology. The fact that we now have more precise situational overviews and can be extremely accurate allows us to adhere to ethical criteria more effectively than in the past. Ethicists don’t always like to hear this, as they often reject advancements in weapons technology. But I believe that this is changing, slowly but surely.

I think two things are important in the case of Ukraine. First, from the outset Russia completely underestimated Ukraine’s will as well as its capability to defend itself. Second, we can see that systems of systems and multi-domain operations do not yet play a major role. However, we can also see that Russian technology is generally more capable than what its operators are able to get out of it. And space also plays a big role...

You mean Starlink, the satellite-based internet service?

A. R.: …Exactly, and the fact that drones are obviously a game changer. Ukraine is planning to build one million drones itself. How do you deal with that? What about drone swarms? There are many other areas that need to be monitored.

Let’s stay with FCAS: The International Committee of the Red Cross defines autonomous weapons systems as weapons that can perform “critical functions” without human intervention, namely selecting and engaging targets. How would you categorize FCAS and its support drones?

W. K.: Autonomy is an unfortunate term, just like artificial intelligence. Would you call a coffee machine autonomous because it makes filter coffee without human intervention? Such terms arouse public imagination and fear. I prefer to talk about automation. Full automation is perhaps autonomy, but there is also partial automation. Under no circumstances do machines build themselves like in a Hollywood movie so that we can't control them. Humans outsource their natural abilities to machines, including perception, reason and decision-making. The machine can’t do anything that we can’t do. But it can do it very quickly, and it can scale it up.  

But setting aside terms and definitions, people fear these technologies because they wonder: Do they actually do what we want them to?

W. K.: We have to build them so that they do.

And that is possible?

W. K.: Isn't it the case, Mr. General, that a military leader doesn't want to use a drone or a similar device if he doesn't know exactly what it does?

A. R.: Of course, our primary interest is always in achieving the desired effect. Regarding the question of autonomy or automation, I have a straightforward mathematical definition: 100 percent automation equals autonomy. 98 percent is just highly automated, or whatever you choose to call it.

So it’s a continuum?

A. R.: Yes, and if it’s not 100 percent, a human decision is always involved. There will never be an autonomous system without human influence in this sense. Deploying something into the dynamic battlefield that I can’t control, and thus removing all flexibility, even for just a few seconds or minutes, doesn’t make operational sense – even if you were not interested in any ethical considerations. Ethical and operations management considerations go hand in hand here.

W. K.: Another example: a frigate on the high seas that is attacked by sea skimmers (anti-ship missiles flying very close to the water surface, editor's note). Because the threat is only seen very late, the threat evaluation and weapon assignment systems run fully automatically. But even these are switched on deliberately, which is the responsibility of the commander; he won't do it when he arrives during Kieler Woche (a maritime event in the city of Kiel in the north of Germany, editor’s note). In addition, such a system has to be parameterized. This means that the decisions are made further back, at the level of mission planning, program development and research. The responsibility never lies with the machine, but with the person who uses, builds and designs it.
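The division of responsibility that Professor Koch describes can be sketched in a few lines of code. This is a minimal illustration with hypothetical names, fields and thresholds, not the logic of any real combat system: the fully automatic weapon assignment only runs because the commander has deliberately enabled it, and only within parameters fixed earlier at mission planning.

```python
from dataclasses import dataclass

@dataclass
class MissionParameters:
    """Fixed at mission planning, long before any engagement (illustrative values only)."""
    automation_enabled: bool       # switched on deliberately by the commander
    max_engagement_range_km: float
    min_threat_confidence: float   # below this, no automatic weapon assignment

@dataclass
class Track:
    """A sensor track of a possible sea skimmer (hypothetical fields)."""
    track_id: str
    range_km: float
    threat_confidence: float       # output of the threat evaluation stage

def assign_weapon(track: Track, params: MissionParameters) -> str:
    """Fully automatic once enabled, yet every branch reflects a prior human decision."""
    if not params.automation_enabled:
        return "hold: automation not enabled by the commander"
    if track.threat_confidence < params.min_threat_confidence:
        return "hold: below the confidence threshold set at mission planning"
    if track.range_km > params.max_engagement_range_km:
        return "hold: outside the engagement envelope set at mission planning"
    return f"engage {track.track_id}: all human-set parameters satisfied"

# The same track is handled differently depending on the parameters humans have set.
harbor_visit = MissionParameters(automation_enabled=False, max_engagement_range_km=12.0, min_threat_confidence=0.9)
on_mission = MissionParameters(automation_enabled=True, max_engagement_range_km=12.0, min_threat_confidence=0.9)
incoming = Track("T-042", range_km=6.0, threat_confidence=0.95)
print(assign_weapon(incoming, harbor_visit))  # hold: automation not enabled by the commander
print(assign_weapon(incoming, on_mission))    # engage T-042: all human-set parameters satisfied
```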

What does the key term “meaningful human control” mean to you? Do you consider it necessary?

W. K.: For me, this is one of those terms from political science that can be interpreted just as flexibly as any legal term.

A. R.: While we do have rather a lot of terms, it does steer us away from banning technology. A group of German scientists wanted to exclude AI from weapons systems. And there is always an opposing side that advocates for using technology without limits. Meaningful human control integrates both perspectives responsibly.

Also from the perspective that it doesn’t make military sense to relinquish control completely?

A. R.: It would be disastrous for a soldier to surrender their responsibility to anyone! But that doesn’t mean tasks aren’t carried out by someone else or automatically, even to a large extent. Take the train you took to come to this interview today. Do you think train drivers still make calculations? No, they rely on automation within the train and throughout the entire rail network.

But we have defined the requirements for such systems to ensure their safety and the safety of passengers.

W. K.: That’s a licensing problem, of course.

A. R.: And for things to be ethically/morally impeccable, an ethicist must address questions of technology and military operations, and say something like: I would take this or that as a given, add a criterion here or there... And if those criteria are met, then that would constitute meaningful human control. But the operator must be given rules for dealing with the AI that supports him so that he can be confident in his actions.

Given the above, how would we handle the following difficult cases: The operator or military leader follows AI-generated suggestions, which leads to negative results. Or he chooses to go against the AI recommendations, which also results in a negative outcome. Should the operator or military leader always have the freedom to ignore the system’s recommendation? Or should the system be designed to allow it to override his decision?

A. R.: Allow me to make a preliminary remark. I often hear the following argument: Humans cannot keep up with the speed of an AI system or its comprehensive data evaluation. But since we want the human-in-the-loop, we must not use AI or algorithms. I think that’s wrong. It’s precisely because we do this that people can stay in the loop without shirking responsibility. If we were to ban AI, our opponents would use it anyway, and we wouldn’t be successful. I would have fulfilled my responsibility one hundred percent, but I would lose every conflict. And we need to take another factor into account – data!

We have already successfully tested AI in operational planning within the German air force. It was trained with exercises and simulations, and the operators confirmed that they would always have made the same choice as the AI’s suggestion. That got me thinking, because an advisory AI must have a certain amount of freedom; it must be broadly based, both regarding training data and evaluation. And it will sometimes make unexpected suggestions because it has continued to learn.

So that it doesn’t just confirm the human’s decision?

A. R.: That doesn’t help at all. It isn’t supposed to just parrot humans; it’s supposed to provide proper advice. Now let’s turn to the operator in front of the screen: In the first case, they will be asked: Are you not experienced enough to know that you can’t trust such an AI system? And if they decide to trust their experience instead, we’re saying: You have this great system, but you’re not using it! To get out of this lose-lose situation, we need to establish a few things. First, the AI should indicate how safe it considers the situation to be. Second, we must not assume that AI will no longer make any mistakes. We must also transfer our culture of error management to the systems. And what I personally always consider important, regardless of the fact that the responsibility lies with humans, is that you allow the machine to operate to a certain extent in its optimal capacity, but as a human being, you can always override its decision.
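The two requirements General Rieks names, that the AI indicates how sure it is and that the human can always override it, can be illustrated with a minimal sketch. The names, fields and the confidence threshold below are hypothetical, not part of any fielded system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """Advisory output only: the AI proposes, the operator disposes (hypothetical structure)."""
    course_of_action: str
    confidence: float   # the system states how sure it considers its own suggestion to be
    rationale: str      # explanation offered to the operator

def decide(rec: Recommendation, operator_override: Optional[str] = None) -> str:
    """The operator may accept the suggestion or override it; the override always wins."""
    if operator_override is not None:
        return f"operator override: {operator_override} (AI had suggested: {rec.course_of_action})"
    if rec.confidence < 0.7:  # illustrative threshold for flagging uncertain advice
        return f"low-confidence suggestion ({rec.confidence:.0%}); explicit operator decision required"
    return f"suggestion accepted by operator: {rec.course_of_action}"

suggestion = Recommendation(
    course_of_action="reroute around sector B",
    confidence=0.85,
    rationale="lower threat density in the adjacent corridor",
)
print(decide(suggestion))                            # accepted, confidence is stated
print(decide(suggestion, operator_override="hold"))  # the human decision prevails
```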

An opt-out solution of sorts… Elsewhere, you talked about the emergency landing of a passenger plane on the Hudson River in 2009.

A. R.: That’s a good example. The pilot decided to land on water in this emergency, even though the machine would almost certainly have rejected this. There are other examples, such as the attack on the Christmas market at Breitscheidplatz in Berlin. The driver wanted to continue driving, but the truck stopped after the collision, which you certainly consider to be ethically good and right. You have to consider all these cases.

W. K.: Ethicists love to provoke dilemmas, like the famous thought experiment with the approaching trolley. Someone stands at the switch and either kills 50 track workers or five. An engineer will say: A system that forces me into such a dilemma is badly designed. There must be an emergency brake! There will always be dilemmas; Ethically Aligned Engineering tries to uncover them and mitigate them by appropriate technical design.

But as you say, not all eventualities can be foreseen.

W. K.: That's why you have to be able to trust an AI. It will always have something of a black box, but with explainable AI you can at least turn it into a "gray box". Perhaps the relationship between a hunter and his hunting dog is a metaphor that helps to make things clear. Together they are much more efficient than alone. The hunter does not know exactly what is going on in the animal’s soul. But if you know and love your dog, you know exactly how to use it. Of course, the dog needs to be trained. Even for a certified AI, you have to develop a feeling for it. In this respect, training plays a very important role. But when artificial intelligence meets natural stupidity, things get bad.

A. R.: The aircraft accident in Überlingen in 2002 is an interesting case. That was when a Russian passenger plane collided with a cargo aircraft coming from Italy. The anti-collision systems of the two aircraft would have prevented the crash if the human controller, who was under enormous pressure, had not intervened. This resulted in us now relying on the technology and, if in doubt, placing people in second place.

There is also a legal aspect to the question of responsibility. Who should be held liable for unforeseen damage in FCAS – the pilot in the cockpit?

A. R.: Let’s take a simpler example. A civil aircraft has complex technology – there are always unpredictable phenomena – and it is integrated into an air traffic system. During my time in service, I knew of incidents where it was extremely difficult to pin responsibility on a single point. A technical defect might coincide with an operational failure, sometimes leading to an accident. It’s a tragic piece of bad luck, you could say, but it doesn’t mean we can simply dismiss the question of responsibility. Do you know how this was solved in aviation?

How?

A. R.: You create a high degree of reliability that is technologically tried and tested; in the case of aircraft, this means an extremely low likelihood of failure of parts critical to flight safety – and the rest is insured. If the machine crashes, everyone is covered by the insurance. Without such a system in place, you wouldn’t be able to fly, given all these responsibilities. A professor from Canada, Yoshua Bengio, recently suggested that AI should be approved in a similar manner. I think that would be exciting. Things become difficult with systems of systems, meaning human-machine interfaces in a cascading composition. But we’re not there yet.

W. K.: There are two laws that have a certain parallelism. One is the Aviation Security Act, which was passed by the Bundestag in 2005 and overturned by the Federal Constitutional Court in 2006. In the event of a terrorist attack with an aircraft, the legislator did not want to leave the responsibility for shooting down the aircraft with the pilot, but rather to help him by providing a set of rules. If I understand this correctly, the Federal Constitutional Court is expressing a ban on algorithmization.

A. R.: I sometimes wonder whether a clearly measurable algorithm, developed with input from many experts, might be better than leaving the responsibility to a sometimes volatile single person.

W. K.: Then came the Covid crisis and the associated triage due to a shortage of intensive care beds. The triage law is also an algorithm, but the Constitutional Court did not challenge it. For me, this means that our society is not at peace with itself.

But can AI reliably represent ethics and international legal norms, such as the principle of discrimination?

W. K.: There is always something that must be complied with: the law; not just international law, but also the Rules of Engagement. I think you can already determine algorithmically what is compliant or non-compliant. Then, on the one hand, there is the mission that must be fulfilled and, on the other hand, there is the virtuous soldier. For him, you can generate situation pictures, show decision options and probable consequences. It is much more difficult to support him in developing a feeling for the situation and remaining morally intact.

A. R.: I would like to pass this question back to you. You can ask the ethicist: Are you specific enough that I can learn from you what’s right and what’s wrong? If the answer is that it cannot be put in such clear terms, then a technician can’t program anything either. I believe that ethics needs to be a little more pragmatic. To put it bluntly, a technician can incorporate ethical considerations into an algorithm as long as he is given clear criteria.

The only question is: Does ethics work in this way?

A. R.: In any case, what I wouldn’t go along with is the claim that ethics is not there to assist with decision making. Of course, it’s not always spot on, but there has to be some kind of target circle, otherwise we don’t need it. The same is true for some Church statements. We still think that we have decades to consider a technological development. But we have to integrate ethical considerations into FCAS or drones now – what’s allowed and what’s not from a Christian perspective? We need an answer to this before too long.

The development of FCAS is being supported by a working group on technical responsibility. What are their tasks?

W. K.: This working group was set up to think about the system-related consequences of an ethical consideration. For the first time, the development of such a system is being accompanied from the very beginning. How do we design it so that we control it and not the system us? After all, we want to make good and effective use of it. We also need to consider requirements such as approvability from the outset. And we need to involve society in this discussion.

How do you proceed in the working group?

W. K.: We have considered the following: A mission must be accomplished in a dangerous environment. There is always someone who perceives, decides and acts. In the Observe and Orient phases, a picture of the situation is created, then come the Decide and Act phases, and finally the Assessment; then this so-called OODAA loop starts all over again. The problem, however, is that it is run through extremely quickly, so that the person up there who has to perceive, decide and act needs machine support. We have examined the OODAA loops for ethically critical points. Observe and Orient are perhaps less critical, but we still need to find out whether the situational overview the AI shows is actually true. At the moment, we are investigating the Decide and Act phases by provoking dilemma situations and looking for ways to defuse them technologically. To do this, the air force has to be involved and provide scenarios.
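Stripped of all operational detail, the loop Professor Koch describes can be sketched as follows; the function names and toy stand-ins are hypothetical, and only the Observe, Orient, Decide, Act, Assess structure is taken from his description.

```python
def oodaa_loop(observe, orient, decide, act, assess, cycles=1):
    """Run the Observe-Orient-Decide-Act-Assess loop; the callables are placeholders."""
    feedback = None
    for _ in range(cycles):
        raw_data = observe(feedback)   # sensors, reports
        picture = orient(raw_data)     # fuse the data into a situation picture
        decision = decide(picture)     # the human decides, with machine support
        effect = act(decision)         # execute the chosen course of action
        feedback = assess(effect)      # the assessment feeds the next cycle
    return feedback

# Toy stand-in functions to show one pass through the loop.
result = oodaa_loop(
    observe=lambda fb: {"tracks": 3, "previous_assessment": fb},
    orient=lambda data: f"{data['tracks']} tracks fused into a situation picture",
    decide=lambda picture: f"operator approves option A, given: {picture}",
    act=lambda decision: f"executed: {decision}",
    assess=lambda effect: f"assessed: {effect}",
)
print(result)
```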

And the human is still the decisive factor in every case?

A. R.: Even within the system of systems with drones and aircraft, FCAS still essentially consists of a manned aircraft. In today’s electronic warfare environment, vulnerable to cyber threats, you cannot always ensure the data connection to a system. In situations that a machine may not be able to recognize or process, the pilot’s experience remains crucial. He is the human-in-the-loop, he makes the ultimate decision. As I said before, I would program the machine, including its ethical programming, so that it operates largely autonomously from the outset, based on high-quality situation reports and recommendations. But the person in the cockpit can always say: I still want this, I don’t want that.

You have developed the so-called Ethical AI Demonstrator in the working group. What is it, and how do you use it?

W. K.: The AI demonstrator can give military operators an idea of what AI can do in a specific military scenario, what it can just about do and what it cannot do. And that AI can also be disrupted. Explainable AI is an exaggerated term, but the soldier should learn how to deal with these systems. Another aspect is that we show examples of the support AI can provide, for example by quickly calculating the consequences of possible courses of action. In dealing with this, the operator gains confidence and we discover where we need to pay particular attention. In this respect, the Ethical AI Demonstrator is an Ethical AI Requirement Definator in the triangle of air force, research and industry.

A. R.: The aim is not to restrict functionality but to significantly reduce abuse or morally reprehensible behavior. If this succeeds, I hope that ethicists, technicians and soldiers will all be happy, each from their own viewpoint.

France and Germany recently signed the preliminary agreement for the Main Ground Combat System (MGCS). Will automation affect all branches of the armed forces sooner or later?

W. K.: Yes, that's FCAS on the ground, so to speak. And there are similar developments in the navy.

How is increasing mechanization and automation changing the military profession and the image of the soldier? Will the Bundeswehr only need computer scientists in the future?

A. R.: These are legitimate questions that call for a detailed answer. But it is a development that has been progressing for generations. Ask a soldier who was in the armed forces in 1960 whether warfare was not dehumanized by the first computers.

So there is no “third revolution in warfare”?

A. R.: I believe that we are constantly evolving. Operations command in a national and alliance defense scenario involves different tasks than deployment in Mali. In the future, we will also be supported by AI, enabling us to prevail against a well-positioned opponent. But I definitely believe that we need more people from universities and research institutes who have an understanding of technology and its potential, and who can interface with the armed forces. But we also still need people who understand how to conduct operations; you can’t equate the two. Do you know how the satnav system in your car works?

Probably not…

A. R.: But you know how to use it.

And as far as Innere Führung is concerned: Will ever-greater automation not significantly change the concept of the conscience-led individual?

A. R.: I’ve often heard people say that the whole world around us is changing, but Innere Führung (officially translated as “leadership development and civic education”, editor’s note) remains the same. This is of course true at the level of integrating our armed forces into democracy. What remains constant are law and order, mission command and conscience, and the Bundeswehr will continue to be a parliamentary army. Leadership is changing in other respects, and very much so: AI now allows the brigade commander to see how the squad is maneuvering around the tree – it is moving to the right, but he knows that it would be better to turn to the left. Should he not intervene due to mission command? Or, if the AI runs through these decision cycles in eight minutes instead of eight hours and is still linked to domains such as land and sea, then the network of responsibilities must be completely restructured, both horizontally and vertically. It’s an extremely complex task that will also change leadership. If FCAS, a completely new generation of weapon systems, is introduced as mentioned at the beginning, we need to have answered all these questions, both with regard to ethics and to operations.

W. K.: There is another aspect that worries me. You know the famous Böckenförde dictum: a free and democratic society is based on conditions that it cannot guarantee itself. The fathers of Innere Führung, such as General Baudissin, were personalities who breathed the Christian spirit through and through. Are Böckenfördeʼs prerequisites still in place if perhaps 50 percent of soldiers are non-denominational? What will our ethics be? This is also a task for ethical education in the armed forces.

Finally, a specific case: In the US, AI recently completed a dogfight exercise. They do emphasize that they don’t want to eliminate pilots. But will AI not replace humans at some point?

A. R.: I’m not saying that I always see people at the front line, for example, in the cockpit. I remember the fact that in 1139, Pope Innocent II banned the crossbow because it was not considered chivalrous to shoot from a distance. Nevertheless, a battlefield with ever greater distances has developed since then. I was still trained on the Phantom aircraft. When we installed the first software for the further development of attack procedures, it was a revolution! Today we think it’s completely normal. So when you talk about dehumanization, with only machines fighting machines, this debate has been ongoing since 1139.

If you have the technology, you will always operate at a distance because it puts people at less risk, especially if it is more precise and more ethical at the same time. That’s why it’s completely normal for the Americans to conduct a dogfight exercise using AI. We will continue to have operational and tactical responsibilities, albeit within a new structure. We will always need soldiers.

W. K.: Those dogfights are similar to games, and AI can do this very well. Some time ago, an AI won with an unexpected move against an experienced grandmaster in Go. Sometimes it is said that we should build a box with rules around the AI in case it does something nonsensical. In this case, you would have gambled away the victory! Especially with drones and swarms of drones, I think you have to let the AI off the leash.

A. R.: However, using a non-deterministic system goes against all previous certification principles in aviation. The only way out of this dilemma is to let an AI system do its work and build such a limiting box around it. Wherever the AI does something it shouldn’t, it’s stopped. But in order to benefit from its potential, we should be able to make use of its excellence! It would be disastrous if everyone else did this except us.

General Rieks, Professor Koch, thank you very much for the detailed interview.

Questions by Rüdiger Frank. Contributors: Kristina Tonn and Heinrich Dierkes.

 

More information on FCAS and the Technical Responsibility Working Group:
https://www.fcas-forum.eu/

 

Ansgar Rieks

Lieutenant General (ret.) Dr. Ansgar Rieks joined the Bundeswehr in 1978. From 2014 to 2017 he was the first Chief of the German Military Aviation Authority, and subsequently, until his retirement in 2023, he served as Deputy Chief of the German Air Force. Prior to this he held various positions in Germany and abroad.

Wolfgang Koch

Professor Wolfgang Koch is professor of Computer Science at the University of Bonn, where he teaches and does research. As chief scientist at the Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE, he is also driving forward research and technology projects for the digital transformation of the armed forces.

