
Human Dignity and “Autonomous” Robotics: What is the Problem?

Human dignity: Material or formal?

In applied ethics, including political ethics, rigid dualism can be dangerous because it overlooks the nuances of reality or disregards gradual distinctions. However, dualism can serve as a useful heuristic. A philosophical distinction that could substantially advance many debates, if heeded, is the one between formality and materiality. The complex concept of human dignity can also be defined more clearly through this distinction: human dignity can be understood in a formal sense, asserting that it applies to all individuals without defining its substantive content; it can also be understood materially, that is, defined in terms of content. Material definitions often rest on numerous assumptions. When we assert that human dignity is rooted in humans being created in the image of God, we presuppose both the existence of God and a belief in divine creation. It is unlikely that everyone will agree with both premises, even those who acknowledge the formal existence of human dignity. That human beings possess a unique dignity is evidently a widely accepted view. Some thinkers disagree, however, arguing that a notion of dignity exclusive to humans represents an anthropocentric view that ought to be overcome in ethical discourse. Although humans differ from animals in many respects, the similarities between them are even greater. According to this view, these differences do not justify the radical thesis that humans possess a unique dignity.

For some, the thesis of a unique human dignity is challenged by the notable characteristics of animals, such as their cognitive abilities and capacity to feel pain. Moreover, for some philosophers, the significant capabilities of modern artificial intelligence systems have made it even more difficult to argue for a uniquely human dignity. The idea that machines can be considered authors of their “actions” and, consequently, held morally accountable for them is currently under debate. Such accountability would mean that machines possess something akin to what some consider a defining trait of human dignity.

Empiricism and “morality”

For the sake of further discussion, let’s assume we agree on the formal concept of human dignity: humans possess human dignity, which has both descriptive and normative significance, allowing it to serve as a foundation for normative conclusions. This raises the question of what human dignity consists in and whether the normative conclusions drawn from it differ when its substantive definition changes. A second dualism now becomes important, one that is fundamental to ethical discussion as a whole: the dualism between the empirical characteristics of a morally qualified “object,” such as an action or a person, and its “moral” (“sittlich” in Kant’s terms) quality, which is not empirically discernible. The term “moral” itself is not straightforward; perhaps we can simply state, without much elaboration, that “morally good” refers to what is demanded when something is deemed “good” in ethical deliberation. With this in mind, ethics determines what is “morally good,” which is not necessarily identical to what is described as “morally good” in practical morality. The morality practiced and advocated in everyday life can be mistaken about what is “morally right”. It is not always possible to avoid this error entirely; what matters above all is the claim we are making. Often, we lack definite clarity about the moral quality of an action, and even more so of a person, but we understand what we are calling for when we speak of moral quality. This difference can easily be explained using actions: the same physical event can be judged very differently when considered as an action and thus morally. Adam points a gun at Brenda. Is he doing this to rob her, or to stop her from attacking Christopher? Empirically, there may be no difference, but different intentions transform the same physical act into different actions (according to a philosophical theory of action), which are then judged differently. The moral characteristics of an action cannot, however, be identified directly and empirically; instead, we use certain contextual clues to determine the intention. It is not inconceivable that we make a grave error: we assume it was an attack when, judged by the actor’s intention, it was in fact a rescue.

Immanuel Kant distinguishes between the homo noumenon and the homo phaenomenon.[1] A person can appear to us phenomenally, that is, through experience (empirically), but this appearance is not definitive proof of what this person truly is in the “trans-empirical” moral domain, that is, beyond what can be experienced. Someone may appear to be a good person, yet act with intentions that, if known, would reveal they are not truly good. The same applies in reverse: someone might be seen as morally corrupt and face ostracism and shame, while their good intentions go unrecognized, leading to an unjust judgment.

Is human dignity an attribute of the homo noumenon or the homo phaenomenon? One of the primary difficulties with the concept of dignity in ethics seems to lie in the fact that it is sometimes applied to the “moral” person and sometimes to the empirical person, that is, the individual we perceive through our senses. For classical thinkers, particularly those from the Socratic tradition, a person could only deprive themselves of their dignity through their own actions. What others do to them, what they merely endure, does not affect their true self. Only that which originates from a person, their intentions and decisions, shapes their true nature, the way in which (quale) they are, their quality, and thereby their dignity. However, in other respects, this appears rather unsatisfactory: when prisoners in Abu Ghraib are first sexually aroused and then exposed in photographs, we understand this as a profound humiliation and a violation of their dignity, even though these acts are externally inflicted. Of course, these violations of dignity are tied to social conventions: shaving someone’s hair off is not typically degrading, but it was for women in France shortly after World War II.[2]

However, such degradations are, in most cases, attempts to reduce people to their basic natural existence in culturally significant spheres. Exposing a person’s sexual or digestive functions is an example of this. When people are in a death struggle, they often revert to their basic natural functions, which means that a dying person can appear to an observer to be in a state of degradation. In many wars throughout history, people were lethally wounded by imprecise weaponry in ways that did not kill them immediately; instead, they had to endure a prolonged and cruel death struggle without any effective medical care, which exposed the natural misery of human beings. In view of this, might we not ask whether the use of more modern and precise weapons that kill immediately would actually represent an improvement with regard to dignity violations in armed conflict? If autonomous weapons systems could be used to make killings more “humane” in this sense, they might even be preferable from the perspective of human dignity violations. Based on our considerations thus far, it is unclear whether the use of lethal autonomous weapons systems (LAWS) constitutes a violation of dignity – at least not one that carries more weight than the use of other weapons systems.

Manipulative action on humans

Another approach may thus be more promising in addressing whether autonomous weapons systems violate human dignity. It starts from a problem inherent to all digital technologies in which human-machine interaction is symmetrized, so to speak, or in which the machine may even gain the upper hand: a lack of recognition of human existence, the kind of dignified existence we have been discussing. This can easily be illustrated with an example. We can ask ourselves whether we would let a robot remove our appendix if we had evidence that, statistically, the chances of success with robotic operations are significantly better than with operations performed by humans. Presumably, we would not only not rule out the robot surgeon, but would actually consider it preferable. It would therefore be strange if someone were to rule out any use of a robot that can act “autonomously” when it comes to interventions in human beings and their bodies. “Autonomous robotics” is therefore not inherently bad, and certainly not evil, in the sense that it should never be used. Now, consider a therapeutic robot like the Paro robot used to care for dementia patients. It looks and feels like a seal (which constitutes a form of deception) and apparently has a calming effect on dementia patients, boosting their confidence and improving their social and communicative behavior. Is it ethically acceptable to use such a robot? The main ethical problem seems to lie in the fact that a person is being deliberately deceived. The person assumes that the animal is in a good mood and that its trusting behavior is due to the person having genuinely gained its trust. On the other hand, this deception can be a means of bringing about temporary therapeutic improvement. We might ask ourselves: Would I agree to be temporarily deceived if it improved my condition? We would probably agree. What seems ethically unacceptable, however, is placing a person in a situation where they are permanently deceived, as in Robert Nozick’s thought experiment in which people are connected to electrodes that fulfill all their wishes via brain waves, so that they no longer live in actual reality but in a permanent world of illusion.[3] While we could say that such a person feels better with the illusion than without it, it can be argued that these feelings do not ultimately determine whether such a life is truly right. If you allow yourself to be placed in such an environment, you forfeit your own dignity. Anyone who places others in such an environment strips them of their dignity, at least if this artificial and illusory environment is permanent and final.

This might seem like a lengthy prelude to the core argument about autonomous weapons systems, but the main idea is that our (human) dignity is tied to our relationship with being. Humans are unique in their relationship to the world: they relate to their surroundings and to themselves as being, and they are bearers of consciousness. Because I am a conscious being capable of distinguishing between “actual reality” and an “illusory world”,[4] I cannot surrender to an illusory world indefinitely without sacrificing my dignity. This environment includes other people as beings. Recognizing my fellow human being as someone who, like me, has a relationship to being (the very fact that there is something) seems to be the minimum requirement for acknowledging dignity. A person is not a stone or a potato that I can manipulate (or permanently deceive) for my own purposes without violating their dignity. This is why we find it degrading when people are “naturalized”, that is, when they are exposed as mere natural objects and their detachment from nature, despite their inherent connection to it, is simply ignored.

Even in killing, we demand this minimal recognition, provided the argument presented here is valid. The person doing the killing should at least be aware that they are killing a human being.[5] Here a final and definitive step is taken: unlike a therapy, killing is not an intervention that serves the interests of the one being killed. Irrespective of the fact that someone might wish to be killed, the killing cannot be carried out by an “autonomous” system that is incapable of recognizing the existence of a conscious human being. If a person tired of living were actually to subject themselves to a lethal autonomous system, it would simply be a case of suicide, not a killing by this system.[6] But when a system that itself has no consciousness, and thus no awareness that another being endowed with a relation to being is being destroyed, puts a person to death, it does so in the same manner as any other manipulative act carried out on any other object in the world.[7] This does indeed appear to violate human dignity, although admittedly this claim requires recourse to a further intuition.

This is not about making human dignity dependent on the factual recognition of that dignity; the point is rather that someone is using a technology that circumvents precisely this recognition. This in itself can be seen as a violation of dignity. For something to count as the killing of a human, the minimum requirement is that it be understood as the killing of a human being; this is a requirement that an autonomous weapons system without consciousness cannot meet. For the person killed, the killing is a final act, which is why the consciousness of the agent of the killing is so critical.

The inviolability paradox

The argument probably stands on shaky ground, as one can still maintain that whatever happens in the empirical world has no influence on the dignity of the trans-empirical human being (homo noumenon). In a certain sense, this does still appear to be true. But then no human act with an empirical aspect could ever violate dignity, rendering the concept of dignity useless in any discussion of applied ethics. Dignity could never be compromised, which would make the imperative that dignity must not be violated meaningless (unless it pertains to one’s own dignity). Perhaps someone can provide this argument with a more robust foundation, as it currently operates within a kind of hermeneutics of moral intuition. That would be very welcome indeed. With regard to the campaign for a ban on LAWS, however, the argument may not be plausible enough to convince everyone. It seems to me that the campaign would do well to give weight to other arguments, especially those that highlight the enormous security risks of establishing such weapons systems.[8]

One final remark: If you base the violation of dignity on an empirical circumstance (for example, that someone is “reduced to a data point”), you will inevitably encounter the objection that other military systems similarly “reduce individuals to data points”. To maintain consistency, one would then have to assert that these applications of weapons technology are also morally objectionable. Those who have already employed such systems, successfully by their own standards, are unlikely to concede this point.

On the other hand, a practical problem arises when considering that empirical criteria must be clearly defined if dignity violations are to be determined by empirical circumstances, as illustrated in the following example: “A death struggle that lasts for up to ten minutes does not constitute a violation of dignity, but one that lasts for an hour or more does – and the period in between is considered a gray area.” Such a view may not be so far removed from our everyday moral understanding, yet it is of limited use when applied to political actions.

Loss of dignity

As we can see, our growing reliance on technology compels us to frame ethical considerations in terms of technical or mathematical criteria. The “technocratic paradigm,” as lamented by Pope Francis in various texts, including Laudato si’, underscores our tendency to address ethical dilemmas in technical terms.[9] Grounding justifications in moral intuitions, as attempted here, risks being dismissed too swiftly as “esoteric” if the inability to operationalize concepts is viewed solely as a shortcoming.

As previously suggested, this may not suffice for crafting an international ban treaty or establishing strictly binding conditions, particularly since other religions and cultures may approach the already elusive concept of dignity from different angles. Beyond that, another consideration, only briefly touched upon in this article, might at least guide us towards handling autonomous weapons systems as restrictively as possible ourselves. If we outsource decisions about killing to algorithmic systems, we risk undermining our own normative foundation and thus acting without dignity ourselves. This consideration is not invalidated by the assertion that recognition of human existence plays only a marginal role, if any, in conventional wars as well. That (counter-)argument would ultimately render obsolete the concept of Innere Führung (officially translated as “leadership development and civic education”), since Innere Führung relies on an engagement with and exploration of the concept of dignity.

 

This article originated as an opening statement at a conference of the Peace Research Institute Frankfurt (PRIF) and the Institute for Theology and Peace on what the dignity argument can achieve in relation to international efforts to ban autonomous weapons systems, held on April 29, 2022 in Frankfurt. The author would like to thank Dr. Niklas Schörnig and Prof. Christopher Daase in particular for the fruitful cooperation.

The text was first published in the journal “Militärseelsorge. Dokumentation” in 2022 and has been slightly revised for this edition. The editors are grateful for the permission to republish it.

 


[1] E.g. Immanuel Kant, The Metaphysics of Morals, Ak 6:418.

[2] Cf. Wysling, Andres (2017): Frankreichs geschorene Frauen. Neue Zürcher Zeitung, August 16. https://www.nzz.ch/international/die-befreiung-beginnt-mit-einer-hexenjagd-frankreichs-geschorene-frauen-ld.1305283?reduced=true (all internet references accessed on May 30, 2024).

[3] Nozick, Robert (1974): Anarchy, State, and Utopia. Oxford, pp. 42–45.

[4] It also implies an abandonment of the distinction between the illusory world and actual reality.

[5] On this point, cf. the somewhat different considerations by Asaro, Peter (2020): Autonomous Weapons and the Ethics of Artificial Intelligence. In: Liao, S. Matthew (ed.): Ethics of Artificial Intelligence. New York, pp. 212–236.

[6] Someone might view a fight with an autonomous weapons system as a special life-threatening challenge, in the same way a torero sees a fight with a bull as a unique life-threatening challenge. In this case, human dignity is violated neither by the torero nor by the bull. Anyone can seek out a life-threatening challenge without violating human dignity. But part of the problem with killing by an autonomous weapons system is that this risk is not sought, but imposed.

[7] Provocatively speaking, for the machine, killing a person is no different from moving a pile of stones or potatoes.

[8] Cf. Alwardt, Christian and Schörnig, Niklas (2021): A necessary step back? Recovering the security perspective in the debate on lethal autonomy. In: Zeitschrift für Friedens- und Konfliktforschung 10, pp. 295–317. link.springer.com/article/10.1007/s42597-021-00067-z. The following report classifies the security risks of AI applications (including autonomous weapons systems) for international security: Puscas, Ioana (2023): AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures. UNIDIR. https://unidir.org/wp-content/uploads/2023/10/UNIDIR_AI-international-security_understanding_risks_paving_the_path_for_confidence_building_measures.pdf.

[9] Cf. Koch, Bernhard (2022): Technikethik. In: Merkl Alexander and Schlögl-Flierl, Kerstin (ed.): Moraltheologie kompakt. Grundlagen und aktuelle Herausforderungen. Regensburg, pp. 340–350.

 


Dr. Bernhard Koch

Dr. Bernhard Koch is Acting Director of the Institute for Theology and Peace in Hamburg and Adjunct Professor of Moral Theology at the University of Freiburg. He studied philosophy, logic and philosophy of science as well as Catholic theology in Munich and Vienna, and was a lecturer at the University of Education Weingarten, Goethe University Frankfurt, Helmut Schmidt University of the German Armed Forces, and the University of Hamburg. Since 2012, he has been Co-Teacher of Ethics at the ICMM Center of Reference for Education on IHL and Ethics, Zurich.

koch@ithf.de
