“Meaningful Human Control” and Complex Human-Machine Assemblages – On the Limits of Ethical AI Principles in the Context of Autonomous Weapons Systems
The use of the term “autonomy” for systems that can identify and engage targets with little or no human intervention is problematic. The imagined advantages of such “autonomously” acting weapon systems (AWS), such as increased speed and precision, stand in contrast to the assumption of AWS critics that human autonomy in dealing with these constructs can be ensured through meaningful human control.
Feminist science and technology studies have long refuted this understanding. Technologies in general, and AI-based machines controlled by learning algorithms even more so, are to be understood as highly dynamic human-machine assemblages that constantly “adjust” to each other in the process of interaction and generate socio-material agency as a whole. The complexity of such structures, which are anything but “value-neutral” or transparent in their operation and which make it difficult to attribute responsibility, has already become apparent in the practice of targeted killings in the so-called drone war. Instead of analyzing this problem, interested parties now propagate the guiding principles of explainable and responsible AI.
This also applies to the highly automated European multinational Future Combat Air System (FCAS) project. How human control and responsibility are supposed to be ensured here can be illustrated by the so-called Ethical AI Demonstrator. The analysis of a screenshot of its human-machine interface, together with a corresponding scenario, reveals key problems, ranging from the trade-off between human responsibility and the effectiveness of the AWS to the lack of traceability of system behavior and the human bias towards system recommendations (“automation bias”). Unless the acceleration benefits of automation are forgone, neither the attribution of responsibility nor the entire construct of effective human control appears feasible.