
Advocating for A Legally Binding Instrument on Autonomous Weapons Systems: Which Way Ahead

In October 2023, António Guterres, United Nations Secretary-General, and Mirjana Spoljaric, President of the International Committee of the Red Cross, issued a landmark joint appeal ‘to address an urgent humanitarian priority’ – autonomous weapons systems. Calling on political leaders ‘to urgently establish new international rules’, Guterres and Spoljaric emphasised that:

‘We must act now to preserve human control over the use of force. Human control must be retained in life and death decisions. The autonomous targeting of humans by machines is a moral line that we must not cross. Machines with the power and discretion to take lives without human involvement should be prohibited by international law.’

Setting an unprecedented timeline, the joint appeal urges the international community ‘to launch negotiations of a new legally binding instrument to set clear prohibitions and restrictions on autonomous weapon systems and to conclude such negotiations by 2026.’[1]

Following this appeal, 152 member states of the United Nations General Assembly voted resoundingly in favour of Resolution 78/241, the first ever resolution on autonomous weapons systems at the United Nations General Assembly.[2] Tabled by Austria and co-sponsored by a cross-regional group of 43 states, the resolution stressed ‘the urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems’, and requested that the Secretary-General seek the views of Member States and observer States on ‘ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force’, as well as to invite the views of international and regional organisations, the International Committee of the Red Cross, civil society, the scientific community, and industry on the topic.[3]

The resulting report will be presented to the United Nations General Assembly in autumn 2024. The resolution also places autonomous weapons systems as a specific item on the agenda for the next United Nations General Assembly meeting (where previously such systems have been discussed under the general ‘conventional weapons’ agenda item). The adoption of this resolution represents a notable step forward in international discussions on the issue, ‘lighting a path towards a legal framework to ensure meaningful human control over the use of force.’[4]

Meanwhile, a number of regional and international conferences on the issue have been held around the globe during 2023 and 2024. At the time of writing, international and regional conferences on autonomous weapons have been hosted by Austria, Trinidad and Tobago, Costa Rica, Luxembourg, the Philippines, and Sierra Leone. Declarations arising from these conferences, and resolutions from separate processes include but are not limited to: the Communiqué of the Latin American and the Caribbean Conference on Social and Humanitarian Impact of Autonomous Weapons (aka the Belén Communiqué); the CARICOM Declaration on Autonomous Weapon Systems; the Freetown Communiqué; the Inter-Parliamentary Union resolution on ‘Addressing the social and humanitarian impact of autonomous weapon systems and artificial intelligence’; and the Ibero-American Summit special declaration on the social and humanitarian impact of autonomous weapons systems. The Austrian-hosted Vienna conference on autonomous weapons, ‘Humanity at the Crossroads: Autonomous Weapons and the Challenge of Regulation’, held in April 2024, was attended by 144 states, over 1000 participants, and a strong civil society presence, making it the largest international meeting on the regulation of autonomous weapons systems to be held outside of the United Nations to date.

The appeal of the UN Secretary-General and the ICRC President, the emphatic support for the UNGA resolution on autonomous weapons, and the numerous international and regional conferences, political declarations, and resolutions on the issue around the world during 2023 and 2024 demonstrate the widespread concern held by the international community around the threats and challenges posed by these weapons systems. They also underscore the growing consensus among states on the need for new, legally binding international rules.

For ten years, diplomatic discussions on autonomous weapons systems have taken place under the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Emerging Technologies in the area of Lethal Autonomous Weapons Systems (aka the GGE on LAWS), at the United Nations in Geneva. Although the majority of states participating in discussions in that forum have made clear their desire to negotiate new, legally binding international rules set out around the so-called ‘two-tier approach’, a small group of highly militarised states have consistently blocked meaningful progress.[5] Due to the consensus-based nature of discussions at the CCW, in which all states must agree on an outcome, these states have been able to dilute reports and frustrate advances toward the negotiation of new rules on autonomous weapons.

The new three-year mandate of the GGE on LAWS, agreed at the 2023 Meeting of High Contracting Parties to the CCW in November 2023, is weak and without ambition – mandating the GGE ‘to further consider and formulate, by consensus, a set of elements of an instrument, without prejudicing its nature, and other possible measures to address emerging technologies in the area of lethal autonomous weapon systems.’ This is, essentially, ‘a long-winded way of saying that the Group will continue to gather ideas – but it is far from negotiating an actual outcome, let alone agreeing that this outcome should be a legal instrument.’[6]

The numerous regional and international conferences, coupled with the high level of support for the UN General Assembly resolution on autonomous weapons, demonstrate the desire of many states to move discussions on the issue from the CCW to a more inclusive forum, and one that isn’t held back by continued abuse of the CCW’s consensus rule. It also communicates that many states wish to be proactive and responsive to the calls of the UN Secretary General, the President of the ICRC, civil society, and others, including the Holy See, and many in the academic and scientific communities, around the need for a legally binding instrument.

What AWS are, and what a treaty should include

Per the International Committee of the Red Cross, autonomous weapons systems are those systems which: ‘select and apply force to targets without human intervention. After initial activation or launch by a person, an autonomous weapon system self-initiates or triggers a strike in response to information from the environment received through sensors and on the basis of a generalized “target profile”. This means that the user does not choose, or even know, the specific target(s) and the precise timing and/or location of the resulting application(s) of force.’[7]

Given the way in which autonomous weapons function, they are ‘qualitatively different to other weapons systems’, because ‘if such systems are used against people, this entails individuals being sensed, processed and targeted as patterns of data and objects by machines.’[8] The information that an autonomous weapon uses to select and engage a target depends on the types of sensors it carries and what those sensors collect. The weapons system matches incoming sensor data against a generalised target profile: if the data does not fit the profile, the system does not use force; if the data matches the pre-programmed profile, the system uses force against the target. In other words, an algorithm – rather than a human, such as a soldier – decides on the basis of sensor data whether the machine uses force and engages a target.[9] Further, as the ICRC has pointed out, this means that ‘it is the vehicle or the victim that triggers the strike, not the user.’[10]

Given the fundamental moral, ethical, legal, and humanitarian questions that arise from the development, deployment and use of autonomous weapons systems, a new treaty should include a mixture of prohibitions and regulations – i.e., a ‘two-tier approach’. This two-tier approach now has broad support both within the CCW and elsewhere. This is also the approach supported by the International Committee of the Red Cross, and by the Stop Killer Robots campaign. Stop Killer Robots calls for a prohibition on autonomous weapons which cannot be used with meaningful human control, and a prohibition on autonomous weapons which are designed or used to target people. Positive obligations, i.e. regulations, should apply to all other autonomous weapons systems.

The need for meaningful human control

As is noted above, a prohibition on autonomous weapons systems that cannot be used with meaningful human control is necessary because, in the case of autonomous weapons, a human user does not choose the specific target or the precise timing and/or location of an attack. This raises significant challenges for the implementation of international law, particularly the core and fundamental international humanitarian law principles of distinction, proportionality, humanity, and military necessity.

To ensure respect for and adherence to the rules of international law, including international humanitarian law and international human rights law, it is crucial that human control over the critical functions of weapons systems is maintained, particularly regarding target specification and the duration, area, and scale of a system’s functioning. Specific rules are thus required to make human control over a weapons system ‘meaningful’, and not simply a ‘box-ticking’ exercise. These rules should ensure that a weapons system is predictable, understandable, and has restrictions on the temporal and geographic scope of use and the scale of use.

The ICRC makes it clear that ‘unpredictability in AWS raise a fundamental challenge to IHL’, and that sufficient predictability is required ‘for compliance with IHL and for practical military operational reasons.’[11] A system can only be considered ‘predictable’ if the user can have an adequate understanding of the system, and an understanding of the context of use. The users of autonomous weapons systems must be able to understand how the system functions, and what the likely effects of any attack will be in the operational context in which it is to be used. For a user to anticipate the likely effects of an attack in the operational context of use, limits on duration of use, geographical area of use, and the scale of use are necessary. The ICRC underscores that these limits would ‘aim to enable AWS users to have the necessary situational awareness to anticipate the effects of an attack and be reasonably certain upon launching the attack that it will comply with IHL. These limits also reduce the risk that circumstances may change during an attack and facilitate supervision during the operation of the AWS.’[12]

These rules – on predictability, understandability, and scope and scale of use – operationalise the concept of ‘meaningful human control.’ This control is context-dependent; the ICRC and the Stockholm International Peace Research Institute (SIPRI) have noted that the measures needed to ensure meaningful human control ‘will vary according to the specific context. Where one type of control measure is inadequate, insufficient or challenging to implement, the other types may rise in prominence.’[13]

The Belén Communiqué, the CARICOM Declaration, and the Freetown Communiqué all highlight the central importance of meaningful human control over autonomous weapons systems. The Belén Communiqué states that ‘It is paramount to maintain meaningful human control to prevent further dehumanization of warfare, as well as to ensure individual accountability and state responsibility’;[14] the CARICOM Declaration underlines that the CARICOM states resolve ‘to support the indispensability of meaningful human control over the use of force and thereby encourage the pursuit of an international legally binding instrument which incorporates prohibitions and regulations on AWS’;[15] the Freetown Communiqué notes ECOWAS member states’ concerns ‘about the potential use of autonomous weapons systems as deadly force against targets without the meaningful human control that is critical for upholding ethical, legal, and humanitarian obligations.’[16] The Chair’s Summary of the Vienna conference underlines that ‘systems that cannot be adequately understood or limited to a specific context, cannot be subject to human control and so would not be compatible with legal use and accountability’, and that ‘autonomous weapons systems that promise the advantage of speed may not allow for meaningful human control, and risk destabilizing international security.’[17]

As the delegation of Switzerland underlined at the first session of the 2023 CCW GGE on LAWS meeting, discussions among states are now:

‘...shifting from terminology to concrete and operationalizable concepts of control or involvement. What stands in the centre is really the idea that humans must take measures at different stages in the life-cycle of a weapon, including in the engagement of autonomous weapon systems, and understand their functioning in appropriate ways.’[18]

A prohibition on targeting people

The targeting of people with autonomous weapons systems is ‘a most pressing ethical issue.’[19] Even in the case where an autonomous weapons system can be used with meaningful human control, if that system is designed or used to target humans, it should be prohibited. The reasons for this are manifold, not least among them being that autonomous weapons trouble the most basic ethical principles of humanity, ‘in effect substituting human decisions about life and death with sensor, software and machine processes.’[20] As highlighted at the beginning of this essay, the United Nations Secretary General and the President of the ICRC stressed in their joint appeal that ‘the autonomous targeting of humans by machines is a moral line that we must not cross.’[21]

The use of proxy indicators such as weight, heat-shape, movement or specific biometrics to classify and target people is problematic not only morally and ethically, but legally. Under the rules of international humanitarian law, the targeting of people is subject to a range of nuanced considerations and necessitates the application of human judgment, which a machine cannot be trusted to exercise. As the ICRC sets out:

‘To assess the lawfulness of a targeting decision, one must first determine whether IHL applies to the situation at hand [...] Beyond deciding if IHL applies or not, one must also assess which IHL rules will apply to the facts at hand, depending on whether such facts relate to the conduct of hostilities (historically designated as “Hague Law”) or to the treatment of persons in the power of a party to the conflict (historically designated as “Geneva Law”). The rules on the conduct of hostilities apply to persons or objects not in the hands of the attacking party to the conflict. On the contrary, rules on the treatment of persons apply only to individuals in the power of a party (such as internees, prisoners of war, detainees, the wounded and sick, inhabitants of occupied territory) and entail that such persons are hors de combat and thus protected from attack at all times.’[22]

This is only the first-order question around the legality of targeting that needs to be considered; after this, the ‘cardinal rules’ of distinction, proportionality, and precaution in attack must be applied.

Claims that autonomous weapons could distinguish between civilians and combatants, between combatants and those who are hors de combat or attempting to surrender, or between civilians and civilians directly participating in hostilities – on the basis of sensor-based target profiles and algorithmic processing and classification – implicate the core international humanitarian law principle of distinction, as well as the other cardinal rules of IHL mentioned above. Further, that is to say nothing of the considerable challenges which such weapons systems raise with regard to international human rights law, in particular the right to life.

A number of states participating in the GGE on LAWS discussions have proposed prohibitions on systems designed or used to target humans and expressed their concerns regarding the ethical issues raised by the use of autonomous weapons systems for that purpose, including issues such as bias, discrimination and the violation of human dignity. 

The targeting of people based on sensor data amounts to reducing people to data points, violates human dignity, and is a grave example of dehumanisation. A prohibition is required for exactly the reasons described by the ICRC President: ‘the ethical risks are too stark, and the potential for harm to protected persons is too high.’[23]

Risks to international security and the increasing adoption of military AI and autonomy in conflict

Aside from the risks outlined above, autonomous weapons also endanger international peace and security. Widespread proliferation of autonomous weapons systems would likely further lower the threshold for the use of force. Autonomous weapons present serious challenges to responsibility and accountability for violations of international law. And, as the President of the ICRC has pointed out, ‘the technology to develop these kinds of weapons exists and is accessible to state and non-state actors.’[24] Some states are already investing heavily in the military application of artificial intelligence and autonomy, encouraging an AI ‘arms race’. The potential for proliferation of systems with autonomous capabilities to non-state actors is also a real and growing threat.

For now, it is states themselves who ‘lead the field’ in the slow creep toward delegating life and death decisions to machines. Reports increasingly indicate that weapons systems with autonomous functions, and AI decision-support and target recommendation systems, are being used in conflict – from the use of loitering munitions with image recognition capabilities in Ukraine, to Israel’s use of AI target recommendation systems in Gaza – raising significant concerns around respect for international law and the protection of civilian life. While at the time of writing there are no verified reports of the use of autonomous weapons systems which use sensor processing to both choose a target and carry out an attack at a time and in an area without human approval, precursor systems are being developed and deployed by numerous states. As the President of the ICRC noted in her speech at the April 2024 Vienna conference on autonomous weapons, while such reports are ‘difficult to verify definitively’, they ‘indicate disturbing trends towards increasingly complex technology and expanding operational parameters.’[25]

The increasing adoption of autonomy and AI by the military and defence sector underscores the urgency of the need for the negotiation of new rules. In an area in which life and death decisions are in question, industry cannot be left to self-regulate, and nor can more powerful and militarised states be left to distract the international community with non-binding principles and declarations. It is unacceptable that some states prefer to focus on potential and unproven military ‘benefits’ from such systems to the detriment of safeguarding human rights and human life, and ensuring respect for international law. Already existing and highly concerning real-world pitfalls and dangers from autonomy and AI are now well-known and well-documented across both the civil and military spheres, and have been proven to disproportionately affect the most marginalised – states must work together to protect against such pitfalls and dangers when it comes to decisions of life and death in conflict.[26]


Much of the international community already recognises that we are at a crucial moment in history, and the momentum towards new international law is clear. As Alexander Schallenberg, Austrian Federal Minister for European and International Affairs, highlighted in his opening remarks at the Vienna conference:

‘This is the ‘Oppenheimer moment’ of our generation! We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control.’[27]

The plethora of international and regional conferences, the UNGA resolution, and initiatives such as the Inter-Parliamentary Union resolution on autonomous weapons systems demonstrate that a growing majority of states have a significant appetite for new international law on autonomous weapons. A new treaty would display a shared and renewed commitment to international law and disarmament. States should take confidence in the evident widespread support for new international law, and pursue all available approaches to launch negotiations as soon as possible to create clear and binding prohibitions and regulations on autonomous weapons systems.


[1] International Committee of the Red Cross (ICRC) (2023): Joint call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to establish new prohibitions and restrictions on Autonomous Weapon Systems: (all internet sources accessed May 11, 2024).

[2] See: 164 states voted in favour of the resolution at the UNGA First Committee vote.

[3] For the full text of the resolution, see: United Nations General Assembly (2023): A/C.1/78/L.56. Co-sponsoring states include: Antigua and Barbuda, Austria, Bahamas, Barbados, Belgium, Belize, Bulgaria, Cabo Verde, Costa Rica, Croatia, Cyprus, Czechia, Denmark, Dominican Republic, Ecuador, Fiji, Germany, Guatemala, Honduras, Hungary, Iceland, Ireland, Italy, Kazakhstan, Kiribati, Liechtenstein, Luxembourg, Maldives, Malta, Mexico, Montenegro, Netherlands, New Zealand, North Macedonia, Norway, Philippines, Republic of Moldova, San Marino, Sierra Leone, Slovenia, Sri Lanka, Switzerland, Trinidad and Tobago.

[4] See: Stop Killer Robots (2023): 164 states vote against the machine at the UN General Assembly.

[5] For more on state positions with regard to the negotiation of a legally binding instrument on autonomous weapons systems, see: Automated Decision Research, State Positions Monitor. Available at:

[6] Stop Killer Robots (2023): 2023 CCW falls short of the UN Secretary-General and ICRC calls for a legal instrument by 2026.

[7] ICRC (2021): ICRC position on autonomous weapons systems.

[8] Minor, Elizabeth (2023): Laws for LAWS: Towards a treaty to regulate lethal autonomous weapons.

[9] For more on sensor-based targeting systems and target profiles, see: Article 36 (2021): Sensor-based targeting systems: an option for regulation. and Article 36 (2019): Target profiles.

[10] ICRC (2022): What you need to know about autonomous weapons.

[11] ICRC (2021): ICRC position on autonomous weapon systems: Background paper.

[12] ICRC (2021), see endnote 11.

[13] SIPRI and ICRC (2020): Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control.

[14] Costa Rican Ministry of Foreign Affairs (2023): Communiqué of the Latin American and the Caribbean conference of social and humanitarian impact of autonomous weapons.

[15] Government of the Republic of Trinidad and Tobago (2023): CARICOM Declaration on Autonomous Weapons Systems.

[16] ECOWAS (2024): Communiqué of the Regional Conference on the peace and security aspects of autonomous weapons systems: an ECOWAS perspective.

[17] Republic of Austria Federal Ministry of European and International Affairs (2024): Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation: Chair’s Summary.

[18] Statement by Switzerland, CCW GGE on LAWS, 08 March 2023.

[19] Republic of Austria Federal Ministry of European and International Affairs (2024), see endnote 17.

[20] ICRC (2021), see endnote 11.

[21] ICRC (2023), see endnote 1.

[22] ICRC: Targeting under International Humanitarian Law.

[23] ICRC (2024): Speech given by Mirjana Spoljaric, President of the International Committee of the Red Cross, 2024 Vienna Conference on Autonomous Weapon Systems.

[24] ICRC (2024), see endnote 23.

[25] ICRC (2024), see endnote 23. For more on the increasing complexity of trends related to autonomy in weapons systems, see: PAX (2023): Increasing complexity: Legal and moral implications of trends in autonomy in weapons systems.

[26] See for example: Automated Decision Research (2022): Autonomous weapons and digital dehumanisation.; Stop Killer Robots, Digital Dehumanisation.; Stop Killer Robots: Frequently Asked Questions.

[27] Republic of Austria Federal Ministry of European and International Affairs (2024): Opening Statement, International Conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation”, Vienna, 29 April.


Catherine Connolly

Dr Catherine Connolly is the Manager of Automated Decision Research, the monitoring and research team of Stop Killer Robots. She holds a PhD in International Law & Security Studies and a BA in International Relations from Dublin City University, Ireland, and an MA in War Studies from King’s College London, England. Prior to joining the Stop Killer Robots team in 2021, she worked in the School of Law and Government at Dublin City University, and was an Irish Research Council Government of Ireland Postdoctoral Fellow.

