
Autonomous Weapons Systems – Current International Discussions

On 29 and 30 April 2024, the Austrian Foreign Ministry welcomed representatives of 144 States and around 1,000 participants from States, international and regional organisations, the tech sector and industry, academia and civil society to a conference on the issue of Autonomous Weapons Systems. The conference, held in the Hofburg, the former imperial palace, was entitled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation”. Foreign Ministers and State Secretaries from all regions, the ICRC President, the UN High Representative for Disarmament Affairs, tech and civil society leaders as well as high-ranking international experts shared their views on the challenges that Autonomous Weapons Systems (AWS) raise from a legal, ethical and security perspective, and on the way forward to address them. The Austrian Minister for Foreign Affairs, Alexander Schallenberg, spoke of an “Oppenheimer moment”, expressing the urgency of establishing legally binding rules on AWS while appealing to the responsibility of political leaders.

This conference was the culmination of a series of regional conferences in Costa Rica, Luxembourg, Trinidad and Tobago, the Philippines and Sierra Leone. In all parts of the world, States came together and expressed not only their concerns about AWS but also their determination to address them.

At the Austrian conference, a chair’s summary was read out by Amb. Alexander Kmentt, Disarmament Director of the Austrian MFA.[1] This document summarizes the results of the expert discussions and addresses issues such as human control, accountability, compatibility with international law, the considerable ethical concerns and the risks to peace and security.

This summary is one of what will hopefully be many contributions submitted to the Secretary-General of the United Nations in May 2024. Through Resolution 78/241 on “Lethal Autonomous Weapons”, adopted by the General Assembly in December 2023, the Secretary-General is mandated to seek the views of States on this issue and to report back to the General Assembly in 2024. The resolution was presented by Austria together with a cross-regional group of States and gathered the support of 164 States in a vote in the First Committee, with only 5 States voting against it. Already in 2022, a joint statement supported by 70 States was delivered by Austria in the First Committee of the General Assembly.

It should also be mentioned that the United Nations Secretary-General has been calling for action on AWS for many years. He considers that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.[2] In 2023, the UN Secretary-General and the President of the ICRC, in a joint appeal, underlined the urgency for States to negotiate new, binding international law with prohibitions and restrictions by 2026.[3] As with many other disarmament issues with a strong humanitarian dimension, civil society plays a crucial and active role in the discussions and in mobilising States to take proactive positions.

The current situation

Invoking an “Oppenheimer moment” also refers to the current situation, in which a window for pre-emptive regulation still exists. Such a regulation, including prohibitive elements, would have a profound impact on the future development and use of autonomous weapons on the battlefield and provide the international community with clarity and guardrails. At the same time, we know from a number of reports that such weapons, or at least their precursors, already exist and are under development in different parts of the world. It is also very clear that the interest in such weapons and their development is fuelled by ongoing conflicts and geopolitical tensions. The window for an international regulation setting global standards and guardrails for future technological development in this field is closing rapidly.

Not seizing this opportunity (and not learning the lessons of history) could have grave consequences. There is a real risk of an arms race unfolding in this field, with much uncertainty about how existing international law applies. Rapid technological progress would soon confront us with a patchwork of policies and practices, and a fait accompli created by those States most advanced in this field.

This is all the more worrying as the use of AI in the military will not be limited to a few powerful States but adopted worldwide. The proliferation of this weapons technology is expected to be fast and impactful. The related risks concern all States and all parts of society, and would have disproportionate effects on those most vulnerable.

The international community recognized these issues as early as 2014, when it began deliberations on “Lethal Autonomous Weapons Systems” within the framework of the Convention on Certain Conventional Weapons (CCW). In 2016, a Group of Governmental Experts (GGE LAWS) was established, which continues its work to this day. During this time, important work has been done, which has informed the positions and views of States on this issue and led to the adoption of the so-called “11 Guiding Principles”.

The final step of adopting a protocol under the CCW, which is a framework convention “on prohibitions or restrictions on the use of certain conventional weapons which may be deemed to be excessively injurious or to have indiscriminate effects” (the actual title of the convention), is still outstanding. A main obstacle to progress is the consensus principle, which allows a small number of States to block further substantive outcomes.

Many States support a so-called two-tier approach to address the identified challenges, consisting of a combination of prohibitions and regulations. How the two tiers are designed in detail would need to be subject to international negotiations, but this approach seems a promising way forward.

Such negotiations also need to take into account the urgency of the matter. As Austrian Foreign Minister Schallenberg pointed out during the conference in Vienna, “Technology is moving ahead with racing speed, while politics is lagging behind.” States need to overcome their differences on the way forward and show the political leadership and foresight this challenge demands of us.

The hope and expectation of Austria and many other States is that the new report by the Secretary-General, to be issued this summer, as well as the attention generated by the conference in Vienna (and the many conferences preceding it), will create new momentum for the international community to move forward with greater urgency and clear purpose. One of the report’s many advantages is that it will include the views of all UN Member and Observer States, going beyond the 126 High Contracting Parties to the CCW.

The interest in this topic is further attested by a number of workstreams addressing the military use of Artificial Intelligence, such as the conference on “Responsible Artificial Intelligence in the Military Domain” in The Hague in February 2023, followed up in Seoul in September 2024. Another initiative is the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the United States, which adds a dimension of practical implementation.

Artificial Intelligence and International Law

One of the main concerns is that the current body of law is addressed to people, not machines, in the form of individual or State responsibility. Accountability for actions and their consequences cannot be transferred to algorithms. Establishing and attributing responsibility is an absolute necessity for maintaining the soundness of international and domestic legal systems.

But how can a person be accountable for the actions of an algorithm they cannot sufficiently understand, or whose actions cannot be predicted or explained (the “black box problem”)? It is therefore necessary to establish rules, regulations and operational limitations that create a framework in which the actions of an AWS can be sufficiently understood and meaningfully controlled at all times.

At the same time, a range of legal obligations of States is increasingly difficult to uphold in the face of phenomena such as algorithmic bias or the challenges raised by machine learning (how, for instance, are legal reviews under Art. 36 of Additional Protocol I to the 1949 Geneva Conventions to be conducted on a system that continues to develop?).

The key to many questions lies in human involvement and how it is maintained and ensured in design, development and use. Ensuring meaningful human control, also in the form of positive obligations, is a way to resolve the core issues.

The ethical dimension

Ethical considerations are among the main concerns for many States and also underpin the need for legally binding rules, including prohibitions and regulations, for AWS. They derive from a profound concern that the delegation of life-and-death decisions to machines would cross a moral line, but also from a tradition in international humanitarian law. This body of law has always built its rules on ethical considerations and has a strong foundation in the Martens Clause.[4] Human dignity is a central element of international humanitarian and human rights law and is also part of key international treaties and a large number of constitutions around the globe. The violation of human dignity through the use of weapons in a way that is dehumanizing or objectifying is a widely held concern. A preventive regulation of autonomous weapons systems has to take these elements into account and must be built, inter alia, around ethical and moral principles.

Also noteworthy is the growing number of States introducing ethical policies, frameworks or bodies with regard to both the civilian and the military uses of AI. This demonstrates that democracies in particular are very aware of the problems that may arise from the use of AWS, as well as of questions of societal acceptance. As early as 2021, NATO published its Principles of Responsible Use of AI, which bear a strong resemblance to the UNESCO Recommendation on the Ethics of AI from the same year. It would only be logical to raise the standards and norms that States with advanced weaponry apply to themselves to the global level.

The way forward we need

Autonomous Weapons Systems raise a number of questions and concerns relating to ethical and security issues as well as to the application of existing international law. Specifying how that law applies and addressing legal gaps and ambiguities is the duty of the international community.

Furthermore, this issue touches upon the underlying question of how we as humans understand our role and significance in the conduct of war and in the use of force. We must not accept a purely utilitarian view of this issue. One of the worst mistakes would be to accept this significant technological change in the means and methods of warfare as a given, rather than as something determined and shaped by our actions. Considering the consequences of inaction, it is indeed the Oppenheimer moment of our generation.

AWS could soon proliferate around the globe. Their spread would be a major security challenge for all States and an ethical challenge for society as a whole. The question must therefore be addressed in the most comprehensive and inclusive manner possible, including civil society, academia and industry.

It is imperative to overcome differences and to match the strong concerns with real action. The hesitancy of a few States should not stop the rest from moving forward and creating a legally binding norm. Soft law alone will not be sufficient to prevent States from pushing the boundaries further. And even a difficult international situation does not absolve us of the political responsibility to address the challenges of AWS. The time to act is now.


[1] Republic of Austria, Federal Ministry for European and International Affairs (2024): Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation: Chair’s Summary, 30 April. (All internet sources accessed May 13, 2024.)

[2] United Nations (2019): Machines Capable of Taking Lives without Human Involvement Are Unacceptable, Secretary-General Tells Experts on Autonomous Weapons Systems, 25 March.

[3] International Committee of the Red Cross (ICRC) (2023): Joint call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to establish new prohibitions and restrictions on Autonomous Weapon Systems, 5 October.

[4] The Martens Clause is a severability clause in various legal documents pertaining to the Law of Armed Conflict. It was included for the first time into the preamble of the 1899 Hague Convention (II): “Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.” Cf. Ticehurst, Rupert (1997): The Martens Clause and the Laws of Armed Conflict.

Andreas Bilgeri

Andreas Bilgeri is a Counsellor and disarmament expert at the Permanent Mission of Austria in Geneva. He has multilateral experience from the Organization for Security and Co-operation in Europe, the Council of Europe and the UN. During his previous posting in Strasbourg he was involved in the Council of Europe's work towards the regulation of Artificial Intelligence with regard to Human Rights, Democracy and Rule of Law. In Geneva his work focusses on emerging technologies in the military domain. He regularly participates in the discussions of the Group of Governmental Experts on Lethal Autonomous Weapons Systems and was part of the team behind the successful adoption of a Resolution on Lethal Autonomous Weapons Systems by the General Assembly in 2023.

