The Geneva Process on AI Principles

States and international organizations are increasingly adopting ethical principles on Artificial Intelligence (AI) for defence and military purposes. However, translating these principles into effective governance frameworks remains a key challenge. What do they mean? How can they be implemented? How do they interrelate with international law? The Geneva Process on AI Principles is an interdisciplinary process uniting leading experts and policymakers to explore and clarify these questions.

The Context

AI is increasingly developed and used for defence and military purposes, ranging from planning and logistics to targeting. These applications raise ethical, operational, and legal questions. The appropriate degree of machine autonomy, the risk of bias in systems, and the need for transparency are examples of unresolved issues in the military use of AI.

To tackle these challenges, states and international organizations have started to define ethical principles to guide the design, development, and use of AI for defence and military purposes. Key documents include the US Department of Defense Ethical Principles for Artificial Intelligence, the NATO AI Strategy, the EU Guidelines for Military and Non-Military Use of Artificial Intelligence, and the Guiding Principles adopted by the UN Group of Governmental Experts on LAWS.

Recurring principles across these documents include responsibility, equitability, traceability, reliability, governability, and lawfulness. Yet these ethical principles on military AI remain few, vague, and heterogeneous, reflecting an emerging but still fragmented global governance landscape. Consequently, there is a need for a better understanding of the principles' meaning, implementation, and legal ramifications, as well as of how they should be operationalised and enforced. The Geneva Process on AI Principles narrows this gap.

The Process

The Geneva Process on AI Principles aims to increase understanding of the emerging principles on AI for defence and military purposes. This should help states, international organizations, firms, and researchers better analyse, design, and implement such principles, and cooperate internationally on the ethical and responsible development and use of military AI. Taking a broad approach to AI applications, legal branches, and other considerations, the process also supports ongoing international and national efforts on the regulation and governance of AI.

To this end, the process explores and clarifies the emerging principles’ meaning, their operationalisation, and their legal implications. The process addresses four analytical and policy-related angles relevant to the development, use, and regulation of AI for defence and military purposes, namely legal, technical, ethical, and military perspectives. It emphasizes the need for coordination across these domains to ensure coherent and effective governance. The process consists of research, expert consultations, and a series of workshops, as well as the creation of an international network of experts on military AI and contributions to leading international conferences.

Publications

GCSP partnered with Articles of War to publish a series of blog articles offering several analyses on the nexus between international law and the responsible development, deployment, and use of AI for defence and military purposes:

Beyond this series, GCSP publications contribute to ongoing research and policy discussions.

Expert Workshops

Two workshops were held in 2022 to explore, respectively, the legal implications and technical challenges of the ethical principles on military AI. In total, 44 experts from diverse regions and backgrounds (governments, industry, academia, and representatives of NATO, the EU, and the ICRC) met. The workshops served as a platform to connect policy, legal, and technical expertise.

At the legal workshop, experts identified legal touchpoints between AI principles and public international law, international humanitarian law, and international human rights law. On technical challenges, experts explored how the principles can be translated into practice across the various stages of the design, development, and use of AI systems by defence forces.

REAIM Summits

Since 2023, the GCSP has participated in the Responsible AI in the Military Domain (REAIM) Summit, the first conference to launch an international, multi-stakeholder debate on responsible AI in the military domain. Contributions from GCSP experts and fellows include:

Through these engagements, GCSP contributes to shaping emerging norms, identifying governance gaps, and fostering convergence across national and institutional approaches to responsible AI.

Policy Advice

  • The GCSP, together with the Mission of Switzerland, organised a working breakfast at NATO in December 2022 to discuss the current ethical, technical, and legal challenges to the implementation of AI principles, building on observations gained from the GCSP workshops.


  • Pursuant to Resolution 51/22, the Human Rights Council mandated the Advisory Committee to examine the human rights implications of new and emerging technologies in the military domain. The GCSP was invited to offer insights to the expert body based on the GCSP’s work and to reflect on national and international policies and strategies on responsible AI, specifically from an international law and human rights law standpoint.


  • The GCSP delivered a statement during the first session of the 2024 Group of Governmental Experts on emerging technologies in the area of Lethal Autonomous Weapons Systems. This contribution emphasized the importance of clarifying human control requirements to ensure the compliance of autonomous weapon systems with international humanitarian law.


  • The GCSP delivered a keynote address at the 2nd Autonomous Systems Community of Interest Conference, organised by the European Defence Agency. Insights and reflections on the “European strategic outlook on autonomous systems” were shared with more than 400 participants, who explored how autonomy is transforming European defence, from experimentation and operations to innovation at scale.

Experts

Staff

Dr Tobias Vestner
Director of Research and Policy Advice Department & Head of Security and Law

Mr Simon Cleobury
Head of Arms Control and Disarmament

Ms Maréva Lietti-Roduit
Department Manager, Research and Policy Advice Department