The Geneva Process on AI Principles
States and international organizations are increasingly adopting ethical principles on Artificial Intelligence (AI) for defence and military purposes. What do they mean? How can they be implemented? How do they interrelate with international law? The Geneva Process on AI Principles is an interdisciplinary process uniting leading experts and policymakers to explore and clarify these questions.
The Context
AI is increasingly developed and used for defence and military purposes, ranging from planning and logistics to targeting. These applications raise a series of ethical, operational, and legal questions. The appropriate degree of autonomy of machines, the risk of bias in systems, and the necessity for transparency are examples of unresolved issues regarding the military use of AI.
To tackle these challenges, states and international organizations have started to define ethical principles to guide the design, development, and use of AI for defence and military purposes. Key documents include the US Department of Defense Ethical Principles for Artificial Intelligence, the NATO AI Strategy, the EU Guidelines for Military and Non-Military Use of Artificial Intelligence, and the Guiding Principles adopted by the UN Group of Governmental Experts on LAWS.
Recurring principles throughout these documents include responsibility, equitability, traceability, reliability, governability, and lawfulness. Yet these ethical principles on military AI remain few, vague, and heterogeneous. Consequently, there is a need for a better understanding of the principles' meaning, implementation, and legal ramifications. The Geneva Process on AI Principles seeks to narrow this gap.
The Process
The Geneva Process on AI Principles aims to increase the understanding of the emerging principles on AI for defence and military purposes. This should support states, international organizations, firms, and researchers in analysing, designing, and implementing these principles, as well as in cooperating internationally on the ethical and responsible development and use of military AI. Taking a broad approach regarding AI applications, legal branches, and other considerations, the process also aims to support ongoing international and national efforts on the regulation of AI.
To this end, the process explores and clarifies the emerging principles’ meaning, their operationalisation, and their legal implications. The process addresses four analytical and policy-related angles relevant to the development, use, and regulation of AI for defence and military purposes, namely legal, technical, ethical, and military perspectives. The process consists of research, expert consultations, and a series of workshops, as well as the creation of an international network of experts on military AI.
Publications
GCSP partnered with Articles of War to publish a series of blog articles offering several analyses on the nexus between international law and the responsible development, deployment, and use of AI for defence and military purposes:
- Responsible AI Symposium Introduction by Tobias Vestner and Sean Watts
- The Nexus Between Responsible Military AI and International Law by Tobias Vestner
- Translating AI Ethical Principles Into Practice: the U.S. DOD Approach to Responsible AI by Merel Ekelhof
- Implications of Emergent Behavior for Ethical AI Principles for Defense by Daniel Trusilo
- The Legal Implications of Bias Mitigation in AI Systems by Juliette François-Blouin
- The AI Ethics Principle of Responsibility and LOAC by Chris Jenks
- Responsible AI and Legal Review of Weapons by Michael W. Meier
- Prioritizing Humanitarian AI as Part of “Responsible AI” by Daphné Richemond-Barak and Larry Lewis
- Rules of Engagement as a Regulatory Framework for Military Artificial Intelligence by Tobias Vestner
In the context of the first Responsible AI in the Military Domain (REAIM) Summit in The Hague in 2023, the GCSP team published the analysis Globalizing Responsible AI in the Military Domain by the REAIM Summit in Just Security (by Tobias Vestner and Juliette François-Blouin).
During the second session of 2023 of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems at the UN Office in Geneva, the Research Society of International Law interviewed Nicolò Borgesano on the legal, technical and ethical challenges posed by the military use of AI systems.
Events
Legal Workshop
The GCSP held a workshop on the Legal Implications of the Ethical Principles on Military AI on the 13th and 14th of June 2022. Nineteen experts from Europe, Oceania, and the Americas, including Chris Jenks, Daniel Trusilo, Nehal Bhuta, Netta Goussac, Theodore Christakis, Liisa Janssens, as well as representatives from NATO, the EU, the ICRC, and several states, sought to identify legal touchpoints between AI principles and public international law, international humanitarian law, and international human rights law.

Technical Workshop
The GCSP held a second workshop on the Technical Challenges of the Ethical Principles on Military AI on the 29th and 30th of November 2022. The workshop brought together technical experts from NATO, governments, industry, and academia, including Wolfgang Koch, Alka Patel, Ariel Conn, Christine Boshuijzen-van Burken, and Florian Keisinger. Through various use cases, the twenty-five experts delved into how the principles can be translated into practice at the various stages of the design, development, and use of AI systems by defence forces.

Working Breakfast at NATO
The GCSP together with the Mission of Switzerland organised a working breakfast at NATO on 15 December 2022 to discuss insights into the current ethical, technical, and legal challenges to the implementation of AI principles, building on observations gained from the GCSP workshops. Contributions were made by Ambassador Thomas Greminger, GCSP Director, Ambassador Philippe Brandt, Head of the Mission of Switzerland to NATO, James Appathurai, Deputy Assistant Secretary-General for Emerging Security Challenges Division at NATO, Dirk Pulkowski, Deputy Legal Advisor of the NATO Office of Legal Affairs, and Bartjan Wetger, Deputy Permanent Representative of the Kingdom of the Netherlands to NATO.

REAIM Summit 2023
A GCSP delegation participated in the Responsible AI in the Military Domain (REAIM) Summit on 15 and 16 February 2023. Hosted by the Netherlands together with South Korea, it was the first conference to launch an international and multi-stakeholder debate on responsible AI in the military domain. The GCSP organised a panel discussion on swarming and the future of warfare, with exchanges between Jean-Marc Rickli, Tobias Vestner, Sandra Scott-Hayward, Ricardo Chavarriaga, Zachary Kallenborn, Lydia Kostopoulos, and Anja Kaspersen.

Human Rights Council Advisory Committee
Pursuant to Resolution 51/22, the Human Rights Council mandated the Advisory Committee to examine the human rights implications of new and emerging technologies in the military domain. Amongst other experts, Tobias Vestner was invited to offer his insights to the expert body based on the GCSP’s current work. He reflected upon the national and international policies and strategies currently tackling responsible AI, specifically from the standpoint of international law and human rights law.

REAIM Summit 2024
A GCSP delegation participated in the Summit on Responsible AI in the Military Domain (REAIM) in Seoul in September 2024 to discuss the responsible development, deployment, and use of artificial intelligence (AI) for defence and military purposes. During this summit, the GCSP and the George C. Marshall Center for Security Studies (GCMC) jointly organized a workshop on the security and military consequences of AI convergence with other emerging technologies such as synthetic biology and neurotechnologies. GCSP’s polymath fellows addressed the topic of AI convergence during a panel discussion. The team also wrote a paper in preparation for the summit.

The GCSP delivered a statement during the first session of the 2024 Group of Governmental Experts on emerging technologies in the area of Lethal Autonomous Weapons Systems. This contribution emphasized the importance of clarifying human control requirements to ensure the compliance of autonomous weapon systems with International Humanitarian Law.
