States and international organizations are increasingly adopting ethical principles on Artificial Intelligence (AI) for defence and military purposes. What do they mean? How can they be implemented? How do they interrelate with international law? The Geneva Process on AI Principles is an interdisciplinary process uniting leading experts and policymakers to explore and clarify these questions.
AI is increasingly developed and used for defence and military purposes, ranging from planning and logistics to targeting. These applications raise a series of ethical, operational, and legal questions. The appropriate degree of autonomy of machines, the risk of bias in systems, and the necessity for transparency are examples of unresolved issues regarding the military use of AI.
To tackle these challenges, states and international organizations have started to define ethical principles to guide the design, development, and use of AI for defence and military purposes. Key documents include the US Department of Defense Ethical Principles for Artificial Intelligence, the NATO AI Strategy, the EU Guidelines for Military and Non-Military Use of Artificial Intelligence, and the Guiding Principles adopted by the UN Group of Governmental Experts on LAWS.
Recurring principles throughout these documents include responsibility, equitability, traceability, reliability, governability, and lawfulness. Yet, these ethical principles on military AI remain few, vague, and heterogeneous. Consequently, there is a need for a better understanding of the principles' meaning, implementation, and legal ramifications. The Geneva Process on AI Principles narrows this gap.
The Geneva Process on AI Principles aims to increase the understanding of the emerging principles on AI for defence and military purposes. This should support states, international organizations, firms, and researchers in the analysis, design, and implementation of the ethical and responsible development and use of military AI, as well as in international cooperation to that end. Taking a broad approach regarding AI applications, legal branches, and other considerations, the process also aims to support ongoing international and national efforts on the regulation of AI.
To this end, the process explores and clarifies the emerging principles’ meaning, their operationalisation, and their legal implications. The process addresses four analytical and policy-related angles relevant to the development, use, and regulation of AI for defence and military purposes, namely legal, technical, ethical, and military perspectives. The process consists of research, expert consultations, and a series of workshops, as well as the creation of an international network of experts on military AI.
A Legal Workshop
The GCSP held a workshop on the Legal Implications of the Ethical Principles on Military AI on 13 and 14 June 2022. Nineteen experts from Europe, Oceania, and the Americas, including Chris Jenks, Daniel Trusilo, Nehal Bhuta, Netta Goussac, Theodore Christakis, and Liisa Janssens, as well as representatives from NATO, the EU, the ICRC, and several states, sought to identify legal touchpoints between AI principles and public international law, international humanitarian law, and international human rights law.
The Project Team
Tobias Vestner, Head of Research and Policy Advice Department and Head of Security and Law Programme
Tobias Vestner leads GCSP’s Research and Policy Advice Department as well as the Security and Law Programme. He oversees and manages GCSP’s analysis and advice activities as well as researches and teaches on the intersection between security policy and international law. Tobias Vestner regularly advises governments, international organizations, and private firms on global security and legal issues. He has published several books and articles as well as provided insights to various media outlets, including U.S. National Public Radio, NBC News, Neue Zürcher Zeitung, and RTS Geopolitis.
Juliette François-Blouin, Programme Officer, Security and Law Programme
Juliette François-Blouin is a Programme Officer within the Security and Law Programme. Her current work focuses on the legal implications of new technologies and the regulation of the use of artificial intelligence by armed forces. She also teaches on international security law. Prior to GCSP, she worked as a political and economic analyst for the U.S. Consulate in Montreal and contributed to the International Clinic for the Defense of Human Rights at the University of Quebec. Juliette François-Blouin holds a Master’s in international humanitarian law and human rights from the Geneva Academy as well as a Bachelor’s in international relations and international law from the University of Quebec.