Legal Implications of Bias Mitigation

By Juliette François-Blouin, Programme Officer with the Security and Law Programme at the Geneva Centre for Security Policy (GCSP)

The presence of bias in artificial intelligence (AI) systems has been widely discussed in recent years, following reports of racial and gender bias in applications intended for civilian use. In parallel, conversations have taken place on how bias could manifest in intelligent systems intended for military use. This concern has been reflected in several of the ethical guidelines and principles for responsible AI published by States and international organizations. Many of these publications include the principle of bias mitigation, which calls for minimizing unintended bias in the development and use of AI applications.

Although bias is primarily a technological issue, it has wider consequences, including in the legal sphere. This post explores the legal implications of bias in AI systems used by armed forces. Specifically, it analyzes the possible ramifications of such bias for the international humanitarian law (IHL) principles of distinction, proportionality and precautions in attack.


Disclaimer: This publication is part of a symposium organized by the GCSP in partnership with the Articles of War blog of the Lieber Institute. The views, information and opinions expressed in this publication are the author’s own and do not necessarily reflect those of the GCSP or the members of its Foundation Council. The GCSP is not responsible for the accuracy of the information.