A successful roundtable discussion on "The impact of autonomy and artificial intelligence in defense systems on future warfare and conflicts" was held at NATO, organised by the Geneva Centre for Security Policy and the Swiss Mission to NATO.
The discussion addressed the definition of autonomy and its possible uses in weapons systems; the legal, ethical and strategic implications of autonomous weapons; the malicious uses of AI; and the geopolitics of AI. Swiss Ambassador Christian Meuwly opened the roundtable, followed by presentations by Dr. Lydia Kostopoulos of ESMT Berlin and Vincent Boulanin of SIPRI, Stockholm.
BACKGROUND
Artificial Intelligence (AI), an emerging global challenge for defence and international security, is progressively permeating every aspect of everyday life. Its impact spans a very broad spectrum of international security and defence affairs, ranging from cyber-attacks, counterterrorism, weapons systems, and command and control to deterrence and international stability.
Although AI applications are still in their infancy, they are expanding quickly, and the technology bears the seeds of outstanding possibilities for the progress of humanity, including towards building a safer and more secure world. Remarkable successes have already been achieved in fields such as speech recognition, image classification, autonomous vehicles, machine translation, locomotion, medical imaging analysis and chatbots. However, we should remain cognisant that this technology could be misused, with potentially very serious consequences for populations on a global scale.
There is a pressing need for an international agreement to regulate the military applications of AI. The international community therefore began addressing the weaponization of artificial intelligence in 2014, examining the impact of lethal autonomous weapons systems (LAWS) on international humanitarian law within the framework of the United Nations Convention on Certain Conventional Weapons (CCW). Since then, the UN has held several expert meetings, and a consensus appears to be emerging that any future weapons system must retain a meaningful degree of human control. However, there is no agreement on how to operationalize this concept.
Beyond the impact of LAWS on international humanitarian law, fundamental questions must be addressed about the impact of autonomy and the weaponization of artificial intelligence on the future of warfare, strategic stability and nuclear deterrence. Issues worth considering include the lowering of the threshold of conflict, unintended escalation, the interoperability of Allied and Partner nations, and the potential of AI to mitigate the impacts of armed conflicts, prevent terrorist attacks, and serve peaceful applications in conflict prevention and disaster relief.
The PfPC working group on emerging security challenges, co-chaired by Dr. Jean-Marc Rickli, will organize a workshop on this topic, though not before the end of 2019 or in 2020.