The International Security and Military Implications of Agentic AI
Executive summary
Agentic artificial intelligence (AI) is an emerging paradigm that builds on recent advances in large language models to enable agents to autonomously pursue complex goals and interact with one another under minimal human supervision. Through increasingly autonomous agents, agentic AI is shifting AI from a passive, supportive tool towards an active executor that can increasingly define and take courses of action on behalf of a human user. If implemented successfully, agentic AI will expand the scope, scale, and complexity of potential AI use cases, including in domains where automation has traditionally proven difficult, while transforming human–machine collaboration and delegation. However, although agent performance appears to be improving rapidly, many real-world agentic AI applications remain experimental, and agents have so far proven limited in their effectiveness. It is therefore important to look beyond the surrounding hype and critically assess agentic AI’s current state and anticipated capabilities. This Geneva Paper aims to provide a better understanding of these issues by mapping the rapidly evolving landscape of agentic AI, from its technical foundations to its possible strategic implications.
Agentic AI has inherent dual-use potential that is bound to transform commercial and military applications alike. However, uncertainties remain around its future capabilities and pace of development, as well as the technical limitations and other barriers that could slow the deployment of agents and hinder their broader adoption. In the military sphere, agentic AI can act as an analytical enabler, force multiplier, and disruptor, offering potential benefits for both offensive and defensive action. Its development is giving rise to the concept of agentic warfare, in which autonomous agents could provide battlefield advantages by playing increasingly important roles across military functions such as intelligence gathering and analysis, planning, logistics, and decision-making. The use of agentic AI both amplifies challenges already present in existing AI systems and introduces novel risks and vulnerabilities that are particularly pronounced in high-stakes settings such as the military domain. Before the technology is deployed at scale, it is crucial to assess its trajectory and potential implications for the future of warfare.
This forward-looking analysis explores how the development of agentic AI is unfolding amid commercial and military adoption races and intensifying geopolitical competition. The technology and its integration into commercial and military systems carry significant implications for international security and strategic stability, and raise questions about its proliferation and misuse. Despite improving autonomous capabilities, promising use cases, and various anticipated benefits, increasingly autonomous agents also pose significant societal, security, legal, and ethical risks that, if left unaddressed, may threaten the successful, effective, and sustainable implementation of agentic AI. Actors seeking to leverage the potential of AI agents must therefore strike a balance between autonomy and security. The technology also demands regulatory attention and risk mitigation while it is still in its early stages and before it is deployed at scale.
Disclaimer: The views, information and opinions expressed in this publication are the author’s own and do not necessarily reflect those of the GCSP or the members of its Foundation Council. The GCSP is not responsible for the accuracy of the information.
