What Self-Driving Car Operations Can Teach Us about Incorporating AI into Weapons Systems
06 February 2026
Key points
- The incorporation of AI into weapons will face reliability issues similar to those seen in self-driving cars, including hallucinations, poor handling of uncertainty, latent failure modes and planning failures.
- Generative AI and agentic AI can introduce uncontrolled non-determinism and lack reliable reasoning, spatial awareness and self-verification, making them highly unsuitable for weapons systems, where precision and predictability are critical.
- AI-enabled weapons represent the highest-risk category in terms of both safety-criticality and non-determinism. This risk profile demands rigorous governance and testing protocols rather than outright bans, given AI's widespread civilian applications.
- Lessons from self-driving cars show that real-world testing is essential to uncover latent failure modes. Exclusive reliance on simulation for weapons AI could lead to catastrophic oversights.
- To mitigate AI risks, governments and organisations must invest in physical AI test ranges and develop standardised evaluation protocols through global collaboration, ensuring transparency and accountability in military AI deployment.
Disclaimer: The views, information and opinions expressed in this publication are the author’s own and do not necessarily reflect those of the GCSP or the members of its Foundation Council. The GCSP is not responsible for the accuracy of the information.
