Are Peace and Security Leaders Ready for AI?
Artificial intelligence (AI) is reshaping the peace and security sector amid a fracturing multilateral order. In this rapidly changing context, leadership is emerging as a decisive factor in determining whether this technology’s promise is realised and its risks contained. What is required to lead effectively in an AI-accelerated world? Geneva’s hosting of the 2027 AI Summit will provide a significant opportunity to bring greater clarity to this increasingly urgent question.
Few developments are as unsettling as AI, yet few hold such promise. What kind of leadership is required to harness the benefits and mitigate the risks of a technology that is so rapidly evolving and so hugely disruptive? Nowhere is this question more consequential than in the realm of peace and security, where leadership failures can quickly translate into instability, rapidly escalating crises, and even existential harm.
For some observers, “the challenges of tomorrow demand a new type of leader”, as Silicon Valley strategist Dex Hunter-Torricke put it in October 2025 at the 6th AI Policy Summit in Zürich. But what precisely this new leadership archetype should embody remains unsettled. The upcoming 2027 AI Summit in Geneva is well positioned to advance this debate constructively. While AI’s impact on corporate management dominates current discussions, its implications for leadership in peace and security contexts warrant closer scrutiny.
Rethinking leadership for the AI age
Relentless as technological progress may be, leaders can adapt to harness its potential. Although no single blueprint exists for navigating widely diverse, AI-saturated peace and security environments, a set of leadership competencies is emerging that must be strengthened – and at times reimagined – across the cognitive, interpersonal, cultural, and strategic domains.
- Cognitive autonomy
As escalating tensions between the United States, Israel, and Iran usher in a “new era of bombing quicker than ‘the speed of thought’”, the spectre of hyper-acceleration extends far beyond the battlefield. In such an AI-accelerated world, automation carries a critical yet often-neglected cost: the gradual atrophy of independent human thinking through what is known as cognitive off-loading.
When individuals routinely delegate mental work to external devices, their critical thinking skills erode over time. As a result, AI hallucinations (i.e. confident, plausible-sounding, but false or fabricated information generated by AI models) and system errors may pass unnoticed, while leaders may grow less confident in challenging machine-generated judgements. A 2023 UNIDIR report on military AI cautions that if systems fail due to power loss or kinetic interference, operations may collapse not only because the technology is unavailable, but because personnel will no longer be trained to perform the complex cognitive tasks delegated to AI, nor accustomed to doing so.
Cognitive autonomy, therefore, is emerging as a hallmark of effective leadership in the age of AI. Rather than deferring to algorithmic outputs, leaders must continually act as discerning decision-makers, critically evaluating where to adopt AI, where to restrain it, and how to manage its risks. While sector-specific AI literacy is vital for leveraging tools like predictive conflict modelling or rapid situational awareness, it must be complemented by organisation-wide practices of questioning and, when necessary, overriding machine recommendations.
- Human connection
As AI reshapes everything from diplomacy to defence to daily teamwork, leadership that prioritises human connection remains indispensable. Recent studies, including 2025 research by MIT Sloan, reveal that human-AI teams frequently make worse decisions than either skilled humans or AI working alone. The missing link is primarily human-centred leadership that treats AI not as a substitute for human effort or connection, but as a catalyst for unleashing human potential.
Paradoxically, AI elevates the importance of human strengths like emotional intelligence even as it expands the technical demands placed on leaders. Meeting this dual imperative requires greater attention to employees’ needs, such as feelings of belonging, psychological safety, and shared purpose. Neglecting these needs risks fracturing teams, particularly when AI is introduced not as a passive tool but as an agentic team member: a digital colleague that can take independent action to achieve specific goals. In these hybrid environments, weakened human connection can foster isolation, algorithmic opacity can breed distrust, and misalignment between machine logic and human reasoning can trigger communication breakdowns.
For example, in peace dialogues, there is a tension between automation and human relational needs. However powerful, AI is largely unable to generate the trust or psychological safety required to promote mutual understanding and reconciliation among dialogue stakeholders. Moreover, AI-driven moderation or synthesis tools risk missing cultural subtleties or misconstruing the layered signalling typical of diplomatic texts. To prevent these blind spots from derailing negotiations, leaders must use their own emotional intelligence and cultural awareness to contextualise AI outputs.
- Strategic foresight
Once an organisation’s leadership understands the implications of AI use, it faces two critical questions: what should the organisation’s AI strategy be, and which policies will translate that strategy into secure practice? Deferring an answer invites what has been called “Shadow AI”; i.e. employees’ use of unauthorised AI tools. Unlike external cyber threats, Shadow AI is an internal organisational vulnerability; evidence suggests that it is widespread and carries severe risks to security, data privacy, and regulatory compliance.
For example, diplomats might be tempted to use unvetted AI for routine tasks like translation, but this jeopardises the strict confidentiality their work demands. Similarly, external AI service providers could aggregate prompts to deduce a state’s strategic priorities, transforming everyday queries into intelligence exposures.
Mitigating these risks demands a cohesive AI strategy translated into practice through clear policies. Yet, strategy is difficult to craft for a technology that is evolving as rapidly as AI, especially in the volatile peace and security domain. Static roadmaps will make organisations reactive rather than proactive, leaving them ill-prepared for tomorrow’s disruptions, many of which will stem from rapidly increasing AI implementation and use.
Foresight capabilities offer a way out of this conundrum. They will enable leadership to develop what a 2026 World Economic Forum report calls “living plans”: adaptive frameworks that anticipate technological and geopolitical shifts while remaining anchored in clear objectives and ethical principles. This, in turn, demands what has been called “ambidextrous leadership”; i.e. leadership capable of leveraging current AI capabilities while preparing for future transformations.
- Cultural readiness
Realising AI’s value requires more than strategy: it demands a cultural shift toward continuous learning and organisation-wide experimentation, with innovation becoming a shared responsibility. Because AI champions often emerge through practice rather than formal appointment, leadership must encourage experimentation at all organisational levels. By creating feedback loops that openly share lessons learned from AI experimentation, leaders can determine where to invest strategically, model the desired behaviours, and help those behaviours cascade throughout the organisation.
Storytelling can be a powerful lever for this cultural shift, provided that communication is reciprocal. When leaders pair clear expectations with receptivity to employee feedback, they build the psychological safety necessary for learning across hierarchies, disciplines, and generations.
In peacebuilding, such safe spaces will empower local staff to blend AI outputs with contextual knowledge, adapting technological solutions to on-the-ground realities. Only then can advanced tools – like sentiment analysis or topic modelling – reliably help policymakers and practitioners to identify emerging challenges and deliver context-sensitive responses.
Conclusion: on the road to the 2027 AI Summit
As the 2027 Geneva AI Summit approaches, one of Switzerland’s central aims is to equip stakeholders to harness AI effectively. Doing so will require stronger leadership capabilities, both to seize AI’s opportunities and navigate the challenges it presents.
Although much more work is needed to understand AI’s nuanced, sector-specific impacts, several leadership competencies are already emerging as broadly essential, even in a field as diverse as peace and security. In an AI-accelerated world, leadership must pair cognitive autonomy with sector-specific AI literacy and cultivate a culture of experimentation guided by strategic foresight. While technical proficiencies may quickly become obsolete, human-centred skills will remain the enduring anchor of effective leadership amid rapid and increasingly transformative technological change.
Disclaimer: The views, information and opinions expressed in this publication are the author’s/authors’ own and do not necessarily reflect those of the GCSP or the members of its Foundation Council. The GCSP is not responsible for the accuracy of the information.
