AI and the Future of Conflict
Research on how AI systems are reshaping conflict dynamics—from military operations and strategic decision-making to diplomacy, mediation, and peacebuilding—alongside practical methods to evaluate these tools before they are trusted in high-stakes settings.
Artificial intelligence, ranging from machine learning and large language models to agentic systems and autonomous platforms, is changing how power is generated and applied. It is not only altering tactics on the battlefield, but also the strategic calculus behind national decisions, alliance behavior, escalation management, and crisis response. Futures Lab research connects emerging AI capabilities to real operational and policy workflows, with an emphasis on what is changing, what is not, and what risks grow as AI becomes embedded in security institutions.
Futures Lab's research on AI is organized around three lines of effort:
Across militaries and governments, AI is reshaping operational advantage and institutional design. This line of effort examines how AI-enabled tools affect command and staff processes, force planning, information operations, and professional military education, alongside how AI-driven wargaming and decision support can shape how leaders perceive options, risks, and time. It also assesses how agentic AI may shift organizational structures in defense and foreign policy institutions, and how these shifts affect speed, accountability, and control in high-pressure environments.
Trust in AI starts with measurement. In national security and crisis settings, where “ground truth” is contested, data are sensitive, and outcomes depend on context, standard evaluation approaches are often insufficient. This line of effort advances benchmarking methods that stress-test AI systems for reliability, robustness, bias, and risk under realistic conditions (e.g., uncertainty, missing information, adversarial pressure, and degraded communications). It includes assessing, before integration into planning or advisory workflows, whether models exhibit decision-relevant failure modes tied to escalation dynamics, such as overconfidence, misreading adversary intent, premature use-of-force recommendations, or instability under time pressure. The goal is to ensure AI systems increase performance and resilience rather than introduce new vulnerabilities.
Demand for mediation and peacebuilding is rising, while practitioners face persistent constraints: information overload, fragmented data, time pressure, and the need to maintain trust and confidentiality. This line of effort explores how AI can support mediators and diplomats with bounded, practical capabilities—organizing large document sets, mapping stakeholders and issues, tracking proposals across drafts, and generating scenarios that expand the negotiation space—without replacing human judgment or relationship-building. Emphasis is placed on secure use, careful deployment, and practitioner-informed evaluation to ensure tools are usable in real mediation contexts.
Contact Information
- Jose M. Macias III
- Associate Fellow, Futures Lab, Defense and Security Department
- JMacias@csis.org
All AI and the Future of Conflict Content
AI and Grand Strategy: The Case for Restraint
Report by Erica D. Lonergan and Benjamin Jensen — January 28, 2026
The U.S. Army and a Second Manhattan Project for AI
Commentary by Jake S. Kwon and Benjamin Jensen — November 21, 2025
Will Trump’s Peace Plan for Ukraine Succeed?
Critical Questions by Benjamin Jensen and Yasir Atalan — November 21, 2025
What Could a Trump Peace Plan in Ukraine Look Like?
Critical Questions by Benjamin Jensen and Yasir Atalan — October 17, 2025
What Does Lethality Really Mean in Modern War?
Critical Questions by Benjamin Jensen — October 2, 2025
Channeling Augustus: On Agentic Offensive Information Operations
Commentary by Erol Yayboke — September 19, 2025
Benchmarking as a Path to International AI Governance
Commentary by Ian Reynolds — August 5, 2025
Why Tocqueville Would Embrace AI Benchmarking: Charting a Path for the Future of Democracy in the Age of Artificial Intelligence
Report by Benjamin Jensen and Ian Reynolds — July 28, 2025
Toward Reliable AI, from the Bottom Up
Commentary by Ian Reynolds — July 28, 2025
Deterrence Runs on Rare Earths
Commentary by Jake Kwon and Benjamin Jensen — July 24, 2025