Harnessing Edge AI to Strengthen National Security
Artificial intelligence (AI) is often imagined at a grand scale. Policy debates focus on ever-larger models being built in data centers that demand more compute, energy, and talent. That focus is justified: frontier systems are central to national security and geopolitical competition. However, alongside the race to build more powerful models is another arena of progress defined not by scale, but by application.
Edge AI brings inference directly onto the devices and systems where data is generated. By processing information locally, it can act in real time, preserve bandwidth, and reduce exposure to contested networks. These qualities make it a strategic technology for U.S. national security. This article examines various ways in which edge AI can be harnessed to support frontline units and protect critical infrastructure, though its potential applications extend well beyond those explored here.
On the battlefield, edge AI lets forward units process signals faster and continue operating even when communications links are jammed or cut. It also enables autonomous teams, such as drone swarms, to coordinate maneuvers without relying on distant servers. In critical infrastructure, it helps optimize power grids and water systems and provides safeguards to ensure they continue to run safely even when central networks are disrupted by cyberattacks.
In both domains, the technology strengthens resilience by pushing decision-making closer to the edge. To turn edge AI into a lasting strategic asset, the United States must establish clearer frameworks for deploying these systems, particularly within the defense sector. Any such framework must confront the ethical and accountability challenges inherent in autonomous warfighting.
Processing at the Edge
The foundation of edge AI is edge computing, the practice of capturing and processing data as close to its source as possible. Instead of transmitting raw information back to the cloud for analysis, edge systems analyze data on-site through small local servers or embedded processors built directly into devices.
Edge AI takes this model a step further by running artificial intelligence algorithms directly on local hardware. In addition to processing data close to its source, edge AI enables devices to interpret that data and act on it. A drone can classify objects in its camera feed and adjust its flight path, a factory sensor can detect anomalies and shut down a machine, and a wearable can flag irregular heart rhythms, all without waiting on cloud connectivity.
The technical enabler of edge AI is the difference between training and inference. Training a model requires massive datasets and compute power and, as such, takes place in centralized data centers. Once trained, a model is deployed for inference, the process of applying learned patterns to new data in real time. Large general-purpose models such as ChatGPT run in the cloud because they demand enormous compute resources to serve millions of users simultaneously. But smaller, domain-specific models can be compressed and optimized to run on edge processors, such as NVIDIA’s Jetson or Google’s Coral, enabling inference directly on Internet of Things (IoT) devices.
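As a rough illustration of what inference at the edge looks like in practice, the sketch below loads a small, pre-trained classifier with ONNX Runtime and runs it on a single camera frame entirely on local hardware. The model file, class labels, and input shape are placeholders, not a description of any fielded system.

```python
# Minimal sketch of on-device inference with a compressed model.
# Assumes a small image classifier was trained in the cloud, quantized,
# and exported to "drone_classifier.onnx" -- the file name and labels
# here are illustrative, not from any specific deployment.
import numpy as np
import onnxruntime as ort

LABELS = ["vehicle", "person", "uav", "clutter"]  # hypothetical classes

session = ort.InferenceSession("drone_classifier.onnx")  # runs on the local processor
input_name = session.get_inputs()[0].name

def classify(frame: np.ndarray) -> str:
    """Run inference locally on a preprocessed camera frame (1x3xHxW, float32)."""
    logits = session.run(None, {input_name: frame})[0]
    return LABELS[int(np.argmax(logits))]

# A dummy 224x224 frame stands in for the drone's camera feed.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(classify(frame))  # decision made on the device, no cloud round trip
```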
Running inference locally reduces the need for long-distance data transfers. With the number of connected IoT devices projected to exceed 40 billion by 2030, and with 75% of enterprise data expected to be created and processed outside of traditional data centers by the end of this year, network demands will only grow. If all of this data were routed to the cloud, heavy traffic would saturate bandwidth, slowing transfers and driving up costs. Edge deployments can alleviate this burden by transmitting only essential insights upstream.
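To make the bandwidth point concrete, a minimal sketch of edge-side filtering follows: the device compares each detection against a confidence threshold and sends a small alert upstream only when something crosses it, rather than streaming raw frames. The threshold, message format, and frame size are illustrative assumptions.

```python
# Illustrative sketch of how edge filtering trims upstream traffic.
# The detection format, threshold, and frame size are hypothetical.
import json

RAW_FRAME_BYTES = 1280 * 720 * 3   # one uncompressed 720p camera frame
ALERT_THRESHOLD = 0.9              # confidence required to report upstream

def edge_filter(detections: list[dict]) -> bytes | None:
    """Forward a compact alert only when a detection crosses the threshold."""
    alerts = [d for d in detections if d["confidence"] >= ALERT_THRESHOLD]
    if not alerts:
        return None                # nothing worth sending; raw data stays local
    return json.dumps({"alerts": alerts}).encode()

payload = edge_filter([{"label": "uav", "confidence": 0.97}])
print(f"raw frame: {RAW_FRAME_BYTES:,} bytes; upstream alert: {len(payload)} bytes")
```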
Beyond structural efficiency, edge AI delivers gains in speed and security. Running inference locally minimizes latency, which is critical in decision-making contexts where milliseconds matter. It also limits the exposure of sensitive information by keeping raw data isolated or contained within closed networks rather than transmitting it across the internet. These qualities make edge AI a strategic capability with relevance for U.S. national security.
Supporting Frontline Units
Modern militaries depend on a range of networks. In U.S. operations, forward units collect data from radars, electronic warfare kits, drones, and satellites, and send it to nearby command posts. A drone relying on GPS for positioning might detect movement and, seconds later, stream video to a command node dozens of miles away. That node runs algorithms to detect threats and relays targeting data down to artillery units. Modern doctrine often centers on achieving a seamless "sensor-to-shooter" kill chain, in which information moves rapidly and securely across domains. All of that data rides over the electromagnetic spectrum, shifting between frequency bands depending on the environment.
Adversaries can disrupt those networks. In Ukraine, both sides have jammed communications links, degrading command and control. Electronic warfare units target radio and satellite transmissions and often sever the connection between frontline units and their headquarters. They also attack positioning. Russian forces have blanketed areas with GPS jamming, disrupting the navigation systems that drones and precision munitions depend on. The result is a battlespace where situational awareness, navigation, and command can all be interrupted by electromagnetic interference.
The same emissions that enable communication also expose units to attack. Large command posts with antennas, generators, and vehicles stand out on a sensor-saturated battlefield. Ukrainian strikes on Russian headquarters at Chornobaivka showed how quickly command nodes can be located and destroyed once they reveal themselves. Military officials now describe Ukraine as a “graveyard for command and control,” where the signals that enable coordination also expose command posts.
Edge AI applications can help blunt these vulnerabilities. In increasingly disconnected, disrupted, intermittent, or limited (DDIL) battlefield environments, waiting for instructions from a distant command post slows decisions and exposes units to risk. Running inference on forward systems allows them to act locally when links are cut. A radar that instantly classifies an incoming drone, or a vehicle sensor that flags an ambush pattern, gives soldiers information they can act on without waiting for higher command. These capabilities reduce dependence on data flows, limiting both the vulnerability of networks and the visibility of command nodes.
Edge AI also compresses decision time even when networks remain intact. Traditional command links add latency, with information bouncing back and forth before action is taken. Processing data at the edge shortens the OODA loop (the cycle of observing, orienting, deciding, and acting) by turning raw sensor input into usable insights on the spot. Near-instantaneous insights sharpen what frontline units see and how quickly they can act on it. When split-second decisions matter, that speed can be decisive.
Finally, edge AI enables autonomous and semi-autonomous teaming. In Ukraine, drones are already flying with embedded processors running autonomy software directly on the aircraft rather than relying on distant servers. Their systems, powered by what one U.S. developer calls a “hivemind,” let multiple aircraft coordinate routes and strike decisions locally, even when communications links are denied. China is pursuing similar ambitions. Its defense industry has staged large-scale swarm demonstrations, and military writings emphasize collaborative autonomy as a pillar of future air dominance. Even if the extent of edge processing in these swarms is unclear, the investment signals that both Washington and Beijing see drone teaming as central to the next phase of competition.
Edge AI does not replace centralized command. Rather, it adds a resilience layer: tactical execution is more robust when devices in the field can operate even without a link back to headquarters. The most practical path forward is a hybrid model that pairs centralized decision-making with enough frontline autonomy to preserve effectiveness when communications are contested.
Protecting Critical Infrastructure
The vulnerabilities soldiers face on contested networks have close parallels in domestic systems. The networks underpinning U.S. critical infrastructure are prime targets for cyber disruption. In 2024, U.S. officials revealed that the Chinese state-sponsored group Volt Typhoon had infiltrated utilities, water systems, and transportation networks, maintaining long-term access in preparation for sabotage.
Critical infrastructure is vulnerable because its operational systems are not uniformly sealed. Many facilities run on local, closed networks, but growing demands for remote monitoring, vendor maintenance, and cloud oversight have eroded that isolation. These external links improve efficiency, but also create points of exposure. Attackers often begin in information technology networks that handle business functions such as email, where phishing and credential theft are easier. From there, they pivot into the operational technology systems that control physical processes at those facilities. Once inside these environments, adversaries "live off the land," using legitimate administrative tools to persist and spread.
Cybersecurity experts have advocated for the use of digital twins to harden U.S. infrastructure. A digital twin is a virtual replica of a physical system, such as a power grid or water plant, that mirrors its behavior using live sensor data. These models establish baselines for normal operations, making anomalies easier to spot. They also allow operators to simulate cyberattacks and cascading failures, exposing the weak points whose protection would prevent the greatest disruption. Edge AI can complement this approach by preprocessing the flood of sensor data that feeds the twin. Local intelligence ensures the model reflects conditions on the ground without overwhelming bandwidth or storage, while the twin provides the broader picture for anticipating and preparing for attacks.
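One way edge AI can keep a digital twin current without flooding the network is to summarize readings locally and forward only compact window statistics. The sketch below assumes a hypothetical send_to_twin function and a one-minute window; both are illustrative.

```python
# Sketch of edge-side preprocessing for a digital twin: instead of streaming
# every raw reading, the device forwards compact per-window summaries.
# Window size, fields, and the send function are assumptions for illustration.
from statistics import mean

WINDOW = 60  # readings per summary (e.g., one minute of 1 Hz sensor data)
buffer: list[float] = []

def summarize(readings: list[float]) -> dict:
    """Collapse a window of raw readings into a compact update for the twin."""
    return {"mean": mean(readings), "min": min(readings),
            "max": max(readings), "count": len(readings)}

def on_reading(value: float, send_to_twin) -> None:
    """Called for every raw sensor reading on the local device."""
    buffer.append(value)
    if len(buffer) >= WINDOW:
        send_to_twin(summarize(buffer))  # one small message instead of 60 raw ones
        buffer.clear()
```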
Edge AI can also strengthen resilience by giving facilities the ability to act locally when networks are cut or compromised. By running inference directly on-site, facilities can make certain safety decisions without relying on central servers or cloud connections. A substation controller could trigger a protective shutdown, or a water-plant sensor array could adjust valves to stabilize pressure, even if external networks are compromised. This local autonomy is not without risks, but in a crisis, the ability for field devices to function as independent islands is a critical safeguard. It ensures that essential services can continue, or shut down safely, even when adversaries have infiltrated wider networks.
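A rough sketch of that kind of local safeguard follows: a controller checks each reading against a safe operating band and takes a protective action on its own, whether or not the upstream link is available. The pressure limits, valve call, and alert function are hypothetical placeholders, not any vendor's control logic.

```python
# Sketch of a local fail-safe rule of the kind described above.
# Limits, actuator calls, and the link check are illustrative placeholders.
SAFE_PRESSURE_PSI = (40.0, 80.0)  # acceptable operating band (assumed)

def control_step(pressure_psi: float, upstream_link_ok: bool,
                 open_relief_valve, alert_operators) -> None:
    """Runs on the local controller every cycle, with or without connectivity."""
    low, high = SAFE_PRESSURE_PSI
    if low <= pressure_psi <= high:
        return                        # normal operation, nothing to do
    open_relief_valve()               # local protective action comes first
    if upstream_link_ok:
        alert_operators(f"pressure out of band: {pressure_psi:.1f} psi")
    # With the link down, the protective action above still executes locally.
```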
Edge deployments are already emerging in energy and water systems. Energy utilities are using edge devices to manage distributed energy resources such as solar panels and batteries, running optimization locally rather than sending every decision back to the cloud. That speed is crucial for grid management, where balancing supply and demand requires near-instant responses to shifting loads and intermittent renewable generation.
In the water sector, operators have begun adding edge controllers that process sensor data on site, monitoring flow, pressure, or water quality and flagging anomalies more quickly than central systems alone. Catching those irregularities in real time can prevent small faults from compounding into service outages or contamination events. These early deployments are driven by efficiency, but they also show how edge AI can catch faults close to the source and reduce reliance on vulnerable networks.
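As a simplified example of that kind of on-site monitoring, the sketch below flags readings that deviate sharply from a rolling local baseline. The window length, warm-up period, and threshold are illustrative assumptions rather than parameters of any deployed system.

```python
# Minimal sketch of on-site anomaly flagging for a water-quality sensor,
# using a rolling z-score. All parameters are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

history: deque = deque(maxlen=288)  # ~24 hours of readings at 5-minute intervals

def is_anomalous(reading: float, threshold: float = 3.0) -> bool:
    """Flag readings far outside the recent local baseline."""
    flagged = False
    if len(history) >= 30:          # wait for enough history before scoring
        sigma = stdev(history)
        if sigma > 0:
            flagged = abs(reading - mean(history)) / sigma > threshold
    history.append(reading)
    return flagged
```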
As with battlefield systems, the goal should not be to abandon centralized networks, but to build a hybrid model. Edge AI adds a resilience layer, ensuring that when adversaries strike, core infrastructure can still function at a minimum level, protect itself from cascading failures, and recover more quickly.
Deployment, Not Just Development
The case for edge AI in defense is incomplete without reckoning with ethics and misuse. As edge AI applications grow more capable, particularly on the battlefield, the question of accountability looms large. Who is responsible when algorithms make life-or-death calls? Military doctrine has begun to address these questions. Phrases such as “human in the loop” appear frequently in defense debates, yet they are not codified in U.S. policy. The Department of Defense’s updated weapons autonomy directive does not require continuous human oversight. Instead, it mandates that commanders and operators exercise “appropriate levels of human judgment” over the use of force. The Pentagon has also moved to build ethical guardrails, such as traceability and governance requirements, into the AI system lifecycle. Clarifying these standards is critical as edge technologies become more deeply integrated into U.S. defense and security operations.
The battlefield and infrastructure examples outlined here are only a starting point. Edge AI could shape national security in countless other ways that will bring parallel risks, including expanding the attack surface, complicating oversight, and creating new tensions between centralized and decentralized systems. Some U.S. agencies have started to recognize this. The Cybersecurity and Infrastructure Security Agency (CISA), for instance, has outlined how AI deployment intersects with critical infrastructure security. But its ability to carry out that mission is already under strain: recent workforce cuts have driven out nearly a third of the agency’s staff, undermining the very capacity needed to enforce resilience plans.
Meanwhile, mainstream AI policy debates remain fixated on scaling frontier models, with far less attention to deployment. If the United States wants to secure technological advantage, it must not only build more advanced models, but also craft clear policies for how and where they are applied. That also means ensuring that the institutions tasked with defending critical systems have the resources to execute their mandates. The edge is where resilience will be built, and where restraint will need to be enforced. Neglecting it would mean building AI into the national security apparatus on fragile foundations.