A Security Perspective on U.S. National Labs’ AI Partnerships
The Trump administration is investing in the Department of Energy’s (DOE) National Laboratories to accelerate artificial intelligence (AI)-enabled science. Through the 2025 AI Action Plan and the One Big Beautiful Bill Act, Washington has formalized a push for National Labs to collaborate with frontier AI companies. These labs, best known for developing the atomic bomb, remain central to the U.S. nuclear deterrent. Entangling AI companies with institutions that have long defined the nation’s strategic technological edge carries significant national security implications. Close public-private collaboration in this area creates opportunities to strengthen defense R&D and raises the cost of intellectual property theft; however, it also widens the attack surface for cyberattacks, exposing vulnerabilities that uneven security practices could leave open. Safeguarding these partnerships will require that all involved actors prioritize cybersecurity.
The Logic Behind AI Partnerships
The AI Action Plan lays out the strategic logic behind the partnerships. It presents AI as both a national asset and a scientific accelerator, and argues that research infrastructure must evolve to support it. Scaling scientific experimentation requires building large, high-quality datasets that AI systems can readily use. These capabilities will generate new discoveries and original data, which can feed directly back into AI development. While the Action Plan notes that this priority will be expanded in the forthcoming National AI R&D Strategic Plan, it is already backed by funding: the One Big Beautiful Bill Act allocates the DOE $150 million through 2026 to mobilize its National Labs in partnership with industry to create what the bill calls “transformational artificial intelligence models.”
National Labs offer two critical assets on this front. First, they hold decades of archived scientific data—much of it raw or unstructured—that can be optimized for training AI models. The administration’s proposed American Science Cloud would make this data more accessible across government, academia, and industry. Once refined, these datasets can train AI systems that streamline research workflows. That integration is already underway at Lawrence Livermore, Los Alamos, and Sandia, three National Nuclear Security Administration (NNSA) labs. These efforts are anchored in the DOE's Frontiers in AI for Science, Security, and Technology initiative, and further energized by the Trump administration’s strategic AI policy priorities.
Second, they can generate entirely new data through complex, high-resolution 3D physical simulations. National Lab supercomputers like Venado and El Capitan can model advanced nuclear reactions, energy systems, material degradation, weapons effects, and other phenomena at scales unmatched elsewhere. The value lies not only in the precision of these outputs but also in the fact that they create original knowledge: measurements that have never existed before. While AI can help design these simulations, running them still requires the labs’ specialized compute. The result is a feedback loop: AI guides the design of experiments, supercomputers execute them, and the resulting high-fidelity data feeds back into model training.
This recursive capability matters because today’s frontier models face bottlenecks in high-quality training data. Scaling laws show that performance improves with more compute and richer datasets, yet firms are hitting limits on what can be scraped from the internet or generated synthetically. Nobel laureate David Baker, reflecting on his breakthroughs in AI-driven protein design, credited much of his success to the exceptional quality of biology databases: “It's not just the methods, it's the data. And there aren't so many places where we have that kind of data.” National Labs are positioned to generate the kinds of advanced, domain-specific datasets Baker described in fields beyond biology. This is part of the logic behind Sec. 50404 of the One Big Beautiful Bill Act, which calls for “self-improving AI models,” in which novel discoveries made by the labs train models that then help design future experiments. Just as AI can support science, science can support AI.
National Security Implications
Unlike past transformational technologies such as the internet, microprocessors, nuclear weapons, and space exploration, which were developed heavily under the Department of Defense, frontier AI has been driven by private firms. Its pipeline has been shaped more by commercial incentives than by strategic interests. While the goal of these policies is to accelerate scientific progress, the entanglement of these companies with National Labs has implications for national security. It raises the stakes for cyber-enabled intellectual property theft, potentially deterring some adversaries while heightening escalation risks if an attack occurs. It broadens the attack surface as sensitive data moves across a wider network of actors with uneven security practices. And it creates new opportunities to integrate frontier AI directly into U.S. defense and deterrence capabilities.
The fusion of private frontier AI development with NNSA infrastructure changes the strategic calculus for cyber operations. While classified nuclear weapons systems remain on air-gapped, closed networks, they are not immune to cyberattacks. Additionally, the broader unclassified research and support environments around them are not hermetically sealed. AI training pipelines, scientific modeling clusters, data staging areas, and cloud-based collaboration tools often operate in less restricted spaces. Co-locating these with frontier AI development creates gray zones where an intrusion aimed at commercial AI intellectual property could also yield intelligence about nuclear-adjacent capabilities.
Adversarial actors are known to use cyber intrusions to steal intellectual property. The risk is especially acute for AI. The U.S. holds a decisive advantage in compute infrastructure, making unreleased model weights and training data high-value targets. For adversaries lagging behind, stealing this kind of intellectual property can accelerate their own model development. FBI officials warn that China and other state actors are actively targeting U.S. AI firms, and OpenAI has reported breaches of its internal systems aimed at accessing sensitive model design details.
Even if the attacker’s goal is purely commercial espionage, activity inside these dual-use environments could plausibly be read as reconnaissance of nuclear systems. Cyber operations already unfold in a messy space where domain boundaries blur and attribution is hard to pin down. If an intrusion, however limited, appears to intersect with nuclear stewardship systems, the proportionality threshold for response shifts dramatically. What might otherwise warrant a measured response—or no response at all due to attribution concerns—could instead be judged as a significant provocation, triggering retaliation without some of the usual brakes in the decision-making process. That prospect forces adversaries to factor higher potential costs into their cyber risk calculus.
As the costs of penetrating hardened lab systems rise, adversaries have greater incentive to target softer links in the chain. The administration’s plan envisions extensive data sharing; the proposed American Science Cloud would make robust scientific data interoperable across government, industry, and academia. This coordination multiplies potential access points for cyber intrusion. Each new institutional partner or cloud integration adds another vector for exploitation, creating opportunities for adversaries to exfiltrate valuable data held by industry without ever breaching NNSA infrastructure directly.
State-linked groups have already demonstrated the value of exploiting these weak points. PRC-affiliated actors like Silk Typhoon have used stolen API keys and compromised cloud accounts to harvest sensitive data from high-value targets. In a federated ecosystem where datasets flow through actors with uneven security practices and oversight, a breach at one node can cascade into loss of strategically significant AI assets. The exact architecture of the American Science Cloud has not yet been developed, but its design must treat security as a core requirement on par with interoperability.
While integrating frontier AI into National Labs introduces new risks, it also creates opportunities for advancing U.S. defense capabilities. Co-location allows these models to be trained and tested in high-performance computing environments designed for national security missions. In practice, this means AI can be embedded into simulations that reveal vulnerabilities in the nuclear stockpile before they escalate into failures, or into defensive systems that detect and counter cyber intrusions in real time.
These advantages are already in play. At Sandia, researchers are developing AI “killer apps” for threat detection and decision support in nuclear command-and-control environments. Los Alamos runs OpenAI’s reasoning models on the Venado supercomputer to accelerate national security research. Lawrence Livermore’s Global Security Directorate applies advanced computing and sensing to track proliferation activity and harden critical infrastructure. In each case, embedding AI in these missions ensures that the gains feed directly into U.S. deterrence and response strategies.
Recommendations
Cybersecurity must be treated as a defining feature of these partnerships. National Cyber Director Sean Cairncross recently outlined the administration’s immediate cyber priorities, which include streamlining federal regulation for companies and promoting secure-by-design practices. To ensure these ambitions extend to strategic initiatives like the AI Action Plan and the American Science Cloud, lawmakers should:
- Reauthorize CISA 2015. Congress must move to renew the Cybersecurity Information Sharing Act of 2015 before it expires. The act underpins threat information sharing between federal agencies, including the National Labs, and private entities, providing liability protections that encourage participation. Allowing it to lapse would weaken visibility into threats targeting the nation’s most valuable data.
- Classify frontier AI as critical infrastructure. U.S. policymakers should formally designate frontier AI companies under the Cybersecurity and Infrastructure Security Agency’s critical infrastructure framework. This step, already urged by CSIS colleagues, would extend federal threat intelligence sharing, rapid incident response, and other protections commensurate with persistent state-backed targeting. Given the growing integration of AI development with nuclear stewardship and other defense missions, protecting AI intellectual property is a matter of strategic infrastructure security.
- Secure the American Science Cloud by design. Membership in the Science Cloud should require rigorous approval and identity verification before granting access to lab-generated datasets, with the most sensitive information restricted to the labs themselves. Building security and privacy in from the start will be essential to prevent interoperability from becoming a liability.
Partnerships between National Labs and frontier AI firms will be central to sustaining America’s technological edge, but public plans have so far defined only broad strategic ambitions. The next step is to translate those ambitions into clear operating frameworks with cybersecurity built into every layer by design.
