Experts React: Unpacking the Trump Administration’s Plan to Win the AI Race
Editor’s Note: Additional expert perspectives were added on July 29, 2025.
On July 23, 2025, President Donald Trump signed three executive orders (EOs) on artificial intelligence (AI). These orders came shortly after the release of the administration’s AI Action Plan and each focuses on one of three AI policy priorities: (1) building AI infrastructure, (2) diffusing U.S. AI technology globally, and (3) removing ideological bias from AI models.
The EOs and the AI Action Plan, which outlines over 100 recommendations for achieving U.S. global dominance of AI, mark the administration’s most detailed articulation of its AI policy agenda to date. In this Experts React, leading experts from CSIS share their analysis of the content and implications of these initiatives.
AI Action Plan: High Marks for Ambition, Question Marks on Execution | Navin Girishankar
White House Issues Twin Executive Orders Tying AI Leadership to U.S. National Security | Kirti Gupta
Coordinating Federal AI Tools for Strategic Advantage | Matt Pearl
Preparing Workers for AI Disruption: An Initial Assessment | Philip Luck
Two Out of Three Isn’t Bad | James Lewis
Powering Hyperscalers at Ratepayers’ Expense | Leslie Abrahams
Investing in AI-Enabled Science | Sujai Shivakumar
Utilizing the Development Finance Corporation to Support AI | Erin Murphy
AI Action Plan Raises Key Export Control Issues and Implementation Questions | Matthew S. Borman
Competing in Developing Countries | Noam Unger and Madeleine McLean
With Policy Lagging, Civil Society Must Confront the Real-World Errors of AI | Carol Kuntz
AI Action Plan: High Marks for Ambition, Question Marks on Execution
Navin Girishankar, President, Economic Security and Technology Department
The Trump Administration’s AI Action Plan sends a strong signal that the United States intends for leadership in artificial intelligence to serve as a source of national strength, economic self-sufficiency, and national security. The plan deserves high marks for its ambition, its full-stack focus, its emphasis on speed and scale, and its commitment to promote American AI globally.
Its scope is necessarily comprehensive. The AI stack is a composite of core elements of U.S. competitiveness—incentives for rapid innovation of frontier models and their application across government, basic research, and industry; the digital networks over which AI applications run; access to energy and skilled labor; protections against dual-use risks associated with key components such as advanced chips; and new alliances that will drive AI-related exports.
The plan, with its more than 90 actions spanning multiple departments and agencies, represents a high-conviction (and likely low-risk) bet on AI as a general-purpose technology. It is also a bet that deregulation (and, to a lesser extent, public-private partnerships) will deliver innovation with safety, productivity with job growth, and American AI dominance with enduring alliances.
There is good reason to expect early successes. The private sector, in particular hyperscalers, is primed with people, money, and innovative products, and a willingness to invest across components of the stack. There are also indications that state and local governments, universities, and private entities across sectors including healthcare, agriculture, logistics, and finance are poised to take advantage of what Microsoft Vice Chair and President Brad Smith calls the “golden opportunity for American AI.”
To meet its ambition, the plan will need to overcome five execution challenges.
First, the administration’s trade agenda—marked by tariff volatility—undermines stable access to critical components of the AI stack, such as semiconductors, which depend on global value chains. Similarly, recent changes in export controls on advanced chips—made without a full reassessment of national security risks—raise questions about consistency and predictability for U.S. firms; this is not addressed in the plan.
Second, while investments in a homegrown AI workforce are essential, the near-term surge in labor demand for data centers, fabs, and energy projects will outstrip domestic supply, requiring a fresh approach to immigration.
Third, it will be important to pay close attention to the distributional impacts of how AI infrastructure is built and how AI applications flow through to sectors and the economy. It is reasonable to expect that job displacement, which the plan acknowledges, could be significant and will need creative solutions to address.
Fourth, with respect to U.S. global leadership, the full stack AI export package and related initiatives are innovative, but the capabilities to deliver remain dispersed across agencies. Additionally, the financial solutions lack ambition—a much larger American AI stack fund with participation of the U.S. International Development Finance Corporation (DFC) and hyperscalers would be needed to meet the moment, given competition from China in emerging markets. There should also be an explicit U.S. company nexus on DFC projects and stronger World Bank alignment with U.S. AI standards for its projects in the Global South.
Finally, successful execution will require an uncommon level of strategic coordination—demanding a reimagined role for the National Security Council and National Economic Council to align departments and agencies, as well as public-private efforts, with urgency.
White House Issues Twin Executive Orders Tying AI Leadership to U.S. National Security
Dr. Kirti Gupta, Senior Adviser (Non-resident), Renewing American Innovation
In a pair of EOs published on July 23, the White House took decisive steps to cement the United States’ global leadership in AI, directly linking technological competitiveness with national security interests. The orders underscore the administration’s commitment to maintaining a strategic edge in emerging technologies while deepening collaboration with trusted international partners at a time when geopolitical considerations and trade barriers often muddy the waters for what is and isn’t possible for U.S. firms.
The first executive order, on Promoting the Export of the American AI Technology Stack, emphasizes the importance of global markets in sustaining U.S. AI leadership. Acknowledging that adversaries are advancing their own technologies, the EO sets forth a framework to expand the global reach of U.S.-origin AI systems and reduce international reliance on competitor nations. The order calls for proposals of “full stack” AI solutions—encompassing hardware, software, data, cybersecurity, and application layers—for which federal financial support will be made available. To facilitate this effort, the EO outlines a strategy involving multilateral and country-specific partnerships aimed at promoting pro-innovation regulatory environments, enhancing data and infrastructure standards, and removing some of the technical trade barriers that hinder the competitiveness of U.S. offerings.
The second executive order, on Accelerating Federal Permitting of Data Center Infrastructure, addresses a domestic bottleneck: the slow pace of permitting and construction of data centers that are vital to training and deploying AI models. In the race for technological leadership in AI, deploying data centers and meeting their energy requirements is the next critical pillar of domestic leadership, beyond the cutting-edge semiconductor AI chips such as GPUs and AI accelerators that have been the primary focus of the current and prior administrations in developing manufacturing incentives and limiting technology transfer to adversaries. The order prioritizes the use of federally owned land and resources and provides financial incentives—ranging from tax breaks to loans and offtake agreements—for projects meeting the threshold of “qualifying projects.” These projects must include at least $500 million in private capital commitments, reflecting a clear push towards public-private partnerships.
Together, the orders present a strategic blueprint to secure U.S. dominance in AI through both domestic infrastructure development and international technology diplomacy. While the themes are ambitious and well-aligned with long-term policy goals, much will depend on how the initiatives are implemented in practice. Execution—across funding, interagency coordination, and international engagement—remains the critical next chapter in realizing the promise of these directives.
Coordinating Federal AI Tools for Strategic Advantage
Matt Pearl, Director, Strategic Technologies Program
With its release of three EOs and an AI Action Plan on July 23, the Trump administration published a massive amount of text on how to advance the United States’ position in this key set of technologies. However, the primary motivating factor behind the administration’s desire for dominance—the People’s Republic of China—received only two relatively offhand mentions in the documents. Lest there be any doubt about the administration’s motivation, consider that it put great focus on advancing “a coordinated national effort to support the American AI industry by promoting the export of full-stack American AI technologies packages”—packages intended to serve as an alternative to the United States’ only true competitor in AI. Other policies, such as promoting “open-source and open-weight AI models” and bolstering critical infrastructure cybersecurity, are unfolding in the context of recent developments out of China, including the release of DeepSeek and the Salt Typhoon hacks.
As always, the success of the Trump administration’s goal will ultimately depend on implementation. Ensuring that all the federal government’s tools—diplomacy, procurement and direct spending, regulation and deregulation (depending on what is called for), intelligence and military—are reliably and effectively leveraged will depend on implementing the policies in these documents in a consistent way, and on the ability of the White House to communicate and coordinate effectively among various departments and agencies. In this regard, it matters less whether the administration uses the National Security Council (the body that has traditionally done this work) or another avenue. The key question is whether it will fully empower a savvy group of professionals in the White House to convene the agencies and arrive at decisions regarding the implementation of the AI Action Plan, and that there will be finality to those decisions so that industry, allies and partners, and other stakeholders can adjust accordingly. There is no doubt that China will adopt a consistent, measured, and coordinated approach—whether the United States stays in the AI game will depend on whether the federal government is able to do the same.
Preparing Workers for AI Disruption: An Initial Assessment
Philip Luck, Director, Economics Program and Scholl Chair in International Business
The workforce provisions in the recent AI Action Plan represent an acknowledgment of AI’s potential labor market disruptions—but they remain limited in scope relative to the scale of the challenge. They provide a foundation for future development, but historical experience with automation and trade shocks suggests that without more comprehensive, better-funded, and better-coordinated efforts, many workers are likely to face significant transitions.
The plan’s workforce elements demonstrate a multifaceted approach to preparing U.S. workers for an AI-driven economy. The Department of Labor (DOL) will establish dedicated funding streams for displaced workers, while new educational initiatives will clarify tax treatment for employer-provided AI literacy programs and expand early pipeline programs in middle and high schools. Perhaps most importantly, the creation of an AI Workforce Research Hub within the DOL signals a laudable commitment to evidence-based policy making, supporting ongoing research and scenario planning that will be essential for understanding AI’s labor market impacts in real time.
While these are constructive steps, they represent an initial response to a complex challenge. Given the potential scale of AI-driven disruption, more comprehensive intervention will likely be necessary. The focus on retraining is important, but the mixed track record of similar programs suggests the need for significant funding, as well as careful design and implementation. Policymakers also need a clearer understanding of which industries and occupations face the greatest risk—especially where impacts are geographically concentrated. This reflects lessons from past disruptions caused by trade and automation.
The current approach deserves recognition for proactively addressing workforce challenges rather than treating them as secondary to innovation policy. Yet the ultimate test will be whether these foundational investments can be scaled to match the magnitude of economic transformation that widespread AI adoption may bring. Success will require sustained political commitment and resources that extend well beyond any single administration’s tenure.
Two Out of Three Isn’t Bad
James Lewis, Senior Adviser (Non-resident), Economic Security and Technology Department
The Trump administration’s AI Action Plan should be welcomed. It recognizes that the terms of global competition have changed. Technology denial and export controls won’t preserve the U.S. lead, but building the global AI infrastructure puts U.S. technology at the center of the digital economic revolution. Domestically, NIMBYism (“Not in My Backyard”) is an increasing problem for building the data centers and AI infrastructure that the digital economy needs, so clearing regulatory obstacles is indispensable.
All administrations have internal tensions and disputes. One of the three EOs takes aim at “Woke AI,” reflecting a preoccupation for this administration. It’s unnecessary, but perhaps part of some internal political trade to win support for the rest of the package.
The Action Plan is not for the faint of heart. It does not obsess over potential (and often imaginary) AI risks. For perspective, remember that the Doomsday Clock has said we are only minutes away from nuclear annihilation—and has been saying so for the last 78 years. What isn’t exaggerated is that the country that leads in building AI will be the center of the global economy (the administration prefers the word “dominate”). The Action Plan is an opportunity to strengthen U.S. leadership in AI while protecting national security with reasonable restraints on chip manufacturing. Other nations will develop AI with or without U.S. help, so the strategic imperative recognized by the Action Plan is to ensure that U.S. companies, rather than competitors in China, capture the expanding global market for AI services and infrastructure.
Powering Hyperscalers at Ratepayers’ Expense
Leslie Abrahams, Deputy Director and Senior Fellow, Energy Security and Climate Change Program
The AI Action Plan and its accompanying EO, Accelerating Federal Permitting of Data Center Infrastructure, correctly identify electricity as a key AI bottleneck but misguidedly prioritize speed of energy infrastructure for AI expansion above all else. The energy demand of data centers has thrust electricity to the center of U.S. economic competitiveness, requiring revised permitting and financing approaches to expanding the grid. However, load growth from AI is just the beginning; additional demand from advanced manufacturing and electrification could as much as double U.S. electricity demand by 2050. The challenge is therefore not just to meet AI energy demand, but to strategically build a robust, affordable, and reliable power system that will secure long-term, sustained U.S. competitiveness. This requires a holistic approach to grid expansion, rather than a carve-out for an individual technology sector.
Environmental and climate impacts aside, by offering discounted federal land and accelerated permitting, this approach reduces costs for tech companies while leaving ratepayers to foot the bill when utilities distribute the capital costs across all customers. The EO also takes away federal funding from potential energy projects that could have benefited the broader customer base; instead of providing loans and grants for projects where they are needed most, the EO calls for agencies to reprioritize available public financial support to specifically build AI-related infrastructure. This again subsidizes energy projects for hyperscalers—which have the means and the motivation to pay high costs for electricity—rather than financing infrastructure projects that would more cost-effectively address grid congestion system-wide.
While accelerated permitting, among other reforms, will be necessary to meet growing U.S. energy demand, this EO lacks guardrails to ensure that data center developers, rather than general ratepayers, finance the generation and transmission needed to serve their loads—potentially shifting costs and reliability risks onto households and small businesses.
Investing in AI-Enabled Science
Sujai Shivakumar, Senior Fellow and Director, Renewing American Innovation
In today’s era of intense technological competition, the speed of innovation is a decisive advantage. Emerging technologies such as AI, robotics, and high-performance and quantum computing accelerate research and development (R&D) and production timelines, and the countries that deploy these tools first will lead in economic competitiveness and national security. How should the United States proactively develop an integrated innovation ecosystem that combines massive computational power, seamless data flows, and interoperable robotics capabilities?
The AI Action Plan’s call to “Invest in AI-Enabled Science” signals a step in the right direction. The plan appropriately acknowledges the transformative potential of AI to accelerate next-generation scientific discovery and correctly identifies a pressing infrastructure gap in the secure computing environments, cloud capabilities, and high-quality datasets required to support autonomous discovery systems.
To translate this vision into meaningful leadership, the United States must act strategically and at scale. A case in point is the emerging field of self-driving laboratories (SDLs), automated labs where AI and robotics work in tandem to generate and iteratively test hypotheses in real time. SDLs are already demonstrating the ability to accelerate materials discovery, a capability with direct relevance to U.S. competitiveness in clean energy, critical minerals, and advanced manufacturing. However, at present, the United States has no clear funding policy or program for advancing SDLs. Total U.S. spending on SDLs is less than $50 million and is not deployed in a directed, programmatic manner. In contrast, Canada recently committed $200 million to establish a national SDL research hub at the University of Toronto—the largest-ever research grant for SDLs.
If the United States intends to capture and capitalize on these breakthroughs, it must not only invest in new AI technologies but build the foundation to harness them as they get developed. Strengthening the bedrock for experimental automation and AI-enabled science starts with strategic and sustained investments and coordination in secure and scalable enabling R&D infrastructure.
Utilizing the Development Finance Corporation to Support AI
Erin Murphy, Deputy Director and Senior Fellow, Chair on India and Emerging Asia Economics
Pillar III of the AI Action Plan envisions the DFC—as well as the U.S. Trade and Development Agency and the Export-Import Bank of the United States—playing a leading role in promoting engagement, investment, and the diffusion of U.S. technology overseas. Competing with China in the infrastructure and technology space has been a chronic problem for the United States and its partners and allies. The United States has been unable to offer competitive alternatives to the low-priced mobile devices, networks, and services that high-risk vendors like Huawei and ZTE bring to foreign countries, particularly in the Indo-Pacific. The AI Action Plan looks to the Department of Commerce to work with these financing and trade promotion agencies to source deals; however, that is easier said than done, especially for the DFC.
The Better Utilization of Investment Leading to Development (BUILD) Act—which was passed in 2018 and created the DFC, which launched operations in December 2019—was designed to breathe new life into the U.S. government’s efforts to engage overseas economically and to offer a counterweight to China’s Belt and Road Initiative (BRI). It gave the DFC the authority to invest in development-focused, commercially viable, and private sector–led projects in low- and lower-middle-income countries. These requirements will shape where and how the DFC can invest—a task that is all the more challenging when identifying bankable private sector deals, which are few and far between in many of these countries.
The DFC can provide financing for deals, including debt and equity financing and funding support for AI infrastructure, such as power generation and electrification projects, on-lending to tech companies, and the construction of data centers. The DFC has supported a handful of information and communications technology projects, including data centers for digital communications in Africa. It has also supported several energy projects, including solar power projects in India and transmission and grid systems for reliable power supply in Mozambique. Congress has the opportunity to provide further flexibility to the DFC to support the AI Action Plan with the reauthorization due this October; Congress can provide the equity fix necessary for the DFC to deploy this tool more effectively, raise the spending cap, and broaden the countries where the DFC can provide financial support.
AI Action Plan Raises Key Export Control Issues and Implementation Questions
Matthew S. Borman, Senior Technical Expert (Non-resident), Economic Security and Technology Department
The Trump administration’s recently published Winning the Race: America’s AI Action Plan and related EO—Promoting the Export of the American AI Technology Stack—direct a process of up to 180 days to obtain and evaluate proposals from industry consortia for full-stack AI export packages and recommended export control actions. The plan and EO raise several implementation issues and questions, including the following.
Timing
- Will any export licenses for advanced computing chips be processed while the proposal process and evaluations are pending? Although proposals will be evaluated on a rolling basis, at maximum, proposals could be submitted up to 180 days after July 23.
- Will export license applications be held without action pending implementation of location verification and enhanced monitoring?
Process
- Will an export license or licenses be required for approved proposals, or will interagency approval of the proposals suffice?
- Will the process reopen the proposal period to accommodate AI data centers developed after the initial 180-day period?
Enforcement
- Does the technology exist to require location verification without impacting the performance and security of the chips?
- Is the Department of Commerce properly resourced to conduct enhanced monitoring?
Tools
- Has the Department of Commerce identified the key uncontrolled foreign components for Chinese semiconductor manufacturing equipment?
- Can those key components be controlled through extraterritorial controls or tariffs?
- Has the Department of Commerce evaluated uncontrolled foreign tools that warrant restriction?
Plurilateral Controls
- Have engagements with key partners and allies on aligning export controls resumed?
Competing in Developing Countries
Noam Unger, Director, Sustainable Development and Resilience Initiative and Senior Fellow, Project on Prosperity and Development, and Madeleine McLean, Program Manager and Research Associate, Sustainable Development and Resilience Initiative and Project on Prosperity and Development
The Trump administration is right to focus on promoting the export of full-stack AI technology packages to “decrease international dependence on AI technologies” developed by adversaries. The geopolitical competition between the United States and its adversaries, however, is often contested within developing countries across the Global South. This is an especially important consideration in light of projected labor demographics and growth, particularly across Africa.
What the newly released approach currently lacks is an acknowledgement of the potential for AI to either expand or narrow the digital divide, and the rising demand across developing countries to build out their own sovereign AI systems. Already, innovators in less advanced economies are building tools to address local challenges—ranging from chatbots programmed to support student learning to platforms providing agricultural advice to local farmers—often with the help of U.S. AI models such as OpenAI’s ChatGPT or Meta’s Llama. The United States must capitalize on—and strengthen—such partnerships.
The order to promote exports is also silent on the stark capacity limits in many active and potential partner countries, particularly in low- and middle-income countries (LMICs). Limited energy access, infrastructure, and digital literacy skills pose significant challenges for many LMICs and must be addressed as part of the U.S. strategy if it is indeed going to, as the EO contends, “ensure that American AI technologies, standards, and governance models are adopted worldwide.” An additional aspect to monitor will be whether the platforms the United States exports will offer models that local developers can modify or build upon to more accurately address local challenges.
The U.S. government’s approach needs to take these developing country factors into account at a time when the administration’s stance towards development has led to dramatic cuts. Although the U.S. Agency for International Development (USAID) had intentionally built expertise over the past 15 years in engaging with Silicon Valley on scaling applications of innovative technology, beginning with the creation of the Global Development Lab to support USAID missions worldwide, much of that talent and knowledge base has been abruptly dismissed as associated programs are dismantled.
In spite of such challenges, the U.S. Department of State should think through how best to include the developing country angle in promoting AI tech exports. Per the executive order, the secretary of state, in consultation with the Economic Diplomacy Action Group, is responsible for many aspects of developing and executing a new unified strategy. This will require a daunting degree of expertise and coordination with agencies like the U.S. International Development Finance Corporation, the U.S. Trade and Development Agency, and the Export-Import Bank, even as the Department of State tries to sort out how to digest the remnants of U.S. foreign assistance and recover from lost capacity as part of its own internal reorganization effort. With the elimination of the Office of the Special Envoy for Critical and Emerging Technologies under the current Department of State reform proposal, a realigned Bureau for Cyberspace and Digital Policy is likely to play a leadership role. With regard to AI stack exports to developing countries, it will have to collaborate closely with the newly proposed under secretary for foreign assistance, humanitarian affairs, and religious freedom.
LMICs are at different stages in their digital transformations, and from the guidance laid out by the White House so far, it is not clear that any of the envisioned industry-led consortia packages will engage with developing countries even as China strenuously courts such contexts with alternative open AI platforms and infrastructure packages.
With Policy Lagging, Civil Society Must Confront the Real-World Errors of AI
Carol Kuntz, Adjunct Fellow (Non-resident), Strategic Technologies Program
AI continues to advance more quickly than policy efforts to shape it in the public interest. This unhappy reality has been true in the United States for most of the technology’s life outside of the laboratory. It remains true after the enactment of President Trump’s EOs on the technology last week.
These orders concentrate on the competition between the United States and China for dominance in this important technology—a worthy focus. But effort also needs to be made to ensure that all users of AI algorithms can identify, measure, and mitigate errors in these complex algorithms.
A fundamental message from the EOs is that all the other actors who could shape that technology in the public interest should start doing so.
Civil society may perceive that it has its hands full with all the many problems it is attending to or at least grieving in this political and strategic moment. But at least portions of it probably need to step in to shape, as much as possible, the use of this extraordinary technology.
Every user of AI in academe, in a nongovernmental organization, in a business, or in a federal agency needs to recognize that these algorithms, while truly remarkable, are vulnerable to errors of various sorts.
The technology’s reputation for complexity is well-deserved. The set of errors it is likely to cause in any substantive domain, though, is analytically tractable.
No user of AI in any substantive domain can responsibly ignore the risk of errors. Users in organizations with IT budgets—academe, federal agencies, and businesses—should insist that some of that budget be used to hire experts who can identify the risks or reality of errors in any algorithm used by the organization. These experts should help the users measure and mitigate these errors when they are found. Foundations could fund the creation of tools that are then made available to individuals to help them identify, measure, and manage errors that could affect them.
My recent CSIS report made this argument in the context of the use of algorithms by the U.S. Department of Defense (DOD). It warned that some algorithms could be transformative in war, reigniting classic arguments between realists, liberals, and humanitarians about the acceptability of warfare with the remarkable speed and precision putatively available with AI.
It also warned that there are a host of generic errors that could afflict AI algorithms in any domain, but that could have particularly tragic effects when algorithms are used in war.
The report acknowledges the tremendous benefits that AI algorithms could bring to U.S. warfighting capabilities but urges that the incorporation of AI be accompanied by the development and use of analytical tools to ensure that DOD’s use of AI is consistent with military effectiveness and law of war compliance.
The DOD may have a unique use case for AI algorithms. But there are many high-consequence uses of algorithms in society today, and surely every user should be armed with some tools to ensure that they do not confront these mighty algorithms alone.