Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios
This transcript is from a CSIS event hosted on July 30, 2025.
John Hamre: OK, folks, you better hurry in. We’re going to get started here. Good morning, everybody. My name is John Hamre. And I’m the president at CSIS. And my role here is entirely ornamental. I’m only here to render honors to Michael Kratsios, and to say thank you to him. You all know that he’s a recidivist, you know, thank God. He’s back in government. He had a very distinguished tenure in the first Trump administration, and decided he’d even do it again. And we’re really fortunate to have him here. And Navin Girishankar, who heads our economic security and technology division is going to make a formal introduction.
But I wanted to take a moment to help everyone here appreciate the really important nature of what we’re – what we’re going to talk about today, and celebrate, I should say. You know, you think about the U.S. government and the legal structure around the government, it goes back 250 years. So we have a Department of Agriculture. We got a State Department. We got a Treasury Department. We now have a Defense Department, but it used to be a War Department. We have all this stuff that goes back 250 years. And so the government is structured to deal with things in those stovepipes, in those columns.
But when you get some new – profoundly new technology, like artificial intelligence, it just cuts across everything. And you cannot have a single department take the lead on designing a policy for a government for a brand new thing. This is the hardest thing a democracy does. And the only way to make that happen is to have it done in the president’s office. And then when it’s a science issue, a technology issue, it has to be led by – you know, by the most competent and capable people we have, who both are credible in the science world and also understand government. And that’s why they asked Mr. Kratsios to head up the OSTP.
Personally, I think we need a science department, you know? I think it's overdue. But then you've got to take stuff away from other departments, and that's where it gets too hard. So this is a remarkable feat, to see a very coherent product, a framework, because the hard work is in front of us now. But this framework would not be possible in our system of government that goes back 250 years if we didn't have OSTP. And we celebrate the fact that you're leading it, Michael. Thank you.
Let me turn to Navin Girishankar for the introduction.
Navin Girishankar: Well, thank you, Dr. Hamre. And really delighted that you all can be here. And very, very warm welcome to Director Kratsios for coming, spending his time here, and for some of his team who briefed us this morning.
I would just add, we have a department focused on economic security and technology issues. And I think it’s very, very clear that AI is everybody’s business. The ambition, and at the same time the specificity of the plan, are really quite striking. And we’re going to hear a lot more about it today. I will simply say that, you know, Director Kratsios, for those who don’t know, is director of the White House Office of Science and Technology Policy and senior advisor to the president. He’s the key force and – one of the key forces behind this action plan. And so this is the right conversation to have at this moment, a very consequential one. And in his previous incarnation in the previous Trump administration, he was chief technology officer for the United States and also acting undersecretary of defense for research and engineering. A long history of public service.
I would simply say, I read the plan. We're studying it. We're trying to understand it better. I'm reminded of the Emerson quote, "Do the thing and you will have the power." And when you think of the execution agenda that's ahead of us, I do wish every success for this plan. And I'm very interested in understanding the particulars. There's no better person to have this conversation with Director Kratsios than Greg Allen. I believe they worked together in the past, but Greg is our senior researcher on AI and all things AI. And this is going to be a very interesting conversation. Director, I very much welcome your presence here. Thank you. (Applause.)
Gregory Allen: To be clear, they’re clapping for you, not for me. So thank you all. I’m Greg Allen. You heard Navin say that Michael and I had the privilege of working together. It’s more accurate to say that I was like an insect in the Department of Defense, and he was kind enough to take an interest in my work from time to time from his previous perch in the Office of Science and Technology Policy.
And I’m reminded of what former Secretary of Defense Ash Carter – the late Ash Carter said in his memoirs about the U.S. political appointee system. You know, he said about 25 percent of political appointees are just completely out of their depth and it was a mistake to put them in the job. Fifty percent sort of muddle through. And 25 percent really nail it. And I knew from the first meeting I had with Director Kratsios that he was in that best 25 percent who really nail it. And we are grateful and privileged that he decided to return to public service, because he did have attractive alternative options and yet he forsook those for the sake of the country.
And one of the benefits of his choosing to do that is what we’re going to talk about here today, the AI Action Plan, and the three AI executive orders, and the Trump administration’s AI agenda for this second Trump administration. I could try and talk about, you know, what’s in these plans and what’s important, but who cares what I think? I think we’d all rather hear it from Michael. So to just sort of level-set for the rest of the conversation – and we’re going to get into the weeds here – what is in these orders that we all should be paying attention to?
Michael Kratsios: Well, first off, thank you, Greg, for having me. Thank you, CSIS, for putting all this together.
You know, I was very excited and lucky to work with you in the first administration. And for those of you who don’t know, Greg played an extraordinarily integral part in kind of launching the first AI efforts at the Pentagon. And the things we talk about today, JAIC and CDAO and everything that’s happened there, it all started because of this guy here. So he’s a real patriot and made a huge impact on the way we think about AI and national security.
For me, you know, last week, I think, was a big week for the United States, for the president and for the future of artificial intelligence in this country. The journey, I think, to getting to the Action Plan actually started in the first Trump administration, in Trump 45. At that time AI was not on the cover of every newspaper. It was not something everyone was talking about. Most think tanks here in Washington didn’t have people like Greg Allen writing about AI or talking about it. It was just an issue that was kind of like out there and percolating.
And we have to give a lot of credit to President Trump, who signed the first executive order in the history of the United States on artificial intelligence in February of 2019, and he set the first national strategy and launched the American AI Initiative. And again, this was at a moment when ChatGPT didn’t exist, when most people, when they thought about AI, were not thinking about chatbots or anything like that. And it was a very prescient moment where we realized that the U.S. had to lead in this.
And the main pillar or thrust of that initial AI action plan – or, you know, sorry, national strategy – that we had then was all around research-and-development leadership and how we can turbocharge the research ecosystem so that the U.S. can be the home for the next great technological discoveries.
The president then went on to announce the doubling of AI spending. And then fast-forward a few years later, and all of this great research and all of this great effort that we put together as a country to work on AI sort of came to bear with the launch of ChatGPT. And then we were off to the races.
You know, the country kind of endured four years during the Biden administration where there was a real philosophical discussion happening in Washington about the future of AI regulation and, more importantly, a very intense conversation around the hypothetical risks associated with this technology.
And if we go back to that era and we think of the first order, the first hearings that were happening on the Hill, there was a lot of debate about how much should we regulate, how much should we not regulate it, and whether or not we as a country need to be really scared about this technology.
And, you know, in my sort of 30,000-foot estimation or summarization of how the previous administration thought about these issues, fear was what led almost all of their policy decisions. It was something that was captured and manifested in the Biden AI executive order. This was, I believe, the longest executive order that President Biden signed during his term. It set very weird, arbitrary limits around pre-deployment testing of AI models. And it created sort of this fear in the AI community that the government was going to come down and overly regulate this technology.
And when we came into office, we needed to really turn the page on that. We realized that the U.S. has to lead in this technology. Our national security and our economic security – our success there – are so reliant on our ability to lead in AI. And we had to put together a plan in order to deliver on that.
So the president on day one signed an executive order rescinding the Biden executive order. And three days later, on January 23rd, he signed a new executive order directing me, the national security adviser, and our senior adviser for AI, David Sacks, to come up with the new AI Action Plan.
So sorry for a little background, but that brings us to the plan. And I think what I found most exciting about the last six months was the level of engagement and participation from the broader American community on what we should be doing and thinking. You know, I remember you've done a couple of podcasts, and whenever you describe OSTP, I always love the way you think about it. We're sort of this, like, wonky policy office that not a lot of people have heard of, but we have great experts who do important things.
One of the things we do is we put out RFIs. We put out an RFI for the community to say, hey, you know, we're doing this AI Action Plan and we want America to win in AI. What should we be doing? And I always thought, like, hey, who even checks an RFI from an office no one's really heard of? Like, you know, what's going to happen?
We got 10,000 responses. You know, we had an insane number of responses from all walks of life in the United States. We had Hollywood stars reacting and sending letters. We had, obviously, AI companies. We had, you know, big financial services companies. We had, you know, civil society. So it really showed that there was an intense interest in participating in this project. So we ingested all those, and ultimately the plan came out.
So to quickly summarize it – and I want to get into the conversation – there are essentially three primary pillars, with the overarching theme that America has to win the AI race. As I said before, as a country we have to have the most dominant technological stack in the world, and that's critically important for our national and economic security.
In order to do that we have to do three things. The first is we have to lead on innovation. We have to continue to be the home for the next great technological discoveries in AI and that means creating a regulatory environment that can allow these technologies to thrive in the United States, give certainty to our AI companies that they can continue to innovate, provide opportunities for small businesses and new startups to be able to innovate in AI without being scared of state level laws, for example, and also continue to invest in the very critical R&D associated with the next discoveries.
We have to apply that AI to all sorts of things. We’ve seen today AI is extraordinarily good. Our large language models are very good at coding, for example. There’s a lot more we can do in scientific discovery, in biologics, that we need to continue to work on.
The second pillar is all about infrastructure. We know that the key ingredient driving the AI revolution is going to be our ability to create the right amount of electricity and also build the datacenters that need to be powered to do all of this critical inference and training. So there's a lot of actions in the plan and the executive orders that talk about how we can unlock that.
And the third pillar is all about AI exports. One of the most important things we need to do is make sure that the world is running on the American AI stack. We have the best chips. We have the best clouds. We have the best models. We have the best applications. Everyone in the world should be using our technology and we should make it easy for the world to use it. We want everyone in the world who’s developing AI to be using American software.
So those are the three pillars. Lots of actions under them to do it and we think by doing those three things we can win the AI race.
Mr. Allen: Amazing. So we’re here at the Center for Strategic and International Studies so I do want to start off on the international part of the conversation and the actions in here.
But before I do that I just want to commend one thing about the action plan, which is as a guy who, you know, had strategy in his title when I was at DOD, the typical government strategy document is not a strategy document, and you have a private sector background so I know you appreciate what I’m talking about.
It's a list of nice things that they wish would happen – (laughs) – and this is an action plan. It's actually talking about the things that we're going to do, the way in which we're going to pursue those goals, and even a willingness to accept tradeoffs – we're going to do this and not that – and I think that's just entirely commendable.
So on the international part, as you said, it's all about promoting AI exports. And the companion to this element of the AI Action Plan is, of course, the executive order promoting the export of the American AI technology stack.
So this is interesting because it’s not just about an approach to promoting exports. It’s actually an idea for combining different elements of the technology stack and making it a unified package that the government is supporting.
So what was your motivation for pursuing that as a strategy for increasing exports? What was the problem you were trying to solve? Because, just to, you know, put a hypothetical out there, there’s a ton of demand for American technology all around the world.
You could – you know, somebody could make the case that, like, why do American companies need help? Like, the line is out the door and around the block for American AI technology.
So what is it – what’s the problem that you were trying to solve in this executive order and with this part of the plan?
Mr. Kratsios: Yeah. On the first part, on the action plan, the language was extraordinarily deliberate. And when we were working on this executive order during the transition, there was this whole debate about, do we do another AI strategy – is that how we turn the page on the disaster that was the Biden EO – or not? And I think the reality was no, we want actions. We want people to begin doing these as soon as possible.
We gave ourselves a tight timeline. I know six months sounds like a long time, but in government time it's not a long time. And we really pushed hard to make sure we could get it out, and we're excited about that.
The origin of the AI export idea, I think, goes back even to Trump 45, when I spent far too much time as, essentially, the U.S. technology minister, going around the world talking to fellow ministers trying to convince them to rip and replace Huawei. And I had these very, very challenging conversations where, you know, the West had technology which we perceived to be of higher quality, especially in the early years of the Huawei wars, and they still didn't want to buy it, because the U.S. just was not able to create the environment and the packages necessary to export it. And as we were, you know, thinking about this issue over the last four years, what I kept coming back to is that the issue we face today is even more serious than the telecommunications problem that the West generally faces in the Global South.
If most countries around the world are running on an AI stack that isn't American, and potentially one of an adversary's, that's a really, really big problem. And why I think it's so acute is that, over time, the way these models will operate at a government level is that all the data a government has is going to be ingested into models to provide citizen services. Whether it's the way you pay your taxes, whether it's your health care records, whether it's small things like applying for a permit for a campsite at a national park – all of this stuff is going to be part of the AI fabric. And it would be a huge problem if the model that is fine-tuned to generate these AI solutions isn't from America.
So we have to get out in front of it. And why we're in a better position today than we were with the Huawei issues of the late 2010s is because we actually have a U.S. alternative. And this time, the alternative we have is way better. We have the best stack. We have to always remember that. We have the best chips. We have the best models. We have the best applications. So we're at a moment of extreme strength, and we need to be able to go out there. And the world wants to run AI. They want to be able to provide AI for their people and for their citizens. And the question is, like, how do we meet our customers where they are?
And you go back to the stack question. Why do we think of it as a stack? What I generally observed over the last three or four years traveling around the world trying to get more countries to use AI is that a lot of countries are interested in having AI for their people. The specifics of what that means are not necessarily always there. And we have to fill in the blanks for them. We have to show them what the potential is of AI for their people, and their country, and their economies, and make it as easy as humanly possible for them to implement it.
If you're an American who wants to, you know, build AI, you're going to have to go to a cloud vendor, and you're going to have to do some tradeoffs. Maybe you like GCP, maybe you like AWS, maybe you like Azure – you kind of play them off each other. Then you think about the models. You go to all your model vendors and you choose a model. Then you think about all your application vendors. And for, you know, CIOs or CTOs in corporate America, like, that's your job. Like, you do that, and, you know, you get people pitted against each other and get the best price.
If you're out in the world as a technology minister and you want to provide tech for your people, that's not a calculus that, you know, you run every day. And we as America need to make it easy for lots of folks to understand what's available and how they can actually deploy this technology in their country. So that's kind of the thinking around it.
Mr. Allen: This is really interesting. And you made the explicit comparison to Huawei. And I think one of the things about China, Inc.'s approach to selling their technology stack abroad is they really do offer turnkey solutions, and they try to emphasize that aspect of it. And so even though on every part of the stack America might be better, the fact is that China was, in this past period, you know, willing to offer the full stack, and America basically said, if you put all of our pieces together we promise it's way more beautiful and good – but you, the customer, have to navigate the complexity of putting all the pieces together. So is it fair to say that this approach is trying to make it easier for customers to navigate that complexity? So if they have the money to buy, we will make it simple to buy, and to buy American?
Mr. Kratsios: Yeah. And I think it's twofold. It's not just, if you have the money to buy, we can make it simple because we have these packages. More importantly, what the executive order does is it activates our development finance organizations within the USG to start prioritizing the export of AI. And that's the Export-Import Bank and the Development Finance Corporation, DFC. And for years these organizations have been sort of directed by Congress to work on this sort of technology race around the world, but it's not necessarily in their DNA.
These development organizations, for many years, have been very good at doing, sort of, like, hard infrastructure-type projects. If you want to buy a plane, if you want, you know, financing for a port, like, they know how to do that. They've done those deals over and over and over again. But we need to start building into their thought process that technology and the stack itself – from the chips and the compute all the way up to the algorithms – is what the U.S. needs to be exporting. And we need to find a way to create attractive financing packages for that.
Mr. Allen: So you’re going to be building institutional capacity within these institutions to focus on that.
Mr. Kratsios: Yes, absolutely.
Mr. Allen: That's amazing. So the flip side of promoting American exports is also controlling exports that we don't want to be going to the wrong places. And the action plan came out very clearly and said: We want to strengthen enforcement of AI compute export controls. We want to strengthen enforcement of semiconductor manufacturing equipment export controls. So I know that there is a needle that you're trying to thread with all these strategies. But I can forgive, you know, those who are casual observers of what the Trump administration is doing, and they would say, hey, there's areas where it seems like the Trump administration is relaxing export controls – for example, recently allowing exports of the Nvidia H20 chip to proceed – but then there's these other areas where they're strengthening export controls. So what is it you're trying to accomplish? And how does the basket of decisions that you've made thread the needle that you're trying to thread?
Mr. Kratsios: Yeah. What we talk about in the report, and what we generally believe, is that the highest-end semiconductors need to continue to be export controlled and not allowed into China. And that's important for our ability to maintain our leadership in this race. The restrictions that we put, in the first Trump administration, on some of the supply chain behind semiconductor manufacturing – some of these lithography machines – were probably some of the strongest and most effective actions that we ever took.
Mr. Allen: The highest return on investment export control in the whole story.
Mr. Kratsios: Absolutely. And those are still in place. And the action plan continues to contemplate whether there’s other items that need to be thought about there. So for us, I think that’s kind of the dichotomy we see, is to continue to sort of make sure that the highest-end chips are not making their way into China, and also think about supply chain.
The enforcement thing is really key. And I think we need to continue to do better as a country on being able to enforce this stuff. You can have the best export controls on the books, but if you're not able to effectively enforce them because you're resource constrained, that's a challenge. And I know Undersecretary Kessler, who runs BIS, and also Secretary Lutnick have talked about this. This is something that the Hill talks about a lot. We have to find ways to provide the tools that BIS needs to do the enforcement activities necessary.
Mr. Allen: And I just want to commend you on this, because there are certain parts of this AI Action Plan that really do reflect the OSTP role in the interagency process and the NSC role in the interagency process. There’s actions in this plan that could not have been taken at the department level. On the point about export control enforcement, something that I’ve been, you know, howling into the void about for years is connecting the intelligence community to the task of export control enforcement. Which is a role that they had during the Cold War, and then suspiciously forgot about, right, in the 1990s. And so, you know, using the White House to actually drive that kind of connection between those various agencies, I think that’s so long overdue. And I want to commend you on that.
The second thing that I was very interested to see was, on the semiconductor manufacturing equipment side, a focus on components. So not just the entire finished machines, but the key components that go into the machines. Could you just elaborate on why you think that’s important, what you’re trying to accomplish with that focus?
Mr. Kratsios: Yeah. So a lot of components relating to these machines are of U.S. origin. And there's certain export control rules you can use where, if there are certain components of these machines that are U.S.-origin, then you can essentially control the export of the whole thing. And I think that opens up a lot more opportunity in thinking strategically about the types of machines and capabilities that you want to limit Chinese access to. I think, you know, again, we are in a race. And what's important is that we're not necessarily cutting everything off from China. I think what we're carefully thinking about is, as we're doing these tradeoffs, what do we think is most important? And it continues to kind of be this balancing act. I think you saw it a bit with the H20s. But we continue, again, to export control the highest-end things.
Mr. Allen: Great. So H20s are chips that are going to be able to be sold legally into China under the revised policy. There's also the chips that are being illegally sold to China via smuggling. And the Trump administration rescinded the Biden administration's AI diffusion rule. But, at least as I understand the messaging that's coming out of the department, they hated a lot of what was in the diffusion rule, such as country-level caps, but there was a focus on making sure that customers had appropriate protections in place to make sure that American intellectual property was protected, and to make sure that chips that were sold weren't smuggled.
And so the diffusion rule was rescinded, but my understanding is there's going to be forthcoming guidance about the kinds of protections that need to be in place for large-scale chip transactions to happen. And, you know, President Trump, in his speech on July 23rd, said that we're going to maintain necessary protections for national security. So can you talk a little bit about where the administration is in developing what those protections are going to be? If we're going to sell these chips, how are we going to have confidence that a customer is not a Chinese front company? Because the Financial Times recently reported that in this year alone over a billion dollars' worth of Nvidia chips have been smuggled into China. And Undersecretary Kessler has talked about this being a problem. So where are you in this story?
Mr. Kratsios: Yeah. So I think, just to clarify on the H20, it’s not a free-for-all sale. So any sale that Nvidia wants to make to China is one that’s going to require an export license. So BIS has said publicly that they will be evaluating each of those license applications and weighing the costs and benefits.
Mr. Allen: And the traditional restrictions of no military actors, no intelligence actors, those kinds of things, will apply for those license decisions.
Mr. Kratsios: You can imagine –
Mr. Allen: Yeah, yeah.
Mr. Kratsios: – those would be contemplated.
And then the security considerations, when doing additional exports of chips around the world, I think they fall in a couple of buckets. I think the first thing that you think a lot about is the physical diversion of the chips themselves. So, you know, I think this one gets more airtime than it probably should. For those of you who have ever been to a large-scale AI datacenter, you know, these aren't chips I, like, have in my pocket. We're not talking about, like, a bag of diamonds or something. This is, like, a massive rack that's, like, tons in weight. You're not going to just, like, put it on a forklift and, you know, back it into a truck or something.
So the physical diversion of the chips themselves is something that you probably could very easily and conceivably monitor and track, which is physically just counting. Are they still there or not?
The second piece of the puzzle is around the use of these ultimate datacenters to do training runs, or their being accessed by actors you don't want. And this is where sometimes you see PRC cutouts or other companies that sort of, like, spring up, but then want to do training runs on these machines.
The thing we have to remember is, you know, what are we most worried about? Are we most worried about sort of small-scale inference runs for some Chinese app? Probably not. What you're most worried about is large-scale runs training the kinds of sophisticated models that you would be worried about. And those are actually pretty easy to flag. Most people aren't doing, you know, multibillion-dollar training runs on these clusters.
So between having stringent and strong KYC requirements imposed on people who are operating the datacenters, married with monitoring for the scale and scope of the actual training runs, I think you're able to kind of piece together and identify actors.
Mr. Allen: Great. Keeping with the international theme but changing gears slightly, I want to talk about regulation and governance. So Vice President Vance, in his speech at the Paris AI Action Summit back in February, was pretty harshly critical of the EU AI Act and EU technology regulatory efforts that affect American companies more broadly.
You know, the actions that the administration is taking are framed explicitly as a light-touch regulatory approach and really don’t want to, you know, throw the baby out with the bathwater as we try and ensure safety and efficacy in AI. But, at the same time, we’ve seen that some American companies have already announced that for their large-language models, for their general-purpose AI systems, they are going to abide by the EU AI Act code of practice.
So where is the administration on its thinking about how to engage with these international AI regulatory frameworks? And does that affect anything that you’re trying to accomplish in the AI Action Plan?
Mr. Kratsios: Well, the EU stuff is particularly disappointing. It's funny – if you ever look at sort of the EU Commission's Twitter, they have a lot of pride in leading the world in tech regulations, which I find amusing. And they initiated all of this effort around the EU AI Act many years ago, before large-language models were what they are today, before the way we can evaluate them, the way we can think about them. They still use these very antiquated sort of high-risk, low-risk buckets, all sorts of stuff. And their inability to actually implement the act itself has led to this new phase of thinking around this sort of, like, code of practice that they want people to agree to.
And they essentially have our American companies over a barrel, saying if you don't sign up for this code of practice, then this, like, much more stringent, you know, thing we haven't really defined – which is the EU AI Act – is going to, like, come after you. So I think it's very unfortunate that they're using these tactics. And they have a big market, so our companies want to be able to access those markets. And it's very unfortunate to kind of see it play out.
I think what we try to do in the United States is show that there is an alternative to that sort of precautionary-principle-driven mode of AI regulation. And that's what we're building here in the United States.
You know, I think the very key, fundamental distinction between the way that the Europeans view AI regulation and the way we do as an administration is that the Europeans continue to think of it as a singular AI regulation that is sort of horizontal and crosses everything. It is just, like, AI rules.
Mr. Allen: Computer rules. Yeah, yeah.
Mr. Kratsios: Computer rules. Yeah, an even better way to put it.
And I think the U.S. thinks about it much more as: AI regulation needs to be use-case and sector specific, and risk based in nature. So if you think about it, the regulators at FDA who are thinking about AI-powered medical diagnostics are the ones who should be doing regulations on AI-powered medical diagnostics, not some group somewhere else in government that is doing AI regs.
And the people who are doing drones or the people who are doing autonomous vehicles or the people who are doing financial services, like, they’re the experts in regulating that industry and you can imagine this is just, like, another technology that has entered into their domain.
So the Department of Transportation and NHTSA were regulating vehicle safety before there were even a bunch of electronics in cars, and now there are, and it just grows over time. And, you know, yes, you could argue, do they have enough capacity or understanding of this new technology to keep pace?
My argument is, like, yes, they do, and they will over time get better at it, and we as experts and as industry leaders have to push to educate them on kind of where the technology is. But that's how you get good regulation.
Mr. Allen: Yeah. Keeping with the topic of regulation, there was a proposal that was originally in one of the drafts of the Big Beautiful Bill for a 10-year moratorium on state-level AI regulation. It was ultimately not included in the final version that was passed.
But there are some movements in the action plan that deal with the issue of state-level AI regulation. So can you talk a little bit about what the mechanisms you’re using are and what you’re trying to achieve?
Mr. Kratsios: So I think the president spoke about this directly in the speech, where he made the point that, look, having a patchwork of regulations across the entire country just doesn't make sense. It is not pro-innovation.
If you’re a startup and you’re trying to create an AI use case and California has one law and Colorado has another one, you know, and you’re trying to figure out how you comply with both, you know, that’s really hard.
You know, if you’re a large big-tech company with a multi-trillion-dollar market cap you probably have enough lawyers to figure it out but the smaller guys probably can’t and it’s not fair to them.
And, generally, that's where the administration's position is. If we do have AI regs, it's probably smarter for them to be uniform across the country, and essentially to preempt states from being able to do that.
You know, how that plays out and how we're able to find consensus with the Hill remains to be determined, and we look forward to working with the Hill on all these issues. But I think the president was very clear on what the position of the administration is – that, you know, generally, to be pro-innovation you have to avoid a patchwork of state regs.
Mr. Allen: Yeah. And there’s some movements here to try and persuade the states to take a more light touch approach such as instructing the evaluators of AI-related grants at the federal level to evaluate the regulatory environment when they’re considering whether or not, you know, any given state should be receiving a grant.
That’s a very interesting, you know, kind of lever. There’s also one looking into FCC-related legal authorities. Is it fair to say that, you know, you’re looking to max out what you can do on the executive side in terms of shaping the state-level AI regulation environment?
Mr. Kratsios: Yeah. The two you mentioned are pretty niche, right? So it’s, like, you can see that there’s not as much as you think you could do alone from the executive branch.
Mr. Allen: Right.
Mr. Kratsios: I think the ultimate solution does require Congress and does require some sort of bipartisan action. So that's what we're going to try to explore, and there's lots of thoughts about this both on our side of the aisle and on the Dem side.
So, hopefully, we can work together on a solution.
Mr. Allen: Yeah. So as President Trump said in his speech, we want to have rules but they have to be smart. We’ve got to get rid of some of the regulation but we want good regulation.
You know, you talked about the intersection of, you know, AI and the FDA for medical devices, of course. One area where the AI action plan actually goes into some detail is the intersection of AI and CBRN risk – so chemical, biological, radiological, and nuclear – and it also has some actions being taken around the intersection of AI and biosecurity. This is something that a lot of folks in D.C. have expressed concern about.
I should plug, as a classic think-tank schlub, that I have a paper coming out on this topic next week, on the intersection of AI and biosecurity. But I wanted to ask you, you know, what do you see as the risks that the administration is interested in on these issues, and what are the mechanisms by which you're trying to mitigate them?
Mr. Kratsios: Yeah. Well, first, I think it's important to say that the federal government has a very important responsibility to be evaluating these potential risks. And just like we as a government have a responsibility to the American people to think about other risks which could, you know, hurt the homeland, we need to be worrying about these things and making sure that we have the right teams in place – the right experts at places like the Department of Energy and the Department of Defense and in the intelligence community – to think through how to do evaluations on these CBRN risks.
If you zoom out a second, the one thing that I think is important to message is that we cannot allow the evaluation of these risks to be the dominant driving force of our artificial intelligence strategy and policy. And unfortunately, for the previous administration, this was their guiding light. They were maniacally obsessed with existential risk, and they believed that the government's sole purpose in life was to minimize that and everything else didn't matter. And that manifested itself in a complete assault on the AI industry through the Biden executive order.
So what we do is very thoughtfully include how we should be thinking about and considering this existential risk within the larger construct of American leadership in AI. And that's why you see it as one important point within the action plan, but it's not front and center. The action plan doesn't exist to eliminate existential risk; that is just one thing we need to be working on in the course of a 30-page document that covers 90 other things.
But I do think it's critically important. And why I do think it's important, particularly for the federal government: for those of you who work in the evaluation space, you know, the details of how a model eval happens are actually, I think, kind of interesting. Most people don't really think about it. They just look at the output and they're like, oh, this has some, like, biological risk. Well, like, how in the world did they figure out it has bio risk? You literally have to get biological experts who, like, know this particular domain extraordinarily well, who understand what all the risks could possibly be, and then they themselves have to, like, sit in front of a model, and ask it questions, and try to get it to say things that are dangerous or in some way could lead to something dangerous. This is a very manual, sort of human-specific problem that you have to, like, work through.
And why the government is well-positioned to help here is we have experts in these spaces. There's people who work on CBRN risks at our national labs and places like that. So I think we're very well-equipped to be able to supply the subject matter experts to run these evals and create the testing harnesses within places like DOE to do this stuff.
Mr. Allen: Which is great. And I think, in addition to the work of DOE, which is vitally important in these kinds of areas, I do feel like the AI Action Plan is the clearest statement from the administration to date on the future of the Center for AI Standards and Innovation.
So this is a body where, in D.C. politics, we've seen everything from, this is the most important agency in the world, to, we need to get rid of this yesterday. And I feel like the action plan has finally said what the Trump administration's view of CAISI's future is. So could you elaborate on what this institution is going to be doing and why you think it's important?
Mr. Kratsios: Yes. Yeah. So CAISI was initially set up by the previous administration as the AI Safety Institute, and this was done at the same time that the Brits created their safety institute. And again, this was an era in the AI policy story where everyone was obsessed with the government thinking about safety. And, I don't know, no one could really define what safety was. For some people it was, you know, DEI-related issues. For other people it was, like, CBRN. For others – you know, who knows what it could be? And the Biden administration sort of created this body within NIST, which is a standards agency, to attempt to do evaluations around some of these DEI and other related risks.
And I think we have tried to turn the page on that. Secretary Lutnick renamed the institute into a standards institute. And I think my core thinking about it has always been, you know, it is in a standards agency. The purpose of NIST is to promulgate standards. And if we go back to the story I was telling about how you do a model eval, one of the biggest challenges currently facing the AI industry is that understanding how to measure and evaluate models is still an open scientific question. Currently, sort of the state of the art is literally having a biologist ask questions to the model and see what comes back. Like, that provides you some answer. But is that scalable? Is that something you could standardize across the entire world and create some sort of, like, coherent, multi-country understanding of what the best practices are for doing evaluations? All those are open metrology questions.
And if you think about what role NIST has, or what role CAISI has, it's all about understanding the measurement science of models. And that is what we're excited for NIST to be working on, and to be able to share with the world how you actually can measure a model. And once you do that, that's really valuable to industry. If you're in financial services and you want to deploy a model and make sure that, you know, client or customer data isn't being siphoned off by the model or whatever, NIST standards around how you do a model eval could be super valuable in making you comfortable in that decision.
Mr. Allen: So a moment ago we were talking about existential risks and AI, which is something you hear a lot about in D.C. It’s something you hear a lot about in Silicon Valley. But anytime I go anywhere in this country and I talk to folks who don’t live and breathe, you know, AI policy, the issue that they want to talk about is the intersection of AI and labor. What’s this mean for my job? What’s this mean for my family? What’s this mean for the future of the economy? And this plan does some pretty interesting stuff to try and untangle, you know, what the government needs to be doing at the intersection of AI and labor. So could you please elaborate on the muscle movements in this policy?
Mr. Kratsios: Totally. So I think the first thing that I have come to appreciate over the last six months working on this is the intense amount of new labor that we need to be able to build the infrastructure necessary to power the AI economy. A big chunk of the infrastructure pillar of the action plan talks about how the Department of Labor, NSF, the Department of Education, and many others are going to be working together to reskill, retrain, and prepare more Americans to be able to support the infrastructure buildout necessary for the AI boom.
I talked to the CEO of the company that's helping build out the big Stargate project in Texas. And I asked him, kind of, what is the biggest challenge that you're facing? And he said, I can't get enough electricians. He said he has to fly in electricians from something like over 40 states into Texas to build this project. And that's Stargate, which was the biggest project that had been announced – and it's not anymore. And that's a problem that we're seeing throughout the U.S. So the president is committed to making sure that we can retrain and prepare Americans for these very important, high-paying jobs that will support the infrastructure buildout.
The second piece is around, you know, how Americans will be using artificial intelligence in the jobs that already exist. And SBA Administrator Loeffler gave a great talk at our AI summit last Wednesday where she talked a lot about the data that her agency is seeing on how small businesses are embracing artificial intelligence. The biggest challenge that small businesses have in the U.S. is being able to find reliable employees. And recruiting is really hard. And for them to scale and grow their businesses, AI provides incredible leverage to the people that are in these small businesses to allow them to grow.
And I would say the third thing around this issue is the executive order on K-12 AI education that the president signed about three months ago. And why I think this is extraordinarily significant: if you think about it, we were in the middle of trying to write essentially a national strategy for the United States on AI that was coming out in June. Yet the president believed that we had to get an AI executive order out on K-12 education – that's how critical it was to make sure that America's students are prepared for the jobs of the future, that they understand how to wield and use these tools that are going to fundamentally transform the way they enter the American workforce. Whether you're a doctor, whether you're a lawyer, whatever you're going to be doing, you will be using this technology to do your job. And we, as a government, have a responsibility to help America's youth prepare for the future.
Mr. Allen: Great. And you started your answer by talking about the power and electricity and datacenter buildout, which is a huge focus of the executive order on that area, and also of the AI Action Plan as well. So when I think about, you know, the electrical generation – actually, let me take a step back. The action plan actually opens up by talking about the space race, and how we're in a race for leadership in AI, and how it really matters and is strategically important.
One of the kindnesses of the space race was that it lent itself to easily definable metrics: getting to space, putting a human in space, you know, doing an extravehicular spacewalk, getting to the moon, et cetera. So as you think about this datacenter buildout – we're talking about gigawatt datacenters; we're talking about adding tens of new gigawatts to the grid, maybe hundreds of new gigawatts to the grid – what are the metrics that you're looking at? Because there's so many different actions that this plan takes, that this executive order takes, to make it easier for companies to build new electricity generation, to build new transformers, to build new datacenters in America. How will you know if you're doing enough? How will you know if America is moving fast enough?
Mr. Kratsios: To me, it's tracking the regulations that we've been able to pull down or modify to accelerate the velocity of permitting and building out those facilities. So I will probably be tracking, over the long term, the time it takes from initial application to approval and to shovels in the ground starting to build.
Mr. Allen: So if you think about, like, Warp Speed, which says, you know, it normally takes this long to develop a vaccine, and we developed a vaccine in this period – you want to say it normally takes this long to get the permits for a new-build power plant –
Mr. Kratsios: Yeah.
Mr. Allen: – we want it to take – that’s what you’re going to be –
Mr. Kratsios: And that's kind of what our hope is. I mean, I think one of the big steps that was discussed in the Action Plan is around creating – this is getting super wonky – categorical exclusions for AI-related buildouts under NEPA. So many of you know NEPA is like a big sort of, you know, elephant that you have to sort of deal with, I guess, when you're trying to build in the United States.
And there are interesting ways around it where, if we can get categorical exclusions from NEPA for these large datacenters, that's one example of acceleration. But there's lots in there, both for how you build out the datacenters themselves, but also the power generation and transmission.
Mr. Allen: Yeah. So there’s a lot of bottlenecks in the AI buildout, because these companies – President Trump mentioned a few of the tech giants who just collectively are looking to invest $320 billion. Just to level-set folks in the audience, NASA is, like, 20 (billion dollars) to $25 billion. So we’re talking about more than 10 NASAs per year of data-center power and electrical infrastructure buildout.
I think, obviously, President Trump has a preference for that being built in America. And, you know, the Biden administration, with the diffusion rule, they tried to say we’re going to force you to build it in America by prohibiting the sale of chips elsewhere and giving you quotas for how much you had to build in America.
But, at least in the short term, you do see companies like Nvidia and its chip suppliers, TSMC, are kind of supply-constrained. You know, they can only make so many chips per month, per year. And so as these big deals go forward, some of the deals that President Trump was personally involved in negotiating with Gulf states, for example, how is the administration thinking about what’s the right approach to promoting American exports, on the one hand, which is attractive, but also promoting infrastructure spend in America, which is also attractive? How are you thinking through that tradeoff? What are you trying to navigate?
Mr. Kratsios: Yeah, I think the challenge with the Biden administration approach was they were always trying to fight the battle of, like, the moment right now and didn’t think at all about how the future would change over time.
I'll give you two examples of that, and it relates to this. The first example was their inclusion of a compute limit of 10 to the 26th for models where, essentially, you know, the risks associated with them needed to be disclosed to the Commerce Department. You know –
Mr. Allen: Yeah. A bunch of regulatory hooks kicked in when you have that power level.
Mr. Kratsios: A bunch of regulatory hooks. Right now over 33 models have been trained over that limit. That limit makes no sense. That was just, like, some number pulled out of thin air by some think tank that got sort of absorbed into the Biden ecosystem. And so that number makes no sense. That approach is not future-proof.
I think, on the chips issue, I'd push back a little bit: like, right now we're not seeing a chip constraint. Like, a few years ago, we were. And the big fear was, like, oh, you know, if we open up sales, then all these foreign buyers are just going to buy them and, like, pay through the nose for them, and the U.S. won't have any chips.
That has not proven true. When I talk to any of the hyper-scalers here in the United States, they have the chips that they need. There’s more than enough chips for what we want to accomplish over the next couple of years. So I think we’ve already passed that problem.
So I'm not worried about our ability as a country to have the chips that we do need. It is obviously a priority for this administration to make sure that American companies have the chips that they need to build American datacenters here in America. And we will always make sure that that's a priority. And if we ever see that slipping for whatever reason, or a problem, we're going to act on it. But generally speaking, that hypothetical sort of, like, chip constraint that we were seeing before is not really true.
Mr. Allen: The bottlenecks you’re more focused on are electricians, permitting, those kind of bottlenecks.
Mr. Kratsios: Yes.
Mr. Allen: Yeah.
Mr. Kratsios: Yes. Yeah.
Mr. Allen: And it’s fair to say that wherever you identify bottlenecks to the buildout of this infrastructure, you want to use whatever tools you have to go after them.
Mr. Kratsios: Yes. And if anyone in the audience knows of a bottleneck or has run into one, just let me know. We’re ready to tackle it.
Mr. Allen: So I think you've described this package of executive orders in one of your earlier speeches as, you know, really making the most of the executive authorities available to you. But you started by saying that, you know, there are some areas where you do need Congress' help.
Mr. Kratsios: Yeah.
Mr. Allen: So could you just talk about, you know, to the extent that you are ready to talk publicly about it, the Trump administration's AI legislative agenda? Whether that's in appropriations – because some of these ideas that are, you know, listed as an explore-type action, if you go forward with them, would presumably require some kind of appropriation. Other ones might be, you know, exercising authorities where additional authorities would be helpful. So what are some of the legislative priorities, to the extent that you're ready to share at this stage?
Mr. Kratsios: You know, I don’t think we necessarily have any priorities, but I think there’s a category of work that, obviously, the Hill needs to be involved in. Anything related to preemption is something that’s going to be mostly in their court instead of ours, and we look forward to working with them to kind of think through some of those preemption issues.
Obviously, the fair-use issue of copyrighted materials in the training of large language models was something that the president brought up in the speech. There's not a lot the executive branch can do on that; understanding and interpreting what fair use is is now sitting in the courts. But, you know, that's obviously an area that the Hill – although it's quite controversial – could potentially try to tackle if they want to.
And I think, you know, with the reforms that we've made to CAISI, there is obviously an opportunity for the Hill to think about how to legislate on the standards institute and give it sort of statutory cover for some of the actions that we want to be doing long term.
And to me, I continue to always think about R&D funding and the way that we can prioritize AI-related funding across NSF and lots of other agencies.
Mr. Allen: Well, Director Michael Kratsios, we've been so fortunate that you were willing to take time out of what I'm sure is an incredibly busy few weeks – not that you really have any calm weeks this year – and spend it here with us at CSIS. And I want to congratulate you, and congratulate the administration, on this package of AI policies, which not only gives us at CSIS a lot to think about and do, but I think is really an important move for this country. So thank you and congratulations.
And please join me in thanking – (applause) –
Mr. Kratsios: Thanks for having me. (Applause.)
(END.)