AI Diffusion Framework – Emergency Podcast

This transcript is from a CSIS podcast published on January 13, 2025.
Intro: Welcome to the AI Policy Podcast, a podcast by the Wadhwani AI Center at CSIS. I'm Gregory C. Allen. And I'm Andrew Schwartz. Join us as we dive into the world of AI policy, where we will discuss the implications of this transformative technology and what governments around the world are doing about it.
Andrew Schwartz: This is a special crossover episode of the AI Policy Podcast and the Truth of the Matter. My colleague Gregory C. Allen, who is director of the Wadhwani AI Center, is here with me to talk about some brand-new rules that came out Monday, January 13th, 2025: the Biden administration released its interim final rule on artificial intelligence diffusion. Greg, before we get into the details, what is the rationale behind this export control framework that the administration put out today?
Gregory C. Allen: There's so much to get into. This rule is a big one with a lot of different moving pieces, but I would start just by saying I think this rule is the biggest thing that the Biden administration has done on AI and geopolitics since the original October 2022 export controls, which were a bombshell. So, the Biden administration is going out swinging and this one is a big deal. So, there's a ton of different moving pieces in this regulation. As I said, I think the overarching hypothesis behind this is number one, a lot has changed in the underlying reality of AI technology, literally just in the past 60 days. So, for folks who went away on their holiday vacation and weren't paying close attention to the AI news the way I do.
Mr. Schwartz: Yeah, when I was on the beach, I was not thinking about AI. Just to be fair.
Mr. Allen: Yes, that's fair. You are right and everyone's right. But because of that, what you missed is that there's been this real change in what's going on in the minds of the leading AI developers around the world in terms of the future trajectory of continued AI progress. You probably remember that this fall, there were a lot of people talking about, oh, is AI hitting a wall? Is it hitting some kind of plateau where all the different competitors in AI had their models reach a ceiling of performance and then couldn't break through it? There were a lot of different hypotheses for why that might be the case. But what's really important is that in December, OpenAI announced results of their o3 model on the ARC-AGI benchmark, the artificial general intelligence test developed by François Chollet, and it literally smashed the previous performance results on this benchmark.
And again, we're getting into the weeds here, but I do think this is important to understand as context for the rule. Chollet, who is one of the top 500 AI researchers on planet Earth, has been a leader in this field for a really long time, but has also been something of an AI skeptic. He's basically one of the folks who, every time some kind of interesting bit of AI progress comes out, says: actually, this is interesting, but it's specific to that domain, and we actually haven't moved the needle towards human-level artificial general intelligence that can be flexibly applied to a really diverse set of tasks. So, Chollet had developed in 2019 this benchmark, the ARC test, which was designed to assess AI systems specifically on this attribute: can they reason through and develop novel solutions to problems that are not based on the history in their training dataset? Well, that's been out since 2019. The best AI models in the world scored a zero in 2019, and as late as November 2024, they were scoring like 5% correct on the questions. So, what's interesting is this benchmark test is something that a toddler could solve, but it specifically targeted the weaknesses of all the AI systems that we had and revealed those kinds of weaknesses. And so it went from essentially 0% in 2019 to around 5% as of November 2024.
Mr. Schwartz: So our AI is just not that smart yet.
Mr. Allen: Yes. Specifically on this general intelligence kind of assessment. Well, in December, OpenAI announced that their o3 model had scored around 85%. They really had had this massive research breakthrough.
Mr. Schwartz: So from 5% to 85% in less than a month.
Mr. Allen: Yes. And Chollet, who has always been a skeptic, wrote a blog post about this that I think is just worth quoting from. He said, o3 fixes the fundamental limitation of the LLM paradigm, the inability to recombine knowledge at test time, and it does so via a form of LLM-guided natural language program search. This is not just incremental progress; it is new territory, and it demands serious scientific attention. This is a surprising and important step-function increase in AI capabilities. So that's what he's saying. But if you go and talk, like I have, to some of the leaders of the companies in AI, and I was just at the Ashby workshops, which gathered a lot of the leading lights in AI technology and AI policy last week, a lot of people were talking about this result. Because it's not just that o3 is a really impressive model; o3 is a paradigm shift in the approach to building AI models, and rather than plateauing and hitting some kind of performance ceiling, it basically says there is a clear runway ahead of us for continued massive performance improvements. o3 is kind of like a toddler in this new paradigm of AI technology approaches, and it's going to grow up. And Sam Altman, the CEO of OpenAI, has said explicitly what that means: he thinks that we are going to reach artificial general intelligence during the Trump administration, like we are a handful of years away from a transformative moment when AI really is as smart as a person, really is as flexible in terms of performing economic tasks as a human worker, and still has all the scalability that we can achieve with AI systems. And so now I want to go to this op-ed by Dario Amodei, who's the CEO of Anthropic, another leading frontier AI research firm. He authored the op-ed with Matt Pottinger.
Mr. Schwartz: Yeah, this was just a couple days ago.
Mr. Allen: Exactly.
Mr. Schwartz: In the Wall Street Journal.
Mr. Allen: Yes, in the Wall Street Journal. Pottinger was the deputy national security advisor during the first Trump administration, and they wrote, the nations that are the first to build powerful AI systems will gain a strategic advantage over its development. Our shared security, prosperity, and freedoms hang in the balance.
Mr. Schwartz: So this is coming into focus here.
Mr. Allen: Oh yeah. AI will likely become the most powerful and strategic technology in history. By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. This is a country of geniuses contained in a data center.
Mr. Schwartz: This is really scary, I think.
Mr. Allen: Yes, this is the big breakthrough everybody sort of sees. Literally, if we just keep doing what we're already doing, we see the runway to this sort of transformative technological moment. And what they're saying is it matters a lot that we get there before China.
And so this policy, which is now the AI diffusion rule, it came out today, January 13th. I view this policy as like the break glass in case of emergency tool that the Biden administration has now reached for. And we should get into what all the mechanisms of this policy are. But I think that framing is really important. They think that we're only a handful of years away from this transformative moment in AI technology. And it's worth taking what I think are admittedly some pretty extreme measures to make sure that goes the right way for national security.
Mr. Schwartz: Alright, Greg, the first question I have, and I have many, is this: if it's true that we're at this breakthrough moment, and the Biden administration is putting out rules to make sure that our adversaries in the world, China, Russia, et cetera, don't get this technology before we do, at the same time leaders in Silicon Valley are saying we don't want the Biden administration to control our development any further. A lot of them have endorsed President-elect Trump because of that.
So how does that factor into the equation here?
Mr. Allen: So I think when you think about the industry pushback, and there's been some reporting about industry pushback over the past month, it's worth pointing out that, number one, a lot of the industry pushback was not about the policy, it was about the process. What they're basically saying is, hey, Biden administration, you're rushing this thing out the door. You've given us 10 days to comment on it. It's an incredibly complicated policy. We need months to digest this and even come up with an opinion. That's part of the complaints about the process. And you've seen this come out of the Semiconductor Industry Association, for example. NVIDIA, which is the leading AI chip giant, of course hates this policy. They hated it before and they hate it now.
Mr. Schwartz: Now they want to be able to sell their stuff wherever.
Mr. Allen: They want to be able to sell their stuff wherever. I mean, it's worth pointing out that NVIDIA has continued to oppose even the "you can't just sell straight to China"...
Mr. Schwartz: Part of the rules. So this cuts way into their business.
Mr. Allen: Not necessarily. Actually, what's really interesting is that NVIDIA is completely sold out. For every chip that TSMC, the Taiwanese chip manufacturing giant, can make, NVIDIA has more than one customer who would love to buy that chip.
Mr. Schwartz: There's a line around the corner.
Mr. Allen: Exactly. So NVIDIA's revenue is going to skyrocket with or without this policy. And I think you've got to contrast what NVIDIA tells the news media versus what NVIDIA tells their investor relations audience, you know, the bank analysts who cover their stock. They have not said that this policy is going to lead to $1 of decreased revenue. And when you tell stuff to the Securities and Exchange Commission, that's under oath, so there's an incentive to be accurate there. But my point is, what this policy is designed to do is not to decrease NVIDIA's sales, but to shift NVIDIA's sales. There is this giant global AI infrastructure buildout, and this policy wants to make sure that the bulk of that buildout takes place in the United States and in countries that the United States can trust.
Mr. Schwartz: Right, so sell to Canada, New Zealand, Ireland, those places, no restrictions. But there were some surprising countries that are allied with us that didn't make this list, including Israel.
Mr. Allen: Yeah. So I think it's worth now getting into the sort of meat of the policy and the mechanisms by which it works. Effectively, it divides the world into three different country groups. The first, country group one, is the United States and 18 other countries. Now, you might notice that the United States has more than 18 allies. The United States is treaty-bound to defend a lot more than just 18 countries. So how do you get on this list?
Well, in my conversations with government officials, they say that it's not just that you're a close United States ally, it's that you're a close United States ally with a history of trusted cooperation on national security sensitive technologies and a demonstrated track record of being able to limit smuggling in your country.
So if you think about a country like Singapore or a country like Turkey, these are US allies, but Turkey has been a massive source of smuggling to Russia. Singapore has been a massive source of smuggling to China. You mentioned Israel. Israel has had a history of a lot of smuggling kind of activity going through Israeli territory. And so it's not just that you're a trusted and close US ally, it's that you're a trusted and close US ally who is aligned with the United States in terms of technology competition with China and who we have confidence can do what it takes to ensure that the chips that are sold to your country aren't actually smuggled into China.
Mr. Schwartz: I see.
Mr. Allen: So this policy is a big attempt to crack down on smuggling of AI chips to China, and it is willing to take pretty extraordinary measures to do that. So if you're in country group one, which, you know, is the United States and 18 partners, it includes Canada, the UK, Germany, Japan, Australia, et cetera, there are basically no restrictions. You can buy as many chips as you want.
Then there's country group three, which is countries for which the United States has some kind of preexisting arms embargo. This is basically the Country Group D:5 list; it includes China, Russia, Iran, et cetera, et cetera. You're not buying any chips. We don't want you to have advanced AI capabilities, full stop. And then the rest of the world is in country group two, which is this middle group of countries, and they can still buy a lot of chips. So, for example, they're allowed to buy 50,000.
The way the policy does this is in terms of total processing power, but it's easier to think about it in terms of chip equivalents to some of the leading chips. So one of the best chips on the market right now is the NVIDIA H100 chip. And a country can buy 50,000 H100 equivalents of AI computing capability if they're in group two. So that's a lot of chips. If you think about the data center that Elon Musk built as part of xAI in Tennessee recently, that's a world-leading facility and it's got a hundred thousand H100s in it. So 50,000 is not nothing. And then if a country is willing to sign a memorandum of understanding with the United States, basically saying we're going to decouple from China at least in AI technology, not decoupling in everything, but decoupling from China in AI technology, they can buy an additional 50,000, so up to 100,000 H100 equivalents if they sign that memorandum of understanding with the United States.
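A minimal sketch of the quota arithmetic described above, in Python. It assumes a simple "H100 equivalent" accounting unit with hypothetical per-chip weights and function names; the actual rule is written in terms of total processing power, and everything below other than the 50,000 and 100,000 figures from the conversation is an illustrative assumption.

# Hedged sketch: country-level chip quota check in H100 equivalents.
# Only the 50,000 base quota and 100,000 with-MOU figures come from the
# discussion; the chip weights and names are hypothetical.

BASE_QUOTA_H100_EQ = 50_000   # group-two country cap (per the conversation)
MOU_BONUS_H100_EQ = 50_000    # additional headroom if an MOU is signed

# Hypothetical equivalence weights, for illustration only.
CHIP_WEIGHTS = {"H100": 1.0, "older_gpu": 0.25}

def country_quota(signed_mou: bool) -> int:
    """Cap in H100 equivalents for a group-two country."""
    return BASE_QUOTA_H100_EQ + (MOU_BONUS_H100_EQ if signed_mou else 0)

def order_in_h100_eq(order: dict) -> float:
    """Convert an order {chip_model: unit_count} into H100 equivalents."""
    return sum(CHIP_WEIGHTS[model] * count for model, count in order.items())

def fits_within_quota(order: dict, already_used: float, signed_mou: bool) -> bool:
    """True if the new order stays under the country's quota."""
    return already_used + order_in_h100_eq(order) <= country_quota(signed_mou)

# Example: a country that has already used 30,000 H100 equivalents wants
# 40,000 more H100s; it fits only if the country has signed the MOU.
print(fits_within_quota({"H100": 40_000}, already_used=30_000, signed_mou=False))  # False
print(fits_within_quota({"H100": 40_000}, already_used=30_000, signed_mou=True))   # True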
I think one way to think about this is, most folks will remember there was that big deal between Microsoft and G42, between the United States and the UAE. And I think the way to think about this policy is that it takes the logic underpinning that specific transaction, that specific agreement, and applies it to the entire world. Because we successfully persuaded the UAE, and UAE diplomats have said this, in fact they've told me this, that they are willing to decouple from China, decouple in AI specifically, because they view US technology as so attractive and so superior. So this is like a carrot, a diplomatic carrot that we're offering to the rest of the world: choose America, and if you choose America, we will bring you into this attractive AI future that we are building. So that's the countrywide quotas. Now it gets even more complicated, if you can believe it.
There's also what are called universal verified end users. So these are like companies, and then there's national verified end users. A universal verified end user has to be headquartered in a group one country. So it has to be headquartered in the United States or headquartered in Japan or whatever. And then they have to sign an agreement with the United States government that they're going to have all kinds of cybersecurity protections and all kinds of physical security protections at the facilities where these chips are going to be installed and operated. That's basically designed to minimize the risk of smuggling. And it's also designed to minimize the risk of critical intellectual property exfiltration by, basically, Chinese espionage. So if you are, for example, in Kenya, you could buy 50,000 chips from NVIDIA, and then if Kenya signs an agreement with the United States, you could buy a hundred thousand chips. But then if Microsoft or Google or AWS gets this universal verified end user designation, they can come in and build data centers that have even more chips. So beyond that 100,000-chip cap, they can build massive AI data centers, but they have to do so in a way that reduces the risk of smuggling and in a way that reduces the risk of Chinese espionage. And so here I think it's worth bringing in that Microsoft President Brad Smith has come out.
Mr. Schwartz: CSIS trustee Brad.
Mr. Allen: Indeed. And he's come out, and I don't know that it's fair to call this supporting the policy, but something closer to neutral, maybe soft support. And his quote, which I thought was very interesting, was: we are confident that we can comply fully with this rule's high security standards and meet the technology needs of countries and customers around the world that rely on us. So they're not expressing concern about this policy, at least not publicly. And part of the reason why they might want to do that is that this is now a massive incentive to buy American, at least when it comes to cloud and data center infrastructure around the world, right? Because if you're in Kenya, or the UAE, or Saudi Arabia, and you want your country to benefit from this massive AI buildout expansion, if you bring in American partners, or you bring in partners from a group one country, think Mistral of France for example, your ability to buy AI chips goes way, way up. And that's because the US government has confidence that Microsoft can put in place the required cybersecurity protections, the required physical security protections, that they need to have confidence that Chinese smuggling is going to be kept at bay.
Mr. Schwartz: So Greg, this is early days. We're talking basically the morning that this rule came out from the Biden administration. What is the expected reception among allies and partners, and specifically partners who didn't make the ally list, that first list?
Mr. Allen: Well, I think there's stuff in there for them too. So it's not merely a "thou shalt buy American forever." In addition to the universal verified end user, there's also the national verified end user, which is whereby, say, a country like Israel, which does have problems with smugglers in its country, or a country like Singapore, which does have problems with smugglers in its country, but also has some companies who are pretty dang trustworthy, those companies can go get certified, and then they can buy a lot of chips in the same way that Microsoft can get certified. So it wouldn't be a universal verified end user license, but it can be a country-specific, entity-specific type of agreement. The other thing that's in this policy that's designed to entice those group two countries is that it actually loosens the restrictions on purchasing small numbers of chips.
So if you are buying less than 1,700 H100 equivalents, the licensing process, which has actually held up a lot of deals because it just takes a while to process all these export control licenses, gets much simpler. A lot of countries have been complaining to the United States, saying, hey, we're only buying 12 chips. We know that you're not concerned about the smuggling risk associated with our buying 12 chips. Why are you taking so long to process this license? Well, all those purchases of less than 1,700 chips have now been put in the express lane. The licensing requirements for that have been massively simplified, massively accelerated.
So those are some things in this group two arrangement that I think countries are going to like. And the other thing that this does is add a lot of clarity to the situation. The US and the UAE had to have a lot of negotiation over what it was going to take to make the United States comfortable enough to allow these exports to go through. Now that there is this sort of universal policy, a lot of clarity has been brought to the situation. So I think the UAE found this attractive enough to basically announce that they are, quote, "choosing America" in the competition over AI supremacy between the United States and China. And I think with this new policy offering that diplomatic carrot to a much longer list of countries, a lot of them are going to be pleased.
Mr. Schwartz: Alright, Greg, so we've talked a bit about the reception from industry. We've talked about the allies. Let's talk about two things that I think are critically important. We're a week out from the transition of presidential power; President-elect Trump will be assuming power next week. And it's a totally different Congress now; we've got both the House and the Senate under Republican control. Given that the Biden administration only has a week left in office and these rules came out today, what do you expect the Trump administration to do along these lines, and the Congress as well?
Mr. Allen: Yeah, incredibly important question, obviously. The Biden administration is not going to implement this rule. It's just not going to happen. They know they're punting to the Trump administration, but the way they've done it is interesting. This is coming out as an interim final rule. And what that means is that this rule is going to take effect in a matter of months unless the Trump administration blocks it. So, it's not that the Trump administration can just do nothing on this rule and it goes away. If they do nothing, it happens. They have to do something to stop it or change it if they want to stop it or change it. And here, I think it's worth pointing out that every single AI and semiconductor export control policy adopted by the Biden administration has an antecedent in the first Trump administration, right? It wasn't the Biden administration that discovered chip export controls.
That was the Trump administration, with the ZTE and Huawei export controls. It wasn't the Biden administration that discovered semiconductor equipment export controls. That was the Trump administration, with the EUV restrictions done in partnership with the Dutch and the specific entity listing of SMIC, which happened in December 2020, after the election, right? So the Trump administration knows that you can pass big export control policy shifts after the election. They did it. They started that precedent.
I think there are folks in the incoming Trump administration, if you look at names like Marco Rubio, every time the Biden administration put out an AI and semiconductor export control package on China, Marco Rubio, in his capacity as a senator, was there with a statement right away, basically saying: here are the loopholes in this policy I don't like, here's why it's not tough enough, I would be tougher. Well, now he's going to be Secretary of State, right? A very important part of the interagency process that sets export control policy.
Mr. Schwartz: So he’s got his chance, right?
Mr. Allen: Yes. And you've also got Mike Waltz, right, who is a former congressman, a strong China hawk, really skeptical of China's tech industry in so many different ways.
Mr. Schwartz: And he’s the incoming national security advisor.
Mr. Allen: Exactly. And so you can see how he would also find this policy attractive. At the same time, the Trump administration has said that they know this big AI infrastructure build out is going to happen and they want it to happen in America. Well, if you look at that universal verified end user license, there's a really interesting and important incentive to build in America because what it says is in order to get that universal verified end user distinction, which makes it so much easier to sell abroad, 50% of your total AI computing capability has to be in the United States, and no more than 7% of your total AI computing capability can be in any individual group two country. So that is basically saying build in America, which is exactly what the Trump administration has been saying they want to happen when it comes to this big AI infrastructure boom.
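A minimal sketch, in Python, of the two geographic constraints just cited for universal verified end users: at least 50 percent of total AI computing capacity in the United States, and at most 7 percent in any single group two country. The input format and function name are hypothetical assumptions, and the actual rule contains additional conditions beyond these two.

# Hedged sketch: check the two UVEU geographic constraints mentioned above.
# Input format and names are illustrative assumptions, not from the rule text.

def meets_uveu_geography(capacity_by_country, group_two):
    """capacity_by_country: dict of country -> installed AI compute (any consistent unit)."""
    total = sum(capacity_by_country.values())
    if total == 0:
        return False
    us_share = capacity_by_country.get("US", 0) / total
    worst_group_two_share = max(
        (cap / total for country, cap in capacity_by_country.items() if country in group_two),
        default=0.0,
    )
    # At least 50% of compute in the United States, at most 7% in any one group-two country.
    return us_share >= 0.50 and worst_group_two_share <= 0.07

# Example: 60% in the US, 6% in Kenya, the rest in group-one countries -> passes.
plan = {"US": 60, "Japan": 20, "UK": 14, "Kenya": 6}
print(meets_uveu_geography(plan, group_two={"Kenya"}))  # True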
Now, one thing I think it's important to note here is that a lot of the complaints on the part of AI companies have been related to energy, right? We can't build all the energy infrastructure we need to run all these very energy-intensive AI data centers in the United States, and that's why this policy only makes sense as part of a one-two punch, right? If you do this policy and you don't do immediate energy permitting reform, this policy will be a disaster. Well, what did the AI national security memorandum of a couple months ago call for? Massive energy permitting reform. And I've heard that that policy could come out as soon as tomorrow.
Mr. Schwartz: In which case we'll be right back in these chairs talking about that aspect of it.
Mr. Allen: Yes. So that's really big. So this is a really strong anti-China policy. This is a really strong build-in-America policy. You can see how the Trump administration would latch onto this as really appealing. However, there are Republicans in Congress who are loudly opposing this. Most notably, Senator Ted Cruz put a post on X in which he said that this policy would "crush American semiconductor leadership," that's a quote, and was drafted in secrecy without input from Congress or American companies. So he's coming out loudly in opposition to this policy. He's going to try and mobilize opposition to this policy in the Trump administration.
Mr. Schwartz: Is it true what he's saying though?
Mr. Allen: So I don't think so, and here's specifically why. I've been talking to folks who are in, or talk to people in, Huawei's supply chain, and it's really important to ask yourself: if America doesn't sell these chips, who is going to sell these chips? And some folks would say, China. Well, the best AI chip producer in China by a long shot is Huawei, right? They have really strong chip design capabilities, but what they don't have is chip manufacturing capabilities. I found out recently that actually a significant majority of all the AI chips that Huawei has ever made were actually made by TSMC. Basically, Huawei created some shell companies. Those shell companies went to TSMC and said, hello, we'd like you to make our AI chips. I solemnly swear that I'm not Huawei in disguise, but this chip design, you might notice, looks exactly like Huawei's chip design.
Well, the administration has already put a stop to all of that. I mean, TSMC has basically cut off all of its Chinese customers when it comes to anything related to AI. But Huawei, the reason why they needed TSMC, is that there aren't manufacturers in China who can make these chips. So the best logic chip manufacturer in China is SMIC. SMIC can make seven nanometer chips, but how much capacity do they have for seven nanometer chips? It's actually only 20,000 wafers per month. That is not a lot of chips. They're actually bottlenecked not just by a shortage of DUV lithography equipment, but also by an inability to get advanced etching equipment and advanced deposition equipment. This is the exact result of the export control strategy that the Biden administration and the Trump administration have pursued, trying to block SMIC from getting these advanced machines.
So if you think about how many chips Huawei can make and how much else that seven nanometer capacity is needed for, they need that production line not just to make AI chips, but to make phone chips, to make laptop chips, to make data center chips. They can't make anywhere even close to as many AI chips as China needs. So the idea that Huawei, in a matter of years, is going to go around the world and start exporting these Huawei Ascend chips, it's just not credible, because that's exactly what the export control strategy was designed to prevent, and that's exactly what it has succeeded in doing. The alternative to buying American AI chips and taking this deal is not Chinese AI chips. The alternative to taking this deal with America is no AI chips. Wow. Most countries are not going to find that a very attractive offer.
Mr. Schwartz: No, zero chips is not an attractive offer. Greg, this has been fascinating. I know we're going to get together again this week to talk about more of this. So thank you for now. And to our listeners, I want to give a big shout-out to the first responders in Los Angeles and to all the brave people who are helping and fighting the fires out there. I also want to give a shout-out to the victims, who are really struggling, and there are lots of places to donate; the Red Cross is one of them. Please help our fellow countrymen out in Los Angeles. So, thank you very much. To be continued.
Mr. Allen: Thank you. Take care.
Outro: Thanks for listening to this week's episode of the AI Policy Podcast. Be sure to subscribe on your favorite podcast platform so you never miss an episode. We also love to hear from you, so reach out at AIPolicyPodcast@csis.org with your suggestions and feedback. Finally, don't forget to visit our website, csis.org, for our latest research reports and events. This podcast was produced by Cera Baker, Isaac Goldston, and Sadie McCullough. See you next time.
(END.)