AI 2023 Year in Review
This transcript is from a CSIS podcast published on January 23, 2024.
Gregory C. Allen: Welcome to the AI Policy Podcast, a podcast by the Wadhwani Center for AI and Advanced Technologies at CSIS. I'm Gregory C. Allen.
Andrew Schwartz: And I'm Andrew Schwartz.
Gregory C. Allen: Join us as we dive into the world of AI policy where we'll discuss the implications of this transformative technology for national security, geopolitics, and global governance.
Andrew Schwartz: So Greg, I'm totally psyched. This is our first AI policy podcast, the first of many. You and I are going to be doing these twice a month and really getting down into policy on AI and how it's affecting government, industry, everything. And this is the place that people are going to learn because you're the guy on this stuff. So I'm psyched.
Gregory C. Allen: I'm really excited. We've done a few of these on your podcast over the past year and a half, and they've just been so great that now we're going to do it all the time.
Andrew Schwartz: That's right, exactly. So it feels like there was not one year, but 10 years' worth of news packed into 2023, because AI is accelerating so fast; we always talk about it in comparison to Moore's Law. How do you unpack all that happened in this last year?
Gregory C. Allen: I mean, you're totally right. It really does feel like 10 years' worth of stuff happened. I would say, having been on the AI policy beat full time since 2016, it feels like as much happened in the past year as happened in the preceding five. It was just extraordinary. As I try to think about what happened in 2023, which was this extraordinary year, I think the easiest starting point is the technology itself, which changed so much in terms of what it could do, but also who was using it. So in January of 2023, ChatGPT, which had launched the previous November, announced that it had a hundred million monthly active users.
Andrew Schwartz: And that's globally.
Gregory C. Allen: That's globally. And that made it the fastest-growing consumer technology product ever. That was faster than the adoption curve for the iPhone, faster than the personal computer. It's incredible.
Andrew Schwartz: And that's when most of us really started to get interested and aware of just how fast all of this was moving forward.
Gregory C. Allen: Yes, and it was a different approach to AI, because most people had had some experience with AI capabilities. If you're using face recognition to unlock your phone, that's a machine learning capability, and five years ago everybody would've called that AI. But what ChatGPT and these other large language models do is give you an AI experience that's much, much closer to AI in the movies. You're actually conversing with an entity. It's giving you information in this conversational manner, and it really reset everyone's expectations for what AI is, what it could be, and who it could be for, because now it was so widely available.
Andrew Schwartz: I mean, all of a sudden that's all anyone was talking about.
Gregory C. Allen: Yeah, it really sucked up all the oxygen in Washington DC and in capitals around the world.
Andrew Schwartz: So that happened in January. What happened next?
Gregory C. Allen: Well, it didn't stop there. So in January, that was the old version of ChatGPT, based on GPT-3.5. In March, OpenAI, the company that makes ChatGPT, released its updated version, GPT-4, which was significantly better. It was just a smarter, more effective AI agent. Now, that's all in text, but later that year ChatGPT added image recognition capability. So you could scribble a note on paper, upload that image, and the AI system would talk to you about what you had put in that picture, and it could generate images. Other companies replicated this capability set. Google DeepMind more recently announced their Gemini model, which is also multimodal. So we've gone from this really exciting text interface that hundreds of millions of people are using every single month to a system that's doing so many more things, and barriers that looked really tough not that long ago are falling. AI has been pretty good at image generation for a few years now, but just in November of 2023 there was this set of breakthroughs around video generation, such that you can create really cinematic, really high-quality-looking video, mostly in short clips. It's something that looked so hard, that seemed like it might be many years away not that long ago, and it happened in 2023. And the pace of progress just shows no signs of slowing down.
Andrew Schwartz: So now we've got AI generated video that is more available to a mass audience. What does that really mean going forward? I mean, that obviously brings up a lot of risks. We talk a lot about existential risk, which has become a major policy issue in 2023. What does that term mean and what has to happen with regulatory and legislative approaches to address it?
Gregory C. Allen: Yeah, 2023 was the year that governments around the world got serious about AI regulation, and the year the existential risk community entered the mainstream policy conversation. They're really focused on the catastrophic risks of AI, on what happens when AIs are smarter than even the smartest human being. How do human beings remain in control and ensure that their lives and freedom are preserved in that type of future? That's stuff that folks like Elon Musk have been talking about for a long time now, but it was still sort of adjacent to the policy conversation, definitely a niche within a niche. I think all of that changed around May 30, which we talked about before, when the Center for AI Safety released a letter that said that managing the existential risk of AI should be a global priority similar to that of nuclear weapons and global pandemics. That letter had an extremely long list of signatories who were extremely illustrious people. We're talking the CEO of OpenAI, Sam Altman, the CEO of Google DeepMind, Demis Hassabis, and a bunch of leading researchers and business executives from this field, along with philosophers and others. And when you compare AI technology to nuclear weapons, you get policymakers' attention. Definitely. And not long after that, you saw a bunch of stuff happen very quickly. On June 21st, Senate Majority Leader Chuck Schumer came here to CSIS-
Andrew Schwartz: Right with you?
Gregory C. Allen: Yes. That was a very exciting day for us here at CSIS. And he announced that he had his SAFE Innovation Framework for AI and was working with a bipartisan group of four on comprehensive AI legislation. And then just a month after that, the White House convened a group of technology executives from the leading AI companies, where they announced voluntary commitments related to AI safety, and it just kept going from there.
Andrew Schwartz: So I was going to ask you, I want to talk a lot about how policymakers reacted to the speed of all this, but first, let's talk about how business and investors reacted.
Gregory C. Allen: This was the year that if you weren't talking about AI, you were almost in trouble with your investors because-
Andrew Schwartz: You were way behind. Exactly. You didn't know what you were talking about if you didn't talk about that-
Gregory C. Allen: Every company was sort of operating from the assumption that either we currently have a good AI strategy, or we're going to be put out of business by somebody who has a good AI strategy. And there are some really interesting data points on that. I think the first one comes from studies of CEO investor relations calls, when executives talk to the investment bank analyst community that says whether their stock is a buy or a sell. Discussions of AI went through the roof, such that almost every executive among the Fortune 500 companies was mentioning AI multiple times on their calls, whether or not their company was currently using AI. They felt a need to say that, yes, this is something we're going to do.
Andrew Schwartz: So if you don't say the word AI, you're not hip, you're not part of the crowd, you're what?
Gregory C. Allen: Well, you're clearly pursuing a strategy that is not reflecting technological reality, I think, is the thought process. And then the second thing that happened is just astonishing. So there's this group called the Magnificent Seven: Meta, Amazon, Apple, Microsoft, Google, Tesla, and Nvidia. Those seven tech giants, each of whom has a big slice of the AI pie, make up 29% of the value of all 500 companies in the S&P 500, which are some of the biggest and most successful companies in all of global industry, and especially American industry. And not only that, but when you think about stock performance in 2023, Goldman Sachs found that those seven companies were responsible for 71% of the growth of the entire S&P 500, while the other 493 stocks only made up 6% of the growth. So the companies that are really poised to benefit from the AI revolution are going gangbusters, and everybody else is sort of treading water. And that's just an incredible story.
Andrew Schwartz: So why wouldn't we all just invest in them and then nothing else?
Gregory C. Allen: Well, as an empirical outcome, that appears to be what's going on. Basically, investors are pouring into these technology giants, and then every other CEO is saying, no, don't worry, we're going to figure out this AI thing, we're going to make money off this too.
Andrew Schwartz: Thus everyone talking about it on their investors' calls. Got it. Okay. So that puts a bow on some of the things that are going on in the industry. But I want to break down the policy conversations into a couple different buckets. What happened on AI regulation? Let's start with that.
Gregory C. Allen: So we talked a little bit about how that letter changed everything, and a lot was getting going in the summer here in the United States. That really did culminate in really important outcomes, really important policy decisions. So in October, the White House issued a new executive order all about AI. And in conversations with folks involved in this policy, they basically describe it as: this is the absolute maximum that we could do within the White House's legal authorities, and anything more than this is going to require legislation. And I think that sort of says something in and of itself. They put the pedal to the floor and did everything they could possibly do related to AI regulation. That includes, for the folks who are making the largest AI models, that they're now going to be subject to red teaming. So there are going to be independent assessments of the safety and the efficacy of the safety procedures of these companies for the largest AI models. And then for AI in specific use cases in various federal agencies, there are other requirements and regulations that are now going to apply. So that's what the White House was doing. I mentioned that Senator Schumer started his SAFE Innovation Framework for AI. He's been holding a series of AI Insight Forums, and I had the opportunity to participate and testify in one of these forums. They've already had nine of these, and they've convened some really remarkable individuals. But it's a private hearing, not a classified hearing, just a private hearing. And the point of that-
Andrew Schwartz: So this isn't the kind of thing you see on C-SPAN?
Gregory C. Allen: This is not the kind of thing you see on C-SPAN. For one, the number of participants is much larger. For another thing, the senators are genuinely there to learn. When I was there, there were senators up at the speaking table, but there were also senators in the audience, just sitting there.
Andrew Schwartz: Which is remarkable.
Gregory C. Allen: Which is remarkable. I can think of very few times when a senator has sat in the audience of something for three hours just to learn.
Andrew Schwartz: Right? This isn't a committee. They're not obligated to be there. They're there because they need to know what's going on in this space.
Gregory C. Allen: Exactly. I mean, there's a really interesting education process that is taking place. There are multiple members of Congress right now who are literally pursuing graduate degrees. My own congressman, Don Beyer of Northern Virginia, is taking classes at George Mason University in pursuit of a master's in artificial intelligence, because the legislators all see this as the transformative technology of the 21st century. And they recognize that there's this real knowledge gap between how they understand the technology and what the American people are asking of them, and what this moment is asking of them.
Andrew Schwartz: So this is such a contrast to when we had hearings years ago on Capitol Hill where members of Congress were asking the executives at Facebook, if I friend somebody, what does that mean? How does that happen?
Gregory C. Allen: And a little bit more painfully, when one member of Congress asked, well, if you don't charge for your services, how do you make money? I mean, they were literally ignorant of the most basic features of technology platforms and seemed kind of comfortable with that. Whereas with AI, they're profoundly not comfortable with that gap. There really is this kind of exciting self-education moment going on, and Schumer's work is emblematic of that. When he was here at CSIS, he said the reason they were doing these Insight Forums is because they felt like the traditional committee hearing process wasn't going to move fast enough, wasn't going to involve enough people, and wasn't going to result in the broad sort of learning that they needed to actually have a shot at drafting good legislation on this topic.
Andrew Schwartz: So in your view, 2023, this is off to a good start in terms of education and in terms of policymakers really taking this on and taking it seriously?
Gregory C. Allen: Absolutely. I mean, when I would try to have conversations with folks on Capitol Hill in 2017, you could have a conversation about AI and national security or AI in the military. But to try to have a conversation about what the government should do in terms of AI regulation, or AI promotion and adoption across government agencies, it was just tough to get people to care in that way-
Andrew Schwartz: Right, because they still thought it was science fiction at that point.
Gregory C. Allen: Yeah, exactly. And now it feels real, and they've interacted with it. They've used these systems, because it's not just the enterprise technologies that are for businesses. The consumer AI systems are so interesting and so compelling.
Andrew Schwartz: Okay, so we're talking about what happened in 2023, and what's going on in Congress and in the administration. What about internationally? Didn't the G7 take this on as well?
Gregory C. Allen: Yeah, the G7 did some really interesting stuff with AI. So Japan was the president of the G7 in 2023. And what's interesting, just in international relations writ large, is that as U.S.-Russia and European-Russia tensions have gotten more difficult, and as U.S.-China tensions have gotten more difficult, the locus of international convenings is sort of moving away from the G20, which had been an important body for about a decade, and more to the G7, where the advanced-economy, advanced-technology democracies talk to each other. And in this regard, they recognized there was important regulatory work going on in the European Union, in the United States, in Japan, and elsewhere on artificial intelligence. And they really wanted to get their ducks in a row and coordinate these types of efforts. There are a bunch of commitments that came out of that G7. One that I'll highlight, which I thought was especially interesting, was a commitment to regulatory interoperability. So they basically say: as we're all designing our regulatory regimes for AI, we don't want that to present a barrier to trade, and we don't want that to present a barrier to research collaboration. And so they're working together on the regulatory standards that are going to underpin all of this work, because they want to continue collaborating on AI. And I thought that was a pretty appropriate sort of thing for the G7 to do.
Andrew Schwartz: Okay, so then there's some other countries that are really taking a lead on this as well. What is the United Kingdom doing?
Gregory C. Allen: This was something that was really interesting. So the Prime Minister of the UK, Rishi Sunak, basically determined that artificial intelligence was going to be part of his legacy as prime minister. So the British organized this AI Safety Summit, held at Bletchley Park, the same Bletchley Park where Alan Turing worked to break the Enigma codes during World War II, and where what is usually referred to as the first programmable digital computer was built. So there was a lot of significance there in terms of the UK's leadership in the history of computing, and now it's leading this conversation on AI safety. That AI Safety Summit, for one, openly acknowledged the existential risk conversation. They also explicitly brought in China to be a part of it, because if China develops an AI that destroys the whole world, well, that's also bad. So they need to be a part of this AI safety conversation; that was the British logic there. And it resulted in the Bletchley Declaration, which among other things openly acknowledged the potential for catastrophic risk from artificial intelligence and got dozens of countries to commit that they were going to have appropriate safeguards on their development and use of artificial intelligence, especially the sort of frontier, upcoming systems.
Andrew Schwartz: So this was different than what other countries are doing? The UK really did something here that you think is unique?
Gregory C. Allen: It was a big convening and a big diplomatic achievement for the UK, I would say, because that is now the baseline for international conversations around artificial intelligence. And of course, the UK builds off what had been accomplished in the G7. But it's very interesting that existential risk was something you would almost only say in hushed tones in government in 2017, and now catastrophic AI risk is part of this declaration that was signed by dozens of countries.
Andrew Schwartz: So along those lines, China participated in this exercise with the UK and other countries. What did you make of their participation?
Gregory C. Allen: I thought it made sense. There are certain types of diplomatic interactions with China that are kind of frustrating, because they often view their participation in these sorts of things as a concession, and they feel they ought to get something in return, whereas I think the U.S. and the British would just say, no, this is part of you being a responsible stakeholder in the international system. As somebody who was involved, when I worked at the Department of Defense and before, in international conversations around military AI safety, I would often notice a big difference between what was coming out of the mouths of Chinese diplomats and the actual behavior of the Chinese military. And so I'm always a little bit cautious about giving China too much credit for what they say in these sorts of settings. We always also need to pay attention to what they're doing. But in terms of just making sure that China understood that this is actually the consensus of the global AI research community, that there are legitimate risks here, and that they do deserve some kind of appropriate safeguards, I thought that made total sense.
Andrew Schwartz: So then in December, something else happened, the EU passed the EU AI Act. What's that and what does it do?
Gregory C. Allen: So the EU AI Act has been in the works for years, and this is actually an instance in which the regulation kept getting tweaked because the technology kept changing. When they were originally envisioning this big AI regulatory effort, large language models really weren't the principal focus of conversation. But this is a legitimate horizontal regulation, and it breaks AI into different risk categories. AI for biometric surveillance is very different than AI for recommending what movie you should watch next. But the point is, depending on which risk category you fall under, there are some very significant regulations that might apply to you. The most severe restrictions can fine you up to 7% of your global revenue as a company. So that type of thing really gets folks' attention and-
Andrew Schwartz: I was going to say a lot of ears perked up with that.
Gregory C. Allen: Yes, definitely. And so while they finally did manage to pass it in December, implementation is still going to be really tough over the next few years. They're trying to set up a new organization that's going to be charged with enforcing this. They have to figure out how all the member countries are going to get on board. France in particular: President Macron has sort of said, why are we tying our hands? We're trying to run faster.
Andrew Schwartz: Well, that was my question. This is the most comprehensive set of AI regulations we've seen so far, but it's the EU regulating its own member countries. Doesn't that tie their hands?
Gregory C. Allen: Yeah. So the classic cliche is that the United States can innovate but can't regulate, and the Europeans can regulate but can't innovate. You can definitely see a way in which the EU AI Act plays into that story. The official line from the European Union is that they are regulating in order to innovate. And just to play out what that means: ultimately, folks are going to want to use AI systems that are safe, that they can trust, that are going to protect their privacy. And the European Union is saying, we're going to force our companies to figure out how to do that, and then that will ultimately be what consumers and companies want. It remains to be seen whether that hypothesis is true, and it's a debate that is still going on within the European Union. But at this point, this act is law, so something is going to come out of it.
Andrew Schwartz: So they're trying to make the Wild West a little bit less wild with this?
Gregory C. Allen: Certainly. Yeah.
Andrew Schwartz: Greg, speaking of the Wild West, I want to talk about AI and national security. 2023 was also a really big year in that regard. What were some of the most noteworthy developments that you saw?
Gregory C. Allen: When I was in the Department of Defense working on AI from 2019 to 2022, the main thing that we were working on was AI for a bunch of different use cases, like predicting maintenance requirements in vehicles or doing image recognition from ISR platforms. These were the big focuses of AI. In 2023, the real conversation was around autonomy and autonomous systems: planes that fly themselves, tanks that drive themselves, these types of things. And two really big things happened. The first is that in January, the DOD updated its policy for autonomy in weapons systems, DODD 3000.09, and it was very clear that one of their goals in updating this policy was to squash the misunderstandings about what the policy does and does not say. It is often reported as requiring a human in the loop or banning autonomous weapons, and it does neither of those things. And I think in clarifying what DOD policy was on autonomous weapons, and autonomous systems writ large, one of the goals was that the DOD would build more of them. And later that year, in August, Deputy Secretary of Defense Kathleen Hicks announced the Replicator initiative, which was a promise to do just that, and not only to do that, but to do it at massive scale. She spoke about building thousands or tens of thousands of autonomous systems and deploying them to combatant commands in a timeframe of 18 to 24 months.
Andrew Schwartz: And as we close the year, that's back in the news again.
Gregory C. Allen: Back in the news again, because it's not going as fast as she had hoped. When she originally announced this, she said that it was not going to require new money, that the DOD was going to be able to move money around to support existing initiatives and double down on them. I think there's now some skepticism from Congress about what the real budget plan for this is. If there's not a part of the budget that says it's going towards Replicator programs, is it even really real?
Andrew Schwartz: Hard to believe that additional money isn't needed for this?
Gregory C. Allen: Yeah, and I'm speculating here, but I think Deputy Secretary Hicks was a little bit hamstrung by the budget cycle, because the budget was significantly delayed in 2023 for the Department of Defense and for the government writ large. And so the budget had just been submitted to Congress only a couple of months earlier. For Deputy Secretary Hicks to say, actually, there's one thing I forgot to put in there, this big new program I want to launch called Replicator, well, perhaps that would not have looked great. And so she announced it without announcing the funding line. But for the FY 25 budget submission in DOD, I would be really surprised if there were not multiple programs explicitly identified as related to the Replicator goals.
Andrew Schwartz: What are some of the other things that are going on at DOD and also at the State Department?
Gregory C. Allen: Well, I mentioned 3000.09, the DOD's policy on autonomy in weapons systems. There had already been discussions going on in the United Nations around autonomy in weapons systems, with multiple countries, including Austria, which has been historically neutral for decades, talking about just outright banning autonomous weapons systems. The DOD and the United States government have never been in favor of that ban policy. What they had been in favor of was something more closely resembling a code of conduct, or an agreement about what legitimate or responsible uses of autonomous systems are, sort of reflecting what they had done in 3000.09. And so in February, you can think of the DOD as taking 3000.09 international, when the U.S. State Department and the DOD jointly announced this political declaration on responsible military use of artificial intelligence and autonomy. And that now has the signatures of more than 45 countries.
Andrew Schwartz: Right? They're calling for all nations to sign on to this?
Gregory C. Allen: Yes, that is the goal, friends and foes alike, and they've gotten 45 countries to sign on to this political declaration. And I think that is where DOD wants the international conversation to go: away from this UN conversation that is really focused on a ban, and more towards, look, this is what we think responsible use of autonomous weapons is. Clearly there are ways to use autonomous weapons that would be pure evil, just as there are evil ways to use any weapon, including a hammer. The question is, are there ways to use these systems that are consistent with ethical obligations under the law of war? And the DOD's answer is yes. And apparently the answer of nearly four dozen other countries is also yes.
Andrew Schwartz: What are some of these countries we're talking about in the four dozen group, and what are some of the countries that still really need to sign on?
Gregory C. Allen: So there are a lot of countries who are traditional U.S. allies, but it actually goes quite a bit beyond that. I would struggle to name names off the top of my head, but it does not include China or Russia, notably-
Andrew Schwartz: Those are the keys that haven't signed it?
Gregory C. Allen: Yes. And there were some rumors circulating that when President Biden met with the General Secretary of the Chinese Communist Party, Xi Jinping, in San Francisco for the APEC Summit, military AI was going to be a part of that conversation. No reports have really come out as to whether or not anything significant on that topic took place there. This was something that, when I was in DOD, we had tried to get the Chinese military to talk to us about multiple times, and they had always refused to have that conversation. That was frustrating, because the United States and the Soviet Union during the Cold War had conversations that genuinely contributed to global safety, to the benefit of both countries. And China just doesn't view it the same way the Soviet Union did. They're much more predisposed to just not talk at all than to talk while representing their country's own interests.
Andrew Schwartz: So as a goal of our policy going into 2024, and we're going to talk about that in another podcast, but we're going to continue trying to get them to engage on this, I'm assuming.
Gregory C. Allen: I think that's right. And then the traditional Chinese response would be: well, we don't want to talk about it, and you do want to talk about it, so if we're going to give you what you want, what are you going to give us that we want? Which is so frustrating, but it's a classic Chinese negotiating tactic.
Andrew Schwartz: Alright, let's talk about export controls. 2023 saw a significant shift in U.S. and allied export controls on semiconductors and related equipment. What was that shift and how does it relate to China's AI ecosystem?
Gregory C. Allen: So Nvidia, a U.S. company and one of those Magnificent Seven tech companies that we were talking about earlier, is the sort of dominant global supplier of the chips that are used to train AI systems and the chips that are used to run AI systems, whether in the cloud or anywhere else. And in October 2022, the U.S. government sought to cut China off from the most advanced versions of these chips in a deliberate effort to hamstring China's future progress in AI technology. Well, in 2023, a lot happened in this story. In January of 2023, it was reported that the United States had reached an agreement with Japan and the Netherlands on this topic, and that they were going to jointly restrict not just the sale of the chips, but the sale of the equipment that can be used to make the chips. And the United States, Japan, and the Netherlands combined produce more than 90% of the types of equipment that are used to make these advanced AI semiconductors.
Gregory C. Allen: And then those export controls were announced on the part of the Netherlands and Japan in March. They took effect in Japan in July and, I think in the case of the Netherlands, took effect in September. And then most recently, on October 17th, 2023, the one-year anniversary of the October 2022 export controls, the Biden administration announced updated AI chip and semiconductor manufacturing export controls. And I think what's super interesting is that if you look at what was in the Japanese policy for semiconductor manufacturing equipment and what was in the U.S. policy for semiconductor manufacturing equipment, it's often a one-to-one alignment. In some cases, they're literally using the same text. So even though there's no formal acknowledgement of an agreement on this topic, it's very clear there was coordination on this policy between the various countries. And so the hope is that this is going to restrict future progress in China's AI sector and in China's semiconductor sector. But you may recall, in August, Secretary Raimondo went to China and got a big surprise.
Andrew Schwartz: And that surprise was?
Gregory C. Allen: So she was there to talk trade and all manner of things with the Chinese government, as part of the timeframe when the Biden administration and the Xi Jinping regime were seeking to improve the quality of U.S.-China relations. Well, while she was there, Huawei announced its new Mate 60 smartphone, which had inside it a seven-nanometer chip manufactured by SMIC, the sort of top Chinese logic semiconductor manufacturer. And this was very clearly intended to humiliate Gina Raimondo, and very clearly intended to signal to the United States not only that the export controls weren't working, because China was continuing to make progress in semiconductor manufacturing, but also that China was just no longer interested in having a conversation about compliance with U.S. export control law. The conversation they wanted to have was one about technological power. And they felt pretty good about their standing in technological power because-
Andrew Schwartz: Because they had caught up.
Gregory C. Allen: Not caught up, but they had exceeded the technologies that the export controls were designed to prevent them from reaching.
Andrew Schwartz: I see.
Gregory C. Allen: Yeah. So the export controls say that China cannot buy manufacturing equipment if that equipment is going to be used to operate a facility for making chips at the 14-nanometer level or more advanced, and more advanced in this case means a lower number. And China is saying, not only are we still making 14-nanometer chips, we're actually making progress on seven-nanometer chips. So at the same time you're trying to set us back, we're still marching ahead, and even if we're doing so in a way that breaks U.S. law, tough, that's what we're going to be doing.
Andrew Schwartz: Okay. So given all that, what did we then do, and what do other countries need to do to make these export controls more effective?
Gregory C. Allen: Well, I think the October 17th export controls, which happened a couple of months after Secretary Raimondo's trip to China, probably got a boost over the finish line by that trip. I can't imagine that Secretary Raimondo felt great after the Chinese had deliberately set out to humiliate her on a trip that was supposed to improve relations. The October 17th export controls are designed to close some of the loopholes that China exploited in the earlier design in order to reach that seven-nanometer manufacturing milestone. And with respect to the chips themselves, Nvidia had been selling a degraded version of their product that was compliant with the 2022 export controls, and the United States actually lowered the performance threshold. So now every company in China that is doing AI software has to use inferior chips, because that's what is allowed to be sold to China, and the domestic replacements within China aren't that good right now. And so this is sort of where we are: the United States is clearly inflicting pain upon China's AI and semiconductor ecosystem, but China is also clearly not going to back down. That pain is not yet leading them to change any of the behaviors that the United States is upset about.
Andrew Schwartz: Greg, that's 2023 in a nutshell. In the next episode of this podcast, we're going to talk about what we have to look forward to in 2024. Do you want to give us a little bit of a preview?
Gregory C. Allen: I think there's a story where 2024 is every bit as jam-packed as 2023, really across all three of those same dimensions. There are so many things that have yet to happen. Are Senator Schumer and his bipartisan group of four going to manage to pass comprehensive AI legislation? How is the EU going to implement the AI Act? What is going to happen with Replicator and military AI competition? And are the revised versions of these chip export controls actually going to stop China's AI growth? These are all big, big questions, and institutions and governments around the world are making very powerful muscle movements to try to make them happen.
Andrew Schwartz: Greg, I'm excited to keep talking about this with you. This is really an interesting and ongoing conversation we're going to be having.
Gregory C. Allen: Well, thanks and I look forward to doing this every couple of weeks. Thanks for listening to this week's episode of the AI Policy Podcast. Be sure to subscribe on your favorite podcast platform so you never miss an episode. And don't forget to visit our website csis.org for show notes and our research reports. See you next time.
(END.)