Surveying the Future of U.S. Open Foundation Model Policy

This transcript is from a CSIS event hosted on March 21, 2024. Watch the full video here.

Gregory Allen: (In progress) – especially grateful to have Alan here today, because on February 21st they announced a request for comment on the risks, benefits, and potential policy related to advanced artificial intelligence models with widely available model weights as part of the mandate under the White House Executive Order on Artificial Intelligence. And Alan, as the assistant secretary of commerce and NTIA administrator, oversees this agency which has more than 500 employees working to close the digital divide, manage federal spectrum resources, and build a better internet. And Alan has spent the past 25 years working at the intersection of internet technology, public policy, and the law. And so I’m so delighted to welcome him up here to make some opening remarks, which will be followed by our panel.

Alan, please. (Applause.)

Alan Davidson: Well, thank you. Thank you, Greg. Really appreciate that. And so appreciate the opportunity – the opportunity to be here. And many thanks to CSIS for hosting this event. We were talking just beforehand about the importance of bringing in the broader community of folks who CSIS works with and reaches. And so I’m very grateful to be here. Really appreciate the folks who helped put this together, and especially Jim Lewis here, who I’ve known since he was at the Commerce Department, actually. (Laughs.) And the rest of the team, thank you so much for having us.

So, as Greg was saying, at NTIA tech policy is what we do. And our focus generally has been making sure that important new technologies – whether from broadband, to spectrum, to emerging innovations like AI – are all developed in the service of people and human progress. Nowhere is that more important today than in the explosive growth of artificial intelligence systems. We know that responsible AI innovation can bring enormous benefits to people. It’s going to transform every corner of our economy, from advances in medicine to precision agriculture.

But we’re only going to realize the promise of AI if we also address the very serious risks that it raises. And not just risks in the future, risks that it raises today. And the Biden-Harris administration is moving with great urgency to engage on those issues. President Biden’s AI executive order, the most significant government action to date on AI – I’d say the most significant anywhere in the world – brings the full capabilities of the U.S. government to bear in promoting innovation and trust in AI. The Commerce Department is playing a leading role in the administration’s work on AI. And we, at NTIA, are trying to do our part.

And we’re here today, as Greg said, because the AI executive order gave us an important homework assignment: Weighing the benefits and risks of these important dual-use foundation models with widely available model weights. That’s kind of a mouthful. But these open-weight models, as we’ve been calling them, these advanced AI models with key parameters that are made openly available, those kinds of models raise very serious and interesting public policy questions. We’ve already heard about the potential benefits, marginal benefits, that come from offering these open-weight models. Those benefits could include increased competition, innovation, improved research capacity, maybe greater security. All of that comes from greater access to these models.

But we’ve also heard that this type of openness can create new risks, including hindering efforts to control the misuse of these powerful new systems. This kind of openness can also make it more difficult to hold to account those who would use these systems for harm. So in that context, we’re trying to better understand the landscape of these difficult policy issues. And we’re seeking broad input. Just last month, we put out a public request for comment.

One thing that we’ve already learned is the importance of focusing – and I hope we’ll talk about it today in the panel – on focusing on the marginal or differential risks and benefits of open weights. We need to measure the risks of open model weights relative to the risks that already exist today from widely available information that might be already out on the internet or from closed models, right? So what we really need to understand is how much of a difference does it make if we make these models open?

I will say we’ve also been really encouraged so far to hear that the choices in front of us might not be binary. That this isn’t necessarily about open versus closed but, in fact, there’s kind of a gradient of openness that we need to consider. And that can itself offer more options in the policy space. Anyway, I hope today’s conversation will dive into some of those questions. And I’d say we’re particularly interested today in hearing about the international implications of these systems and the national security considerations that are raised by widely available weights.

For example, I’m hoping we’ll talk about, you know, what do you all on this panel and then the audience, the questions you have, what do we see as the marginal risks and benefits generally associated with these kinds of widely available model weights? How should we think about those risks and benefits from an international perspective? How might foreign adversaries use open weight models to exacerbate security risks? And how do those kinds of marginal risks compare to closed models? And on the other side of the equation, in what ways could open model weights support U.S. national security interests?

There’s a lot to talk about. As you can see, we’re digging into this topic. And we have a very tight deadline in doing so. Our report to the president is due in July, and our comment period ends on March 27th, so very soon. We hope you’ll all participate. And operators are standing by. Please send us your comments. (Laughter.)

So to close, let me say, once again, very grateful to CSIS for hosting us today. Thank you all for your continued partnership and engagement on this issue. It’s an important moment. This is important work. NTIA is very happy to be doing it. And we look forward to delivering policy recommendations to the president, to the country, that protect America’s security and also promote innovation and competition in new AI systems. So thank you. (Applause.)

(Break.)

Mr. Allen: Great. Well, thanks to Administrator Davidson for those incredible remarks to help get us started today. The discussion that we’re going to have is really around these issues, and will survey the landscape of thinking on this topic. And to begin, I wanted to make sure that we understand precisely what we’re talking about today, because the AI executive order terms what many would colloquially call open-source AI models “dual-use foundation models with widely available model weights.” So that is a rather specific term that covers only a subset of the entire universe of open source and only a subset of the entire universe of AI technology.

So can you just help us understand – and, Travis, I’d like you to begin here. But before we do that, I’m neglecting to give you all an opportunity to introduce yourselves. So instead of answering the question I was just talking about, would you please do that? And if we could just go down the row, that would be great.

Travis Hall: Absolutely. First off, thanks to you and to everyone for helping to organize this event. Very excited to be here. My name is Travis Hall. I am the acting associate administrator for our Office of Policy Analysis and Development. I’m overseeing this work. And I have been with NTIA for seven years. And I’m really interested in the conversation and hearing from all of you. So I will, of course, be talking and providing as much insight as we can. But really, what I’m here for is hopefully to elicit responses.

Steve Lang: Good afternoon. Thank you for the invitation. I’m excited to be here. I’m Steve Lang from the Department of State’s Bureau of Cyberspace and Digital Policy, deputy assistant secretary there leading a team that works closely with our critical and emerging technologies special envoy’s office on issues related to AI in forums like G-7, G-20, OECD, and others.

Aviya Skowron: My name is Aviya Skowron. I am the head of policy at EleutherAI, which is a nonprofit AI research lab that releases open-source large language models and whose research is conducted entirely in the open, on open model weights. So I represent the people actually doing the stuff that we’re talking about. (Laughter.)

Steve Kelly: Very good. I’m Steve Kelly. I’m the chief trust officer at the Institute for Security and Technology. IST is a 501(c)(3) nonprofit think tank based in the San Francisco Bay area. And we kind of bridge between technologists and policymakers on challenging security and technology issues. And pleased to be here. And I’m a fairly recent departee from the U.S. government, having served in this administration as a senior director on the NSC staff up through the end of July. And so I guess in some part, I’m also representing a civil society viewpoint on this topic. And I look forward to talking about some of the research that we’ve done.

Mr. Allen: OK, great. And because I got our order mixed up there, I want to return to the question that I was originally asking you, which is sort of what subset are we focused on here?

Mr. Hall: Yeah, no, absolutely. It’s a great question. So I’ll take it in two parts, right? The first part is the dual-use foundation models, right? So dual-use foundation models has a specific definition in the executive order. And I wish my brain functioned such that I could actually rattle it off. I can’t. (Laughter.) Essentially, though, it is truly the most advanced models in the current state of things, right? Like, we might get to a point where they’re the mundane models in terms of how the executive order has defined it. But right now, it’s truly the most advanced models, right? And it’s also models that are not created for a specific purpose. They’re models that can be used for a wide range of uses and activities.

And so there are models that currently do fit the dual-use foundation model definition, but there are not many. And these are models that would require quite significant resources and investment in order to actually create. So already, we’re not talking about all the models, right? We’re talking about a fairly small subset – perhaps more as we move forward with the technology, but right now a very, very small subset of the models, truly the ones on the bleeding edge of the frontier.

And second, we’re talking about widely available model weights. And model weights are one of the key components that make these large language models and these other systems what they are, right? After the training has been done, the weights are what determine how the model responds to the inputs that come in – they are what makes the model respond the way it does, right? But they are only one particular component, right? You also have the training data. You have the code that it was trained on. There are other pieces to that puzzle. And we’re really just focused on this one specific piece, the model weights.

And we’re talking about wide availability, right? So we’re not talking about a case where only one person has it, right? Like the Coke formula, right, where only one person has the secret. There can be some availability, but we’re talking about wide availability. Now, that means that we’re not really talking about open source, per se, right? Because open source has a long history of being a licensing regime that is built on a number of principles around openness, transparency, sharing, and, in some cases, non-commerciality, right?

Like, it’s a modular licensing regime for software. And widely available model weights can happen without licensing, right? You can have those model weights available without actually going through the open-source licensing regime. And I’m aware that the open-source community is currently thinking through how its licenses should apply to these models. So while there are lots and lots of really important lessons to be learned from the open-source community and from our experience with open-source software, and there is active engagement from that community with this particular problem, they are slightly separate. And licensing is actually one of the potential mitigation tools that can be used surrounding the availability of model weights.

Mr. Allen: Well, that was a 10 out of 10 explanation for why we’re laser-focused on this sort of niche here. I do want to sort of bring in the international context in this story. So while we all agree with Administrator Davidson that the AI executive order is the most substantive act in the entire world on AI governance, there is also the European Union AI Act, which does weigh in on the idea of open source. But not in a definitive way, because they state that open-source systems will be exempt from some regulations unless they pose a systemic risk. And it is not well defined, you know, how you know when they pose a systemic risk. So, in a sense, they have punted on the question of what to do about open source, whereas you have had this request for comments to try and figure out what exactly we’re going to do.

So why would they punt? Why would you ask this question? I think one of the first sets of concerns that we’re interested in hearing about is the risks and benefits of these systems, with Administrator Davidson already, you know, providing a bit of this. But one of the big questions around here is related to national security. The United States federal government has now imposed export controls on the sale to China of advanced semiconductor technology relevant for training large AI models. But giving away the model weights might, you know, give away in software what we are preventing China from achieving with the hardware that we are no longer willing to sell. And that’s, of course, an open question.

But I do want to understand these national security concerns. And so I’d like to go to you, Steve – one of our two Steves on this panel – (laughter) – because of your background not too long ago on the White House National Security Council. So how do you see the tradeoffs related to national security on this topic of open-weight foundation models?

Mr. Kelly: Sure. I think let’s start with the term “dual use.” And for those that aren’t familiar, that’s a term that’s used very frequently for technologies that have an application for civil purposes and also have military applications, and therefore might be subject to export controls and other restrictions to make sure that we’re not inadvertently aiding hostile foreign militaries in using our technology against us. And so by implication, a foundation model that is so very capable, that can be used in so very many advanced ways, could be of use to a foreign adversary in their military and intelligence programs.

And so that’s kind of the invocation of that term. There’s a whole lot of reasons why we might be concerned about a highly capable AI model being turned against us – one that may have been developed here and released into the wild. Certainly, a model that’s trained on biological data can help to perhaps solve cancer, but at the same time it can help to develop bioweapons. AI models can be used to enable cyber defenses, to make those more effective. But then, on the other hand, they can be used by, you know, foreign actors to conduct computer network operations and penetrate networks. So in almost every case there’s a flipside of the coin as to how technology can be used.

And these are just so very capable and so very general purpose that it’s hard to even imagine the applications. Certainly, one thing that we’re concerned about is the way that these models can be used to enable surveillance programs – facial recognition, suppression of dissident viewpoints, minority populations. All sorts of scenarios where authoritarian states might be able to really ramp up their ability to conduct surveillance and oppress, or even take advantage of those capabilities abroad as well. So it kind of lays the groundwork for some of the things that we’re concerned about. But we’re absolutely interested as well about how we turn some of those risks into advantages on the defensive side.

Mr. Allen: And I do think your point about the generality of these systems is really important. You know, it used to be the case that AI was very application specific – the image recognition system that could recognize photos of your cat could not recognize photos of tanks in satellite imagery. And so the application specificity of it really helped to isolate the military relevance of it. But with these general-purpose systems it’s a little bit more difficult to tease out these things. And then as you said, Travis, we’re not just concerned with what is the current state of this, but what is the frontier? What is the future going to be? And you have folks like Nvidia’s CEO Jensen Huang who’s saying that in five years we’re going to have AI systems that are smarter than anybody on this stage. And so the government has to be thinking about that future as well.

I want to ask our other Steve to weigh in here. You’re fresh back from Trento and Verona, Italy, participating in the G-7. I don’t know if any of these issues came up there, but how do you assess the sort of national security concerns around open-weight models?

Mr. Lang: Well, I think – thank you. These specific issues were not a topic of conversation. But I think the more general conversation that we’ve been having on AI in the G-7 and other forums – especially over the last year or so – does apply to the same risks and benefits that we’re talking about with these models. And really, the U.S. approach here, and the one that is shared by our G-7 partners, has been that we want to address these risks – national security risks, other risks, risks of bias, risks to privacy, risks of misinformation – because we need to do that in order to make sure that we can have confidence in our ability to realize the benefits.

And through the G-7, and in other venues as well like the U.N. General Assembly – where just today the resolution on AI that the U.S. proposed was passed with, I think, well, more than 100 cosponsors. I think we got more than half of U.N. General Assembly members to express support – or, to cosponsor. The approach has been to endorse a shared commitment to using AI to address global challenges, whether it’s climate, or health, or advancing education and digital transformation, help speed progress towards the Sustainable Development Goals. Our shared commitment to do that together with AI. And at the same time that we are addressing these risks that we’re talking about.

And through the G-7 we have a number of different tools that we have agreed on that can help us do that, a code of conduct, a statement of principles. And we agreed on some additional measures just this last week to do things like develop a monitoring mechanism for the code of conduct, a report on adoption by micro, small, and medium-sized enterprises, as well as a toolkit for AI use by the public sector. So all of these are part of an effort to define ways that we can use AI to advance human progress, achieve human-centered goals, at the same time that we are addressing these risks that apply to dual – widely used – widely available – (laughter) – you’re going to have to –

Mr. Hall: It rolls off the tongue, yes. (Laughter.)

Mr. Lang: Yes. (Laughs.) They apply to this special category that we’re talking about which, as you mentioned, is a relatively limited subset, but they apply also to all the other different tools that are available.

Mr. Allen: Great. I saw you wanted to add in here?

Mr. Kelly: So just before we move off of this topic – I know we started off with the national security risks. But at IST, a number of contributors from across the AI labs, academia, and civil society joined us in a study. And we published a fairly significant report that’s exactly on this topic of what are the risks of open access. And so we ran through this, created a risk matrix on seven levels of access, from fully closed to fully open – API access, downloadable, a number of different levels. We identified six specific risk areas. And malicious use is one of them. Reinforcing bias is one of them. And we tried to map out how those risks are impacted by this openness question.

And not all of them went up in terms of risk. There’s one that actually went down. So check out that report. But that was not on model weight release specifically. It was on the broader topic. But that working group is getting back together – they’re meeting right now – well, not at this exact moment.

Mr. Allen: You’re skipping it. (Laughs.)

Mr. Kelly: But we’re in phase two to begin to look at those risks and find mitigations and other approaches for managing the risks. So we look forward to publishing that over the summer.

Mr. Allen: Great.

Aviya, I’d love to get your thoughts on this topic, related to the national security risks or any of the risks that we’ve heard about today.

Aviya Skowron: So on the question of risks, to build on what – on Alan’s opening remarks, what worries me when we get into this conversation is bundling all the risks together. I think, in order to sort of have a grounded policy discussion, we need to analyze risk by risk. Because, unfortunately, they all have different mitigations. They have different threat models. They have different baseline risks. You know, kind of what’s out there on the internet, what can an actor do without any use of AI? So that’s sort of my preliminary remark for all of these sorts of conversations. I really think we need to be very, very concrete when we start getting into the risk discussions, about what risk means specifically.

Mr. Allen: And then as an organization, you know, actively engaged in open-source AI research and development, could you just talk a little bit about how your organization approaches risks? And you said they’re all different, so feel free to just pick an individual example and what risk mitigations exist today.

Aviya Skowron: Yeah. So from our perspective, when we train sort of a foundation model, what we do is look very carefully at the training data set. Fundamentally, what a large language model can do depends on its training data set. So we work very carefully to make sure that there isn’t particularly objectionable stuff in it. And we’ve seen some incidents, especially around 2021-2022, I would say, you know, of sort of maybe not malicious, but ill-advised uses of our models, because we released them open source. So there are no – sort of no restrictions on use. That’s, like, freedom zero under an open-source license.

And I would say, in those situations it was either the platform acting – so, for example, Hugging Face taking down the model – or social ridicule, critiquing – (laughs) – the developers and making it very, very clear that we do not approve of what you’re doing. So these are sort of the incidents that we’ve seen with what we’ve been doing.

Mr. Allen: Great. So the United States is obviously the global leader in AI technology. But we’re not the only actor in this story. And so I’m curious, you know, what you see in the wider international picture in terms of partners, or competitors, or adversaries, you know, and what they’re thinking vis-à-vis open-source technology. And you said that you’ve already been reading some of the comments as they’ve come in, and already had conversations with a wide diversity of stakeholders. So, you know, to the extent that you’re in a position to sort of summarize what you’ve heard so far, Travis, I’d love to hear from you.

Mr. Hall: Yeah. So our assignment came, you know, when the executive order came out. And we’ve been talking with lots of folks about these considerations. And about wrapping our head around the nuances of the issue, right? And in terms of, you know, specifically the international component that you – that you talked about, I mean, one of the things that we definitely have heard is that – in terms of any types of mitigations that the United States would be interested in putting in place, it would be difficult if not impossible to go it alone, right?

Mr. Allen: As in, if we were to ban open source and nobody else did, or something like that?

Mr. Hall: Or something – or even some of the other types of mitigations, right? And also – I will simply note that our tasking is not just to mitigate the risks, but to maximize the benefits as well, right? And so if we are going forward and actually supporting openness, right, and others are starting to try to shut things down and to put in place restrictions, those restrictions can undermine some of our policy goals in terms of actually trying to keep things open – because of the liability that you could have if, say, you know, the European Union or someone else were to really come down hard on anyone who had widely available model weights.

Like, that can put a chilling effect on many of the benefits that we would be seeing, right, in terms of innovation, in terms of competition, in terms of the ability for folks to do research. And so I do think that – again, with our tasking being, you know, the mitigation of risk, but then also this – you know, this thinking about and a consideration of how to maximize benefits and maximize innovation, that there really does need to be an international approach, particularly among likeminded countries.

Mr. Allen: Steve, did you want to add anything here, as our resident diplomat?

Mr. Lang: Yeah. I’d just like to foot-stomp that, and say we have very consciously, as the U.S. government, especially through the G-7 in the Hiroshima AI Process – started with a relatively small group of likeminded countries to develop a shared approach based on our shared democratic values and a rights-respecting approach to technology, and then built upon that more broadly. Because we do want to make sure that we have a shared commitment in the international community to mitigate the risks, which at the same time enables us to realize the benefits and maintain an environment that encourages innovation. So the G-7 was the initial focus for our efforts, but we have since expanded through other processes, and most recently with the U.N. General Assembly resolution I just mentioned.

Mr. Allen: Yeah. And I think that work with the G-7 is off to a terrific start. I mean, it really had an incredible run over the past 18 months or so. But, of course, it’s not just the G-7 that’s a part of this story. And some of the actors, you know, are not necessarily the usual suspects. You know, the UAE, for example, has funded the development of a large language model that is – at least purportedly – released under an open-source model. And so these are the kinds of things that, as Travis said, you know, we have to keep in mind. If we do it, what is the rest of the world going to do? Because the AI market is, in many ways, global.

So now I’d like to ask Aviya again. You know, as you’re hearing about these considerations of what’s going on in the rest of the world – whether that’s the decision that the United States is going to make after the incredible responses that are going to come from this request for comment process – you know, how does the uncertainty of what the regulation might look like affect you? And then how have some of the decisions that have already been made, such as what’s going on in the EU AI Act, affected your organization?

Aviya Skowron: So I think uncertainty – so, first of all, you have to understand that, like, open-source developers, they don’t have policy teams, OK? (Laughter.) They don’t know what’s happening.

Mr. Allen: You’re, like, the one.

Aviya Skowron: Yeah. (Laughter.) I’m an anomaly. Most people really are not, I would say – you know, like, an open-source developer is not paying much attention to Washington, I’m sorry to report. (Laughs.) The uncertainty creates a sort of lack of faith in the ecosystem. And I think, unfortunately, a bit undermines, like, the openness and enthusiasm for research that otherwise we’ve seen. For example, you know, even just like two years ago.

On the international point – so, you know, we pay a lot of attention to sort of, like, open-weight releases wherever they happen. So that includes the UAE – that’s Falcon 180B, their large language model – but also China. And here’s the point I would really like to impress upon everyone: At the AI safety summit in the U.K. in November, the foreign delegations during the keynotes all shouted out open source and open collaboration, except for the U.S. and U.K. There are multiple countries that have, to some extent, you know, already committed to some sort of openness in their approach. That includes France. But this also includes China.

And what I worry about is we sort of lose the momentum behind our open-source ecosystem, and, like, a Ph.D. student who needs the open-source model, because they can’t afford, you know, API access from a company, now turns to, you know, a model that maybe does not align with our values. Maybe in the user agreement it says something that we, from our Western point of view, really don’t like. And, like, this is not a hypothetical. Like, I’ve read through those user agreements. I’m like, I don’t like that you don’t want me to generate these types of outputs.

So this is kind of the worry that I have. You know, that we’re just going to lose our – like, the United States is in a really strong position in AI. And that has so many benefits in terms of setting standards, you know, setting the tone of the discussion. Everyone’s turning to the United States when it comes to what are we going to do with regards to regulation. And I don’t want us to sort of lose this position because we clamp down on research, and on sharing models, and on collaboration.

Mr. Allen: Great. So we’ve heard a little bit about, you know, what the existing debate is. And this debate is obviously going to play out in an even more extreme form, a more extended form, in the request for comment responses that you’re going to receive. But, Travis, could you walk us through sort of what the process is from here forward? So you’ve got a request for comment process that’s going to close relatively soon. And then what’s going to happen after that? What’s the next stage of rulemaking, or whatever comes next?

Mr. Hall: Sure. So that goes a little bit into who NTIA is and what our authorities are. And so – and I want to, you know, take a quick little detour to why I think that we got this assignment. Which is that – you know, so NTIA, we are not a regulator. We don’t do rulemakings. I have been informed that there’s some small smidgen of spectrum management that could be said to be regulation. (Laughter.) You know, but overall, particularly my office, we’re not regulators. We’re a policy shop. And we’re policy advisors.

What we do is we give advice. We think about the issues. We try to be as thoughtful as we can. And we put that out, both internally in terms of interagency comments and processes, and in terms of the reports that we put out and the public engagement that we do. We are kind of a little bit like the tech and telecom think tank within the administration. And so we are going to, per the executive order, put out this report, put it forward to the president, with advice. And it is up to the president and, you know, the agencies with rulemaking power, the agencies with actual authority, to decide whether to act on it, how to act on it, or how to move it forward.

That being said, you know, these reports go through interagency processes. So anything that we say would probably be agreed on by the interagency in terms of what the steps forward are, you know, what the advice is. But that is the process. So we are doing the request for comment. We’ve been talking to a lot of people. We’ve been doing a lot of public engagement. We are very soon going to cocoon, put our noses to the grindstone, and write this report and get it on the president’s desk. And then from there, it will move to others.

But I do want to just take a note as to why I believe we got this assignment. It’s that our mission is a little bit odd relative to other agencies’ missions. We don’t have the kind of mission where, you know, there’s a particular statute that we are effectuating, right? Our mission is a thriving digital economy, right? It’s a thriving digital ecosystem. And that means it’s thriving not just for particular companies, but for users, for, you know, researchers, for nonprofit organizations, like Wikimedia, who make the digital ecosystem a thriving place.

And so we do have that kind of, like, 360 view that we bring to bear on the issues that we’re thinking about. And so we are absolutely thinking about national security implications. But we’re also thinking about, you know, research, and innovation, and users, and benefits to users, and folks who might not be making a profit off of it but are excited tinkerers who make things better. And so we do – we do think about all those things. And I love that CDP’s mission kind of very much aligns with that as well, right, in terms of thinking about the human rights implications, as well as the national security implications, as well as economic implications. So it’s great to have the partners in State on that.

Mr. Allen: So you’re going to write a report.

Mr. Lang: Thank you. (Laughter.)

Mr. Allen: You’re going to write this report. Is there a deadline date wise the same way there’s a deadline for the request for comment?

Mr. Hall: Yes. Yes, 270 days. I believe it’s July 27th.

Mr. Allen: OK.

Mr. Hall: And we will not be missing that. So, again, we are very much in listening mode, very much want to hear from folks, both through the comment process, you know, our doors are open. I will say that following next week we will also be very much writing and working hard on not only getting the words on paper, but also getting it through the – kind of, like, the interagency processes and reviews that everything has to go through.

Mr. Allen: And so while this is going on in the United States process, you also have the EU AI Act, most notably, and different parts of the EU AI Act take effect in different timeframes. But, for example, when it comes to general-purpose AI systems that could pose a systemic risk, which is the sort of closest analogy to what we’re describing here, the codes of practice, I believe, are required to be in effect early next year. So in early 2025 there will be regulations that actually apply to general-purpose AI systems that pose systemic risk. And, as I said a moment ago, open-source systems are not necessarily exempt from that regulation, even though they are exempt from other regulations in the EU AI Act.

Steve, I want to ask if you could, you know, react to something Aviya said a moment ago, which was that at the U.K. AI Safety Summit it was the case that the United States and the United Kingdom, in particular, sort of assumed a position a little bit more skeptical of open source, in contrast to other countries. You know, number one, was that your sense of the debate and what’s going on, you know, as you talk to your colleagues around the world? And then, secondarily, sort of, what is your sense of the other actions that are being considered around the world, such as the EU AI Act or others?

Mr. Lang: Well, I’m not sure that I had the same reaction, but that doesn’t mean that I don’t think that’s a fair analysis of how some of the comments might have been viewed. I think the U.S. often takes a cautious approach on that type of topic because sometimes we have concerns about how they might be interpreted in terms of technology transfer that would make our companies very vulnerable.

But with regard to the EU AI Act, I would say that I think the United States and the EU – we do share a lot of the same values and are approaching these issues from the same perspectives in many ways. And I think we share some of the objectives that the EU is trying to achieve through the AI Act. Our approaches will differ, of course. And I think, from our perspective, we’re watching very closely what happens with the AI Act and how it plays out to see what lessons we can learn from that.

Mr. Allen: And one of the things that the members of the G-7 committed to last year, related to AI, was to pursue not necessarily regulatory harmonization but sort of at a minimum regulatory interoperability. We sort of opened this discussion by talking about the very precise definitions that NTIA is going for. And it could be the case that the EU, you know, uses different definitions, or that Japan uses different definitions, which would obviously be confusing and complicated for companies and researchers. So could you just talk a little bit about what the work looks like to pursue interoperability, you know, now that the G-7 has committed to it?

Mr. Lang: Yeah, I think the important thing is that we – each country needs to be able to implement their own system, but we need to make sure that it’s as easy as possible for different – for companies to navigate those and to operate across borders without having to manage an overly complex set of requirements. And I think we’re particularly sensitive with how that can affect small and medium enterprises. So that has been a particular focus of the G-7 going forward. And, as I mentioned, one of our commitments coming out of Trento is that we are going to work on a report on micro, small, and medium enterprise adoption of AI that I think can really help inform our efforts to assess how we can make our systems more interoperable.

Mr. Allen: And when you succeed in this effort, we should look forward to some implementation plan of the EU AI Act using dual-use foundation models with widely available model weights – which, as you said, rolls off the tongue. (Laughter.)

Mr. Lang: Sounds good.

Mr. Allen: Great. So I’m curious, you know, to come to you, Steve, given what we’ve heard about, you know, the different types of open-source systems and the types of risk mitigations that are available. Do you see a path forward for these dual-use systems with widely available model weights? Do you see mitigations that are sort of, in your mind at least – or in the national security community that you’re still a part of, having left government service not too long ago – are there obvious mitigations that strike you as adequately addressing these risks?

Mr. Kelly: Well, some of that’s to be determined. That’s the phase of the study that we’re in now, looking at exactly that question. And we’re working diligently to try to meet some of the timelines –

Mr. Allen: You have six days. (Laughter.)

Mr. Kelly: Both – well, not for the comment period. But we’ll get it out there. And certainly in time, hopefully, to inform the U.K. process, the U.S. process, and others. And so we invite people to join us if you’re interested in this topic. But organizations like IST are bringing stakeholders together and trying to pull the best thinking together. And we don’t currently have a view one way or the other. We’ve identified the risks and we’re looking for ways to address those.

Certainly, there’s no question that when you go from a foundation model to a more tailored model, or smaller implementations that are purpose-built – trained on a certain, you know, narrow data set for a particular purpose – there’s much less risk there. And so I think in a lot of applications there are going to be more tailored uses. But in the bigger picture, that remains to be seen. And we’ll be putting that out as quickly as we can.

Mr. Allen: Great. And then, Aviya, the sort of same question. You know, to the extent that these risks, you know, strike you as compelling, are there obvious risk mitigations in the policy domain, as opposed to at the level of an individual research organization? Are there mitigations that strike you as more or less compelling?

Aviya Skowron: Yes. So there are a lot of things that can be done. But the crucial thing to understand is frequently they’re not at the model level. Like the model weights themselves are sort of the wrong point of focus, because a lot of the harm and risks arise during deployment, and due to particular use cases. Or when particular content, for example, is put in front of people. So when you look at research on misinformation and disinformation, specifically, there are many, many mitigations regarding how content spreads on the internet before you get to, oh, how do we get – you know, how do we make it such that large language models don’t produce misinformation? Because that’s technologically likely impossible.

There aren’t, you know, truth bits and untruth bits that we can identify and, like, use to arrange the correct tokens. But what we can do is invest in better moderation algorithms. For example, algorithms that don’t polarize people but, you know, instead look for consensus. So yes. There are many mitigations, but they are not at the level of the, like, you know, matrix of numbers and model weights, you know?

Mr. Kelly: Aviya, that’s a great point. And it reminded me of a concept that we have talked about and built into our report, this idea of upstream risks and downstream risks. So there are some risks that are inherent to the models themselves and how they were developed, and other things which involve how they’re deployed, integrated, and used. And so some of the mitigations might be upstream mitigations and some of them may be downstream mitigations. And so we’ve pointed to some of those in our previous report, and we will certainly be highlighting where we think a particular mitigation is going to be most relevant. And that may be the next step.

Mr. Allen: So, two of the risks that are certainly attracting a lot of attention on Capitol Hill right now are related to biosecurity risks and cybersecurity risks. Steve, could you just talk a little bit about, like, what these upstream or downstream risk mitigations look like in either of those two cases? To the extent that you, you know – (laughter) – you’re ready to talk about it.

Mr. Kelly: We’re kind of getting ahead of where we are in the study. Aviya, do you want to weigh in on that?

Aviya Skowron: OK, OK, yeah.

Mr. Kelly: Well, you’re waving your hand. (Laughs.)

Aviya Skowron: Yeah. So biosecurity. My favorite part of section four of the executive order is the control and oversight of synthetic nucleic acid sequences – of the providers of those, yes. Because that’s sort of – that’s the right level of intervention. That’s where you can really make a change. And that’s how you really prevent a malicious actor from actually being able to access the resources they would need to perpetrate an attack.

Mr. Allen: So just walk this example through for folks in the audience who might not be familiar. You know, what a foundation model might do, hypothetically, is say: This sequence of DNA would create a highly contagious, highly lethal pathogen. But then somebody has to make that for you. And what you’re saying is that rather than regulate the AI system that comes up with the DNA sequence, we should be regulating the DNA synthesis industry. And they should be refusing to manufacture, you know, DNA sequences that have this sort of virulent or contagious capability. Is that a fair characterization of what you’re talking about?

Aviya Skowron: Yeah. So, like, that’s sort of the idea, that that’s how we really prevent this bad scenario from happening. And instead, like, we are in the realm of, you know, trying to anticipate – you know, we’re talking about, like, variations of four letters and trying to decide, you know, how to prevent a model from spitting out a particular sequence, which is extremely difficult. Since you also mentioned cybersecurity – there’s this sort of offense/defense balance, which looks, hopefully, more favorable. In the sense that, for example, when we get to automated vulnerability discovery, what can be used offensively can be used defensively as well, right? It just matters who gets to know the vulnerability.

And actually, machine learning systems are showing great promise in that regard. So this holds great promise for people looking for vulnerabilities. Now it matters who’s looking for the vulnerability. So here, the intervention and sort of how we get on top of this is, you know, strengthening our cyber defensive measures and making sure that we have the talent and the, like, sort of capacity necessary, you know, to have people working on this.

Mr. Allen: Yeah. And I think it’s probably fair to say – and, Steve, you can correct me if you disagree. But I think in conventional wisdom we’ve sort of been living in an offense-dominant cybersecurity paradigm for decades now, where the attacker usually has the advantage if they have the expertise, if they have the time and willingness to try and penetrate a computer network. And your hope, it sounds like, is that perhaps, if foundation models sort of achieve their promise, we could move to an era of defense dominance. Which, you know, given that the United States has so much of its economy hooked up to digital infrastructure, it would be nice if we were in a more defense-dominant paradigm.

Aviya Skowron: You know, and this sort of is building on efforts, such as those led by CISA, you know, to just strengthen our cyber infrastructure in general. Because all of these are obviously, you know, interrelated problems.

Mr. Allen: Great. So the next stage of this process is obviously that you’re going to move into some kind of cave. (Laughter.) That, you know, prevents telecommunications from reaching you, so that you can write your report? What would be your advice to the folks who are out there who are hoping to write a comment on this issue? You know, what makes for a useful comment to you? What makes for a not-so-useful comment to you?

Mr. Hall: So I would say that, for my part, there are two things that would be really illuminating and helpful, right? The first is this conversation of benefits and risks, right? Like, actually – especially in terms of our very specific writ – looking at what the baseline is, right? Like, what is the benchmark for non-dual-use foundation models? What is the benchmark for closed systems, right? Or systems that are open, but maybe the model weights aren’t available? As opposed to the very specific risks of these dual-use foundation models with the widely available model weights.

I think that that kind of conversation gives us a focus, you know, to actually talk about the risks that are apparent in these particular scenarios – so that we are focused on those risks and not the general, like, risks that are present in all systems, right? Or that would be present in closed systems anyway.

Mr. Allen: This is what Administrator Davidson called the marginal or differential risk.

Mr. Hall: Exactly. Exactly. And the second thing – and I know that, you know, Steve is working on it – is that question of, like, well, what should we do, right? Because ultimately, that is what we are putting forward to the president. That is the purpose of this report. It’s fine to map out these risks, but what we really need to be able to say is what are the policies that should be in place, right? And I will say, we do not have a predetermined outcome, right? Like, certainly, you know, there are lots of different equities at play here. But we are very, very open to hearing the wide range of things that can be done, and/or should be done, in order to, again, mitigate risk as much as possible while maximizing, you know, the benefits of openness as much as possible.

And so if we’re able to get those two things – like, right, what are the specific risks that we should be addressing, and how do we mitigate those in a way that maintains, you know, the innovation, and, you know, the openness, and, you know, U.S. leadership? Like, what should we be doing, and how can we be doing that, right? And we heard some of that today, in terms of, like, it’s not about the model weights, right? It’s about all of these downstream mitigations and things that we should be focusing on.

And I’ll just simply say that some of those things are really hard. Content moderation is very hard. Like, there are tricky wickets as well. But, you know, that kind of conversation around what we should be doing is exactly the kind of thing that we need to hear, because that’s what we need to be putting forward.

Mr. Allen: And the realm of future scenarios that folks are speculating on here is incredibly broad. You have – you know, just to talk about the pace of technological progress, you have some folks who think that large language models as an existing technological paradigm are going to plateau quite hard, quite soon. That’s sort of one community of thought. And on the other side, you have folks like Jensen Huang, who says, you know, large language models are going to take us all the way to human-level intelligence in five years. And so there’s a really wide range of possible futures that serious people take seriously. And that’s true both on the pace of technological evolution and on the range of risks that we could be facing.

So for those who are interested in submitting a comment, could you talk about – in a world where the realm of discussion is so wide, and where necessarily we’re speculating a lot about technological evolutionary paths – what, to you, constitutes compelling evidence in these kinds of speculative scenarios? How would you – you know, what types of things would be weighed more heavily in your mind? To the extent that you can say how you’re going to feel in six days when you’re reading all these comments.

Mr. Hall: Yeah, no, it’s a really good question. I do think that we are, of course, concerned about, you know, things that we don’t know, right? Risks that we are not able to gauge. But that’s true of a lot of technology too, right? Like, I will say that there is a little bit of that uncertainty in a lot of different technological spaces. What is good evidence? Well, I think that, again, the question would be whether you’re pairing the harm with a mitigation. For speculative harm, even more so than the real, actual harm that we’re seeing right now, I would hope to see real mitigations that do address those risks without completely, you know, destroying innovation, destroying what we’re working towards – again, U.S. leadership in the space, all the positive uses, all the things that we’re trying to get to.

So, you know, I would want to be able to see what mitigations can and should be put in place, and at what phase, and in what way. What some of the potential triggers are, right? Like, what capabilities are we concerned about? What are, like, things that can be, you know, thought through and monitored? And I will just simply agree that this process has been interesting because it’s one where technical experts don’t agree, right? Where, like, a lot of times, you know, you end up having normative disagreements between folks within industry. And in this one you actually do end up having actual technical experts – people who are truly, you know, arms-deep in this stuff – disagreeing about current capabilities and future capabilities.

And so I do think that this is one of those ones where there is going to need to be some normative assessments. But again, what I feel like our real tasking is, is thinking through what can and should be done. And so, again, I would like to see that tied as much as possible to the specific risks, per Aviya’s comment. And so inasmuch as there’s speculative risk, I’d like to see those mitigations, like, you know, tailored to that.

Mr. Allen: Great. And, as you said, you know, as you finish writing your report, it’s going to go into an interagency process. So Steve and the State Department will have a chance to weigh in, as will all these others. But at the end of that story is going to be a recommendation to the president as to what to do in this regard.

Well, we’re all waiting with bated breath. And we encourage everybody who’s out watching to consider writing a comment. This concludes our panel. If you could all join me in thanking our terrific panelists. (Applause.)

(END.)