The State of DOD AI and Autonomy Policy

This transcript is from a CSIS event hosted on January 9, 2024.

Gregory C. Allen: Good morning. I’m Gregory Allen, the director of the Wadhwani Center for AI and Advanced Technologies here at the Center for Strategic and International Studies.

Today we have an event discussing DOD AI and autonomy with the Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities, Dr. Michael Horowitz. Michael, thanks so much for coming.

Dr. Michael C. Horowitz: Thank you for having me.

Mr. Allen: Well, your position is a new one. You joined the DOD in April 2022, and your new role merges two offices that had previously been separate: the emerging technologies portfolio and the force development portfolio. Could you explain a little bit about what each of these roles entails and why the Department of Defense decided to merge them into a single office?

Dr. Horowitz: Sure. And thanks again for having me.

The original role I was brought to the Pentagon for centered on emerging capabilities policy. The idea was that within OSD Policy – which reports up to the Secretary of Defense – we wanted Policy to have a stronger voice in accelerating emerging capability adoption. So then-Undersecretary Kahl created the Emerging Capabilities Policy Office to bring together some of the work we were doing from a policy perspective on AI and autonomy, directed energy, biotechnology, hypersonics, et cetera. We then realized that a lot of what we were doing was trying to figure out how the future force could more effectively incorporate these kinds of technologies in a safe and responsible way. So a decision was made to merge the Force Development Office – which leads Policy’s participation in the DOD budget process, the Defense Planning Guidance, and a bunch of those things – with the Emerging Capabilities Office. What that means is that within DOD, the same office in OSD Policy that’s thinking about AI policy, hypersonics policy, and directed energy policy is also the office plugging into the defense budget process and working on Defense Planning Guidance surrounding the future budget. And that really shows that emerging capabilities are not off to the side as a niche area; it shows how we’ve brought them into the center of how we think about planning for what the future of the military should look like.

Mr. Allen: Well, I think that’s a perfectly reasonable development. It’s always wonderful to see the DOD’s policy posture actually match its budget posture; the disconnect between those two things can get really broken. So it sounds like you’ve now unified these two roles, and the one hand is the other hand, hopefully moving in sync.

There are other parts of the DOD ecosystem that have a hand in making AI and autonomy happen at the DOD. I’m curious specifically about the Office of the Chief Digital and Artificial Intelligence Officer, the CDAO. What’s the division of labor between your shop and their shop? Because, obviously, you’re both within the Office of the Secretary of Defense, and both have various things that you’re describing as strategy or policy.

Dr. Horowitz: No, it’s a great question. We partner closely with CDAO; Policy leadership has really worked hand in hand with Craig Martell and other CDAO leaders.

One way to think about the division of labor is that CDAO is really in charge of accelerating AI adoption throughout the department: figuring out, when it comes to actually doing AI in DOD, how do we do that more effectively? How can we bring together the things different services are doing? How can we generate more efficiencies? How can we accelerate adoption?

When it comes to what we’re thinking about AI capabilities for, how they’re aligned to the National Defense Strategy, and how we’re postured relative to what the rest of the U.S. government is doing, that’s really where Policy plays a leading role. What it ends up meaning is that in a lot of cases we’re left seat, right seat with CDAO on particular policy issues, but there are certainly places where they’re in the lead, and should be, specifically when it comes to how we accelerate doing AI effectively in the department.

Mr. Allen: This is great. And I want to talk a little bit more about what doing AI effectively in the department will entail.

But before we go there, I want to connect a thread to your prior academic work as a professor at the University of Pennsylvania and other institutions. Back in 2010 you wrote a book called “The Diffusion of Military Power” and came up with the concept of adoption capacity, which relates to the ability of militaries to actually put new technologies and new tactics in place. So could you connect your academic work to your work in the DOD for a moment? What is the current state of the DOD’s adoption capacity with respect to AI and autonomy?

Dr. Horowitz: I think the adoption capacity of the department when it comes to AI and autonomy is improving, but we have more work to do, frankly, as we’ve been very public in stating. The new strategy that CDAO released takes a good shot at updating the prior strategy from 2018 and at signaling the way we want that DevSecOps perspective infused into how we’re thinking about AI adoption.

Thinking about this from the perspective of my academic work, I would say we are increasing our RDT&E and experimentation investments in AI and autonomy, which my academic work would suggest is a good idea for building adoption capacity. And we’re institutionalizing: making moves in the organization that have been disruptive but are important for prioritizing AI and autonomy. I think the creation of CDAO is a good example of that. And given the work they’re doing in bringing in commercial technology, having DIU now report to the secretary of defense is a good move in that way. There are a whole bunch of things, essentially, that we’ve done.

If you imagine a continuum of activities from science and technology investments all the way to fielding capabilities, this administration within the Department of Defense has launched a new initiative at essentially every point on that continuum. The FY ’23 budget was the largest RDT&E budget in history. We’ve created organizations like the Office of Strategic Capital to better work with companies, including potentially in the AI and autonomy space. We’ve created institutions like CDAO, and elevated DIU, to ensure these things are prioritized. We’ve created the RDER initiative – the Rapid Defense Experimentation Reserve – to try to move past one of the several valleys of death. And it runs all the way through to the acquisition pathfinders run by our acquisition and sustainment organization, which try to figure out how we can better get after adoption when we’re talking about software rather than hardware, about AI-enabled capabilities that don’t look like the things we built 40 years ago. So all across the waterfront we’ve launched initiatives designed to improve our adoption capacity, and I think we’re really starting to see them pay off.

Mr. Allen: That’s terrific.

And so now I want to talk a little bit about the force we are going to develop once we have adopted these technologies. When you think about AI and autonomy – not just the current state of the art as it exists in commercial industry and in some militaries around the world – what are we building towards? What does the future force need to look like to have effectively adopted these technologies?

Dr. Horowitz: Sure. I think about AI as a general purpose technology. It’s not a specific widget; it’s something more like electricity, more like the combustion engine – to go back to my academic roots a little bit in thinking about how we characterize different types of technology. And if you imagine AI as a general purpose technology, then there are lots of different kinds of use cases.

And I think one of the challenges when we talk about AI is: all right, what are we talking about? The reality is that many of the AI uses are back-office uses. For example, you may have seen the announcement that the Army, as part of how it’s thinking about aggregating information for promotion boards, is trying to use AI to accelerate that process – to decrease the number of hours individual people have to spend initially aggregating files together. That means more person-time can be spent doing the thing the Army should be doing: focusing on how to promote the most talented officers into future roles. There are lots of back-office roles like that where we imagine employing AI. Then, moving closer to the battlefield, there are of course uses of AI in intelligence, surveillance, and reconnaissance – what we call ISR – and uses of AI in bringing information together for senior leaders, to buy them back decision space in time-compressed environments. So there are lots of different kinds of applications, and that’s not even getting into the autonomy space, where we think about the deployment of autonomous systems with different levels of autonomy in which, increasingly, that autonomy is AI-enabled.

Mr. Allen: That’s great.

And let’s actually go there on autonomy, because one vision of the future force was articulated by Deputy Secretary Kathleen Hicks, who stated that the future fighting force would include a lot more autonomous systems; that these systems would be attritable, meaning their cost structure is such that you can actually afford to lose them; and that they would be deployed in extraordinary mass – thousands or tens of thousands of units. What is the Force Development Office doing, and what is your involvement in making this Replicator vision of future military power occur?

Dr. Horowitz: Sure. That’s a great question. Let me start with how we’re thinking about the Replicator Initiative, since I think it’s a great illustration of how the department is getting after some of these innovation challenges and how the deputy secretary in particular is really laser-focused on accelerating innovation adoption.

And so the Replicator Initiative is about a process as much as anything else. It’s about showing we can do hard things – that we can develop and, especially, accelerate the fielding of capabilities at speed and at scale. And that involves process changes within the department. Then we had to decide: What do we want as an illustration of that? What should be the first thing we tackle in the context of Replicator? And the deputy decided, wisely in my view, to select attritable autonomous systems.

So, then, why attritable autonomy? You could imagine it’s important for several reasons. One, we see massive advances in AI and autonomy in the commercial sector, which suggests there is a lot of potential for these kinds of capabilities. Two, we see in conflicts all over the world, including Ukraine’s response to the Russian invasion, the way that attritable capabilities are increasingly important. A bigger-picture way to say this is that we used to think you had either precision or mass. That’s no longer the case. What we need in many instances is going to be precise mass. That’s where the notion of attritable autonomy comes in.

And so the goal of the Replicator Initiative is to field attritable autonomous systems in the multiple thousands in the next 18 to 24 months. And as the deputy has been out talking about this initiative, we’re on track to achieve that goal. We’re really making moves in that category.

And so Policy’s role in that process is twofold.

The first is ensuring that the capabilities selected for inclusion under that all-domain attritable autonomy window, or ADA2, are aligned with our National Defense Strategy, since the National Defense Strategy makes clear that the pacing challenge we’re really focused on is the People’s Republic of China and managing the multidomain military competition with the PRC. So we want to ensure that the capabilities we’re pushing forward in the context of the Replicator Initiative are focused on that NDS pacing challenge.

Second, given the role we play in the context of the budget and of defense planning, we’re working with the rest of the department to identify the key capabilities. And we’re doing this under the leadership of DIU, which has been tapped by the deputy to lead the Replicator Initiative. Doug Beck, the new-ish head of DIU, is a tremendous leader who’s really focused on this challenge and on ensuring that the Replicator Initiative succeeds.

Mr. Allen: So just to make sure I understood you correctly, we should expect to see the Replicator Initiative and the ideas underpinning the Replicator Initiative reflected in the upcoming Defense Planning Guidance.

Dr. Horowitz: I mean, the Defense Planning Guidance is classified, so –

Mr. Allen: I won’t see it, but within the department.

Dr. Horowitz: But I think the ideas – and again, there are two different things I want to make sure to distinguish. Replicator itself is about a process. It’s about figuring out how we can rapidly field, at speed and scale, key capabilities that we view as important given the National Defense Strategy. The first bet, the first initiative under Replicator, is focused on all-domain attritable autonomy. These are areas we’ve been investing in and will continue to invest in, and the Replicator Initiative is about accelerating our ability to field these capabilities in a way that can make a difference for the warfighter.

Mr. Allen: Great. So now I want to shift gears a little bit and talk about some of your other responsibilities and some of the other work you’ve done over the past year and a half. Specifically, I want to talk about Department of Defense Directive 3000.09, the DOD policy on autonomy in weapon systems. It’s more than a decade old but was recently updated. So I want to ask you a little bit about what you were trying to achieve with this update, and what was the department’s reaction?

Dr. Horowitz: So, you know this issue well, Greg, and you’ve written about it persuasively, I might add, on the outside, in addition to your experience in the department. As you said, the directive was a decade old, and frankly, a lot had changed in that decade. Since the original directive was published, you had this massive explosion of AI-enabled capabilities happening in the commercial sector. You had advancing interest in these capabilities in DOD. The publication of our first AI strategy, and then work on a second one had already started. The development of a Responsible AI Strategy and Implementation Pathway. The creation of CDAO. Lots of different things.

And that’s mostly just things inside the department, to say nothing of the critical role we think autonomy might play in the future of war. If you look at the context of Ukraine, and at a lot of the articles out there about jamming, about electronic warfare, about the constant cat-and-mouse game that Ukraine and Russia are playing with each other, autonomy is one of the ways a military might seek to address some of those challenges.

Mr. Allen: Because if you’re remotely piloting something but the communications link is jammed, then either the asset is useless or it’s autonomous. Those are kind of the choices an operator will face.

Dr. Horowitz: Exactly.

Mr. Allen: Yeah.

Dr. Horowitz: So the goal behind revising the directive was, frankly, mostly a good-governance move. Here’s what I mean by that. We had ended up in a situation where, outside the department, the community of interest thought that DOD was maybe building killer robots in the basement. And inside the department, there was a lot of confusion over what the directive actually said, with some actually thinking the directive prohibited the development of autonomous weapon systems with particular characteristics, or even in general.

So what we wanted to do with the revision was make clear what is and isn’t allowed in the context of DOD policy surrounding autonomy in weapon systems. And just to be clear about that: the directive does not prohibit the development of any systems. All it requires is that an autonomous weapon system, unless it fits a specific exempted category (like, say, protecting a U.S. base from lots of simultaneous missile strikes), has to go through a senior review process, where senior leaders in the department take an extra look in addition to all of the other testing and evaluation, verification and validation, and other requirements we have whenever we develop and deploy weapon systems. We add, essentially, an additional review on top for certain types of autonomous weapon systems, to ensure that we can develop and deploy these systems, if needed, in a safe and responsible way.

Mr. Allen: Yeah. And I think the policy refers to it as the senior review process, which almost undersells it. It’s, like, the astonishingly senior review process: most of the heavy hitters in the department have to approve the system, both in development and then again in fielding.

Dr. Horowitz: Right. Multiple undersecretaries and the vice chairman of the Joint Chiefs of Staff have to approve an autonomous weapon system prior to development and again prior to fielding. In terms of internal DOD politics, that’s a huge bureaucratic lift, frankly. But it reflects how seriously we take our responsibility to ensure that any autonomous weapon systems that are fielded are ones we can be confident are safe. Which, frankly, is in our interest as much as anybody else’s, more than anybody else’s. Nobody wants their weapon systems to work more than the Department of Defense.

A weapon system that isn’t safe, that isn’t predictable, doesn’t work. It’s not useful. And when it comes to developing and deploying the capabilities the joint force needs to deter war and, if necessary, prevail if conflict occurs, we need to have trust and confidence in our systems. So what the directive is designed to do is ensure that in this emerging technology category, the integration of autonomy into weapon systems, we can have that kind of trust and confidence.

Mr. Allen: So we’ve got this vision where, of course, there are evil ways to envision using autonomous weapon systems, but that’s true of all weapon systems. And so there’s also the possibility that we can use them in full accordance with the laws of war and our obligations under the U.S. legal code. The way I often think of 3000.09 is that it says these weapons are not inherently banned, but they are subject to additional technical scrutiny, which is already pretty substantial for any weapon system, and then additional procedural scrutiny by the highest levels of leadership in the entire department. Is that a fair summary of the thought process?

Dr. Horowitz: More or less, except that I would say our commitment to international humanitarian law is ironclad. All weapon systems that we field, we believe, can comply with international humanitarian law. That’s part of the legal requirement for the development and fielding of a weapon system, and that commitment is no different for an autonomous weapon system than for anything else. Because, again, this gets back to being able to deploy a weapon system in a safe and responsible manner.

And we are committed to that, and focused on ensuring through these additional senior-level reviews – which include, again, some of the most senior people in the Department of Defense – that there’s visibility on these capabilities, which, frankly, benefits the department more than anybody else. You certainly wouldn’t want a system developed and fielded that a commander out on the battlefield doesn’t have trust and confidence in. All of this is designed to ensure that we have confidence in these systems, that we know they’ll work, so that if we decide to field them they’ll enhance the capacity of the joint force.

Mr. Allen: Terrific. And the ideas and logic underpinning DOD Directive 3000.09 are now also reflected in U.S. foreign policy, through the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. So can you talk a little bit about this political declaration, which launched last year and now, I feel, has gotten some real diplomatic momentum?

Dr. Horowitz: No, that’s great. The political declaration, I think, is a really clear demonstration of a through line in our commitment to responsible behavior. You can see it in policy through something like the Responsible AI Strategy and Implementation Pathway, or DOD Directive 3000.09. You can see it in initiatives like Replicator. And you can also see it in foreign policy, in our attempt to build international cooperation on the military development and use of AI and autonomy. The political declaration creates strong norms of responsible behavior for AI and autonomy, designed to help build cooperation between countries around the world, so that as not just the United States but lots of countries develop and deploy AI-enabled systems in the coming period, they can do so responsibly.

Mr. Allen: And this political declaration, which was a collaboration between the Department of Defense and the State Department, has now secured the signatures of, last I counted, 47 other countries. Can you give us a sense of what exactly they are signing up to when they sign onto this political declaration?

Dr. Horowitz: Sure. So we’re up to 51 now.

Mr. Allen: Oh, wow.

Dr. Horowitz: Including the United States. And we’re proud of the fact that it’s not just the usual suspects. If you look at the pattern of countries that have endorsed the political declaration, of course we’re grateful that our closest allies and partners are on board; we think very similarly about many of these issues, and that’s why we’re so close. But it’s not simply those countries. I think there’s a recognition that the sorts of norms we’re trying to promote are things that all countries should be able to get behind.

And so they include things like a commitment to international humanitarian law. They include things like appropriate testing and evaluation for systems. They include lots of different good-governance mechanisms of the sort that we flesh out in great detail in Department of Defense policy. And it was really a collaboration throughout the U.S. government to produce this, which I think stands alongside a lot of the other AI-related initiatives the Biden administration has launched, both domestically and internationally, over the last several months in particular.

Mr. Allen: So when I looked at the list of countries that had signed up to the political declaration, it notably did not include Russia and China. I’m assuming that hasn’t changed in the shift from 47 to 51. What are the implications of the fact that Russia and China have not signed on? What does it suggest to you, and what does the United States hope for?

Dr. Horowitz: I think we believe these norms of responsible behavior are in everybody’s interest – again, we think of this as good governance so countries can develop and deploy AI-enabled military systems safely. Nobody wants systems that increase the risk of miscalculation or that behave in ways you can’t predict. So we hope that all countries will sign onto the political declaration.

And I think the trends are in the right direction for seeing more and more actors sign on. We’re actually working toward a potential plenary session in the first half of 2024 with the states that have endorsed the political declaration. And we hope that even more will come on board before that happens, and afterwards. And that includes everyone.

Mr. Allen: So something that has changed in the international ecosystem just in the past few years is that we’ve now got multiple pretty substantial wars being fought. The United States is not a direct participant in these wars, but obviously cares a lot about the outcomes. I’m thinking specifically about the conflict in Ukraine. Russia has military systems that advertise AI-enabled offensive autonomous capabilities. A minister in the Ukrainian government has talked about the logical and inevitable use of autonomous weapons.

I’m just curious: if we do see the use of AI-enabled offensive autonomous weapons, the types of weapons discussed in 3000.09 and the political declaration, what would it suggest to you if we started seeing these weapons being used in the conflict in Ukraine, by Russia, by Ukraine, or anyone else?

Dr. Horowitz: That is a great question. I would say two things. First, I’m not a lawyer, and so to some extent I would push back a little bit on speculating about hypotheticals. There’s a lot of noise, frankly, when it comes to the connection between emerging technologies – especially AI and autonomy – and military capabilities. If you set up a Google Alert tracker or something, every week you see new articles out there about claimed capabilities of systems, and sometimes it’s unclear what is real and what is hype.

That being said, I think we have a lot of concerns about Russia’s compliance with international humanitarian law in the Ukrainian context. And I suspect that were Russia to deploy something along the lines of an autonomous weapon system, we might have concerns about that as well – probably not inherently because it was autonomous, but because we’re concerned with the way Russia is using technology in its invasion of Ukraine, and with the war in the first place.

Mr. Allen: Right. When you’re intentionally bombing hospitals with remotely piloted weapons, presumably the autonomous weapons will also not shy away from such civilian-casualty-inducing events.

Dr. Horowitz: And this gets back to something I said before, which is that, from the perspective of the United States, our commitment to international humanitarian law in our own use of force is ironclad. Whether a system is autonomous or not, you need to be able to use it in a way that follows the principles of international humanitarian law. And we would expect that other countries that have made similar commitments would do the same. We’ve had concerns about Russia, and so we would probably have concerns if those systems were autonomous.

Now, again, putting that aside: it is clear, in the context of Russia’s invasion of Ukraine and the war that has followed, that you’ve seen enormous advances – not necessarily the deployment of new systems, but the deployment of systems at scale and the rapid iteration of systems. Putting on my outside hat, drawing on work I’ve participated in in the past, it’s in some ways not surprising that the technology is reaching this point.

What we’re seeing in some ways is a culmination of advances in capabilities that we saw deployed at smaller scale in earlier conflicts – say, the Armenia-Azerbaijan conflict that occurred a few years ago – and that we now see at scale in the Russia-Ukraine war.

Mr. Allen: One of the other topics that comes up a lot in conversations around national security and AI policy is the intersection of AI and nuclear policy. This is something various members of Congress have actually proposed addressing through legislation that would explicitly ban AI from any involvement in nuclear command and control.

And I just wanted to ask: what is the current U.S. policy on AI’s involvement in nuclear command and control, or in any other aspect of the nuclear issue?

Dr. Horowitz: Our policy is actually really clear on this. There have been statements over the years, as you might recall –

Mr. Allen: Including one by my former boss General Shanahan, yeah.

Dr. Horowitz: Exactly. From your former boss General Shanahan among them, and other commanders, who had essentially strongly implied that the U.S. is not going to rely on AI for nuclear command and control. That policy, which had been strongly implied by multiple leaders including General Shanahan, is now enshrined in Department of Defense policy in the context of the Nuclear Posture Review. The Nuclear Posture Review, which is the definitive document that lays out our policies surrounding nuclear weapons, makes clear that there needs to be human involvement in that most important decision surrounding the employment of nuclear weapons. That is a commitment the department has made, that we feel very comfortable making, that we think should be very clear, and that we would invite all nuclear-armed states to make.

Mr. Allen: So you said that we would invite all nuclear-armed states to adopt a similar policy. Do you see any prospects for international dialogue on this issue?

Dr. Horowitz: In the context of the P3 – the United States, the U.K., and France – there’s already a public statement, published I believe in the summer of 2022, committing us, again, to human control and involvement in decisions surrounding the employment of nuclear weapons.

We’ve invited other countries to make similar commitments and we hope that they will make those commitments.

Mr. Allen: Yeah. And just to be clear – in either a public or a private setting that you’re actually authorized to discuss – you said previously that the United States was implying this policy before we made it very clear in the Nuclear Posture Review.

Are you getting a sense of an implied policy or any other signals from, say, China or Russia?

Dr. Horowitz: I can’t comment on that at this point; I wouldn’t know the answer to that in particular. I would reiterate that we think the decision to use nuclear weapons is so important that human involvement should be central. We would expect that other countries share that view, and we hope they would make that commitment explicit.

Mr. Allen: Great. So now shifting gears a little bit to the topic of risk reduction in other areas – nonnuclear areas, perhaps. Many have called for some kind of risk-reduction dialogue with China with respect to AI and military AI.

Are these types of dialogues desirable? Are they likely?

Dr. Horowitz: I think the Department of Defense has been very clear that we welcome more military-to-military exchanges with the PRC. We think that sort of communication is important for reducing the risk of miscalculation and of inadvertent behavior.

The more we can talk and try to understand each other, the better, from our perspective, and the Department of Defense has been very clear about that. You’ll recall that coming out of the meeting between President Biden and Xi in November, there was an agreement to resume military-to-military dialogue between the U.S. and the PRC, which the department strongly supports, given our view that open dialogue is important.

There was also an agreement in the context of that Biden-Xi meeting to have conversations about AI safety and general AI capabilities. Again, we think dialogue between the U.S. and the PRC is helpful, and wherever the substance of those conversations leads or focuses, we think that will be a good thing.

Mr. Allen: Terrific. Now I want to stop talking about China and Russia and instead focus on allies and partners. One thing that’s also of note is that your portfolio – correct me if I’m wrong – also includes the AUKUS initiative between Australia, the United Kingdom, and the United States.

And so I’m curious, in the context of AI, autonomy, and other emerging technologies: how is cooperation with allies in these emerging technology areas different, or not, from cooperation on traditional systems?

Dr. Horowitz: No, it’s a great question. When we talked about the way the office I’m privileged to run has evolved over the last year or two: it started with emerging capabilities, then we brought in the force development component, and the third wing of the office focuses on AUKUS, the Australia-U.K.-U.S. trilateral partnership. AUKUS is probably best known for the moves we are making to help Australia acquire a conventionally armed, nuclear-powered submarine capability, but it also includes a broad array of advanced capability cooperation, including AI.

So we’ve actually spent a lot of time over the last year or so thinking about what AI cooperation with some of our closest allies and partners looks like. Think back to the AUKUS defense ministerial that happened in early December – I’m sure you were following it very closely – where Secretary Austin and his counterparts announced a series of new initiatives to accelerate capability cooperation and to get capabilities under the AUKUS umbrella into the hands of the warfighter.

And one of the things I think is really notable is how many of those initiatives involve AI and autonomy. We announced a maritime autonomy experimentation and exercise series, where we’ll work together to try to accelerate progress on unmanned surface and undersea vehicles – USVs, UUVs, et cetera.

We announced progress in anti-submarine warfare. For example, all three countries have P-8s, and all three countries have sonobuoys. So our teams are working together on an algorithm that will let us gather data from U.K. and Australian sonobuoys and vice versa, which we can all roll out on our P-8s. It’s a great example of trilateral collaboration.

We did a swarming demonstration together at Salisbury Plain in April 2023. So from an AUKUS perspective, I think it’s an illustration that the fact that AI may involve software rather than hardware – there was certainly serious hardware involved in training the algorithms, but the output is software – is not inherently a barrier to cooperation with allies and partners.

That being said, when working on international agreements, project arrangements, those sorts of things, and thinking about what those software-driven collaborations look like, in many ways we’re taking lessons learned from the cyber arena and from the work the U.S. does with allies and partners in cyber, and asking what best practices we can use. That’s a place where an organization like CDAO can play a critical role within the department, working with our acquisition community and our research and engineering community to ensure we have all the tools we need to do those kinds of collaborations. Because in some ways there are reasons why it might be easier to do that kind of cooperation rather than harder.

It’s just a matter of getting things set up to do it together, and I think we’ve been really successful in doing that in the AUKUS context, and we’ll continue to be.

Mr. Allen: I’m a little surprised to hear you say that it might be easier than with traditional types of systems. My first thought would be that ITAR and the other regulations that govern the transfer of military technologies were written before the modern data-driven machine learning and DevSecOps software development paradigms.

But from your perspective, this can be done. It is being done.

Dr. Horowitz: We’ve been working really hard under the existing system to get the authorities in place to be able to do this kind of cooperation in the AUKUS context.

I think it demonstrates that it’s possible. We were also really fortunate, and are grateful to Congress, for the bipartisan support we’ve seen for AUKUS and for passing the legislative proposals surrounding AUKUS, which include a frankly revolutionary change in the way we will potentially do defense trade with Australia and the U.K. moving forward.

I mean, this involves the Department of State and Department of State authorities much more than Defense. But the legislation authorizes that if the secretary of state certifies that Australia’s and the U.K.’s export control systems are – I would say colloquially – roughly equivalent to those of the United States, then we can do license-free defense trade between the U.S. and the U.K., between the U.S. and Australia, and trilaterally, in a number of different areas. We think that will make this kind of collaboration even easier moving forward.

Mr. Allen: And “license-free transfers” is not the most eloquent slogan, but it really is a revolution in defense trade in a lot of ways.

Dr. Horowitz: It’s like the nerdiest possible change.

Mr. Allen: Yeah. (Laughs.)

Dr. Horowitz: And I say this as, like, a huge nerd, obviously – everything I work on is super nerdy. But that might be the –

Mr. Allen: Yeah. But I do want to say that there have been some acknowledged difficulties. I think the strategic logic of AUKUS is rock solid. The political coalition, both domestically and internationally, appears to be rock solid.

But you’re also dealing with the DOD bureaucracy, which is a formidable opponent, as I learned during my time in the DOD. And AUKUS has now had these moments where the Australian ambassador to the United States has talked about the challenges.

I believe his words were the “frozen middle” of the DOD bureaucracy. I’m curious whether you think we’re making progress in addressing the concerns the ambassador raised.

Dr. Horowitz: I think each step of the way we’ve worked really hard to identify the potential barriers to collaboration, and if you look at what we’ve accomplished in the last couple of years, it’s astonishing.

We’ve rolled out the optimal pathway for Australia’s acquisition of conventionally armed, nuclear-powered submarines. We worked with our partners on the Hill to pass legislation that authorizes the potential transfer of Virginia-class submarines to Australia and enabled us to receive a substantial contribution from Australia to our submarine industrial base.

We got the export control legislation passed, which we’ve already discussed. We’ve launched the maritime autonomy experimentation and exercise series. We’re working together in all of these ways.

One thing I didn’t mention is an innovation challenge we’re launching, led by DIU, the Defense Innovation Unit, where each country will launch the same innovation challenge focused on electronic warfare, so that we can all benefit from the research that companies in all three countries are doing on that topic. That will be the first of several innovation challenges.

And so what I see in DOD, and in the interagency within the U.S. government, on a day-to-day basis is a really strong commitment to making AUKUS succeed and, when we identify those policy barriers, to working hard to overcome them.

Mr. Allen: What I said in some discussions with my Australian colleagues was: great, please complain, because it’s actually kind of useful in fighting the internal bureaucratic battles to be able to say, hey, we need to do right by our partners here and live up to our diplomatic commitments. Well, Dr. Michael Horowitz – one of the original scholars of military innovation, now in a position to actually practice what you preached and help the Department of Defense get right with emerging technologies, including AI and autonomy, not just in the policies on paper but in the dollars reflected in the budget and the capabilities we’re actually putting in the field – thank you so much for spending the morning with CSIS.

Dr. Horowitz: Thanks so much, Greg. It’s great to see you as always and always a pleasure to work with CSIS.

Mr. Allen: Great. Thank you, and this concludes our event. Thank you to everyone watching out there.

(END.)