Report Launch: Seven Critical Technologies for Winning the Next War

Emily Harding: Thanks for joining us today for a discussion on how the U.S. government should use technology in the era of great-power competition.

We have just completed a study that identified seven critical technologies that could very well make the difference in the entire spectrum of competition. These technologies can help us win the tech race; they can bolster deterrence; and if, God forbid, we’re in a shooting war, they could help us win. Why seven? Focus is key. DOD and the IC chasing a hundred shiny objects gets us nowhere fast.

Three of them are sprint technologies where the government will need to put considerable resources and focus on making advancements, perhaps even driving ahead of the private sector in innovation. Those three are communications networks that are secure and redundant; bioengineering, particularly in the defensive space; and then quantum computing and sensing.

Four of them are follow technologies where the government can follow industry’s innovation. Those are space-based technologies, including on-orbit upgrades and space-based sensing. Batteries – we’re going to have a highly networked and highly power-dependent fight, resupply is going to be dangerous, and right now we’re asking expeditionary forces to carry an estimated 20 to 50 pounds of batteries with them for a three-day mission. AI/ML, which of course is a hot topic right now: it can process signals, highlight developments, conduct cyber offense and defense, and help with decision-making both on the battlefield and off. And then robotics, especially the combination of robotics and AI/ML, which could be a game-changer for global competition.

We’re very excited to announce that we’ve compiled a whole bunch of information and recommendations about these technologies and others in a new website called Tech Recs. You can find it on the CSIS website and we’ll talk about that a little bit later in the event today.

So, moving to today’s discussion, we are here to discuss a combination of vision and humility. The U.S. government needs the vision to see the capabilities needed to prevail in competition, but also the humility to let the private sector lead and innovate as it does best.

I’m here today with some leading thinkers on these issues, on technology and national security. Let me introduce them as we get started.

First off we have Chris Brose, who is a leading thinker on technology issues in both government and industry. Right now he serves as the chief strategy officer of Anduril Industries, a venture-backed defense technology company. He’s also the author of “The Kill Chain:  Defending America in the Future of High-Tech Warfare.”  We were just talking about how this is required reading across Washington. He was formerly a senior fellow at Carnegie, the staff director of the Senate Armed Services Committee and John McCain’s top adviser on military and foreign policy issues, senior editor of Foreign Policy, and a policy advisor and chief speechwriter to then-Secretary of State Condoleezza Rice, which is where we first met once upon a time.

Chris Brose: Yeah.

Ms. Harding: We also have Colleen Laughlin, who has had a long and distinguished career in DOD leading the way on incorporating technology into defense practices. She’s right now the executive director of the Defense Innovation Board, formerly was the deputy director. We can talk later about whether you’re glad you got promoted or not. (Laughter.)  She served for 13 years in the Office of the Secretary of Defense, in the Action Group and as an advisor on critical global issues from Asia-Pacific to technology to humanitarian affairs, basically the whole spectrum.

We also have Geof Kahn. Geof has extensive leadership experience spanning intelligence, technology, cybersecurity, and government relations. Right now he’s a senior counselor at Palantir, where he helps lead parts of their intelligence and national security business. He previously was at Accenture leading their government relations function, also cybersecurity policy and technology policy. And previously he served as a senior advisor to CIA Director Pompeo. He also was on HPSCI once upon a time; we interacted there as well. And he began his career at the Office of Naval Intelligence and at CIA.

And then, last but not least, joining us online we have Sky Moore, who is transforming the way the military thinks about procurement and technology. Soon after she started her role as the CTO for CENTCOM, Sky and I sat down for a conversation and I came away going, wow, this is fantastic. She’s doing some really fabulous things down there. She was formerly director of science and technology for the Defense Innovation Board. She served as an advisor on technology issues in the House. And she still serves as an intelligence officer in the Navy Reserve. So thanks for joining us from sunny Florida, Sky.

All right. So, turning to today’s conversation, our project really starts in the future. In order to figure out what technologies we need for this era of competition, we wanted to start off by thinking about what that future of competition looks like, what war might look like, what peace looks like hopefully, also what intelligence work and measures short of war might look like. We identified two main scenarios for the future of competition.

One is the smolder. This is the measures short of war where we see great powers jockeying for position using all elements of state power.

The other one is the hot blast, and that’s where the aggressor nation would attempt a fait accompli before help can arrive. This is one of the lessons learned from Ukraine:  You want the war won before anybody can cross an ocean to come and help the guys who have been aggressed upon.

So, with the caveat that humans are really bad at predicting the future and we all know this, what do you think marks the future of great-power competition? What does warfighting look like to you up to 2030 and beyond? Chris, I’m going to pick on you first and then go around the table.

Mr. Brose:  So thank you so much for being here and congratulations on the report. It’s really great.

I think you’re right to sort of cabin it in, like, 2030. I think so much of the conversation about, quote/unquote, “the future of warfare” is set in the distant future and it’s, like, impossible to even perceive.

I think a lot of what we are going to have to do in the coming decade in the competition space is recognizing that a lot of this is jockeying for position. It’s neither peace nor, hopefully, war, but it is a pretty sporty political-military-economic effort. We’re competing on technology. We’re figuring out strategic areas where we need to decouple. And we’re essentially trying to win positions of advantage so that we can play a better long-term game, but it is a long-term game.

I think the scenario that you outlined in terms of hot blast is the thing that we’re constantly trying to avoid, and the only way you avoid it is having enough deterrent capability to demonstrate clearly to your competitor – to create in the mind of that competitor every morning – that a fight is never something they want to pick. And I think that’s the big concern that I have had and still have right now, which is that the U.S. conventional deterrent position, specifically in the Asia-Pacific region, is eroding and continues to erode. And if you listen to what many government leaders are saying about needing to be ready for some type of contingency in the late 2020s, that really focuses the mind on how much urgency we have to bring to this, because the kinds of technologies that you’re writing about in the report have to be marshaled into capability for deterrence in the next few years so we can start pushing that timeline to the right rather than have it continue to creep back toward us.

Ms. Harding: Mmm hmm. I think that’s absolutely right. The marshaling into service in the reasonably near future so we’re ready for the contingency.


Geof Kahn: Those are great points. And I also want to thank you, and am frankly humbled by this really esteemed panel. This is – this is really wonderful.

I like to think a little bit historically – think about where warfare has gone to think about where it may be going. And when I think about the main wars we’ve been involved in, from the Gulf War to Iraq, Afghanistan, and the Global War on Terror, I think you’ve seen a very obvious trend: increasing use of data, increasing use of intelligence, and getting more and more targeted with our capabilities, which helps make warfare more effective but also reduces human casualties, which is wonderful. The challenge is that, as you look at Ukraine, yes, some of that is happening, but you also have, frankly, blood and metal going back and forth with very large losses of life, which is very unfortunate.

I think the takeaway from those two observations, thinking about what a U.S.-China conflict would be like, is that it depends on who is fighting. And none of our recent experiences – the Ukraine war included – are really good predictors of that. And so I’d go back to your first assumption:  I think we’re really bad at predicting what this is going to look like because, given the capabilities that we are developing and the capabilities that they’re developing, it’s going to be really hard to predict. So I agree with Chris; we need to try and get as ready as humanly possible so this conflict never happens.

Ms. Harding:  Yeah. Amen to that.

Sky? You may be in Florida, but you’re sitting at our table.

Schuyler Moore:  Which I appreciate. Thank you so much for including me.

Yeah, so I mean, to us I think that the future of warfare is more focused around the mechanisms that enable you to adopt technologies faster and less so on a specific technology. So what I appreciated about the framing that you’ve given in the report is that the technologies, as you described them, are broader umbrellas under which there’s room for iteration and different types of technologies to evolve. So if you take, say, degradable communications, whatever country’s ability to adopt the latest and greatest is going to define whether or not they’re successful. If you’re seeing new LEO satellite constellations going up, if you’re seeing radio mesh network equipment coming out, and your ability to integrate it and put it into the field in a useful way is faster than your competitors’, that is what is going to define competitive advantage. And so having those mechanisms in place that allow you to ingest, test, and then push out new technologies at a faster rate than others, we believe, is going to define whether or not you’re successful.

Ms. Harding:  Yeah. I definitely want to come back to this idea of iteration. That’s a really important one.


Colleen Laughlin:  Yeah. No, I think – thank you, first, for having us here. It’s a wonderful discussion. I’m looking forward to it.

You know, this idea of how we are going to accelerate the learning – and I think everyone’s sort of saying that, right? How are we going to master the approaches to the adoption and fielding of these technologies? And when I think about the risks coming forward, with other countries, adversaries, also doing that, I think there’s opportunities and then there’s incredible risks. And just figuring out how we structure our organizations, our incentives, our learning, and our cycle times – so, you know, the idea of the iterative processes – I think is going to be hugely important.

Ms. Harding: Yeah. So the reason I wanted to go back to this iteration point: as you said, no one knows how this is going to play out. No one knows exactly how these technologies are going to make an impact on the battlefield.

One of my favorite recent events that we did in this room, actually, was a book launch for General Mick Ryan’s book called “White Sun War,” which is supposed to be a history looking backwards at a future war over Taiwan. But one of his favorite characters in the book, and mine too, is a young Army captain who is a student of conflict, and she’s constantly working with her troops in the heat of the battle to say, OK, what did we do right, what did we do wrong, how can we do this better. And especially because all of the combatants in the conflict are throwing technology at the problem, they’re all finding they have to iterate very quickly. One of the features of this conflict is these things called the beetles, AI-controlled robots that can roll from the ocean onto the land and basically, you know, take out a battalion of tanks before you even really know what’s happening.

So on that note, I want to ask the fun question, which is the novel question. Novels can help us expand our imaginations on what is possible in warfare. I really did enjoy Mick’s book. I also think that “Ghost Fleet” was one of those books that hit Washington like a splash in a pond and then created ripples all across the place. So what books have really shaped the way that you think about the future of warfare? Anybody want to go first?

Ms. Laughlin:  I mean, we are living in the simulation, right?

Ms. Harding: Yeah. (Laughter.)

Ms. Laughlin: Right. I mean, you know, the ideas of any number of books – “1984,” “Third Eye,” “Matrix,” “Ghost Fleet” – just really imagining how these technologies can be applied. I think for me what becomes really interesting is thinking about the implications for human behavior, for morals and for values, right? That’s really some of the interesting space. Like, how as a society are you going to think about the technologies, the applications, and what you’re fighting for? So –

Ms. Harding:  Mmm hmm. What you’re willing to do with those technologies.

Ms. Laughlin: That’s right. That’s right.

Ms. Harding:  How about you, Sky?

Ms. Moore:  Mine might be a little bit off the wall, but truthfully I found “Ender’s Game” to be more interesting. Obviously, I read it when I was really young. And increasingly, I find it interesting to go back to because I think there’s an element of human centricity in the context of really exquisite technology that’s interesting and extremely applicable, certainly from what we’re seeing here at Central Command. Regardless of how exquisite an algorithm or a piece of hardware you put out there, what we’re learning over and over again is that the friction point where the human picks it up and applies their own intuition and their own understanding of semantics is so, so critically important to maintain.

We sometimes think of technology as this light switch from human control over to technology’s control – you flip it and suddenly the human steps back entirely – but I don’t think that actually reflects the reality of how these get deployed successfully. And I think that books like “Ender’s Game” really dig into this interesting facet: human cognition has a really important role. Human decision-making, human interaction with whatever technology, training on whatever technology you might have is a critical, critical piece regardless of the type of technology that you’re working with. So books like that, I think, help frame for everyone what it looks like – it’s not just a void where there’s an empty room with a bunch of servers and no human in sight.

Ms. Harding: That’s one of my favorites as well:  “The enemy’s gate is down.”  (Laughter.)

Mr. Kahn: Building off what Sky said, I would also take it in a slightly different direction. When I saw you ask this question in a previous panel, the answer that occurred to me was the “Terminator” movies. Think about “Terminator 2” specifically: you already know about this great commanding general – yes, he’s a kid in the movie, but he’s a super-accomplished military tactician – and then you bring in this super killing machine, the Terminator, and you know how crazy good both of them are independently. And then you watch as the movie goes on how they learn to work with each other, and more importantly how John Connor learns to control the machine and how much better they are together. And so it really speaks to, as we’re adopting all these emerging technologies, the need to figure out how to do human-plus-machine teaming really, really well. And the other key point of that movie is what Sky was referring to: as we are developing these things, making sure that we always have the human in the loop and making sure that we are the ones controlling the outcomes and how these technologies get used.

Ms. Harding: Yeah. Definitely a follow-on project that I want to do is talking about how AI is going to change decision-making. In some ways it’s going to speed it up. I suspect in some ways it’s going to slow it way down. Policymakers always want more information in order to make a decision, and with instant communication and AI constantly churning out new answers you could have a constant drip of new information that would, paradoxically, slow down how you make decisions. But TBD. That’s a project for future study.

Mr. Brose:  So I completely agree with you that sort of fiction is one great way to kind of think about the future with imagination. I find that I actually kind of go back to the past. So I find myself reading a lot of history to sort of contemplate how things are going to change because when you sort of look back at periods of historical change where technology has really driven sort of political/social/military developments, you realize how long these trends were actually playing out, right? So there’s this tendency to look at 1914, for example, and say:  Oh my gosh, like, what a – what a transformation in warfare. You had machineguns and long-range artillery and all of these different airplanes, et cetera. But the reality is those technologies were emerging on battlefields going all the way back, arguably, to the American Civil War. And if you sort of study that sort of latter half of the 19th century – you know, the wars of German unification, the Boer War, things like that – what you see are armies grappling with exactly the questions that we’re wrestling with today. You have new technologies that change and sort of force them to change traditional doctrine, traditional sort of concepts of, you know, military virtue and things like that. And they’re struggling with how you bring these things in quickly, operationalize them, and generate military advantage. So the technologies change, the countries change, but I think there’s a fundamental kind of human element here of how do you wrestle with a significant period of change and sort of marshal those changes to your advantage when so much of it is unknown and all of your competitors are trying to do the same things.

Ms. Harding: Yeah. Absolutely. And in several of those wars that you mentioned, you know, the U.S. has been wildly successful in incorporating technology. There was a story, perhaps apocryphal, that as soon as the French saw the U.S. show up not with horses but with jeeps, they knew that the war was over and we were going to win.

So this time around, where are DOD and the IC in incorporating these technologies into their day-to-day workings, into the future of conflict? In what ways do DOD and the IC need to adapt in order to better incorporate these technologies? Sky, I’m going to start with you because I know you’re doing a lot of good work on this at CENTCOM. Then I’m going to turn to our other current government employee for the official answer, and then we’ll turn to the gentlemen for the other pieces of what the government should be doing. So, Sky, over to you.

Ms. Moore: Sure. So, I mean, there are so many places where we’re integrating these different types of technologies. I think that people traditionally think about hardware and unmanned systems, but it’s almost more interesting to talk about the software capabilities and algorithmic analytics that are evolving and increasingly getting integrated into our workflows.

A particular example I can give is a problem set that’s certainly familiar to Ukraine and also to other regions: maintaining custody over a point of interest, evolving it into a target, validating that it can and should be one, and pushing that up the appropriate chain of command. And the way that has previously been done has been intensely, intensely manual. It has been data entry into a PDF, into a PowerPoint, to check against somebody else’s PowerPoint to see if you have the same numbers. By the time it gets pushed to the decision-maker, you’re hours out of date and you have to run the entire process all over again.

And thinking about where you might be able to integrate both digital workflows and analytics into that process to improve it is somewhere where I actually think we’re making really good progress. Previously this was more of a concept – whether you talked about JADC2 or some other format of digital warfighting, we had our PowerPoints with all of the lightning bolts – but we’re getting into the weeds now, where you can take a specific workflow and say, OK, if I needed to pass this target along, and I wanted everyone to have situational awareness of where it was and who updated it last, based on what data and at what time, we have the ability to do that now. And I think that we’re learning a lot, especially at Central Command, about where you can particularly apply algorithmic analytics, where it might be sufficient to simply use statistical modeling, or where even just digitizing your workflow is enough to add that time that gives your decision-maker space.

So, for example, when we think about computer vision, it’s not that we’re going to use computer vision to identify where a point of interest is and then execute all the way through to the end decision of taking action against it; it’s that it will help an analyst who is otherwise scanning a huge mass of geography for a point of interest. And cutting their time even by 30 percent, 40 percent, or higher than that – that alone is a huge, huge benefit. So using it more as a burden-reducing capability rather than a decision-making capability, and reframing that conversation so that people understand what to expect, has been really useful for us. So it’s really exciting to see these algorithmic programs, in particular, running in our region.

And one last thing that I’ll leave you with is that we are increasingly learning how important it is to run these types of technology adoption efforts in theater, in a live environment, with live data. How could you expect a computer vision algorithm to know what a dhow in the Arabian Gulf carrying weapons looks like if it’s never seen a picture of a dhow before? There are geographic oddities and nuances, there are patterns of life that are different from region to region, that you cannot learn unless a model is running in theater. And so increasingly pushing these efforts out to the edge is going to be so, so important.

Ms. Harding: Yeah, absolutely.


Ms. Laughlin: So maybe taking a small step back, right – innovation, innovative tech – and Schuyler’s doing great things out at CENTCOM. But innovative tech is disruptive, right? It’s disruptive in its application, it’s disruptive in the industries, it’s disruptive in the relationships. And I think in some of that space is where the department, quite frankly, is on its learning cycle, right? Understanding how to adjust its incentives, its organizations, its decision spaces to the new realities of emerging technology.

And so in that regard, where are we doing well? I think there is an awareness that we are at a critical point where we need to make some big changes. We have incredible pockets of great things happening – you know, whether it’s the Defense Innovation Unit or the service innovation organizations. But I think some of it is really needing to take a step back and think about, like, what are the incentives driving what we’re trying to get at?

And you mentioned sprint technologies versus fast-follower technologies. Those ecosystems, those decision-making architectures, those resources look very different. And in an organization like DOD, right, that runs very efficiently on process, I don’t know how you start to nuance some of those spaces so that you are adjusting to the need. For sprint, there’s a totally different investment model, a totally different signaling model, a totally different set of organizations that are at play there.

Versus a fast-follower model, which, you know, I think we are doing well on – and with the synergies you’re seeing out at CENTCOM and the COCOMs with CTO activities, that’s where that experimentation needs to happen. And, you know, I think the third phase of that is then how you are scaling and deploying. And then you also have to think about the talent, right? The talent is going to need to look different. You’re going to have to have different skills, both at headquarters and deployed. And so just understanding that space I think is the kind of learning ramp that we are on right now.

And, you know, innovative tech creates friction, right, everywhere. And so I think you’re seeing it even in the building. It’s a healthy friction. I’m going to put that on the table and see what the gentlemen say. (Laughter.)

Ms. Harding:  I think it’s a healthy friction. One thing we talk about in the paper is the 80 percent solution and the 100 percent solution. And the government needs to think about these in two totally different categories. There are times when the 100 percent solution is the only solution that will do. There needs to be the bespoke, perfect widget for the mission, and that’s the only thing that’ll suffice. But there are a lot of other times when off-the-shelf technology will get you 80 percent of the way to what you need, and that extra 20 percent means, you know, years of delay and lots of extra money, and just isn’t really worth it. So thinking about acquisition in the 100 percent case, and also in the 80 percent case, is something I’d love to see DOD shift to. The IC as well.

Mr. Brose:  Yeah. So I think it’s too soon to tell. You know, I think there’s a lot of talk about solutions, but what I have found really refreshing – and Sky kind of got at this, and I think a lot of the approach that she and the team at CENTCOM are taking is in line with this – is actually being more problem-focused. We can talk about these broad categories of technology, like AI/ML, but the question for defense, for national security, comes back to:  Well, what is it actually going to do for my mission? There are no points for using artificial intelligence. There’s no, like, creative interpretation, right? It’s whether it actually generates a better outcome, a cheaper outcome, a more efficient, effective outcome, or not. And if it doesn’t, it won’t get used. The Defense Department, as it always will, will just throw tons of people and tons of money at those problems, and those kill chains will stay as manual as Sky said they are, and she’s right.

I think what we found, you know, is the places that are really focusing on the problems that they’re trying to get these technologies to solve are actually doing that iteration and allowing those technologies to improve at the speed that they’re capable of improving. So, you know, at Anduril we’ve done a lot of work for the past several years now on counter-drone, actually out in the CENTCOM region. And we’ve watched that threat go from, you know, small quadcopters that were, you know, sort of harassing bases, to large, Iranian, fixed-wing, group-three UAS that are effectively low-cost cruise missiles. And you have to learn and sort of evolve your technology to keep pace with that threat.

And, you know, we find that you can’t make sense of the world through computer vision alone. You know, it’s a lot of the challenges that Sky was mentioning. You only find out in those kinds of operational environments that birds look like drones, and clouds are really problematic, and sun glare kills you. So you have to be able to fuse lots of different capabilities together – characterizing the target not just with computer vision, but with radar, and ELINT, and all the different sensor capabilities that exist, many times, in the region. And so it’s about how you bring those things together to solve an operational problem.

And I think once you’ve done that – I mean, the real challenge that I think the government is having right now is getting things that are working to real scale. There are phenomenal examples of pockets of innovation, success stories – you know, a lot that Task Force 59 has been doing. The question really comes to, like, how are we going to take the things that are actually generating operational benefit, improving the mission and the work that our operators are doing out at the edge, downrange, and get them to real scale, transitioning them to programs that can make a disruptive difference at scale. That’s where I think, you know, we still need to do a lot more.

Ms. Harding: Mmm hmm. I want to come back to you in just a second and talk about that scale problem, Colleen, and let Sky talk a little bit on what she’s doing out at CENTCOM. We were in a meeting the other day with the guys at OSC, the new shop at the Pentagon that’s looking to increase investment in some of these core areas. And one of us said something about the valley of death. And they were like, no, no, shh, shh, we don’t talk about that here. Like, that’s exactly what we’re trying to get rid of. So this at-scale thing, I think, is going to be where the rubber meets the road.

But, to you.

Mr. Kahn: No, so I think these are all great points. And when I sit back and listen to all these great ideas, what I’m seeing – as a person who’s been focused most of my career on the intelligence space – is that DOD is trying a lot of different things in a lot of different spaces. And I’m seeing some really interesting progress. I think Chris’ point about getting up to scale is dead on. And from my perspective, I see the challenge for the intelligence community as slightly different. And I think there’s an opportunity to work together, right?

So the IC figured out a decade ago how to adopt infrastructure as a service at scale, right? They did it for the entire IC, largely at the direction of DNI Clapper and Al Tarasiuk, the IC CIO at the time. That was great. They haven’t yet figured out how to do software as a service at scale. Kind of to both their points, you know, you don’t get points for buying AI, but you do need software to do a lot of these things. And so a lot of what they’re talking about – adopting software – is something DOD has done quite well.

So I think there’s probably some really good place for DOD and IC to actually learn from each other and partner. And this way, you know, we’re not reinventing processes. We’re not reinventing acquisition solutions. You know, there’s just, frankly, a lot of government-to-government collaboration that could work here.

Ms. Harding:  Yeah. So let’s talk about scale. How are you seeing these pockets of innovation? And how are we expanding them to try to be bigger, to be scaled?

Ms. Laughlin: Yeah, so I think the interesting part of how the department needs to tackle this question of how to deploy at scale is you have technologies that are evolving on a pretty rapid cycle, and we have processes and systems that are built, quite frankly, for hardware – large, exquisite hardware – where that time horizon matched, right? The sort of scale and time horizon matched. And so I think therein – right, therein lies the rub of how does the department shift its acquisition approaches? How does it shift its tools? How does it optimize for speed?

And I think that’s – our optimization right now is for larger, hardware-based systems. And we need to create just a new, you know, ecosystem around that in the department. But it’s not – you know, I’d be curious kind of to Schuyler’s perspective here about when we think about being more problem, user focused. I think that’s where we’re hitting that friction, is you can solve the discrete problem, but then how do you scale that if that problem changes, or if the end user – the different end user has a different problem? So I think that’s what we’re sort of grappling with.

Ms. Harding: All right, Sky, brag on CENTCOM. What’s going on down there? (Laughter.)

Ms. Moore: I think that there’s an interesting example here of what folks are describing, where there is a gap between how we experience tech adoption that is problem focused and capability focused, and then the broader structure that we suddenly have to fit it into on the back end if we want to have access to that capability three, four years from now. An example of that is with Task Force 59, we’re working with unmanned surface vessels. That’s the task force that’s focused on unmanned and AI systems out in Fifth Fleet.

We were watching evolutions in unmanned surface vessels that meant that a piece of kit that we thought was exquisite six months prior might be completely irrelevant after that six months because we’d either found something better or even the company itself had updated it. And so based on that pace of iteration, it’s hard to then think about how that fits into a program of record process, where you’re supposed to define quite specifically the technology that you want to buy three, four, five years from now.

The reality is that if you’d asked us how we really would define what we wanted, we wanted persistent ISR capability in the maritime space. That’s a higher-level bucket of capability that you would describe. And whatever fell under that, whether an unmanned system, whether a manned system, whatever met your need best at the best cost, would ideally be what you would fit into that. But currently, we are in an interesting space between identifying rapidly evolving capabilities and then seeing a more rigid structure that doesn’t allow for the pace of update that we see coming out, both from commercial sector and from our own iterations with the technology.

Ms. Harding:  So any advice on how to improve that cycle?

Ms. Moore:  We’ve had interesting conversations about reframing programs of record as capabilities of record, where we think that, again, 59 has been able to somewhat bridge this gap between they have – they managed their way through that first 12 months to 24 months, the valley of death, and they got themselves through in the POM, so they now have more sustained funding. But their mission, writ large, is more capability focused than it is technology.

They’re not being funded to buy 12 of a specific type of unmanned surface vessel. They’re being funded to execute persistent maritime surveillance in the waters around a region. And those are lessons that we can certainly transfer to other regions. That’s not a problem set that is unique to Central Command. We obviously focus our mission on that, but we recognize and think that it’s very important to share those lessons out. And so if folks are willing to engage in discussions that describe the funding more in terms of capability than in terms of a particular technology, I think we’ll be in better shape.

I think sometimes we make the mistake of thinking of technology as an end state rather than as an enabler. And saying that:  I must buy this particular piece of hardware, or this particular type of software. And the reality is as you keep it more problem-focused, it’ll give you flexibility to adjust depending on the new types of kit that’s coming out over time. And to our conversation at the beginning, I think that’s what’s going to define competitive advantage. It’s your ability to ingest the rapidly evolving technology that is increasingly coming out of commercial sector. And that requires really thinking of concepts of problems, rather than very specific tech.

Ms. Harding: Yeah, absolutely. Outcomes-oriented, capabilities-oriented; not, like, what widget can I put in what warehouse? That’s not the point.

I want to remind our audience that there is a button on the website that you can click in order to ask a question to our esteemed panel here. So go ahead and pipe those through, and I’ll receive them on my iPad.

Meanwhile, I want to talk about that commercial market and that commercial leadership a little bit. It’s going to keep growing. The government has done some good things, taking advantage of commercial markets, but maybe not – has not done enough. So I want to ask you guys how the government can do better taking advantage of the commercial market, partnering with industry rather than giving super detailed lists of critical elements of a particular piece of kit that they want. And then specifically talk a little bit about AI and this coming world of AI.

So, Geof, why don’t you start us off talking about government incorporation of commercial technologies, given your current position?

Mr. Kahn:   Sure, sure. I think it’s a little bit about kind of setting a more advanced vision for what they’re trying to achieve from capability perspective. If I am the director of national intelligence, I think I want to acquire software the way I buy apps on an iPhone, right? I need – I want my accreditation to happen once. I want my operators and analysts to be able to go try something out, download it, does that work, eh, no? Delete it. Go on, try the next one. To be able to take the really interesting software, test it out, put it in operators’ hands, find out what’s really useful.

And when it’s used, then people will pay for it. But when it’s not, let’s throw it out and try the next one. That kind of iteration software as a service at scale, that’s the kind of vision that I’d like our leaders to try and be able to hold out. And then kind of at the same point, to something that Colleen mentioned earlier, there is a new way of thinking about it. You know, a lot of our – a lot of our defense industrial base is defined by these traditional prime contractors. I think we need to start thinking about what a software prime looks like.

Ms. Harding: Hmm. Are you volunteering?

Mr. Kahn: There’s thousands of really interesting companies. I think part of setting up a vision like that is to enable a lot of these smaller companies, who have to make tradeoffs about whether or not they’re going to invest in pursuing DOD or IC contracts, because it is harder than going after commercial contracts. So that kind of vision enables lots of companies to compete.

Ms. Harding:  Yeah. It is so hard. When we talk to growing companies out in industry, they have that internal debate about do I even want to try to engage with the federal government? It’s so difficult to get approved to operate on various systems, and even just to clear the hurdles of applying for contracts. You know, they can’t yet afford an army of lawyers to help them work through the contracting process.

You want to jump in on it?

Mr. Brose:  Yeah. (Coughs.)  Sorry.

I think there’s some, I think, benefit of just kind of stating clearly that I think we are never going to get where we need to be using the system as it’s been developed. I would contend that that system was put in place decades ago to manage exactly the kinds of things we’re talking about, right? Large, capital-intensive, very hardware-centric, industrial-age kind of military systems.

Ms. Harding: Let’s buy a submarine.

Mr. Brose:  Yeah. And I think, you know, the acquisition comes in for a lot of hate and blame, and they deserve their fair share. But the problem is so much broader and more systemic than that. It’s the entire way the government thinks about technology, because it thinks about it through the lens of military power has, you know, traditionally been built on the backs of these big, industrial hardware systems.

And that’s the entire system that we’ve set up, from I’m going to define my requirements and get my requirement for a next, you know, destroyer is going to look shockingly similar to the destroyer I’ve had for 40 years. There’s not many companies in the world, or in America, that are going to build those things. You’re going to go through long government-funded research and development cycles to develop that technology. You’re not going to really buy a large quantity of those things. And then you’re going to keep them in inventory, and operate them, and maintain them for decades.

That system, you know, the PPBE Commission is looking hard at how to make that system less bad. I commend them. The things we’re talking about here need an entirely alternative pathway. And I think it’s – you know, a lot of the conversation around the table’s begun sort of highlighting that, of, like, don’t focus on requirements and trying to predict the future of exactly what technology we’re going to need in 2033, because – I don’t know anything; I know that that will never be right. And we will get exactly what we asked for, even if it’s actually obsolete and overtaken by events.

The approach that I think, you know, the system has been built on – you can go back and look at the history of it – is literally, like, trying to recreate a Soviet-type system to do away with the, you know, vicissitudes of capitalism. I actually think that the challenge for getting these kinds of technologies into the department has to do more with, like, getting capitalism into defense in a more meaningful way. Where technologies like this, the government’s behind all of them. The question is not whether, you know, they’re going to continue to develop in the private sector.

The question is how can the government actually create the incentives for companies to bring them into defense, not have an absolutely horrific, long experience of doing that and then wash out, but actually be able to compete on their merits and get to scale if they’re successful. And that looks less like the sort of statist system that we have, you know, managing these very large, industrial systems. It looks more like market creation, where the government as the sort of monopsonistic buyer of national defense has to create the incentives for markets to become, frankly, formed around a lot of these emerging defense technologies.

And it goes well beyond, you know, kind of AI as a category. It gets into the actual, specific applications of those things, whether it’s, you know, AI-enabled targeting systems, or loitering munitions, or, you know, anything in between. Where the technology can develop and improve significantly quickly, where if the government just buys it at a large scale and a faster rate, more companies will actually rush in to compete. They’ll actually fund the development of that technology with their own money, rather than having to put it on the back of the government.

And you get an entirely different sort of market cycle. It’s not going to exist in, you know, GBSDs and B-21s, and aircraft carriers. That’s just always going to be what it is. But in these kinds of areas, the stuff that we’re talking about here and the report highlights, we just need an entirely different approach to thinking about how those technologies are brought into the defense space.

Ms. Harding: Yeah, I would love to see – (coughs) – excuse me – a zero-based review on acquisition for software, just full stop, because it’s a different – it’s a totally different mindset.

Mr. Brose:  Just one last point on that, right? I mean, you know, Anduril is, first and foremost, I think, a software company. But, you know, I think software and hardware are always going to be, you know, intimately involved with one another. I mean, there are certainly going to be enterprise things for software purely, but I think so much of the challenge here is the ability for software and hardware to actually work together. That’s where the integration challenge is. That’s where a lot of the technical risk is. And, frankly, it’s hard to build software if you don’t understand the sensors that you’re processing, or the, you know, systems that you’re commanding and controlling.

You know, you have to have, you know, sort of the ability to work across both of those disciplines to really solve problems, because there are some areas in defense where the answer is purely software. But in many cases, it’s both. And even in the cases where it’s purely software, you know, the hardware challenges and sort of particularities of how you make that lead you to a certain set of solutions on the software side, as opposed to others.

Ms. Harding: Mmm hmm. So, given that necessity for integration, you want to facilitate smaller organizations joining this market, bringing capitalism into government, as you’re saying. But you also sort of need the primes, right, who can navigate this space. How do you try and balance that?

Mr. Brose: I wouldn’t necessarily agree that you need them. I think you need them in certain areas where, you know, they’re going to – they’re going to build certain things that they have traditionally built. You know, you’re not going to get a small startup to compete in, you know, long-range bomber type areas, right? I think it’s more about how you actually get these small companies in where they can compete on their merits, and actually bring solutions to the government. I think so often the challenge for the small company is they have a piece of a solution, but they don’t know how that piece fits into what the government is trying to do or what some established program is trying to do.

So I think part of it is on industry to say, look, don’t show up with piece parts. Show up with solutions to problems. And that’s going to be the quickest way to actually get your capabilities adopted. Because if you’re actually helping to solve a problem that a war fighter in CENTCOM or in the Indo-Pacific region has, it’s still going to be an uphill climb with rocks in your bag, but it goes a heck of a lot faster than if it’s, like, you gave me an algorithm and I don’t know what to do with that. So I think part of this is about industry needing to think about bringing solutions, not just technology.

Ms. Harding:  Do you want to jump in on that, Colleen?

Ms. Laughlin:  Oh, I was going to say as you were talking, Chris, you know, the idea of how the government can signal and create the right kind of incentives and a more competitive market space internally is interesting. But also, how we think about creating incentives between the industrial base and the innovation base, I think, is a space where we have not done maybe quite enough thinking in DOD around what we want that relationship to look like – the interaction between kind of those two technological centers of gravity, if you will. So that we can help create the right environments: if we need the big things, we know the primes can do that, but we are also giving smaller companies access to the end user, to the problem statement, to knowing how to work and fit into the capabilities and the problem sets that we have. So, yeah.

Ms. Harding:  Good.

Mr. Brose:  Yeah, I think it’s also about, you know, recognizing sort of where competitiveness and comparative advantage lies. Anduril’s not going to build an aircraft carrier. I mean, I would contend that there are a lot of areas of capability where the smaller, kind of upstart companies are going to be able to out-develop, out-innovate, and out-build, in many cases, kind of traditional defense industry. And you need both, right? This isn’t, like, the one or the other is better. It’s like apples and oranges, and we need the whole thing.

I would just – you know, I find that time and time again people in government will say, well, you know, yeah, you can innovate. That’s interesting. But can you produce at scale? And I would turn the question back to say, I don’t think our industrial base produces at scale. Like, building 50 airplanes a year is not scale when, you know, the Tesla factory down the street is churning out thousands of vehicles every month. Like, that’s the, I think, step change that we need to be thinking about in terms of producing – you know, putting the software to one side.

I mean, many of the kinds of things that Sky is talking about with Task Force 59, I would argue it’s great that we have, you know, unmanned surface vehicles in the water. We need hundreds of them. And if you’re talking about Indo-Pacific, it’s thousands, right? So, like, where is the demand signal to get to that scale? Which I would argue is the only time you’re actually going to see the value of what those kinds of autonomous systems can actually produce. It’s not in the ones and the twos or the dozen or the two dozen. It’s in the hundreds and thousands.

And I think we’re a far way from even being able to imagine that as the right answer, let alone realizing that that future is something that we could have inside the next five years if we create the right incentives and get serious about it.

Ms. Harding:  Right. I mean, our industrial base is struggling right now just to backfill from Ukraine, much less to create, you know, several thousand autonomous sea drones. That’s a whole different scale of production.

Mr. Brose:  And those are hard problems. Like, producing F-35s is a hard problem. You know, producing a sail drone is a hard problem. It’s just a totally different beast. And you can do it a lot faster. And you can be, you know, much more, you know, nimble in how you’re doing that. You can produce a lot more of them for a lower cost.

Ms. Harding: So on this note of how you can actually put these practices into use, I want to flag for our audience that we are launching a new website today. It’s called Tech Recs. It is a curated database of technology policy recommendations from all across CSIS. Right now, CSIS ISP, but we’re going to expand it hopefully very soon. And the idea was this was a one-stop shop where policymakers could come and say:  I want to learn more about AI. I want to hear the best recommendations that exist right now on defensive bioengineering. Where can I find that easily? So this is your spot.

Pull it up on the screen, and you can see that you can sort by the technology that you’re looking at. You can also sort by the actor. So if you want to know, for example, you know, what DOD should be doing in a particular space, you can pull it up and sort by the actor that’s important. Also by the type of recommendation, and the status of that recommendation. So just in time for markup season, here is a whole slate of recommendations for how we can improve how the government thinks about technology.

On that note, let’s talk a little bit about AI. I guarantee you that everyone at this table has at least played with ChatGPT. When I was heading down to North Carolina to visit family for Christmas, I figured this was a great way to entertain my children for eight hours. So I handed them a tablet with ChatGPT and said, you know, go learn. They were hooked very quickly. I had to then have the conversation with them about how this is not for cheating on your homework. This is for learning about the world and for being a starting point for other in-depth research that you might do.

But today’s kids in the backseat playing with ChatGPT and cheating on their homework are going to be writing tomorrow’s PDBs. They’re going to be out in the field. They’re going to be the future of Task Force 59. So what does that mean for the way we should be thinking about this technology now? Geof.

Mr. Kahn: Sure. So I’ll jump off, actually, your example of writing a PDB because actually Linda Weissgold, CIA’s head of analysis, got asked this question a couple of months ago. And she didn’t get the chance to kind of build off her response, but her response was dead on. This technology’s not going to be used to write PDBs, probably ever. And it shouldn’t be. That is a super customer service oriented, human-to-human interaction about, you know, supplying information to the leader of the country. The challenge – or, the opportunity, though, is that it does – these technologies will enable a really strong advancement in intelligence, analysis, and tradecraft.

If I get a new piece of signals intelligence that is related to Iran’s nuclear program, I can ask something like this:  Give me five scenarios of what this could mean. It could help with red teaming. It can help with identifying assumptions. It can help with challenging assumptions and identifying gaps. So those types of things that currently require lots of inclusive conversations, which still need to be had, can just be prompted and enabled in this kind of human-plus-machine environment. So I’m really excited about the types of tradecraft they can advance in the intelligence space.

Ms. Harding: I actually, after I finished the report – just to be clear that I did not cheat on my own homework – I asked ChatGPT, you know, what are the critical technologies for winning the next war? And it pulled four of them. So I felt like, you know, OK, I’m reasonably in sync with the rest of the internet.

Mr. Brose:  It’s all within average.  

Ms. Harding: (Laughs.)  Exactly.  

Ms. Laughlin: So I think for DOD really thinking about where and how you’re going to apply these technologies – and I know we were having this conversation a little bit earlier, that you don’t just need to kind of sprinkle or throw AI at everything. But understanding where we can move humans higher up the kind of intellectual food chain to do more intensive thinking and analysis and decision making. And, you know, we’re kind of at the front end of that, really figuring out where we want to apply that.

Ms. Harding:  Anything to add here?

Mr. Brose:  Yeah, I would echo that. And, you know, at the risk of starting an argument – (laughter) – I could imagine a world where ChatGPT, or something like it, is writing PDBs. They’re not, you know, maybe emailing them to the president, but they’re writing the first draft, right, in the same way that I’m sure, you know, really smart 23-year-old analysts are doing the first draft of PDBs, but they’re doing it under the supervision of analysts who have been there studying these problems, working these problems, for a very long time.

Like, I think that’s the way this is going to emerge, right? So, like, one take away of ChatGPT is –

Ms. Harding:  First, it needs a better name, so.

Mr. Brose: Yeah, it totally does. This technology exists, right? Like, we’re not talking about photon torpedoes and cloaking devices here. We’re talking about things that, like, people are using in their day-to-day lives that we need to get into government service in one form or another. But I think ultimately this comes down to – all these different technologies come down to the question of automation and autonomy, right? And they’re not actually the same word.

Autonomy is a degree of kind of computer or machine control that a human delegates, but still has supervision over. And I think as these kinds of technologies become more intelligent and more capable, they can start doing a lot of the work that we are currently using large numbers of human beings in very slow and very manual and, frankly, very error-intensive ways, to perform. So it’s a question of, you know, exactly as Colleen mentioned, like, reallocating our limited amount of human capital to, like, higher order functions.

So, you know, I can’t wait for ChatGPT to write all of my emails. I don’t want them – I don’t want them to send them. I will do that. But, like, boy, if they could take the first draft, that would be fantastic. And I think that, to me, is, like, a value-enhancing role that that technology can perform in my life. And I can imagine about 500 ways that it could do similar kinds of things across the government, save people time, make them more effective, let them actually focus their brains on higher-order decision making and problems, rather than just, like, very technical, mundane, frankly, drudgery that we still have way too many human beings in defense doing.

Ms. Harding: So, on a separate long road trip, I handed my phone to my daughter. And I was, like, you’re going to read me my emails. And you’re going to respond to those emails for me. And after about 10 minutes of this she looked at me and went, Mom, this is so boring. (Laughter.)  I know, honey. So, yes.

Mr. Brose:  Just a quick point on that. So it’s – you know, we’re talking about ChatGPT, but it’s also, you know, how you drive autonomy through so much of what we do in national defense. Whether it’s flying aircraft, or processing sensors, or, you know, mission planning, or oversight and orchestration and operations. So much of these areas are areas where we can go faster, we can operate at greater scales. We’ll make different mistakes, but we’ll still make mistakes. But you can actually just better optimize the human beings that we have, and will always have a limited number of, to just far better – far better use than too often we have to put them to today.

Ms. Harding: Yeah. I’m not going to fight with you about it. The report that I wrote about a year ago on OSCAR was envisioning this same thing, where you could have an AI-enabled bot, basically, that was helping you with the first draft of a lot of things that you needed, or pulling the noise – or, pulling the signal out of the noise. I wouldn’t necessarily want that assistant writing a PDB. However, I would love for that assistant to write an executive update.

Back when I was at the agency, one of the tasks that we had during an actual national security crisis was that you had analysts in all night. And they were basically pulling any kind of intel that came in overnight and then writing an update that could be published at 5:00 in the morning, go out to all the PDB recipients, that was basically here’s what happened overnight. Here’s what you missed.

Those tasks often went to some of the more junior analysts, who didn’t need really intensive good tradecraft yet, but they needed to have a good eye and to be able to read a lot in the overnight hours. And to have an AI technology that could basically do that reading, write the first draft, and then create a piece that the analyst could then edit, as opposed to spending all night at work, I mean, that would be fabulous.

Would have to be dependable. No hallucinating. (Laughs.)  But it would be – it would be fabulous.

Ms. Laughlin: That’s right. And it’s a whole different skillset that you kind of have to adjust to, I think, in that space of, you know, fidelity and trust of the information and how it’s coming together. And to me, that’s another, you know, thinking about the talent that you need to be agile and to adapt to the emerging technology. It’s a space we need to dig into more too.

Ms. Moore:  I think it’s worth asking a question of what are you actually trying to use – whether it’s a large language model or otherwise – what are you actually trying to use it for? And I think that sometimes it’s easy to say, I want this to give me the answer, rather than I want this to point me in the right direction. And which of those is it actually best at? And I will – I will give a hot take that I think for schools, or writing emails, or writing a PDB, you will – 10 years from now, people may consider you negligent in your work, in your schoolwork, if you haven’t leveraged a large language model at the very beginning to point you in the right direction. It’s like saying that you refuse to use the internet for writing a paper because the resource – (off mic).

And so, again, it’s distinguishing between – the difference between – point me in the right direction, because you, algorithm, are exceptionally good at ingesting large amounts of data and creating preliminary content, and I am very good at the refining process, at adding nuance and semantics to what you then spit out the back end. I will refine the product, and you will be able to cut 30, 50, 75 percent of my time. It’s making sure that we understand what it is actually good at. Because I think that we may have flipped the order, especially in the case of large language models like ChatGPT, where the assumption is that it will give you the answer. And the reality of what the actual value there is that it’s getting you much further down the road as a starting point than you may have been otherwise.

Ms. Harding: Yeah. I think it’s a really critical point. In our last five minutes, so I wanted to go to a couple audience questions and then hopefully end on a somewhat positive note.

We have a question from Shingo Kinugawa – I really hope I didn’t butcher your name – from Sumitomo:  What is your view on collaboration between defense industry and nondefense industry, especially in these seven critical technologies? How can we pull not traditional defense industry folks into this space? This is something I know that you’ve worked on too.

Mr. Brose: I think in a sort of very brief way, it’s incentives, right? I mean, I think, you know, we’ve been kind of having this conversation, you know, with much gnashing of teeth and banging of tables about, you know, does the technology sector want to work in defense? How do we get Silicon Valley and Washington to sort of understand one another? And there’s a lot of aspects and threads to pull, but I think at a base level it comes down to there’s an aspect of this that’s economics 101, right? Like, engineers want to solve hard problems. Companies want to be successful. They want to grow, they want to hire more workers, build new technologies.

If defense is an area where they can do that, and they can do that on timelines that are relevant to them, right, that move at the speed that technology moves, rather than at the speed that the government moves, if they can work the way that they want to work, rather than having to bill, you know, hours like corporate lawyers on cost-plus contracts all the time, you’re going to find more people who are actually moving in to do this, because the problems are incredibly hard, they’re incredibly compelling, they’re incredibly important. And I think that’s the piece that we need to get right.

And it’s not to say the problem will take care of itself, but if companies and engineers outside of defense see defense as an avenue where they can become successful, larger companies, wealthier companies, develop and scale products at scale, shockingly, you know, a capitalist society will kind of run to provide solutions like that in a way that, frankly, we haven’t, because we’ve created massive barriers to entry that have prevented it.

Ms. Harding: I think that’s right. A lot of the questions that we’re getting are about the security of these new technologies, and how you balance the pushing forward and accepting risk in the new technologies, and then also trying to be sure that technologies are secure as you launch into using them. Any thoughts on how to strike that balance?

Mr. Kahn: I mean, it’s on the companies themselves to figure out how to be secure. The government sets very clear, very transparent requirements. It’s a high bar. But once you meet those – or, I guess, if they’re looking for an area of improvement here, it’s that once you’ve met the bar, you should be able to sell around, right? I think Congress wrote some language last year about transferability – once you get your initial authority to operate, you should be able to share that across agencies. I shouldn’t have to go from this mission center, which has approved me, and then, OK, to go to this mission center I have to go through the same exact vetting process and answer the same questions within an agency, and then to another agency, and then from department to department. That gets to be a bit much. But there are very transparent, very high security requirements. And if companies are willing to meet them, then – you know, that is part of the barrier to entry, but it’s an important, legitimate barrier to entry.

Ms. Harding: I think this also gets back to the human piece and ethics piece that we were talking about at the beginning. Do you want to jump in on that at all?

Ms. Laughlin: Well, I was going to say, too, thinking about how you do the experimentation and sort of sandboxing, and where you can play with things, and learn, and understand – you know, it’s not every product or every company. We do make it extremely difficult to work with us, because ours are some of the highest bars on the different tech specs. So I think we need to think about how to pull that down a little. But also, you know, these are very serious problem sets that we’re addressing, and you can’t cut corners on those either.

Mr. Kahn: But going back also to Chris’ earlier point about creating a market, our companies should be selling themselves not just to the government but to the private sector based on their ability to provide privacy, to protect civil liberties, to provide control over data. That’s incredibly important. And frankly, it is one of the reasons why I use an iPhone, right? It is theoretically more secure. So it is part of creating a market.

Ms. Harding: Yeah. One of the recommendations that we make in the report is that security is important and should be built in from the beginning. However, security shouldn’t necessarily be the last word on a technology. There are going to be times when it’s worth taking a risk on something that’s untested, and maybe that makes our wonderful security officers nervous, but there needs to be an appeal mechanism past that, where somebody at the head of an organization can say: You know what? I acknowledge that this is risky. I am personally accepting this risk. And we’re going to move forward with it anyway, despite the concerns. I mean, sometimes it’s worth taking a risk.

I want to do a quick – yeah, go ahead, Sky.

Ms. Moore: One quick note there. I also think it’s worth considering whether or not our assessment of security matches reality. And what I mean by that is, I had a rather entertaining conversation at one point with a commercial company that had never worked with the department and was trying to understand how they might work with us. And they offered that their encryption standard was the banking standard. And our whole team immediately said, no, no, we can’t do that. It has to be military encryption standard. And we went back and forth and back and forth.

And then finally we did a little bit of digging into what the actual encryption standard was and what the bit lengths were. And it turns out that what we consider military-standard encryption – 128-bit encryption at the secret level, and then you double that above secret – was actually significantly weaker than the financial standard. And so the way that we were describing and talking about security, A, needed to be translated for our commercial partners to even understand what our requirements were. But, B, it may have been founded on the incorrect assumption that our security standards always outpace commercial ones.
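[A note on the arithmetic behind this exchange: doubling a symmetric key length does not double the work of an exhaustive key search, it squares the key space. The minimal Python sketch below illustrates that relationship; the 128-bit and 256-bit figures follow the speaker’s description, and `key_space` is a hypothetical helper, not a reference to any formal standard.]

```python
def key_space(bits: int) -> int:
    """Number of possible keys for a symmetric cipher with the given key length."""
    return 2 ** bits

# "128-bit encryption at secret" per the speaker's description.
secret_level = key_space(128)

# "And then you double that" -> a 256-bit key, as is common in banking (AES-256).
doubled = key_space(256)

# Doubling the key length squares the search space rather than doubling it.
assert doubled == secret_level ** 2

print(f"128-bit key space: {secret_level:.3e}")
print(f"256-bit key space: {doubled:.3e}")
```

So a 256-bit key is not “twice as strong” as a 128-bit key against brute force; the search space grows by a factor of 2^128.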

Ms. Harding: Yeah. That’s a really good point, and a good one to end on. We tried, in this paper, to talk about creating the urgency for innovation while also keeping the peace in great-power competition. We didn’t get to a conversation about trying to keep that peace, but I think that’s an important part here as well. I think all of us who study these issues study them not to prepare for war, but in the hopes of keeping the peace – that we never get there and we can all prosper together. To the folks who are online, the new paper is on our website; you can find it under Seven Critical Technologies for Winning the Next War. The new website is Tech Recs. You can go there and play with the sliders for all of the different recommendations that we have.

I want to say a hearty thank you to this all-star panel. Fabulous conversation. I could sit here and talk for, you know, two more days over a lovely glass of wine and try to solve some more of these problems. I hope to have you back soon. Thank you so much.