The Past, Present, and Future of AI and Autonomy at the DOD with the Honorable Dr. Will Roper
This transcript is from a CSIS event hosted on November 4, 2024. Watch the full video here.
Gregory C. Allen: Good afternoon. I’m Gregory Allen, the director of the Wadhwani AI Center here at the Center for Strategic and International Studies.
Today we’ve got an event that is a personal privilege for me. We’re talking with an individual who I have had the opportunity to admire, mostly from afar but every once in a while up close, for his career spanning AI and autonomy across so many different dimensions of the Department of Defense story. And that’s why he’s the perfect individual to cover our event today: “The Past, Present, and Future of DOD AI and Autonomy.”
Our guest is Dr. Will Roper, who is currently the CEO of Istari Digital; who was previously the assistant secretary of the Air Force for acquisition, technology, and logistics; and who before that was the founding director of the Strategic Capabilities Office, an organization based out of the Office of the Secretary of Defense that has been at the heart of many interesting technological developments in the DOD ecosystem. He is also a member of the Defense Innovation Board, but we should note he is here in a personal capacity and not representing the Department of Defense or any of his other affiliations.
Dr. Roper, thank you so much for coming to CSIS.
The Honorable Dr. Will Roper: My pleasure, Greg. Wow, we’ve got a lot to cover –
Mr. Allen: Yes. (Laughs.)
Hon. Roper: – if that’s our topic for today, as it’s only an hour. Well, we’ll get through it all. And what a wild, strange trip it’s been.
Mr. Allen: So the DOD AI story we could start almost anywhere, right? You could go back to Bletchley Park and talk about the first digital programmable computers. You could talk about the DARPA autonomous vehicle Grand Challenge of 2004. But I really want to start the story where it starts for you. So where did you first enter the DOD AI and autonomy story?
Hon. Roper: Yeah, thank you for that. I did not want to go back through ENIAC and then bring history forward.
Mr. Allen: (Laughs.)
Hon. Roper: It started for me with Maven. I was in the Department of Defense working for Ash Carter then, really trying to rebuild the U.S. competitive strategy and war strategy for peer competitors, China and Russia. And we were on the lookout for any technology that could give advantage, but we didn’t have the time horizon of DARPA. We had to do things that were a lot closer to delivery. So I used to say we were doing the future of war with a lowercase “f” – not the capital-F Future, the technology that’s going to revolutionize things a decade or more later. And I was reading more and more papers about deep learning and machine learning. And these weren’t terms we were using in the Pentagon. And there was a lot of, like, fatigue on this issue, because there had been a 1970s spree of automatic target recognition attempts with computers that were repeated in the ’80s and in the ’90s. And they all failed.
But I was doing a lot of research, talking with experts, reading what Google and Microsoft were publishing about their success with computer vision-type image recognition for a variety of commercial –
Mr. Allen: So this is – this is the era of deep learning that sort of comes after the 2012 ImageNet breakthrough, and everybody is putting GPUs plus deep learning algorithms together and getting sort of jaw-dropping performance in the 2012 to 2015 era.
Hon. Roper: That’s right. Yeah, it was at the end of 2015 when I was convinced we needed to pull this together into a pathfinder to prove that these same commercial algorithms could work for military missions. So I called the program Maven because – a maven is an expert – we wanted to show that these AI algorithms could be as good as our experts. And I took in a $50 million pitch to the DMAG, the Deputies Management Action Group, right? It’s the investment committee of the Pentagon. And there were crossed arms around the table. You could tell no one wanted to hear another automatic-target-something.
But Bob Work, to his credit, as the deputy said: I think we ought to try this. And he approved the program, which became, I think, our fastest transition in history, because someone I soon got connected with was General Jack Shanahan, who was in charge of the ISR task force and so many other things in that portfolio. And he believed before we had even really begun the program. Just our putting together the decision brief for the POM and getting approval created a big believer in Jack. And he took Maven to the next level. But that was the beginning of the journey for me – having to put together the first tutorial briefing on, you know, computer vision, and machine learning, and why it was different and not what we did in the ’90s, ’80s, and ’70s.
Mr. Allen: Right. Just to think about what we did in that timeframe, you know, we’d had algorithmic-based evaluation of sensor data, right? If you think about the type of algorithm that’s running on a Javelin. But that’s a handcrafted algorithm. Some human being typed in every line of code. And what makes the type of AI used in Maven interesting is that you’re not writing every line of code. You’re training a model based on training data. And for some things, not everything, you can get really big boosts in performance. And so that’s the sort of trend that you were interested in harnessing for the DOD.
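To make this distinction concrete, here is a minimal sketch in Python contrasting a handcrafted detector with a learned one. The features, thresholds, and training data are invented purely for illustration; this is not Maven’s actual pipeline.

```python
# Hypothetical illustration: a rule a human typed in line by line, versus a
# model whose decision rule is learned from labeled training data.

from sklearn.linear_model import LogisticRegression

# Handcrafted approach: an engineer encodes the decision rule directly.
def handcrafted_detector(pixel_brightness, blob_size):
    # Human-chosen thresholds, tuned by trial and error.
    return pixel_brightness > 0.7 and blob_size > 12

# Learned approach: the rule is induced from examples instead of typed in.
# Each row is (pixel_brightness, blob_size); labels mark "object of interest" vs not.
X_train = [[0.9, 20], [0.8, 15], [0.2, 3], [0.3, 30], [0.75, 14], [0.1, 2]]
y_train = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# The same question, answered by parameters an optimizer found rather than
# thresholds a person wrote down.
print(handcrafted_detector(0.85, 18))   # True
print(model.predict([[0.85, 18]])[0])   # learned decision for the same input
```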
Now this is so interesting to me because everybody thinks that Maven starts in April 2017, when then-Deputy Secretary of Defense Bob Work signed that memo creating the Algorithmic Warfare Cross-Functional Team. But that’s actually, like, the enlargement of Maven. The original birth of Maven, it sounds like, was at SCO, the Strategic Capabilities Office.
Hon. Roper: Yeah, where the name came from. And, I mean, the program that I pushed for was a small, $50 million pathfinder to prove something.
Mr. Allen: And at its peak, I think Maven was like $500 million a year, yeah.
Hon. Roper: Yeah, it really became a huge initiative with over 3,000 people supporting it across the different branches of the service. So it was – it was an important learning time for the DOD to not just say we’re going to do AI, but to learn how to do it. And I think a lot of what we discovered during that period is we weren’t ready for AI because we didn’t have the equivalent of the internet in the military. But no, it started small. Almost everything in SCO was classified, and still is. But by the time it was signed out by Bob Work for the second time, it wasn’t a pathfinder anymore. It was the path to operationalization. And that included the intelligence community and all of the enterprise that Jack managed in the ISR task force.
That’s when – that’s when the accelerator really got hit for Maven. Normally it would take me three years to transition something, if not more. But thanks to Jack’s leadership, the accelerant got thrown on, and then we ultimately ended up hitting the wall in that program because we discovered you got to do a lot of infrastructure things first before you get to the fun of AI. So it’s kind of like we were – we were wanting to read the Odyssey, but we hadn’t taken the time to learn Greek. Well, first things first. Fundamentals first. And we didn’t have them in the DOD at that time.
Mr. Allen: Yeah. That’s so interesting. So Lieutenant General Shanahan was my former boss when I worked at the Joint Artificial Intelligence Center, which is how you and I first met in my time at the DOD. And I’m really interested, because if you go back to Bob Work, you know, the aha moment for him was the Defense Science Board’s Summer Study on Autonomy. And he always says it was not the Summer Study on AI; it was the Summer Study on Autonomy. And he was interested in AI because of what it would enable in autonomy.
But you were interested in increasing the productivity of analysts. And so, I’m curious, you know. Did you see Maven as relevant to the autonomy story? Were you already working on autonomy, or was that something that came later and separately?
Hon. Roper: Well, they were ideas that were birthed at the same time, because we were – we had a set of three tenets for what we thought future warfare would be like. The first was that all the domains of war would blur. You would not have armies fighting armies. This was before joint all-domain anything, before, you know, combined whatever – you know, whatever the terms are, this was before them.
And if you see a lot of the early programs, like the one – I think it was Secretary Carter announced here at CSIS – was like trying to reprogram Army weapons so that they could go after ships. Well, this was viewed as pretty controversial at the time.
Mr. Allen: Because there was an era in the DOD where strategy was like, yeah, our army is going to beat their army, and our air force is going to beat their air force, and our navy is going to beat their navy. And that’s the strategy.
Hon. Roper: So we viewed – yeah, everyone’s going to fight everyone because there’s such an advantage to that.
The second thing, which is where autonomy came in, is that we couldn’t just go fight with really expensive, exquisite things, because peers were rising. They would be able to match us with similar capabilities. So we needed to have things we could lose. We didn’t really have those in the military; so attritable systems, expendable systems, that would need to be networked with high-end systems – airplanes, ships and ground vehicles. So you get the benefit of the high-tech military tech, but also the ability to have sacrificial pawns on the chessboard that you can lose to ultimately win.
And then, finally, the third tenet was that data would be a strategic resource. It’d be the lifeblood of the military. We would be using it as – almost like a kind of ammo, that we would need to have more than the other side and be able to collect it during the battle so that we could train autonomy and AI, which is really where dominance would come from.
And so you put these three things together, it pretty much has come true in the department. These tenets have been stable. We see many of them starting to come into being in Ukraine. And I think now the work that is being done in the department and needs to be accelerated is getting the infrastructure completely done so we can go build an internetized military that can operationalize AI at war-relevant speeds. And right now we can’t. But there’s a new playbook to write there, which I’m sure we’ll talk about.
Mr. Allen: That’s great. So we here at CSIS, I and a colleague named Isaac Goldston, recently published a paper really about the collaborative combat aircraft program, both its history and its future. There, in fact, is the cover sheet from it.
But what I really want to borrow from this paper is a graphic that we had that depicts a timeline of many of the key precursor programs from the collaborative combat aircraft. And you’ll know all these programs because you worked on most or all of them.
So if we could go to the chart. So the years on the bottom here start in fiscal year 2015, which is right around the time that Bob Work then was talking about the third offset. And the programs – now, the years that you see here are based on the DOD budget books. So some of these things started a little earlier, ended a little earlier, changed names, et cetera.
But I want to hear from you. So we were just talking about the need for that high-low mix. Not everything needs to be an exquisite system. And that leads me to want to talk to you about LCAAT, the Low-Cost Attritable Aircraft Technology program, which I think was mostly AFRL but is sort of the ancestor of Skyborg, which you touched both in SCO and then later in the Air Force.
So what were you up to during the LCAAT era?
Hon. Roper: A lot during that era.
Mr. Allen: (Laughs.)
Hon. Roper: So, you know, the program went into the budget in ’17. We started working on Low-Cost Attritable Aircraft ideas about a year and a half earlier. And what we were hoping to do was to riff off of an idea that had been floating around for a long time, which was the loyal wingman –
Mr. Allen: Wingman, yes. Yeah.
Hon. Roper: – which sounds great to say, but when you look at the future battlefield, it didn’t appear that needed. If you happened to have something that was low-cost and autonomous, you wouldn’t want it flying beside you. You wanted it ahead of you, taking on the risk and taking on the jobs that are too dangerous for a person to do. And, of course, if it’s cheaper than what they’re threatening, you’re now doing a cost-imposition strategy, which I always thought was a good idea.
So we saw, with what DARPA was doing and what AFRL was doing, the ability to pull those together and make an autonomous system that wasn’t flying near the piloted system. It’s flying ahead of it, and you could optimize it for a variety of different roles – doing early detection of threats or maybe being a weapons truck – and you would keep it as inexpensive as possible. And we definitely saw a need to grow the industrial base there.
We were telling industry: build something that is expensive enough to matter but not so expensive that I can’t afford to lose it. And then we viewed, with the investments that were being made in F-35 and Next Generation Air Dominance, that if you could connect to those attritable systems as information gatherers, as weapons carriers, you could do so much more with piloted systems quarterbacking them than you could with the human just being a pilot.
Mr. Allen: So I think this is really interesting and you used the phrase cost imposition strategy, and so folks who know your career know that you were in the missile defense world and usually we’re on the other side of that equation. Missile defense is an area where by nature the missiles that are taking out other missiles are almost always more sophisticated, more expensive, than the missiles they’re taking down.
I saw an analysis by Matt MacGregor and Pete Modigliani, for example, recently that said that in the Iranian missile attack against Israel, if it hadn’t been the case that half the Iranian missiles failed, they would have had a two-to-one cost-imposition ratio; and if they had used more Shaheds and had a better high-low mix, the Iranian cost imposition could have been as good as eight to one. So that’s what our adversaries have been trying to do to us, but you saw an opportunity for us to do it to them.
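For readers who want the arithmetic behind ratios like two-to-one or eight-to-one, here is a minimal sketch of a cost-exchange calculation. All dollar figures and salvo sizes below are hypothetical placeholders, not numbers from the MacGregor and Modigliani analysis.

```python
# Back-of-the-envelope cost-exchange arithmetic with made-up numbers.

def cost_exchange_ratio(attacker_cost_per_shot, shots,
                        defender_cost_per_intercept, intercepts):
    """Ratio of what the defender spends to what the attacker spends."""
    return (defender_cost_per_intercept * intercepts) / (attacker_cost_per_shot * shots)

# Hypothetical salvo: 100 missiles at $1M each, half fail on their own,
# the remaining 50 are intercepted at $2M per interceptor.
print(cost_exchange_ratio(1.0, 100, 2.0, 50))    # 1.0 -> roughly even exchange

# Swap in cheap one-way drones at $50K each, all of which must be intercepted,
# and the ratio swings hard against the defender.
print(cost_exchange_ratio(0.05, 100, 2.0, 100))  # 40.0 -> heavily favors the attacker
```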
Hon. Roper: Playing red is always better than trying to defend blue, and that was a common statement: in SCO we would think red. We would not try to solve the problems that an adversary was handing to us. Those were the problems they wanted us to have.
There was such a reflexive attitude in the department: there’s a new threat, let’s beat the threat. Well, that’s playing to their game. When you get faced with a new threat, the question should be: what can I do that gives me the ability to not just continue doing the mission but do it on my terms, on my playing board? And I found that that wasn’t thinking that happened anywhere in the Pentagon.
We built a budget thinking entirely blue. So the interest for me in programs like Avatar was that if we could make something inexpensive enough that we could afford to lose them, that force-multiplied all of our piloted fighters. And then we also introduced the ambiguity that that little thing on your radar screen – oh, adversary – maybe that’s an Avatar that’s sensor-loaded, or maybe it’s weapons-loaded, or maybe it’s actually not an Avatar but really one of our F-35s, and they all look the same.
But now you have to plan for the worst case scenario. That thinking red, thinking from their side, led to better solutions where we weren’t taking the bait and responding symmetrically. So we used the term asymmetric. We want to respond orthogonally. Not in the direction we’re being pulled –
Mr. Allen: Yeah.
Hon. Roper: – but a different direction. And, you know, I think Avatar was really controversial early on with the Air Force.
Mr. Allen: So I think – let’s return to our timeline here. So you at least in public were first talking about Avatar before Congress around the 2016 timeframe. So recognizing that many aspects of this program are classified, to the extent that you’re able to share what was the sort of original idea and how did it evolve over time? And if there’s also a relationship here with Skyborg that if you could disentangle that for our audience would be great.
Hon. Roper: Yeah. There’s a funny history here: originally the Avatar program was called Skyborg when we first started talking about it in 2016.
Mr. Allen: At SCO.
Hon. Roper: At SCO. Public affairs was not exactly comfortable with the name Skyborg, so we changed it to Avatar because, I don’t know – you know, cyborg – but we liked the idea that a cyborg does have a human component.
But Avatar was just as good. And the genesis of the idea was that it was no longer acceptable to co-locate sensors, shooters, and decision makers in the air anymore; that we were losing the OODA loop, to use the Air Force’s favorite phrase – observe, orient, decide, and act – but we were also losing the geometric game by having everything together. That diversifying in the air – similar to what the Navy necessarily did with distributed fires to defeat cruise missiles – would be needed, and that we didn’t want to do the distribution with just expensive things. We wanted to introduce attritable things, which we didn’t have in the Navy or the Army, but we created programs to do just that in SCO. That was the genesis. And then what became controversial was that if you start pushing these, let’s call them, attritable scouts and attritable weapons trucks forward, you get different air warfare strategies than you would with a solo platform where everything’s collocated. And anytime you start rocking the boat, where you’ve got to retrain, rethink, re-equip, the response of the bureaucracy is usually: this is a bad idea. Let’s not do it.
Mr. Allen: So, I mean, just to make sure I understand you correctly, it’s like you thought you were introducing an interesting capability, but the bureaucracy came back and said, actually, what you’re asking for is a different doctrine.
Hon. Roper: Yeah. This is a bad idea, Will.
Mr. Allen: (Laughs.)
Hon. Roper: This was always – (inaudible). It was a bad idea.
Mr. Allen: Yeah.
Hon. Roper: I was not well-liked initially by many senior officials in the Air Force, but had enough support that I was able to get the program started.
Mr. Allen: And when SCO began Avatar, was it both hardware and software? Only software? How did all that work?
Hon. Roper: All the things together. I mean, most SCO programs began with, we’ve got to build – we’ve got to raise the industrial base so that these can be made for a service. And we also have to prove the point, because otherwise why would a service POM for it? So we wanted to encourage focus from industry on these systems that are somewhere between a weapon and an airplane. More of a reusable weapon than an attritable airplane was our interest. The closer you got to an airplane, the less we liked them. We liked them when they were really cheap.
Mr. Allen: Because the LCAAT original cost point was $3 million per platform, I think.
Hon. Roper: Yeah. And we thought that you could improve over that.
Mr. Allen: Doing better than $3 million?
Hon. Roper: Well, we were just doing a laboratory program. And if you encourage real innovation in the industrial base – you know, we’re not asking for something that’s groundbreaking in aviation. We’re asking for something that’s more general-aviation class of costs, not military aviation. And then the ambiguity about what payloads it’s carrying, that’s part of the cost imposition we wanted to do: you don’t know what you’re fighting, because we’ve got a Swiss Army drone of options, and you’ve got to prepare for the worst case.
But it was controversial because it took away from, in some views, the mystique of the pilot and the white scarf and everything needed being in their brain, and really brought in that warfare is going to increasingly go digital. And AI is going to be making more and more decisions. And the OODA loop is going to eventually become a knot that’s so small there can’t be a human inside of it. So let’s go ahead and get on that bandwagon and build an aerial architecture that makes sense. And what made sense, if you’re fighting a data war, an algorithm war, is you need to have a class of aircraft that are perfectly suited to collect that data, hazardous though it be. And Avatar seemed perfect for that.
So it had both the hardware and the software. And our hope was that we would have all these attritable systems that a pilot in an F-35, or something else in the future, could simply connect with. And they could quarterback – kind of like “Ender’s Game.” You know, like, we’re going to quarterback the team. And the purpose of that play may be to shoot down an aircraft, or it may be to take out a ship. But it could be just to fly into harm’s way and collect data so that we can retrain the algorithms that are currently denied on the battlefield. We just saw a lot of utility.
Mr. Allen: And I should say, I’m skipping a little bit ahead to the CCA conversation which I mostly want to leave for that part of the conversation. But one thing I should say is there was a lot of skepticism in the pilot community. They’re, like, look, I’ve got to manage my own aircraft. It’s already a full-time job. And now you’re telling me I have to, like, be in charge of six to eight loyal wingmen? But what I’ve heard is that they’re already running simulator exercises with this and the pilots love it, which was, like, a delightful surprise for me.
Hon. Roper: There was a big moment where I was so proud of the Air Force on this. Because I was – now I’m not in SCO. I’m running acquisition for the Air Force.
Mr. Allen: Wait, I think this is really important to say, right? So you’re in SCO, you’re trying to support this Avatar program – which is controversial among the Air Force brass.
Hon. Roper: Some. Some very supportive.
Mr. Allen: You’ve got your opponents. And now you’re in charge of those opponents, because you’re the head of Acquisition, Technology, and Logistics. That worked out well, I should say.
Hon. Roper: It did. And the whole reason I was there was because Chief Goldfein introduced me to Heather Wilson. It was very helpful to have the enterprise know that such a well-respected chief as General Goldfein wanted some disruption to be brought in. But there was a – there was a moment where we were creating the Skyborg program the second time, right? So I reused the name.
Mr. Allen: Yeah, yeah. And now you don’t have the same communications people blocking you anymore, so you can finally go back to your favorite name.
Hon. Roper: I could go back. But it was clear we needed to have some focus on the brains, right? That we wouldn’t want that to be coupled to the aircraft. We’d want to be able to share it. And that was a program that we gave to the research lab. Well, that was also a little controversial. It was viewed negatively at that point not because it wasn’t needed, but because it was viewed that it might hurt pilot culture. And there was a meeting that I had with, I think, all the four-stars, and I had a lot of young pilots who wanted to come support the program. And there was a Raptor pilot there who was wanting to quit flying Raptors to come train AI to fly a jet. And of course, the four-stars looked and said, you’ve got the best job in the Air Force; you fly Raptors. Like, why do you want to go do this? And the pilot gave a great answer. I wish I could have that Hollywood moment recorded. He said: I joined the Air Force to do things that have never been done before. And it clicked, right? This is the new domain. We’ll still have human pilots for the foreseeable future. AI is going to be very dupable, and there will be a there there for having a human in the loop at least for the next chapter of history. But people join organizations like the Air Force and Space Force to break boundaries, and the service almost forgot that.
And so it really rebooted the initiative with tons of support, and almost every major command started their own AI-related thing, embracing that change would come but let’s be agents of that change, not, like, pulled along in the wake of someone else’s change forced upon us, especially if it’s an – (laughs) – if it’s an adversary like China that’s doing the forcing.
Mr. Allen: So, on the – on the one hand, there’s, like, a pretty consistent through line between Avatar and LCAAT and Skyborg. But I’m curious, you know, what was really accomplished in the Skyborg phase that sort of differentiated it from those two predecessors?
Hon. Roper: Really getting the software completed so the autonomy work could be shared across systems. And also, like, how the networking should go. So –
Mr. Allen: When you say “shared across systems,” you mean, like, this is not the F-16 autonomy; this is the Air Force autonomy software suite.
Hon. Roper: Correct. Yeah.
Mr. Allen: Yeah.
Hon. Roper: We thought that the attritable aircraft would be bought kind of like weapons, where, you know, you’d be buying them on a flowing production line and attriting them out of inventory with some surge capacity, and that there might be more than one class of system that was appealing. But you wouldn’t want to have your networking software and your collaboration software for swarming be something that was – that was bespoke to one class of vehicle. So I think, wisely, it was separated out, the team did a great job, and it transitioned. So it’s an example of actually getting something out of the labs and getting it into the field.
Mr. Allen: Yes.
Hon. Roper: But it’s because there was a really great team on it, and it’s because it got that endorsement as a vanguard program in the Air Force. It was a program we created so that the chief of staff of the Air Force could look at things going on in the lab and say: this is what we need. And Skyborg was one of those things. So if we did a little more prioritizing and saying what’s more important than other things, maybe more would transition.
And you know, transition was the thing that I really had to work the hardest on at SCO. There are a thousand reasons not to transition. But if you stay in sight and in mind and you do things that are needed to retire risk, you can transition across the valley of death. It’s just – it’s a labor. And I would say a labor of love, but sometimes it’s not a labor of love; it’s just a labor, and you’ve got to be willing to put in the work.
Mr. Allen: So I feel like this chart illustrates CCA beating the valley of death.
Hon. Roper: (Laughs.)
Mr. Allen: Because what we did is we looked at all the combined budgets of those CCA precursor programs.
Hon. Roper: That’s a neat graph.
Mr. Allen: And what you can see is that CCA is going to spend more over the next two years than all those predecessor programs spent over the preceding 10, and then it’s going to increase 5X after that.
Hon. Roper: (Laughs.)
Mr. Allen: So, to me, this is Skyborg, Avatar, LCAAT beating the valley of death. This is now a real program of record backed by real dollars. They’ve already got budget even for the .mil PF, so all the sort of surrounding bureaucratic and programmatic infrastructure is also, you know, being funded. They think this is the future of airpower in a big way. So congratulations. I feel like the chart sort of says your dream came true, at least one of your many dreams came true.
So I’m curious, you know, what’s your take on the Collaborative Combat Aircraft Program? Because when you left, it was still Skyborg, but you had already sort of set in motion some of the plans for NGAD. CCA is oftentimes called a component of NGAD. So I’m just curious; you know, what’s your take?
Hon. Roper: I mean, it’s needed.
Mr. Allen: (Laughs.)
Hon. Roper: It’s been needed since 2016, so I’m delighted it’s getting its chance to really scale. And there will be a lot of interesting work not just in the technology, but the doctrine and the training. It’s going to challenge a lot of the existing Air Force, and I’m certainly rooting them on.
And you know, I think the – you know, I think it’s wise to break it out and put a focus, because when I was still working with NGAD it was – the budget was slashed every year. We hadn’t even said anything about it. Like, are we even building airplanes? Like, we weren’t saying even that.
Mr. Allen: (Laughs.)
Hon. Roper: And that’s not wise when your budget’s being cut to not have anyone know what are you actually doing. And this was an important component. It was taking the idea of having a high-end aircraft that’s quarterbacking attritable aircraft, but rather than do it as a pickup game with what we had in inventory – which is what we were doing with Skyborg – the hope was to do it with things that were purpose-built for this paradigm, that could really unlock it, with real money – which is great to see – to encourage the industrial base to invest.
And then the place that we thought the analysis would get the most sensitive is on the price point, because we don’t typically design things in acquisition based on inflection points; we design things based on capability. It either passes or fails. But for something like a CCA, you’re really looking for an inflection point where, yes, you could put more money into the program, but you’re getting back a disproportionate amount of return in terms of value. You make them too expensive, they’re not really expendable anymore. If you make them too cheap, they’re not really credible.
Mr. Allen: Yeah.
Hon. Roper: So it’s finding the balance. And that’s a different thing than I’ve seen in a program.
And I think the thing that’ll be really important is how you make them, because this is another thing the acquisition system doesn’t do. It thinks about the process, the factory, after the thing has been designed, and often we don’t like the results. We end up with something that’s way too expensive, or has a bespoke supply chain, or unique tooling, or artisan craftsmen that you can’t scale. And for CCA to have the impact on the battlefield that was originally hoped for, it’s got to be made in a very different way than the high-touch labor, high-cost production of current defense programs.
So I almost feel like you’ve got to take the approach that Elon recommended, where he said the factory is the product. The factory is the product here, and then you build the best CCA for the inflection point with it. That is a 180. Maybe that is orthogonal to our acquisition process. But if you look –
Mr. Allen: So you’ve anticipated my transition point here. So what I’ve got here, this is the actual chart from Norm Augustine’s 1979 paper.
Hon. Roper: Oh, yeah, yeah, yeah, yeah, yeah, yeah.
Mr. Allen: This is the original study that bore out what is now commonly known as Augustine’s Law. And so you can see he has cost data in then-year dollars all the way back to the Wright Model A.
Hon. Roper: (Laughs.)
Mr. Allen: And what he shows is that for fighter aircraft in the United States Air Force, roughly the price goes up by tenfold every 20 years. And he wrote this in 1979. And at the time he made – or, a few years later he wrote a book, and in that book he made a very famous prediction which I’m just going to read the Norm Augustine quote, which I think is so good. This is in 1985: “In the year 2054, the entire defense budget will purchase just one aircraft. This aircraft will have to be shared by the Air Force and the Navy, three-and-a-half days each per week.” And then – so, as you can see, this is the unit cost of aircraft intersecting eventually with the entire Air Force budget in the year 2054, and then later with U.S. GDP. And that’s, obviously, not how you win wars, right, by just buying one aircraft.
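The trend Augustine identified is easy to reproduce: roughly tenfold unit-cost growth every 20 years is an exponential, and a short sketch shows how quickly it compounds. The base year and starting cost below are round numbers chosen only to illustrate the shape of the curve, not Augustine’s actual data points.

```python
# Illustrative extrapolation of "unit cost grows ~10x every 20 years."
# Starting point is a made-up round number, not a figure from Augustine's data.

def unit_cost_millions(year, base_year=1980, base_cost_millions=20.0):
    """Tenfold growth every 20 years: cost = base * 10 ** ((year - base_year) / 20)."""
    return base_cost_millions * 10 ** ((year - base_year) / 20)

for year in (1980, 2000, 2020, 2040, 2054):
    print(year, f"${unit_cost_millions(year):,.0f}M")
# 1980 -> $20M, 2000 -> $200M, 2020 -> $2,000M, 2040 -> $20,000M, ...
# Compounding like this is what eventually intersects the entire defense budget.
```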
One more thing: he made that prediction in 1985, and in 2015 he wrote a piece where he said: We are right on track. Nothing has changed.
Hon. Roper: (Laughs.)
Mr. Allen: And I think that is such a remarkable development.
So you see autonomous aircraft as also part of this attritable phenomenon, which hopefully, right, can break the cost-curve trajectory that we’re on that Norm Augustine identified all those years ago.
Hon. Roper: Well, I remember the first time I heard of Augustine’s Law, and laughing, and then thinking, oh, this is not funny because there’s some real truth to this. And I’ve got some different thoughts on this, especially since leaving the government, because building a system that’s less complex is a great step. Don’t make it harder than it needs to be. But what’s clicked for me about the spiraling cost of aviation are a couple of things.
The first – the first is on us. We only build new things generationally, so there is no compounding learning that we’ve gotten. And then when we decide to build something, we make them as complex as we can possibly dream of at the time because we know we won’t build another one, another –
Mr. Allen: Right? Catch the train before it leaves the station. Put all your fancy stuff on before it’s too late.
Hon. Roper: So you get a ton of complexity injected. And every year – I don’t know if you notice – the bureaucracy of the Pentagon for overseeing programs does not get smaller.
Mr. Allen: (Laughs.)
Hon. Roper: It gets worse, right? I think, like, hundreds of pages are added to the FAR every year. So the complexity’s going up and the bureaucracy is going up. But unlike the world of software, we don’t have a technology that manages complexity well in aviation.
If we were at Google right now, Google would be running 150 million unit tests to self-certify that it passes a whole bunch of different standards. So they get the benefit that they self-certify; that’s not going to be true in aviation. And they’ve been able to create a technical approach to taking amazing complexity and automating it. Well, we have complexities that are in addition to the world of software. We have military standards, and airworthiness, and things of that nature. Right now, those compliance steps are not unit tests of software. They’re in documents that organizations own. The process is bureaucracy, which is slow, cumbersome, and expensive.
And so since leaving I’ve realized that compliance is a thing that, if we could make it like Google’s for aircraft, then we could have whatever complexity’s needed to win the war, but we could squash what it costs us in terms of time and money by turning it into software, as opposed to turning it into documents that people read and ultimately sign a sheet of paper saying “you are compliant with this.” And I feel pretty strongly that if that happened – aside from not making things more complicated than they need to be, and not building them generationally – you now have a technical solution to start bending the cost curve down.
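As a rough illustration of the compliance-as-code idea, here is a minimal sketch in which a couple of pass/fail criteria are expressed as automated unit tests run against a design’s digital model. The field names and threshold values are invented; they are not real MIL-STD or airworthiness criteria.

```python
# Minimal sketch: each former "paragraph in a compliance document" becomes an
# automated pass/fail test against data from a design's digital model.
# All fields and thresholds below are hypothetical.

import unittest

# Stand-in for data pulled from a design's digital model.
DESIGN_MODEL = {
    "max_takeoff_weight_kg": 2800,
    "structural_load_factor": 6.2,
    "stall_speed_knots": 95,
}

class AirworthinessChecks(unittest.TestCase):
    def test_load_factor_margin(self):
        # Hypothetical requirement: structure must tolerate at least 6 g.
        self.assertGreaterEqual(DESIGN_MODEL["structural_load_factor"], 6.0)

    def test_stall_speed_limit(self):
        # Hypothetical requirement: stall speed no greater than 100 knots.
        self.assertLessEqual(DESIGN_MODEL["stall_speed_knots"], 100)

if __name__ == "__main__":
    # Run the whole suite on every design change, the way a CI pipeline would,
    # instead of re-reading documents after the design is frozen.
    unittest.main()
```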
Mr. Allen: So it’s interesting that you say, you know, bending the cost curve down, because DARPA did a really interesting study. It’s called the META study from 2010. And this is back in an era where people were saying, you know, complexity is the problem. It’s just that these aircraft are so complex. But they did a really interesting analysis where they looked at cost growth in other industries – in the automobile industry and in the computer industry. And what they showed is that, look, the aircraft today are unambiguously better than the aircraft of a long time ago. But cars are also better, and computer chips are also a lot better.
But what’s different between government aircraft, on the one hand, and cars and computer chips, on the other, is that the cars and chips are cheaper or the same price in inflation-adjusted terms as they were all those years ago, even though they’re so much more complex. And it’s not even an aerospace disease, because I actually at one point got cost data from Intelsat, which for a long time was the world’s largest commercial satellite operator. And I had what they paid for a satellite in 1985, 1995, 2005, 2015. And in inflation-adjusted terms, it was always between $225 and $270 million. But the performance of those satellites went through the roof. I mean, it was an exponential increase in performance, with no appreciable increase in cost.
And so there’s something going on in the government aerospace sector that is different than what’s going on in the rest of the American economy. And there’s a lot of reasons for that. But I do want to come back to, you know, what you just said, which is there’s this community that says, the way that we reduce cost is by reducing complexity. But there’s this other part of the American industrial base where those two do not have to go hand in hand.
Hon. Roper: Yeah. I believe – I saw the indications of this when I was the Air Force and Space Force acquisition exec, when I got introduced to Formula One racing. Because they were building race cars like software. And if they could do it for a physical car, then it was reasonable to believe that we could do the same for aircraft. Now they have some benefits on certification, that you self-certify many things and then get audited. So that’s a difference. Like, we’ve got a multiparty system whereas organizations like Google have a single party certification system. And then they had begun being able to automate compliances, which we had never done in aviation. And when we attempted to do that on key programs, it was really challenging for us because of the decentralized nature.
Mr. Allen: And I think Kessel Run has a lot of success in automating cybersecurity compliance, which you were intimately involved with.
Hon. Roper: Absolutely. I mean, we could – we could lift and shift industry’s playbook for software, and apply it to us, and just get it military certified for whatever. But we needed a completely different playbook if we were going to do this for physical systems. And we attempted it in several pathfinders and really found that the things that made us different from Formula One is that a Formula One team more or less owns all of its IP. It can put it all on one network. It doesn’t have levels of classification. It puts all of its tools there. And then they have the laborious process of trying to knit their tools together into an ecosystem that can represent that car digitally, even aspects of its certification process.
Contrast that with aviation. You have OEM and supplier relationships that are very complex in terms of the IP that they will or will not share. Then you have the government as the certifier, so you’ve got to throw information over the wall in order to get anything approved. And then add layers of classification, where data gets trapped and will never come back down. Radically different, with no technical solutions to make it easier. And then if you add to that the complexity of systems going up and the complexity of bureaucracy going up, it’s amazing we can make anything.
Mr. Allen: Yeah, right?
Hon. Roper: Yet, we do because of amazing men and women who get the job done no matter what.
Mr. Allen: But there also is this – you know, you talked about the existence proof of what goes on in Formula One. But there’s also this existence proof in the government aerospace sector, which is SpaceX. So NASA, because they foot the bill for most of the development of the Falcon 9 launch vehicle and the Dragon cargo capsule, they got to audit SpaceX’s financials. And there’s this fascinating analysis that the Office of the Chief Financial Officer at NASA released in 2011. And according to their audited financials of SpaceX, the entire development cost of everything, every dollar that SpaceX paid for the Falcon 1 launch vehicle and the Falcon 9 launch vehicle through first launch, was only $400 million.
And then folks said, well, we have cost-estimating tools. We have NAFCOM, which both NASA and the Air Force use to estimate the development cost of a vehicle. And they asked: what would our models tell us it should cost to develop this launch vehicle that already exists in the real world and only cost $400 million? And their model said it should cost $4 billion. So we’re talking literally a 90 percent reduction in cost that is not something like fantasy that we’re dreaming about. It’s right there. This exists. This can be done.
And I kind of always wondered, how did anything not change immediately in the DOD? How does not every government aerospace program immediately sort of say, like, what is the postmortem of this success story? How do we do that for our efforts?
Hon. Roper: Oh, we could go a whole ‘nother livestream on that. There’s a lot to be said, but the gist is: no matter how much it talks about taking risk, the government is not a risk-embracing culture. And what made SpaceX and many other innovative companies so much faster, so much cheaper, is that they got to credible feedback data earlier, which helped them improve things, and ultimately took time out.
If you think about, like, a DOD program, it assumes success. That’s the dumbest thing you can assume when you’re creating something new. You should assume failure and have a process that can rapidly incorporate, learn and improve.
So that was the thing that never got into the DNA. I tried the best I could to take risk and encourage it. But you felt in the woodwork around you in the Pentagon like somehow, in the very DNA of that building, was this – you know, this belief that taking risk ends careers.
Secondly, there’s urgency there when you’re running a business. You get competitive forces outside the government that are difficult to recreate inside.
And thirdly, probably not as widely known, SpaceX did a lot of work investing in itself, in automating its own internal processes, so that there were people who were working for the customer, but there were also people in SpaceX working for SpaceX to make the entire enterprise better together. In fact, the programmers who were in SpaceX, working for SpaceX, were among the most talented they had, because everything they did made everyone better, like a tide raising all boats.
That was a big inspiration for me, because when I saw what they were doing, it led me to believe that we could ultimately have a lot of what happens in the bureaucracy – checking mil standards, checking integration, checking a thousand things – that we could eventually turn that into automated software.
And one of the fun things about being outside the government now is I can finish things a lot easier, in some cases, than I could inside; and I’m now attempting, with the Air Force and Lockheed Martin Skunk Works, to pass an airworthiness assessment digitally, which is an example of a compliance step where you’d normally deliver a document that represents some kind of aircraft system. And then some human will take it and read it and then determine whether you pass. In the Google world, that would be done in, like, a fraction of a second, because every single thing in the document would have a test associated with it. So the –
Mr. Allen: I think we have to stop here, because for anybody in the audience who doesn’t understand how big of a deal an airworthiness assessment is, I mean, the traditional paradigm in the DOD software universe is that once a system passes an airworthiness certification – which was such a painful and lengthy process – you’re literally never allowed to change anything ever again, because anything you change would violate the airworthiness certification.
And so you get this sort of moment where literally, when we’re building certain planes, they are taking semiconductors out of argon-filled bags because they don’t make those chips anymore, so they bought them in the ‘80s or whenever and, like, it’s running the same software they wrote in the ‘80s because that’s when we got our airworthiness certification.
So what you’re talking about, the ability to pass an airworthiness certification many, many, many times because of, you know, digital modeling and simulation and digital engineering, that would make everyone’s life so, so much better.
Hon. Roper: And bring in more innovation. The paradigm you discussed, which is real – we used to – especially in a lot of the critical classified programs, we hadn’t brought a new technology in in a decade for fear of busting certification. So your point, spot on, is that the way the process is now, it discourages any innovation, right? You’re living in the past because you fear the cert.
Now, go to the world of Google. They have a paradigm where they change every day, every hour. And it doesn’t add any additional cost, you know, or time to their system because they’ve built – they’ve built an approach that turns the complexity of cert into automation. And that was something that I saw early on in SpaceX, the benefit to the enterprise if you’re trying to automate what the enterprise does. And if that eventually happens for, say, all the mil standards and interface standards and IEEE standards and NIST standards and security standards, well, all those are documents where we write down this is the pass/fail criteria.
So, of course, it could all be software. What happens when the DOD is running 150 million unit tests per day that tell someone designing an airplane that doesn’t even exist yet how likely it is to pass an airworthiness assessment, and when – and everything else that you would normally get years later in the bureaucracy from human APIs?
Mr. Allen: So now we have to – now we have to bring this conversation back to CCA. So Air Force Secretary Frank Kendall has said during his recent congressional testimony that he thinks the cost of a CCA could be ballpark one-third that of an F-35. The stated flyaway cost of an F-35 is between $80 million and $100 million, so we’re talking something like $25 million to $30 million per CCA aircraft.
The LCAAT Program, which you were a part of, was originally anticipating a unit cost of $3 million. And, as you said, there is this risk of over-engineering these systems or being too attracted to sort of exquisite performance when really what we need is numbers of aircraft with good-enough performance.
And I’m just curious, you know, what do you – do you have any concerns about the way that the CCA program might evolve?
Hon. Roper: Well, I’m – I mean, I’m not inside the program so I leave it, you know, to those who are running it. I’m glad that there’s emphasis on this. It’s nice to see a service going a different direction not just with its laboratory investments but with real money, and there’s just a propensity that you always have to worry about in DOD which is requirements creep and cost creep and thinking about the product before the process.
And when I’m with industries outside of defense, they think about the process first. If the process isn’t scalable, if it doesn’t produce rate and cost that you like, then you’ve got no go-to-market, right? You’ve just built something that theoretically exists. And I think that’s where the DOD is ripe to try something different, where the process is designed before the product – meaning, if we’re going to need a lot of these systems, then the way they’re built, the supply chain they use, the tooling they use, how automated all of this is, whether I need touch labor that takes five years to train or can have robots put it together – all of that will go to the bottom line of cost and also rate, because we’ll need that in a war. That’s a lesson from Ukraine, right, is that rate matters.
Mr. Allen: Yeah. I should talk about some of the analysis of my colleagues who – you know, I’m not a great wargamer by any means, but some of my colleagues in the CSIS building are some of the best wargamers in the world. And in some of the analysis that we’ve done of a potential conflict between the United States and China in a Taiwan Strait scenario, for a lot of the munitions that we know and love, we’re running out of those in a week – like, the first week of the conflict.
And similarly, you know, the aircraft that we’re talking about right now, the F-35, well, that industrial base is sized to produce 150 aircraft per year. So if we’re in a shooting war with China or with Russia and taking real losses we’re many years away from being able to replace that. You contrast that with, for example, the World War II industrial base which built I think it was 96,000 aircraft in 1944 alone.
Hon. Roper: Oh. Yeah.
Mr. Allen: Yeah. So we’ve really moved – you know, on Augustine’s Law, we’re going down the expensive, exquisite curve, but we’re also going down to these really low production volumes.
Hon. Roper: Well –
Mr. Allen: In the NGAD, they’re talking about only 200 for the penetrate and strike component of NGAD.
Hon. Roper: I mean, your World War II analogy is worth pulling on. Why were they able to hit that scale? They had the factories and the workforces and the tooling at the beginning of the war.
Mr. Allen: Literally, Ford Motor Company becomes Ford Aerospace. Yeah.
Hon. Roper: So you design for that factory, because designing a better airplane that you can’t build in that factory has no value in a war. And so I really think the factory is the product now, and that we’re going to see one class of military programs that does focus on product – because there’s a unique advantage that we think could be ours, and we’ll invest in that and give up rate and cost to achieve it.
But increasingly, for weapons and for drones of all kinds, I think the factory is going to be superlative to the things it makes, and programs in the future will design for factories, and the big decision is whether to modernize. And the thing that you’ll wargame in the future is: can we go to war with one production site, or do we have to have more than one, for a variety of reasons? And that thinking is alive and well in industry. It’s what made Formula One amazing – their process is how they win the season. They can’t win with a single race car. No team can. You have to build a whole bunch of different race cars. Process wins the season, not the product. But that thinking has not become operative in a defense program. And it should. It’s good thinking for what we’ll need in a conflict.
Mr. Allen: Yeah. So you’re talking about this digital engineering paradigm. You’ve been inspired by what goes on in Formula One. You’ve been inspired by what goes on in competitive sailing, which, you know, for folks who don’t know, you think sailing is, like, an old-timey sport, but it’s ridiculous how much technology is involved in creating these boats, designing these boats, changing these boats. So I’m curious, you know, how does that relate to cost? And I want to read a quote from Dr. Doug Meador, who served as deputy program manager of AFRL’s LCAAT Program. So this is the early days of thinking about attritable aircraft.
And he stated, quote, “Traditional Air Force fighter cost models, if we use them in their current state, would actually make these aircraft very expensive. If we’re going to bend the cost curve we have to bend the cost model with it. One of the best predictors of aircraft cost is weight. And with the current cost models we have, we’re essentially restricted to the only path to cheaper is to get lighter.” So that just seems like we’re starting in a really bad place, where we’re sort of saying, OK, the F-35 costs this many dollars per pound, and this aircraft is going to cost that many pounds, and so that’s how we’re going to size this program.
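For context, the weight-based logic Dr. Meador describes reduces to a simple parametric relationship, sketched below with a placeholder dollars-per-pound factor rather than any real Air Force cost-estimating relationship.

```python
# The weight-driven parametric estimate reduced to its essence.
# The dollars-per-pound factor and the example weight are invented placeholders.

def parametric_cost_estimate(empty_weight_lb, dollars_per_lb=2000.0):
    """Classic parametric logic: cost scales with weight, so the only lever is 'get lighter'."""
    return empty_weight_lb * dollars_per_lb

# Under this logic, a hypothetical 6,000 lb attritable airframe "must" cost $12M,
# regardless of how it is actually designed, sourced, or manufactured.
print(parametric_cost_estimate(6_000))
```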
But, you know, as SpaceX has demonstrated, as Formula One has demonstrated, there’s this other universe of the relationship between engineering, and cost prediction, and cost real world outcomes. And so I’m curious sort of, you know, what you see as the art of the possible here, if we’re going to move past these older cost modeling approaches.
Hon. Roper: Yeah, this has come up many times – like, when you’re getting a cost estimate that’s based on the past, does that mean you’ll never have a breakthrough, right? (Laughter.) That’s basically what the world tells you. And, of course, we will have breakthroughs. And Formula One shows you an example. Of course, they didn’t decide to digitize everything because, oh, it’s innovation and that’s a buzzword. It’s what they have to do to win. It’s not digital engineering to them. It’s just engineering. And if you don’t do it, you don’t make it to race day.
So they’re still a really good sight picture for the DOD. Teams are building over 1,000 digital twins per race. They design for every contingency possible. They wait to the last moment to pick the digital twin that becomes the physical twin. It’s the best fit for race day. And then once they make it, it’s instrumented and feeds data back.
Mr. Allen: And I think it’s worth pointing out, right, we’ve had modeling and simulation. We’ve had computer-based systems engineering for decades. But this is, like, something – this is a step change, right?
Hon. Roper: Yeah, same change from the ATR programs of the ‘70s, ‘80s, ‘90s to computer vision and AI today. There was a step change that happened because computers have gotten faster. And because they’ve gotten faster and we produce a lot more data, models and simulations have gotten better. Most of the time you pivot to AI when you talk about fast compute and lots of data. AI is the punch line. Models and sims are also the punch line. They’ve gotten better to the point that they’re realistic, they’re decisionable. In Formula One they are. Any of those 1,000 digital twins could be the race day car. And when they have to make the race day car, it’s feeding back a lot of data to the digital twin, which is simulating faster than real time, and feeding race day strategies back. So the digital and physical coevolve.
Well, that seems like a great paradigm to have for airplanes, and drone ships, and everything else. So I still think it’s the right picture. The thing that’s going to be the challenge for the DOD is that Formula One can centralize its infrastructure, and we can’t. That’s the problem I’ve been trying to solve in the private sector: we’re going to need a different approach to creating that digital magic. But if we do, we should expect the exact same digital magic.
Mr. Allen: When you say that it’s different in the DOD, does this relate to intellectual property ownership? Or does it relate to who’s performing what task? I mean, one of the things that I always thought was really interesting is that in the DevSecOps revolution, which you obviously touched being involved in so many of the Air Force software modernization initiatives, I was really shocked to learn that companies like Target, you know, the mass market retailer, have massive software engineering workforces. Because they sort of recognize that if we’re going to pull this off, we need to be really good at software even just to be good at retail.
And it’s not only because of e-commerce. So many other companies that you don’t think of as software companies – like JPMorgan, the bank, for example – now have massive software engineering workforces. And that’s just because that’s what awesome performance demands. And so I’m curious, you know, when you think about the challenges that the DOD faces in replicating this Formula One model that you’re so excited about, how much of that is IP? How much of that is workforce? You know, what’s the solution?
Hon. Roper: So it’s – you know, every company is a software company if you’re outside of aerospace. We’re the ones trying to catch up. And you’re exactly right. The issue is that even if we invested in those software developers, the way that we make decisions with data is by aggregating it together on a network. And that’s really challenging. If it’s two sources of IP, I got to have lots of contractual protections and, like, IT protections, because I’m at risk. And then when you add classification, oh, that has its own challenges. And then you also add the builder is not the certifier, so there has to be this trusted handshake. It looks nothing like Formula One.
The only way we’ve been able to solve that is whatever we’re collaborating on, I’ll make a copy of what I’m working on, you make a copy, we’ll exchange, we’ll open up each other’s copies, and we’ll check things. So unlike the world of Target, or Formula One, we live in a world where humans are the APIs and documents are the compliance steps. And the thing I’m trying to solve is getting the humans out, where data can go digital to digital, even across boundaries, and where we can replace the documents with automated unit tests, just like Google. And if we can do that, we won’t be – we’ll just be bringing the Formula One paradigm, but now we can do it in a decentralized way that doesn’t require making all these copies, which is risky from a cyber perspective. It’s risky legally. And it’s also just slow, because humans are the interstitial tissue.
And finally, we could just go digital to digital. And we should expect exactly the same Formula One-like results if we start gearing programs around this kind of innovation. And I think it’s going to be absolutely necessary to operationalize AI. That paradigm where the digital and physical twin are connected, you can imagine that will be needed for CCA, the collaborative combat aircraft. You’ve got all these CCAs that are being attrited. They’re sending data back, right? We’re each trying to spoof or jam each other’s AI – like, we’re actually doing algorithmic warfare, where there are new techniques like jamming AI, or spoofing AI, or algorithmic camouflage. I think all these things will be real. And each side is going to learn what the other is doing and retrain to overcome it.
Well, without the digital and physical conjoined, how will we do that? We may have to do that, like, on a minute-by-minute basis in a far future. And our system’s completely not ready for that. I’d like to just see us be able to do a new software turn in a day. And if you look at the war in Ukraine, and I’ve been there several times now, their cycle times are a couple of weeks. Could we even attempt that in DOD? So aside from the process mattering, you need a metric to know: Is your process good? And alongside rate and cost, cycle time would be the third. Give me rate, cost, and cycle time and, without knowing what you’re building, I probably know if you’re in a winning position or not.
Mr. Allen: So in the past, you know, you’ve said that you were one of the earliest folks in the DOD who was given the job of the China fight. And now that is the organizing logic for so much of the DOD. And I want to sort of give the scary case here, because, you know, if you go back to 2010, I remember the line about China was: They can’t innovate, they can only copy. And then you fast forward, you know, 10 years, and you’ve got folks like Mark Zuckerberg saying, hey, what we really want to do is copy WeChat, this Chinese social media app that has all these innovative features. Fast forward a bit more, and you’ve got the CEO of Ford saying our goal is to, like, produce to Chinese standards of quality when it comes to automotive. And at the most recent Beijing Auto Show, they showed some really impressive autonomy capabilities.
And so I think when you – when you talk about offsets, there was this theory of, OK, stealth is a really lovely kind of offset technology because it’s incredibly hard and we’ve got a lot more money and we’ve got a lot more Ph.D.s, and so we’re going to be better suited to adopting this sort of technology. And that is the nature of the competition. But if the nature of the competition shifts towards larger numbers of systems, and China is really good at building large numbers of systems, and now – as we’re seeing in the automotive sector – that might even extend to large numbers of high-performing, digitally enabled systems, what do you see as the logical choice for the United States when we’re thinking about how we’re going to maintain a competitive edge? Because there’s no room for complacency anymore, right? Like, we could blow this, I think, is a very reasonable interpretation of events. We’ve got to do the right things to win. So how do you see that playing out?
Hon. Roper: Well, we need greater leadership on generative AI. And what a turnaround from the early days of Maven, where we were behind on facial recognition and we wondered how in the world we would ever match this, how big of a deal it was. Fast forward to today and we’ve got capabilities coming out of U.S. companies that are changing the world, and the chance to drive the standards. So that’s thing one. We’re not going to drive that in defense, but, boy, defense can certainly help on safe use cases and adoption that could really help commercialization of the technology.
And then, from a hardware perspective, we’ve got to kill the middle of the bathtub, with the bathtub defined in terms of cost. I think really low-cost systems that you can make a lot of, where you’ve got, you know, high rate, low cost, and fast cycle times, those are going to be really appealing in future warfare because you can put software on them that you develop once and amortize across all the systems. That’ll include networking and swarming capabilities, algorithms. That will make a lot of sense for warfare.
And on the other end of the bathtub, I mean, we do have a stealth advantage. Let’s keep it, right? It doesn’t make sense to throw that out.
In the middle is what we’ll hate. That’s the ugly part of the bathtub. It’s not exquisite enough to be interesting and it’s not low enough in cost to be expendable. Now, if you put the two ends of the bathtub together, oh, my goodness, that seems great. Rather than just hide, which we do with stealth, now I’m creating this amazingly robust clutter with my low-cost systems. And now any one of those things on your radar, if you’re an opponent, may not be a low-cost system. It may be one of my exquisite things just about to pull the trigger.
So I love that future. But what I think will end up being the Achilles heel is that it’ll be easy for both sides to move to the middle of the bathtub, where the exquisite thing is not so good as to be a game changer, or the low-cost thing is too expensive to be attritable. And so that would be an area I’d watch.
Mr. Allen: You know, I love that, because going to the middle is often a political compromise, and in a bureaucracy where bureaucratic politics are intense, or in a Congress where politics are intense, you know, compromise is often attractive. But if you have to choose between, you know, building your house on one side of a street or building it on the other side of the street, the worst thing you can do is compromise and build your house in the middle of the street.
And so there are some times where really you have to acknowledge the tradeoff and make the bets on the sides of the equation that actually make sense, which is hard in a political system.
Hon. Roper: It is. I think that’s why, with the bathtub, you want to see a whole lot at one end, a whole lot at the other, and then very little in the middle. I think, for the low-cost things, if you can’t get the focus to design the factory and the process first, you’re not going to magically get low cost – it hasn’t happened in any program.
So we need to do what Elon did with gigafactories and bring in a lot of automation, turnkey manufacturing approaches, really look at supply chain before we finalize the design to make sure that we’ve got a robust global supply chain, and that means we’re going to give up performance. I can intentionally give up performance to have a factory that makes sense, a process that makes sense.
Now, on the other side of the equation, those are going to be the unique military breakthroughs that we’ve made that we think could be an enduring advantage, maybe a hardware advantage. And they’re going to cost us a lot, because we won’t be able to have a lot of them due to industrial-base size. They’re probably going to be classified, so we’ll only have a handful of companies working on them.
But the fact that we are working on them and could make one of those breakthroughs makes every blip on the radar screen even scarier, because you know we’ve got the secretive things we’re bringing. And then for the things that aren’t secretive, the drones, the secret sauce is the software. It’s the collaboration. It’s the AI. And you don’t understand that either. And these two things can work together. Figure it out, right?
It’s going to put the OODA loop on steroids. It’s going to be everywhere. It’s going to be tighter than humans. And it’s not going to be networked in a single platform anymore, where a person’s looking at the radar screen, determining is it a threat, and deciding to pull the trigger. It’s going to be happening in a decentralized way. And whether we have accountability over those lethal decisions, that’s going to be scrutinized.
There’s a lot of fundamental work to be done. And the time to talk about it is over. China has the same ideas. They have focus. They have patience. We don’t. So what we’ve got to bring is urgency and the sustainable forces of markets, which is our strength.
The one thing that China can’t point to is generation after generation of picking winners. But we can point to our markets, our capital economies, venture capital, tech ecosystems and their ability to grow great companies. We can point to generation after generation of great companies that have changed the world. That’s the strength we should play to.
And for the low-cost, attritable, software-enabled systems, the next generation of companies may be growing right now, coming in. And if the Pentagon has the wherewithal to encourage it, then who knows? We may reset the table one more time with China and get another generation of deterrence.
Mr. Allen: Well, Dr. Will Roper, I can’t emphasize enough what a pleasure it has been for me, as somebody who has admired your career for such a long time and seen how you’ve played a part in so many different – I mean, it’s not that long, but – eras of DOD AI and autonomy. And what’s kind of remarkable is you’re still so young. So you’re going to play a big part in the future as well for a long while to come.
Hon. Roper: Oh, thanks. I really appreciate the hour. And what great graphics – just a synthesis of all this. So I’m glad that you are researching and studying. Please push the U.S. government on these ideas. We need organizations like CSIS that think independently, that are there administration after administration. And if I got anything done in government, it’s because I had really great people on the team, and we had the right ideas, and we had the energy to take risk in the bureaucracy. So I wish everyone serving today the best of luck and the greatest of speed, and for anyone serving in the future, the same.
Mr. Allen: Thank you so much. This concludes our event on “The Past, Present, and Future of DOD AI and Autonomy.” Thanks so much for watching and listening. And for more of our analysis on all of these issues, go to CSIS.org.
(END.)