AI and Advanced Technologies in the Fight: Combatant Command and Service Collaboration
This transcript is from a CSIS event hosted on September 13, 2024.
Gregory C. Allen: Good afternoon. I’m Gregory C. Allen, the director of the Wadhwani Center for AI and Advanced Technologies here at the Center for Strategic and International Studies.
This afternoon, we've got an extraordinary event that gets to the heart of how AI and advanced technologies are actually adopted – really, where the rubber meets the road in the Department of Defense. And that is around the collaboration that takes place between combatant commands, which have the operational responsibilities of actually fighting wars, and the services, which have the responsibility of manning, training, and equipping to provide the capabilities that they need, as well as supporting the sustainment of those capabilities.
And in order to have that conversation, we have three of the best possible people you could have across the entire Department of Defense. So to my immediate left, we have Schuyler Moore, who is the chief technology officer of U.S. Central Command, or CENTCOM, which has been an absolute leader in many of these adoption initiatives. Then we have Justin Fanelli, who serves as the chief technology officer of the Department of the Navy, and also the technical director of the Program Executive Office for Digital and Enterprise Services. And then we have Dr. Alex Miller, who is the chief technology officer for the chief of staff of the Army. Thank you so much for all being here today.
Before we launch into the conversation overall, I kind of want to focus on the fact that we now have three chief technology officers. If you were to travel back in time to the Department of Defense in the 1990s, I'm not sure you would find a single individual who had the title of chief technology officer, although perhaps I'm wrong. So it is a relatively new position, although it's, of course, a familiar one in the commercial technology industry. So I'd like to understand a little bit about what your role is in your organizations and, given that the position now exists, what sorts of problems you are working on solving. So if we could start with you, Sky.
Schuyler Moore: Absolutely. So I've been in the role of CTO for almost exactly two years now. And it was the first time that CENTCOM has had a CTO. And I've shaped it in a particular way, but I think it's worthwhile noting that CTO roles across different organizations in the department can often take really different forms, because CTO in government tends to mean something that's a little bit different from industry. In industry you're typically looking at one particular technology, and you're expected to have depth there and to execute on driving that particular technology forward. For us, for each of these individuals, I could list dozens of technologies that we are partnering on, that we are working on efforts to advance. But it means that the role takes a little bit more of an amorphous form depending on what the command needs.
So for CENTCOM, the way that I've interpreted it and the way that I execute the CTO role is in three parts. The first is advising the commander, making sure that he has the information to make the best decisions about what types of technologies to bring forward, what technologies to at least start validating and integrating into our experimentation, and where investments might need to sit.
I also serve the role of facilitation. There are already a ton of teams doing good work across CENTCOM. It does not need to be that only new efforts are coming out of my office. In many ways, I am best equipped and best positioned to put jetpacks on people who are already doing awesome work. And so facilitation plays a huge role in what I do.
Mr. Allen: And in most cases it’s not an actual jetpack, although we have that technology.
Ms. Moore: Not – without a doubt it’s not, as much as I would love it to be.
Mr. Allen: Yeah.
Ms. Moore: But the last one is pathfinding. So, whether they're new technologies, whether they're new processes, or different teams that we're working with, figuring out the early friction points of how to work with those and lifting that burden from the rest of the command is also an important role that we play. And then ultimately transitioning; transition is always the primary goal. I am not the user. My team is not the user. We are always aiming to transition something into an operational use case and to the user who would own it.
Mr. Allen: Amazing. Let’s get the Navy perspective.
Justin Fanelli: Sure. So within the Department of the Navy, the CTO reports to the CIO. And we recently put out the Information Superiority Vision 2.0. This is how we use data to improve every aspect of operations within the Department of the Navy – Navy and Marine Corps.
The tech-director hat is an interesting one. So I've worn both of these hats for about 16 months now; the CTO role is more strategy, the Program Executive Office more execution. And so we have a program that we put out about two years ago called Strategy through Execution. And so, to Sky's point, how can we allow leapfrogs to come forward faster – from an emerging-tech perspective, from a doing-things-differently perspective – and how can we eliminate friction? So it's both about optimizing and about allowing.
A lot of that great work is happening at the edges. We talk to Marines in the field. We talk to Navy fleet users on a regular basis. And if they're doing something awesome or if they have a sticky problem, we want to use that to amplify what we're working on, connect dots, find doers, and allow more learning by doing to happen – and promulgate that into acquisitions so that we can get those scaled solutions out soonest and change the way that we're operating where necessary.
Mr. Allen: Fabulous.
And Dr. Miller, the Army.
Dr. Alex Miller: Thank you.
So, again, I am the chief technology officer for the chief of staff of the Army, General George. And it is an emergent position. And by that I mean the chief said I need somebody who can advocate for our users. And really that’s soldiers. Get out there and make sure that the front of the force has a voice. Identify friction in the process. And I mean the capital-p process for how do we require, how do we acquire, and then how do we provide feedback. And then, third, work across the Army in its entirety.
So generally, technologists are bound to some part of it, whether it's a lab or part of the acquisition corps, or even in units themselves. But the idea is really having somebody who can walk from the chief's office into the secretariat and say, here's what we're thinking – I want to make sure that everybody's involved in the process.
Unfortunately, that is not the norm. So being able to work with my partners at both the secretary and undersecretary level, being able to go directly to our acquisition leads and say, here's what we're thinking, here's what we're seeing, here's what our servicemembers – either downrange in CENTCOM or otherwise decisively engaged, who don't have the time or bandwidth to provide feedback – are running into. So it's really, unfortunately, less about the technology itself and more about being a facilitator for how we get that technology out there.
Mr. Allen: Great. And now, of course, you’re in the CIO organization.
Sky, you were previously the chief strategy officer of Task Force 59 in CENTCOM, which was really about accelerated adoption of data-driven technologies and AI-driven technologies. And a lot of the stuff that you mentioned was software-enabled as well. So is it fair to say that sort of digital, data, and artificial-intelligence initiatives are really a priority for all three of your roles as chief technology officers? Because I'm thinking about the fact that the undersecretary of defense for research and engineering – the website for that organization is CTO.mil. But if you go there, you'll find everything from material science to electronic warfare to rocketry, as well as sort of digital technologies.
But in your positions, as you interpret it, is it as broad as that portfolio, or is it really more focused around digital-enabled technologies?
Ms. Moore: Ours is more narrow. And the definition – so our comparison internally is that we have a J-8 that's focused on resourcing, but also science and technology. And the distinction that we've made between our portfolios is that mine is very specifically looking at close-in opportunities, really inside of 12-month execution. If it's something exceptional, we'll wait 24 (months). But that 12-month execution to us is an important piece to address for a combatant command, because we do have problems that arise inside of the time horizon of normal engineering development and then acquisition and sustainment. So that tends towards the direction of software tools because, in a lot of cases, that is what can be delivered inside of 12 months, but not always.
The example that you gave of Task Force 59 is a really good example of hardware, where we have a lot of unmanned surface vessels out with the fleet right now that are being tested or used in operations. And it's because we identified that there were commercial solutions that could be applied to a problem set we had right now. And that wasn't, again, a we-need-to-wait-five-years-for-it-to-mature, we-need-to-wait-10-years situation. Right now, it can solve a problem for us. So that fell into the bucket of near-term opportunity for technology that happened to be hardware-flavored.
Mr. Allen: Really interesting. Anything either of you would add?
Mr. Fanelli: Yeah. I would say the nature of the federal government and the nature of DOD has gone more digital. So that’s certainly a focus here.
Mr. Allen: Well, it’s trying. (Laughs.)
Mr. Fanelli: The nature and intention of both are trending in that direction.
Mr. Allen: Yes.
Mr. Fanelli: And so we, I think all three of us, long to be and long to connect with folks who are enablers of that and are expediting that. And so, to Sky's point, I think generally in three horizons. One is from an operations perspective. If there's anything that needs fixing, this is the activity. This is execution. This is the tyranny of the day. There is always room for improvement there, but sometimes the window of opportunity opens wider than at other times. And I would say that this is one of those times.
So we have a second horizon for piloting, and then a third horizon to work on some of those external activities or connect with external partners, like R&E, from a material advancement perspective. And, like other critical technologies that probably aren’t ready, we want to be plugged in earlier. We’re taking active steps to make sure that they understand our intentions. And then we’re kind of, like, matriculating that down the field, while we’re sprinting on our piloting activities so that we can get that into horizon one or production as quickly as possible.
Dr. Miller: The Army is a big place and its portfolio is very broad. So, yes, I focus a lot on information technology. And I focus a lot on data-driven technologies. But we're really focused on UAS, unmanned systems, counter-unmanned systems, electromagnetic warfare, command and control, additive and advanced manufacturing, and working across the breadth of those things. I would love to very much focus on one thing, but we don't have that luxury. I don't have that luxury. The Army therefore also has things like Army Futures Command, which is looking at epochs five, 10 years down the road.
We also have – I have a great partner, Chris Manning. He is the deputy assistant secretary of the Army for research and technology. And under our acquisition lead, he owns sort of the S&T, the science and technology, portfolio, looking at things that are maybe 10 or 15 years down the road: fundamental new sciences, application of new sciences to existing and new problems. So I don't go across everything, but the portfolio is much broader than just the ones and zeros, even though most things are driven by those ones and zeros.
Mr. Allen: Yeah, that really helpfully clarifies things. Now, you know, the CTO organizations are new, but the problem set of how do you get combatant commands and services to effectively work together, that’s an old problem. And so, before we get into some of the specific initiatives that you all have been collaborating on to sort of make this happen, I want to talk about sort of, like, what is your vision for the relationship between services and combatant commands in these, sort of, you know, new technology-enabled, technology-driven domains? Especially when you’re thinking about rapid time frames.
Dr. Miller, if we keep going with you.
Dr. Miller: Oh yeah. No, that’s a super easy problem. (Laughter.) No, one of the – one of the most interesting things that I saw when I was in Afghanistan, and I think my partners here would share that, is we did things a lot faster deployed. We always did, because the – we were closer to the problem, we could explain the problem much more rapidly, and then people were much more driven to solve and help us because we were – we were in the fight. Our relationship with the COCOMs, I think, is on two different trends.
One, it's, how do we take the lessons observed in the COCOMs and turn those into lessons learned? And what I mean is, how do you change – whether it is changing your doctrine or changing your formations or changing your technology or just doing a little bit of process innovation – to create some battlefield mischief? The other side is from a command-and-control perspective, from how we do warfighting. Our relationships with the COCOMs are changing because fundamentally the way that the COCOMs do business is changing. And I know Sky will talk about some of the awesome work she's done with Digital Falcon and Digital Falcon Oasis.
But what we are trying to think about as services is, how do we plug into a COCOM commander’s decision-making cycle, and then enable it at the most tactical level, all the way up through General Kurilla’s level, or Admiral Paparo’s level, in terms of they have to make a decision. That decision is going to be either commit or not commit forces on the ground. How do we move that decision all the way down to the soldier, the Marine, the Guardian, the sailor, or the Coastie, or the airman who might be out there, like, to execute that command?
Mr. Allen: Wow. Anything to add to that? I know you could talk about this topic for quite some time.
Ms. Moore: Yeah. I mean, we’ve really appreciated the partnership that we’ve had with the services because I think that there may be two perspectives that we’re each bringing that uniquely complement one another. And so for us as combatant commands across the board we have two unique features that are really important for tech evaluation and validation, which are real users and realistic environments. If you are not integrating both of those things into your experimentation, you are likely to miss the mark and create capabilities that are not actually creating impact for the users. And so we bring that to the table.
Another way of flipping that on its head and maybe describing it differently from a service perspective is that we can offer very realistic market research for whatever you might be trying to integrate. And we can talk a little bit later about some of the examples that we've done with the services that allow them to essentially use us to sharpen the sword of whatever they may be trying to buy in the long term, because it's everything from whether the technology works in the way that it's promised to what contractual structures and wording need to go into place to make this sustainable over time and make sure that it talks to all of the other systems that we have to use as a combatant command. There are really important context clues that we can bring to the services, and in exchange the services can help us articulate our needs in the long run – you know, saying, yes, you are focused on near-term operations, but if you frame this differently or if you help us understand how this gets sustained over a longer term, we can each help one another. And I think we've gotten much better at that in recent years.
Dr. Miller: Can I – can I –
Mr. Allen: Oh, please.
Dr. Miller: Go ahead.
Mr. Fanelli: Oh, just a quick one on this. We fight joint. We experiment joint. And there have been times in the past where, more often than not, we deliver waterfall. And it is a known known that agile allows us to learn faster; it allows us to connect and tighten the feedback loop, which results in a better product. To me – we fight about definitions of agile. It means together. And –
Mr. Allen: This is the agile project management approach.
Mr. Fanelli: Agile project management, agile for software, and agile in paired working, right? And so where we're paired with a combatant command and we are closer to a problem – not an email away; shoulder-width away, an experiment away – and Sky does an outstanding job of making sure we are one deadline away, and you're never without a deadline – that expedites the learning by doing for us to accumulate lessons-learned revisions. It allows all of us to learn faster together. We have at times allowed the process to drive us. We now have an opportunity to drive the process through some of these accesses and some of this working together, and that is being rolled back into the ways that we acquire and the ways that we operate.
Dr. Miller: I love this conversation because the fundamental difference with how we’re trying to do business today versus 10 years ago is we’re doing it live. What I mean by that is Sky said market research and Justin said agile. For us, for the Army, I’ll give two very finite examples.
For CENTCOM, there’s one thing that we all realized in getting deployed that we don’t talk about in labs, we don’t talk about during technical downselects, we don’t talk about during any of the acquisition process: there’s a lot of sand – (laughter) – and there’s a lot of wind, and that sand moves. And when you start having moving components and there’s sand in there, it acts a lot differently than when you’re in a lab. And when you have optics, they don’t react well when light is bouncing off of moisture and sand.
Mr. Allen: Yeah. Just one example that we encountered frequently as a problem when I was in the Department of Defense, that we had a lot of folks who were talking about this problem, is when sand gets in a helicopter engine it’s hot enough to melt that sand into glass.
Dr. Miller: Yes.
Mr. Allen: And then when you turn the engine off, suddenly there’s glass that has accumulated in there.
Dr. Miller: There’s glass.
Mr. Allen: So it’s – you encounter a lot of problems only in the real world, only from the perspective of the operator. There’s also the adversary, of course, you know, who’s not going to hold constant even though that would be convenient.
Dr. Miller: Absolutely. So having that not only market research, but real-world perspective coming back from the COCOM to, you know, our theater armies, the theater fleets, is real.
And another one – it's not just the weird environmentals that you can see and touch; I'll give you another finite example from the Pacific. When you put a lot of computers into a humid environment and you turn them off, condensation happens. And we don't plan for these things because it's really hard to replicate that unless you are in the environment. So not only that market research, but that realistic ability to really get into the fight, into the theater – and that will lead into what the Army's doing in transforming in contact. But doing it live pays more dividends than lab-based risk reductions ever will.
Mr. Allen: And you’re going to learn that one way or the other. You’re just going to learn it in the first month instead of the 30th month, or when it is an exercise or a learning activity as opposed to in full operations.
Dr. Miller: So we’re pulling the learning to the left through the connective tissue, through these intersections.
Mr. Allen: That’s amazing. So one of the things that I know is going to be a constant throughout this conversation is integrating the insights that you get from the user, integrating the insights that you get from operating, but then of course the challenge of how do you make that a part of the sustainment process. I’m sure this is going to keep coming up throughout our conversation.
But I know you have a lot of really exciting initiatives that are underway, and I want to give you the chance to talk about what's going on. Sky, CENTCOM operates in complex, rapidly changing environments, and you've now undertaken a series of exercises – or maybe experiments is sometimes the right word – designed to quickly test and then field technologies like AI, and one of these is Digital Falcon Oasis, which I think is mostly about command and control, C2. But tell us, what is Digital Falcon Oasis, and sort of where are you in the story?
Ms. Moore: Absolutely. So Digital Falcon Oasis is our digital exercise series, and it runs on a 90-day cycle, and it's based on a very specific and simple premise, which is that the best way to test software tools is to give them to the user and get their feedback as quickly as humanly possible. And it's a really interesting and blunt experience for us because, especially I think in the early stages of the experiment, maybe a year-and-a-half, two years ago, we were really just trying to get the muscle memory of how you sprinted and how you communicated between users and engineers, and they were looking at each other like they completely spoke different languages. And so we were trying to do a lot of translation.
But over time, that literacy, that familiarity with the process increased, and it got to the point where they understood the game. So you will roll into an exercise – our next one is going to be in October – and they will sit down. They understand the experience: you're supposed to bump around with this software tool, figure out where it breaks, figure out where it works, and then you give that feedback. And the most fantastic experience for us is to sit down at the end of the day with one of these types of exercises and have someone in there say, "that was the worst software tool I've ever used in my life," and to be like, what beautiful, direct feedback. Thank God – to Justin's point – I'm hearing this now instead of five years down the road, when we would have overinvested and gone down a pathway that was not useful.
The flip side is, it's even more exciting when you are working with a tool that a year ago they gave that feedback on and said, "God, I hate this tool so much," and you've iterated, and you've worked with them every 90 days since, and they get to a point where they say, "Ah, this saved me a lot of time." "This helped me do my job." "This made me safer." And if you can get them to a yes on one of those three questions – Did it save you time? Did it make you safer? Did it make you more effective at your mission? – you have done your job. We are not finished as a technology community until we get that answer from the user community. It is not enough to check the box about getting data into one place, or having software tools or a development environment. You are not done until a tool gets into the hand of a user and they say, "I swear this helped me do my job."
Mr. Allen: So there's obviously the problem of, you know, how do you get software to the users, how do you update it quickly enough, but there's also this sort of operational mission – you are trying to solve a problem for them that the software is moving the needle on.
So what are sort of the specific tools and capabilities that you are trying to roll out, or trying to develop, or mature, you know, through Digital Falcon Oasis?
Ms. Moore: So these days people tend to refer to it as CJADC2 – combined joint all-domain command and control, the ultimate DOD Scrabble word. But at the very beginning of this, we chose a very specific and discrete workflow, which was targeting. For us it was something that was very specifically bounded, and you could determine who the users were, and so go to them and say, what is your normal workflow? You could build a tool around that workflow, and then you could iterate over and over again.
And then the reality of our command is that we are very active, and we had the opportunity to test it live multiple times – run the tools in parallel with the traditional process until the user said, no, we're ready. This tool is ready for prime time. And we were able to transition over.
When you think –
Mr. Allen: Oh, that's interesting. So with the actual user community –
Ms. Moore: Yes.
Mr. Allen: As the user community is engaged in real operations, you've got sort of a subset that's running the testing stuff. So this is not necessarily connected to the go-boom stuff, but all the variables in the equation are identical to what they were. So that's a very high-fidelity test you are running there.
Ms. Moore: That's exactly right. So, again, we have been running strikes for the last two years, and you would have your traditional process that is primarily based on PDFs, email, and phone calls, and as you can imagine, we just said there's got to be a better way. If you were to put the data in a common place that is intuitive for the user to access, that is going to result in better outcomes for the command.
And so we were able to sidle that along and break it into distinct parts. You don't even need to abstract targeting: you start with the question of, there is something that I believe is a point of interest that may pose a threat to us because it's shooting into the Red Sea. You then move to the stage of target development, which requires a number of very complex processes and approvals to say, yes, this is a valid target that the department should be looking at and potentially acting on. Then there is the nomination and approval process, which integrates even more organizations that are then looking at that same information. And, again, with every stage of this you're imagining more organizations, geographically dispersed in other locations, that are going to have to be able to access that data in a timely way. And digital tools, and these types of workflow tools, are what really facilitate it.
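To make the staged workflow Ms. Moore walks through a bit more concrete, here is a minimal sketch of targeting as a gated pipeline, in Python. The stage names, approval labels, and transition rules are illustrative assumptions for this transcript, not CENTCOM's actual process or tooling.

```python
from enum import Enum, auto

class TargetStage(Enum):
    """Stages of the workflow described above (names are illustrative)."""
    POINT_OF_INTEREST = auto()    # something observed that may pose a threat
    TARGET_DEVELOPMENT = auto()   # validation via complex processes and approvals
    NOMINATION_APPROVAL = auto()  # more organizations review the same information
    EXECUTION = auto()            # cleared for potential action

# Each stage may only advance to the next one, never skip ahead.
TRANSITIONS = {
    TargetStage.POINT_OF_INTEREST: TargetStage.TARGET_DEVELOPMENT,
    TargetStage.TARGET_DEVELOPMENT: TargetStage.NOMINATION_APPROVAL,
    TargetStage.NOMINATION_APPROVAL: TargetStage.EXECUTION,
}

def advance(stage: TargetStage, approvals: set, required: set) -> TargetStage:
    """Advance one stage only once every required approval is recorded."""
    if not required <= approvals:
        missing = required - approvals
        raise ValueError(f"cannot advance from {stage.name}: missing {missing}")
    return TRANSITIONS[stage]

# Example: a point of interest advances once (hypothetical) reviewers sign off.
stage = TargetStage.POINT_OF_INTEREST
stage = advance(stage, approvals={"J2", "J3"}, required={"J2", "J3"})
print(stage.name)  # TARGET_DEVELOPMENT
```

The structure is the point: each stage pulls in more reviewers, and nothing advances until the required approvals are recorded.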
So we started with something very narrow, and then it ultimately started leading us to logical follow-on questions: if I know how to do targeting at speed, the next question is, do I have the munitions and the supplies in place to be able to do this as well? And so it opened up a whole new workflow about logistics and sustainment. And then it opens up the follow-on question, for our planning community: once they were thinking about target execution, how do they fit into these tools? How do their existing tools fit? And so that really helped us walk our way through what otherwise could have seemed like a boil-the-ocean experiment of saying, we must build CJADC2 tools. We never started with that vision. We started very specific, and then over time, I think, found ourselves in a place where we realized we're using them every day.
Mr. Allen: So you’re, like, growing and evolving to CJADC2, as opposed to waterfall defining, you know, CJADC2. That’s really interesting. I also think your experience of, sort of, once you digitize one thing and you like it, you quickly find, well, wouldn’t it be nice if we started digitizing just about everything, so that it can talk to these other systems?
So this obviously brings in the services, for whom the voice of the user community is critical. But they don't just want this for a month. They're going to want this for the next 20 years, which brings up questions like sustainment, and also questions like PPBE and the budgeting process – who's going to pay for this, and under what time frame? So how exactly are the Army and the Navy involved in Digital Falcon Oasis and collaborating with CENTCOM?
Dr. Miller: I feel like she’s going to take notes. (Laughter.) OK, so the for –
Mr. Allen: Yeah, Sky’s going to ask for you to discuss the budget live here, and make some real promises. (Laughter.)
Dr. Miller: No problem, as long as there’s a USA Jobs opening somewhere. (Laughter.)
Mr. Fanelli: We said, do it live. (Laughter.)
Dr. Miller: No, so Digital Falcon Oasis – so if you just think about CENTCOM, the Army is aligned in a couple of different ways. So we have the Army Service Component Command, Army Central, ARCENT, and then we have a really interesting element called the 513th Military Intelligence Brigade (Theater). We have one of those MIB(T)s for each theater. But the 513th is CENTCOM's. And they do all the analytic control and intelligence processing for CENTCOM.
What we tried for Digital Falcon Oasis one, I think it was, was just making sure that the command-and-control suite that General Kurilla uses, and all of his joint directors use within the headquarters – that their intelligence apparatus could talk to it and provide that intelligence preparation of the battlefield to CENTCOM headquarters. And then in Digital Falcon Oasis two, what we said is, hey, how do we go downstream from there? So if there is an element that is fighting – and I'll be a little prideful here, the global response force for America is the 18th Airborne Corps, and the immediate response force is the 82nd Airborne Division.
Mr. Allen: Those are people who never say no to a hard job.
Dr. Miller: They never say no to anything. So when they come up and they have to deploy somewhere, can they then talk to both their intelligence apparatus and the command-and-control apparatus of the theater they're going into? So, like in January of 2020, when we had issues in Iraq and the 18th Airborne Corps and the 82nd went forward, we couldn't – it just – it was a hard problem. We saw lots of things leading up to that. But now I think I'm confident in saying that, with the workflows and the threads that Sky talked about, from the division level, from the 82nd, up through their higher headquarters, the corps, and then through the theater service component command, we can now talk. We can say, hey, General Kurilla's command and control can flow down. That's one side of it.
The part that we are thinking about now as we move forward into whatever the future looks like is – if you think about Gmail on your computer, and you could only access it through your desktop, it would not be as successful because it wouldn't scale down. What we're thinking about is, how do you scale it down? How do you scale that command and control from the theater level – where you have a super JOC in a nice air-conditioned facility – down to the company commander who is thinking about the next 12 hours, who's fighting until morning with his boys?
And how do you actually get that C2 all the way down to that element? How do you scale it down? Not up and out, but how do you take that and scale down? So that’s – as we look forward to other exercises, like DFO two, three, four, five, six, and then – again, I’ll be a little bit broader – things like Valiant Shield, and Northern Edge, and Defender Europe, and Northern Strike for the other COCOMs. That is what the Army is thinking about, connecting up to the COCOM and then scale –
Mr. Allen: This is a big series of exercises across all the different COCOMs, yeah.
Dr. Miller: For each theater, yeah. And just being a little parochial, just how do we make sure that we are good teammates to the COCOM so that we aren’t saying, hey, we can’t take your data because we’re broken in some way.
Ms. Moore: But I think – I mean, it’s worth foot stomping, like, the value of that relationship of poking one another in a productive way to figure out where – what is flowing correctly and what is not yet. And so, to the point of we can express a demand signal to the Army and to the Navy, and have at various points, of saying: I need your data. Where is this data? I also need this data on this certain timeline of delivery. And getting that feedback to them that they wouldn’t otherwise know. Why would they know the operational requirement for the timeliness and perhaps the form of the data?
On the flip side, it’s really useful to get feedback from the Army and from the Navy about the way that they need to interact and receive data from us, because it goes in both directions. Everything that we do collectively impacts one another. The data that they are collecting, whether it is for personnel, for logistics, equipment or otherwise, impacts the way that we experience them and the ways that we are able to use them.
The flip side is, if we can give them data about how we are using them, the impact it may have on sustainment down the road, that makes all of us better. But it involves a lot of back and forth – so very simple proof of life: I am sending data. Did you receive it? No. Did you receive it now? No. Did you receive it now? Yes. And just doing that over and over again. And having multiple opportunities every 90 days to try it is very helpful.
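That "did you receive it?" loop is simple enough to sketch. A minimal, hypothetical version in Python – the function names and the third-attempt success are invented for illustration, not any real CENTCOM or service interface:

```python
def proof_of_life(send, ack_received, max_attempts=5):
    """The 'I am sending data - did you receive it?' loop: send, check, repeat."""
    for attempt in range(1, max_attempts + 1):
        send(attempt)
        if ack_received():
            return attempt
    return None

# Simulated feed that only starts landing on the third rep - a stand-in for
# iterating the same hand-off across successive 90-day exercises.
state = {"tries": 0}

def send(attempt):
    state["tries"] += 1
    print(f"sending data feed, attempt {attempt}")

def ack_received():
    return state["tries"] >= 3  # No. No. Yes.

print(f"acknowledged on attempt {proof_of_life(send, ack_received)}")  # attempt 3
```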
Mr. Fanelli: On that reps piece, one of the main parts here, a differentiator, is that the comms are going horizontal. And so this is the Army talking to CENTCOM. This is the Navy talking to CENTCOM, and back and forth. What often happens is it'll have to go all the way up to the satellite and back down, and that's a slow transmission rate. And so both the comms and the learning – the feedback, good or bad – are happening side to side.
And so there were cases where we are providing data feeds. And we have a superior data feed, but it’s on the way and it’s nine months away. What can we do about that? Can we pull something forward? We have a series of Horizon 2 pilots. We are prioritizing those based on, hey, not only what might be ready, but what might have the biggest effects. And we are now informed by what’s going on there. And then largely listening and observing, hey, how can this change the way that we work in other theaters. How does this support our activities within INDOPACOM?
So there’s a lot of parallel collateral learning that’s happening that applies both to our execution in other domains and within acquisition to say, oh, that thing that we’re buying or that thing that we’re sustaining, no one’s using that now.
Mr. Allen: Wow, which is probably hard news to hear, right, if you’re in the – (laughs) – program office and in charge of delivering that.
Did you want to say something, Dr. Miller?
Dr. Miller: I do. I love this, because we are three CTOs. And I've had very exquisite conversations with Sky about how different AI models work, down to bits and bytes. And I've had very exquisite conversations with Justin about communications capabilities. And the fact that we're not talking about that is probably proof that we're doing the right thing, because we're talking about very simple things. Like Sky said, we're getting reps on how we share data. We're getting reps on how you share laterally.
And I just want to put a fine point on it, because what Justin said is 100 percent correct. If you take two commensurate elements in different services – so if I take an Army division and a destroyer, and they're peer echelons – they don't talk to each other. They talk through someone else, probably at the four-star level. They have to go up to the fleet. They have to go up to the corps. And there's some level of interchange.
Something we’re trying to work through now is just what should it look like in terms of me being able to talk directly across and down to their elements without having to do that very weird –
Mr. Allen: Without asking your boss to ask his boss –
Dr. Miller: – Cold War hierarchy. Yeah.
Mr. Allen: – for permission to talk to his subordinate. Yes. That’s a broken system. You can’t – you can’t be joint if that’s the way that you’re allowed to talk to each other.
Dr. Miller: That’s exactly it.
Mr. Allen: Now, I want to talk about what success looks like in this series of exercises. So, you know, you talked about moving from a world in which data is shared by emailing PDFs to a world in which, you know, you actually have real-time data sets that are either streaming or API-accessible or whatever it may be.
And what metrics are you looking at when you say, like, have we made it yet? And what’s your sort of, like, long-term theory of success for this series of exercises?
Ms. Moore: The short version is we try to keep it bite-size, because, again, I think that there is risk sometimes of trying to express what success is in terms of standards, where everything will perform in X-Y-Z ways, that ultimately is generally interesting to everyone and specifically useful to no one. And so we really try to, again, keep it specific of there is this workflow and there is a user that is our guide on whether or not that is successfully –
Mr. Allen: And this is your 90-day sprint cycle, right, that – yeah.
Ms. Moore: That's exactly right. That slowly balloons out so it does start to touch more and more elements at the command. I think, again, when we first started two years ago it was primarily our J3, which is responsible for operations, and our joint fires element specifically that was looking at targeting. But over time, we now have almost every single joint directorate and component involved in these digital exercises every single month. In many cases, they're actually running their own inside of 90 days because they said: You're not moving fast enough; we have some other things that we'd like to add in. And so in many ways that process alone means success – that it's getting into the muscle memory, into the bones of the organization, that it thinks in sprints.
But success has to be measured in a very simple and very tactical sense in many ways, which I think is sometimes difficult because we want to have these broad wins of "CJADC2 has been executed." But the reality for us is that it's really small anecdotes, like a targeteer who says, this used to take me four hours and 50 people to pass the data from one network to the last, and now I can do it with the click of one button. And the relief on their face is the win. Like, that is what qualifies as a win. And again, it feels like this small moment, but then what we can do to make sure that we are getting that win and also sharing it with the rest of the department is by making sure that we are integrating with the Army and the Navy and the Air Force and the other services, that we're participating in larger exercises like the Global Information Dominance Experiments that the CDAO runs, because they can then pass those lessons out to other combatant commands. It may not be exactly the same solution in every combatant command, but in a lot of cases at least the process to find it will somewhat rhyme.
Mr. Fanelli: To that point, when we couch or frame wins in terms of outcomes, it travels faster. It carries better. And so we're prioritizing based on what we're seeing here and working together on. We do a lot of activities; are they moving the needle? Turns out most things are Pareto, right? Twenty percent or less is really moving the needle. And so we have a better sense now than we did before, based on how we're measuring, on this activity that we think is really important: is it moving mission outcomes or not? And so organizing around that, specifically on the acquisition side, I believe is giving us an opportunity to both be more evolutionary in a meaningful way, but then also revolutionary when an outside idea or an emerging technology storms onto the scene.
Some of the features within artificial intelligence that potentially streamline, automate, or even eradicate a need for a(n) existing function, that type of disruption is something that we haven’t always been equipped to take into consideration, and now we can prioritize based on that. And that’s just helping us work and partner together because we have the so-what built in. Not everyone’s good at translation. And so where we can institutionalize this translation piece, then it applies to more pockets faster.
Mr. Allen: Amazing.
So I want to now shift from Digital Falcon Oasis, one series of important exercises, to Desert Guardian. So I think anybody who's looking at what's going on in the war in Ukraine, anybody who's familiar with the missile and one-way drone strike that Iran launched on Israel in the CENTCOM region, understands that this problem of defending your skies against UAS is a permanent feature of warfare at this point. So what is Desert Guardian trying to do to move the needle on this problem?
Ms. Moore: So we’re really excited about this series in part because of a very specific problem that, as you mentioned, impacts our command significantly; but also, because to us it’s an example of success of partnering with a bunch of different organizations that hold slightly different perspectives on the same problem. For us right now, the most critical point is to improve the capabilities that we have today, right now, to be able to protect our servicemembers and partner nations who are forward. But additionally, it also –
Mr. Allen: And this is – this is – I mean, the capability that is being worked upon here, is it a missile defense system? Is it a(n) early warning system? You know –
Ms. Moore: So I think it might be helpful to take a big step back –
Mr. Allen: Please.
Ms. Moore: – from counter-UAS and just sort of look at the whole scope of the problem, because I think sometimes we think about counter-UAS only in terms of the very last section, which is shooting it down. And that is a really important piece, certainly; you want to make sure that you have a range of shooters to reach for that could potentially defeat something. But there is all this other lifecycle in front of it where you have opportunities to improve your ability to protect your forces. So everything from sensors – increasing the types, the diversity, the spread of them in order to be able to find things further out – but not just find air tracks further out, but identify whether they are hostile or not.
You can imagine a space – imagine D.C. and what the airspace here looks like. Massive amounts of clutter, whether it is balloons, trash bags, commercial air coming out of DCA, everything that you could imagine. And so being able to sift through all that clutter and fairly quickly say, there is something coming in at a bearing and speed that appear anomalous, and we've really got to be able to narrow in on that specifically and defeat it quickly, means that there's a little bit of sensing and there's a little bit of correlation and characterization – so software that sits on top of that.
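As a rough illustration of the correlation-and-characterization layer Ms. Moore describes, here is a hedged sketch that flags low, fast, inbound tracks against background clutter. Every field name and threshold below is a made-up placeholder, not an operational value.

```python
from dataclasses import dataclass

@dataclass
class AirTrack:
    track_id: str
    bearing_deg: float  # direction of approach relative to the defended site
    speed_kts: float
    altitude_ft: float

def is_anomalous(track, inbound_bearing=(150.0, 210.0),
                 min_speed_kts=60.0, max_altitude_ft=3000.0):
    """Flag tracks that look like low, fast, inbound movers.
    Thresholds are illustrative placeholders only."""
    inbound = inbound_bearing[0] <= track.bearing_deg <= inbound_bearing[1]
    return (inbound and track.speed_kts >= min_speed_kts
            and track.altitude_ft <= max_altitude_ft)

tracks = [
    AirTrack("clutter-01", bearing_deg=40.0, speed_kts=5.0, altitude_ft=900.0),     # balloon / trash bag
    AirTrack("comair-17", bearing_deg=180.0, speed_kts=250.0, altitude_ft=12000.0),  # airliner out of DCA
    AirTrack("unk-88", bearing_deg=175.0, speed_kts=100.0, altitude_ft=400.0),       # low, fast, inbound
]
print([t.track_id for t in tracks if is_anomalous(t)])  # ['unk-88']
```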
But the really important piece that we thought Desert Guardian really needed to address was the point of integration. So what we mean by that is that, you know, you can throw as many shooters as you want at a team, you can give them as many sensors, but the feedback that we have consistently gotten from our users is that we are actually making the problem worse if we keep giving them more screens. What they meant by –
Mr. Allen: I think that's worth just saying again. You are making their life worse if you just keep giving them more screens. And in, you know, a historical operations center, it's totally routine for one program-of-record-delivered system to result in one screen. And so you might have an operations center that is just, like, this thing talks to AFATDS, and this thing talks to Patriot, and this thing talks to –
Dr. Miller: There are – there are Navy systems too. (Laughter.)
Mr. Allen: And, you know, that can make life hard for the folks who are in charge of understanding what is happening across all these sort of different data streams. So I just wanted to sort of complain about the way, you know, things can look when they go wrong. Please continue.
Ms. Moore: To be clear, all of the services share this problem, in a beautiful –
Mr. Allen: They really like screens. (Laughter.)
Ms. Moore: Right, exactly.
Mr. Fanelli: More sharing. More sharing. (Laughter.)
Ms. Moore: But, I mean, you can imagine – imagine a space, quite literally. You know, a base defense operations center can, at times, be around the size of the space that we're sitting in right now. And every couple of feet, you've got a different screen that is displaying information from a different sensor, or is the screen from which you are able to shoot something.
Mr. Allen: Looks like a 1980s video arcade, right?
Ms. Moore: It does, but with the fun added benefit of having just a handful of minutes to respond when something is coming towards you, and so you have to run between the different screens or have people shouting – literally shouting – to one another what they’re seeing on their screens, to be able to have one person contextualize and then make decisions about what to do. And if you have that, again, you can imagine what stress a user feels when they say, you think you’re helping me by giving me this excellent, amazing new sensor or shooter, but you are actually in tactical terms making my life worse.
Mr. Allen: So this gets to your point about integration really being a key goal of this series of exercises.
Ms. Moore: Without question. And so the way that we've structured the exercise is that we will be bringing in new sensors. We're going to start with sensor integration. And we say: you're being tested on your ability to detect UAS, but the actual primary metric that you're being tested on is whether you can pass data to a screen that is not yours. That is whether or not you are successful. Prove to us two things. Sensors, you need to prove to us that you can explain how you send your data, that you can do it, and that you can explain the message format that it's going across in. And then for the third-party screen, you have to be a good arbiter and you have to have well-documented application programming interfaces, or APIs. You have to communicate as well.
Mr. Allen: Yeah. So this – I mean, I think some folks who are watching are going to think this is useless geek speak or nerd speak. But I do think this is worth harping on, right? The data structures, the message formats, the application programming interfaces, APIs – you know, this is, like, the internal plumbing of any piece of software actually working at scale. When you want to go from a bunch of people in a room shouting about what they're looking at on a screen to a system handling millions of messages per minute, if needed, between a bunch of different systems.
And so the fact that you're actually requiring contractors – we want you to come work with us, we're excited about your fancy sensor, but your ticket to the show – (laughs) – is that you have great documentation that tells other people how to work with you – I think that is a really exciting, you know, paradigm.
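One way to picture what well-documented message formats buy you: if a sensor vendor publishes field names, types, and units, a third-party screen can validate and consume the feed mechanically. The schema below is hypothetical, invented for this example rather than taken from any DOD standard.

```python
import json

# Illustrative published message format: field names, types, and units spelled
# out so a screen that is not the vendor's can consume the feed.
TRACK_MESSAGE_SCHEMA = {
    "track_id": str,      # stable identifier assigned by the sensor
    "time_utc": str,      # ISO 8601 timestamp
    "bearing_deg": float, # degrees true
    "range_m": float,     # meters from the sensor
    "speed_mps": float,   # meters per second
}

def validate(message):
    """Reject messages that do not match the documented format."""
    for field, ftype in TRACK_MESSAGE_SCHEMA.items():
        if field not in message:
            raise ValueError(f"missing field: {field}")
        if not isinstance(message[field], ftype):
            raise TypeError(f"{field} must be {ftype.__name__}")
    return message

raw = ('{"track_id": "unk-88", "time_utc": "2024-09-13T14:02:11Z", '
       '"bearing_deg": 175.0, "range_m": 8200.0, "speed_mps": 51.4}')
print(validate(json.loads(raw))["track_id"])  # unk-88
```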
Ms. Moore: And then, I mean, when it pivots to the services and the other organizations – we're also partnered with CDAO and very aggressively with the Army on this – that documentation is not CENTCOM-specific. It may absolutely be that you need different sensors and you need different shooters depending on the threat that you face, depending on the theater that you are in. But the requirement to have them send data to one another is not unique to CENTCOM. And if we can offer a playbook that others can pick up and say, this is how you write your contracts, these are the questions that you ask your vendors to force them all to play ball, to be able to get all of your screens to look a way that helps a user execute their mission – that is where we can all benefit.
And it's been fantastic partnering with the Army, because they really have viewed this, again, as that realistic, in-contact market research: we're going to run Desert Guardian side by side with them and share whatever lessons learned we have. We'll go back and forth on the differences between contracts, how contracts have been worded before and after, and where this might be able to evolve to maximize flexibility going forward. We're not going to come out of this pretending we know exactly the sensors or shooters or interfaces that you need. But we do need to have the documentation down.
Mr. Allen: So, you know, when I hear Sky talking about something that is learned in one combatant command that is quickly transmitted to all combat commands, my immediate assumption is it’s a hoax. So I’d love to hear from the services, is this really working? I mean, are you finding that you’re able to take the lessons from Desert Guardian in contracting standards and data standards? Is it really flowing back to other programs of record, other combatant commands? Please.
Ms. Moore: I’ll stand in a corner, so you guys can – (laughter) –
Mr. Allen: Yeah, get earmuffs.
Dr. Miller: Before I answer your question, I did not appreciate this until I left the Intelligence Corps and came and started working for the chief, and sort of diversified my horizons. The reality is, several times when I was in Afghanistan you would hear, brr, and that was the C-RAM going off. And then you would hear a pop. And then about two seconds later you’d hear: incoming, incoming, incoming. (Laughter.)
Mr. Allen: As in, it’s already been shot at. It’s already blown up. And now we need to tell you about it.
Dr. Miller: Yeah. The reality of time cannot be overstated here. So Sky mentioned, and I just want to double down on it, you’re talking about things, and I mean more than one, traveling at 100 knots that are coming directly at you, at altitudes that are very low, very hard to see. It’s not like you can point at the sky. The enemy is using terrain to its advantage, buildings to their advantage, the civilian population to their advantage. And that cannot be overstated.
The second one is, I just want to give a shoutout. Our air defense artillery soldiers are – they're superheroes. The dwell time for our 14-series soldiers, our ADA soldiers, is incredibly low. Dwell time means normally you would have a year deployed and then a year at home, a one-to-one. They're doing a year abroad and then six to seven months at home before deploying again. So they are superheroes.
Mr. Allen: And that’s just because they’re so in demand right now.
Dr. Miller: Because they’re so in demand. And the third piece is the data. Normally when we think about data, we think about really well-structured – like a tweet. A tweet has a really well-structured header, and the body, and the timestamps, and everything. When we talk about –
Mr. Allen: And that’s about – I mean, just for folks who aren’t, you know, used to thinking in these terms. That’s why you can query the Twitter database of 100 billion messages, because those data structures are extremely well defined, extremely documented, and you can talk to the machine at scale. And what we have is emailed PDFs. And we’re lucky in the DOD if we can use the same memo template. But I – yeah, sorry. Continue. I’ll stop. (Laughter.)
Dr. Miller: And then – but then when you get to, like – no, that’s perfect. But when you get into sort of sophisticated sensors, when you talk about radar data, you’re talking about Doppler shift, time, range, vector, speed. Those things, and then all the –
Mr. Allen: Those are tough physics problems.
Dr. Miller: They are. And then they’re codified in a string of hexadecimal, so not even binary, that has to be translated. And that’s if we get the raw data from the sensor, because proprietary data feeds are a problem. We have locked ourselves in in really bad ways.
Mr. Allen: So this is, like, one company, they’ve solved, you know, the physics problems we were just talking about. They’ve got a data structure for how to interpret that information. And, no, you’re not allowed to know how it works or how to talk to it, if you’re some other company.
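For readers wondering what translating a string of hexadecimal looks like in practice, here is a minimal sketch. The 16-byte record layout is entirely invented for illustration; a real vendor's wire format is exactly the kind of documentation the panelists are asking for.

```python
import struct

# Hypothetical fixed-width radar record, 16 bytes, big-endian:
# uint32 time (ms), float32 range (m), float32 Doppler (m/s), float32 bearing (deg).
RECORD = struct.Struct(">Ifff")

def decode(hex_frame):
    """Translate one hex-encoded sensor record into named, typed fields."""
    t_ms, range_m, doppler_mps, bearing_deg = RECORD.unpack(bytes.fromhex(hex_frame))
    return {"time_ms": t_ms, "range_m": range_m,
            "doppler_mps": doppler_mps, "bearing_deg": bearing_deg}

# Build a sample frame and round-trip it, as a stand-in for a live feed.
frame = RECORD.pack(3_600_000, 8200.0, -51.4, 175.0).hex()
print(decode(frame))
```

Without the vendor's documented layout, that same hex string is just a black box.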
Dr. Miller: Exactly. It's a black box. So what Desert Guardian is helping us with – and I'll talk about how it translates in a second – is, one, bringing all those vendors back to the table and going, hey, the way of the world is changing. We are not going to do one-for-one sensor to decider to effector. We just aren't. We need many sensors to many deciders, which you might whittle down to many effectors. And that could be a kinetic effector, something that blows up. That could be a non-kinetic effector, something that fries circuitry – whether it's directed energy or electromagnetic warfare.
But that many-to-many-to-many relationship is the key here, which means we go back and we say: if you don't want to share your data interfaces, if you don't want to share your documentation, that's fine. Thank you for your interest in national security. There's the door. And we have to be very stringent about that. That's the left side. The right side is, we also are going to take the lessons learned from this and use them to help us write better escape clauses for contracts. So if a vendor stops being a good partner, we stop being their partner.
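A hedged sketch of the many-to-many pattern Dr. Miller describes, as a tiny publish/subscribe bus: any sensor publishes a track once, and every subscribed decider sees it, rather than one sensor being hard-wired to one screen. The names and topics are illustrative, not real programs.

```python
from collections import defaultdict

class TrackBus:
    """Minimal pub/sub bus: many sensors publish, many deciders subscribe."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every subscribed decider sees every message on the topic.
        for handler in self.subscribers[topic]:
            handler(message)

bus = TrackBus()
# Two deciders consume the same feed; neither is wired to one sensor.
bus.subscribe("air_tracks", lambda m: print(f"ops-center screen: {m}"))
bus.subscribe("air_tracks", lambda m: print(f"effector cue: {m}"))
# Any number of sensors can publish to the same topic.
bus.publish("air_tracks", {"track_id": "unk-88", "bearing_deg": 175.0})
```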
And I don’t want to sound calcified against industry. Industry –
Mr. Allen: Because we’re trying to bring in industry. There’s all this stuff going on, yeah.
Dr. Miller: We are. And the organic industrial base doesn’t win wars. The organic and defense industrial base wins wars. So it’s not an us versus them conversation. It’s, hey, we want to be a better partner. We would like you to be good partners as well. How do we get there?
In terms of transition, another shoutout. PEO Missiles and Space is doing a phenomenal job. They own all of the integrated air and missile defense systems. So everything from your Avenger, which is a Humvee with four rocket pods on it, to the directed-energy short-range air and missile defense, to all the Coyote missiles which are saving lives in Iraq and across CENTCOM – they own that. And they are working on: how do they recompete, in a meaningful way, the command-and-control systems for that? How do they make sure that sensor integration works and happens? And I do believe – I can say I know – that they're going to take the lessons learned from Desert Guardian and integrate that back into the Army, so that we are a learning organization.
Mr. Allen: So, you know, you’ve got this urgent problem, which is, you know, one-way UASes and sort of the broader problem of how you protect your skies. And when you have an urgent problem, you can get some rapid money to kind of address that problem. And what I – what I really like about what I’m hearing –
Dr. Miller: Can happen? (Laughter.)
Mr. Allen: Oh, maybe you can’t get money. (Laughter.) I was just going to say, you know, what you’re sort of telling industry is: Hey, what we really want is systems that talk to each other. So if you want access to the rapid money, the price of admission is an open architecture that we can work with and that other folks can work with.
Mr. Fanelli: And we actually are able to now make that more of a funnel than a back and forth, right?
Mr. Allen: What do you mean by that?
Mr. Fanelli: So what I mean by that is we are more open, based on the lessons learned – my favorite way to learn lessons is through teammates and others, right? We don't have to learn them all ourselves – to the idea of a top-level requirement, of opening the aperture in an open-minded way to the different ways that we can solve a problem. We are doing more top-level requirements. So OPNAV N2/N6, based on learning that's happening in a number of different places, has said: instead of the 300-page requirements document that narrows the competition down, it's 10 pages, and it allows more competitors to come in and say, hey, we think we can solve this but it's different from how we've always solved it, or it's different from how you're thinking about the problem. Well, that might change the tactics, techniques, and procedures. That's OK. We can work through a tech-informed concept of employment to do things differently.
And so to the point of this conversation, if you go from the decision backwards and learn from what’s happening, the integration is going to happen no matter what; it’s just either the wetware or the software underneath. And so something that’s commendable through these activities is they’re always panning out to say, hey, we’re going to solve a problem locally, but is there a way to abstract this and make this reusable on a regular basis. And so what we’ve taken from that is we’ve said there are designated enterprise services that just solve from a modular perspective a problem for multiple groups; let’s designate that.
And so we have recently, within the Department of the Navy, learned from what's happening and said we need more designated enterprise services – naval identity services that connect jointly, the idea of using Jupiter and Advana for many more analytics and business-intelligence cases so that we can share data through that federated feed. That opens it up. As a result, there are more collisions, more opportunities that grow out of what teams are learning, which we can roll back into our acquisition and strategy decisions.
Mr. Allen: Yeah. And I mean, you mentioned the difference between a 10-page requirements document versus a 300-page one. But now, you know, when you do have requirements, it's a minimal set of requirements, it's sort of the right set of requirements, and they're focused on interoperability standards, if I understand you all correctly.
Anything else anybody wants to say about Desert Guardian? Because we got one more that I want to get to. OK.
So let’s talk now about Desert Sentry. Sky, what is Desert Sentry?
Ms. Moore: You’re probably hearing a theme in the naming conventions, which means that we are –
Mr. Allen: Something to do with desert, something to do with dry places.
Mr. Fanelli: Maritime –
Ms. Moore: Yes. (Laughter.) We’re 50 percent agreed we’re just sticking to one of the words.
So Desert Sentry is our AI experimentation. So as a command, we try to be mindful about where and how we engage with AI because, again, it –
Mr. Allen: It’s easy to – I mean, I say this as the director of the Wadhwani AI Center – (laughs) – it’s easy to decide that you want to use AI before you’ve even asked the question is AI a good fit for my problem.
Ms. Moore: That is precisely what I was getting at.
Mr. Fanelli: Yes.
Dr. Miller: Yes.
Mr. Allen: Yes.
Ms. Moore: That is exactly what it was, which is that just because you can doesn’t mean you should with AI.
Mr. Allen: Right, yeah. If you can solve a problem with, you know, a hammer, you don't need to make it an AI hammer, right? (Laughter.)
Ms. Moore: That is exactly right.
Dr. Miller: Probably could find one of those. (Laughter.)
Mr. Allen: I bet you somebody would sell us one, yeah.
Ms. Moore: It definitely – without question a company –
Mr. Fanelli: Don’t check (his house ?).
Mr. Allen: This is my – this is my new Kickstarter. (Laughter.)
OK. Sorry. Sky, please.
Ms. Moore: Valid. They –
Mr. Allen: Back to your AI initiative.
Ms. Moore: Trying to be really specific about the AI use cases that are appropriate for a combatant command. Of the areas where we have started to carve out use cases, the most obvious one is computer vision, which has existed for decades. We've always had a general sense that we are collecting so much imagery there has to be a better and more efficient way of sifting through it.
The challenge that we have felt as a combatant command is the distance between model development and user workflows and the realities of how we have to operate day to day. So what I mean by that is that if a model is running but it only gets updated, say, twice a year, or if it’s running but you can’t tell the modeling team I actually need to look for something different – the enemy changed their tactics, techniques and procedures, they’ve started covering things in tarps, they’ve started using different types of vehicles –
Mr. Allen: Yeah. I mean, this is something that my colleague, Kate Bondar, who's really focused on AI and autonomy in the war in Ukraine, you know, she basically says if those models aren't changing every week, well, Russian tactics are changing every week. And so if you can't really get into that kind of update cycle, it's just not going to work. And that's not necessarily a problem that, you know, autonomous cars face. Stop signs are red this week. They're going to be red next week, right? But maybe what the adversary is up to in a warfighting domain is not.
So your point about the need for rapid iteration and tightening the relationship between model development and updating and user workflows, I think it’s right. And if you had asked me three years ago, you know, should the combatant command be directly involved in model updates, I would have been like, what do you mean? Combatant commands don’t do technology. But now, I mean, you’ve completely persuaded me.
So please continue.
Ms. Moore: Well, first of all, I’m very glad.
Mr. Allen: Yeah. (Laughs.)
Ms. Moore: We – I mean, I think the important part for us is not that we’re trying to, say, surge model developers to us.
Mr. Allen: Yeah.
Ms. Moore: We’re not saying that we need to have our own internal AI development team. What we need is for users to better be able to engage with the models that exist out there. So that means that they need to be able to label new data sets that they think are relevant to them. They need to be able to push those label data sets to then retrain a model to look for something different.
Again, the classic unclassified example is a picture of a plane from the top: you're looking for a plane, and then if you put tires on top of the wings, all of a sudden a lot of computer-vision models have difficulty identifying that it's a plane. If, in that moment, a user were able to say: I see that the adversary has changed what they are doing visually; I am going to go back and start labeling to adjust – maybe I'm looking for a different type of plane shape, or for a nuance that accounts for a change in coloration on top of the wings – that may be able to get me there.
But if it takes you six months to get to that answer, then the next day they say: Oh, perfect, you didn't like tires? OK, I'll put something else on top. And that breaks the model all over again. We're spending inordinate amounts of time on computer vision with very little to gain.
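(Editor's note: a minimal sketch of the user-in-the-loop retraining loop Ms. Moore describes, assuming PyTorch and torchvision. The directory of newly labeled images, the base model, and the class count are illustrative stand-ins; the low-code/no-code tools under evaluation would hide this work behind a user interface.)

    # Fine-tune an existing classifier on imagery users just labeled,
    # e.g., planes with tires on the wings. Folder names serve as labels.
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    def finetune_on_new_labels(label_dir: str, num_classes: int, epochs: int = 3):
        tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        data = datasets.ImageFolder(label_dir, transform=tfm)  # user-labeled folders
        loader = DataLoader(data, batch_size=16, shuffle=True)

        model = models.resnet18(weights="IMAGENET1K_V1")  # stand-in base model
        model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        loss_fn = torch.nn.CrossEntropyLoss()

        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
        return model  # ship back as the updated containerized model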
So low-code, no-code models are really what we're interested in. And what we're doing is running a competition with different vendors who are supplying low-code, no-code computer-vision models. We got about 100 folks that submitted to us through CDAO's commercial-solutions offering. Again, we are so grateful for the partnership of so many of these great organizations. None of what we do happens alone. We then down-selected to five, and those five are going to give containerized models over to five user groups at CENTCOM next month. And they are not going to be allowed to speak to them after that, because the users are going to be grading them on usability.
It's a similar construct to counter-UAS in many ways. We're interested in whether your model can detect. But more importantly, we're focused on the user experience, because, as a command, that is what makes or breaks it. And so we're really looking forward to the exercise series next month where users will be able to work with these models, and we're going to see. The hypothesis is that users can take a more substantive role in shaping and iterating on models. Maybe we're wrong. But the only way to know is to try.
Mr. Allen: So I want to highlight what is not quite making sense in what you just said, because, you know, for a lot of this conversation we’ve been talking about the need to tighten the loop of communication between the folks developing technology and the folks who are operationally using the given technology. But here you just talked about cutting off communication between the model developer and the user community. So can you sort of elaborate, like, why you do it one way this time, why you do it that way another time?
Ms. Moore: A hundred percent, and I appreciate the distinction. It's important to foot-stomp that testing something for user experience is a different test from testing whether a tool works. So when the primary test is I want to see if this targeting-software application helps our users move faster, it is useful to have the engineers side-saddled with them to work on the development of it.
However, when the one test that we’re looking for is could a user who is forward, who doesn’t have access to an engineering team, use your tool and feel their way through it and be able to use it, that requires a different context for the experimentation. That means that a vendor will be separate for that period.
That said, we are a hundred percent then going to have after-action reports where they go back to the user and say what was that experience like? And they will get an abundance of feedback that will then drive iteration for their technologies and also the way that we execute our experiments. But we’re trying to make sure that we are focused on the right piece of the experiment, so to speak.
There are other experiments that we've done with computer vision with all the services as well where the focus has been: The only thing I care about is whether the model is right at the highest possible rate. And in those cases, absolutely, the modelers are going to be side by side with our users to explain whether they're using the model right or not. But in this case, the focus is: Is it actually possible for a user to do this without the training wheels of an engineer side by side? Slightly different construct.
Mr. Allen: I see. Makes sense.
Dr. Miller: And I want to jump on that. Not because Sky didn't explain it well – she did – but because this is an area the Army is also pushing hard into, because we lack the technical talent, en masse, to support every use case. The notion of self-service, where a user can help themselves before they have to go somewhere else, that is the technology we're going after. It is not, as Sky said, the specific computer-vision model. Like, we can get really finite there and take a long time. It is the ability to say: Here is a platform that we have given to you. Can you help yourself before you have to go to someone else? That is the real niche, new use case that we're describing.
And the Army stood up its Linchpin program. It's in its nascent stages, trying to figure out what happens next. But I cannot stress enough, the era of putting a field service rep or a field service engineer everywhere, all the time, for every use case is over. We just, one, can't afford it. Two, it's not realistic for large-scale combat, where we haven't built up FOBs. We have a lot of GWOT – global war on terror – hangover, where we forget that all of those bases are big. They are under attack. We are in contact in CENTCOM. But if we think about moving out in Europe, like we're seeing in Ukraine, or if we think about moving out in the Pacific, that is not the case.
Mr. Fanelli: Practicing how we play, or practicing how we execute, has allowed us to get much more laser-focused on how we measure. And so, if it is user experience, as Sky laid out, or if it's operational resilience, now we can isolate and say, hey, what did this do to the resilience of the system? When we ask, hey, how's the cyber on this? Well, we can be a little bit more secure. What does that do to uptime? What does that do to failover? From an adaptability perspective – based on everything that we've done at the abstracted levels below – what is the response time? Are we bringing people on who can allow this to go in three days or three weeks? What does that look like?
We look at the externalities of the problem and say: Yep, we know that urgency is up, we know that tech advancement is up, we know that adoption is not where it needs to be. What are the through lines with the highest prioritization, the highest ratio, that improve the situation? And now we're making data-driven decisions to allow something to go faster, or really to prioritize based on those numbers.
Mr. Allen: So recognizing that AI is not the right solution to every problem – you know, we've got Desert Sentry, which is actually working on a relevant AI use case and thinking about the right way to deploy it. What are you excited about in your roles as CTO of the Army and CTO of the Navy, you know, in terms of AI applications? And how are you connecting that back, whether to Desert Sentry or to other combatant commands?
Mr. Fanelli: So what we're doing is we're running two things, a series of structured pilots and a series of structured challenges. So I'll talk about the pilots first. In this particular case, for every pilot we require three different leads: the pilot lead to run it; the operational lead and champion to make sure that, day by day, this is a validated use case and we're taking in all the contextual information; and then a transition portfolio lead.
Mr. Allen: These are all Navy –
Mr. Fanelli: These are all Department of the Navy, yeah.
Mr. Allen: Yeah. OK, Department of the Navy.
Mr. Fanelli: So – but at different locations, in different situations, yeah. So the receiver – the portfolio or program office lead – is there pulling the solution into acquisition. So across those three, they are coordinating to ensure, from a use case perspective, that we get those across.
So we have a few dozen structured AI pilots that we're pulling through. Some of those are tech-driven. Some of those are use-case-driven. All of those are validated. And so we have those in our horizon-three and horizon-two portfolios. The point is, this is a garden, as opposed to building a building. And so if either the maturity of one of those solutions comes forward or the need pulls it forward, then we can fast-forward some of those.
Mr. Allen: Can you help me understand, like, a little bit how big these pilots are? Is this, like, $100 billion or like 30 bucks, somewhere in between?
Mr. Fanelli: Closer to the latter. (Laughter.)
Ms. Moore: Thirty-five.
Mr. Fanelli: Yeah, yeah. (Laughs.) Most of the pilots are scoped for three months' worth of work. We're talking generally about commercial off-the-shelf solutions and something on top of that. So what we'll do is we'll take a validated use case and product and then pull it through. So particular examples, potentially on the less sexy side: no one enjoys the help desk. We have many of those cases where we have streamlined towards Amelia AI, and then scaled that. That has made it from –
Mr. Allen: What’s Amelia AI, for those who are not familiar?
Mr. Fanelli: Yeah. This is a help-desk tool that does more LLM-type functions, that reduces wait time and time to solution. So this is abstracting away some of the pain that is customer service. On the back end –
Mr. Allen: And I think it sort of highlights how you kind of have that enterprise-facing role. You’re, on the one hand, very close to touching some war fighter applications with your collaboration with CENTCOM and elsewhere, but you’re also, you know, part of the CIO organization and are familiar with those types of challenges, and where technology can move the needle.
Mr. Fanelli: Anywhere that we can remove friction. And so this is, like, one abstraction of that. And then we have different developer teams who are using multiple MLOps – machine learning operations – pipelines. And so for this particular case, where we work with the Harbinger team or the Overmatch team, where we can show, hey, here's how fast we can deliver, or here's the model time, can we connect those different groups? That's a pilot that has since resulted in a shared service and, I'm hoping, soon an enterprise service.
Mr. Allen: Alex, you want to add anything else?
Dr. Miller: I love this conversation because you said it earlier, and I think we all sort of high-fived each other in our minds: AI is not an end. It is a means. And what we've been really deliberate in thinking about is, if you're going to use AI as a means, it means you're doing a couple things. It means you are making a decision, which means you have to know what the decision is. You have to know the metrics by which you're going to make the decision – and sometimes it's a yes/no, I'm going to do this or I'm not – and then the risk and confidence necessary to make the decision. And that is all generally the commander's risk to make or take.
So I hate to say we've been doing AI, because that's not what we're trying to do. What we're trying to do is make as many people's lives easier as we can. And a couple examples. Within the headquarters, we have this thing called ETMS2. I can't remember what it stands for. It used to be the task management tool. But it's the most bureaucratic system ever made.
Mr. Allen: So this is task management as the tasker, the system?
Dr. Miller: The task – yes.
Mr. Allen: Yeah, OK.
Dr. Miller: But, if you open it up, there's a bunch of things you have to read. And there's lots of documents. And there's summaries of those documents that some staff action officer has to type up. We're going to start applying LLMs to that. Very simple use case specific to that technology. So we're not trying to make LLMs do things that LLMs weren't made to do – just summarize: Hey, here's what's in those documents. Save staff time.
Mr. Allen: Now, I'm a little bit skeptical here. And the reason is, you know, large language models are trained on a big, big data set – historically, you know, all the open internet. And then if you want that to touch DOD data, you have to have some kind of cybersecurity authority to operate.
Dr. Miller: Oh, I didn’t say put it on the internet.
Mr. Allen: Fascinating. That was – that was going to be my question. So, like, how is it that both of you have had LLMs actually touch real DOD networks, real DOD data? You know, did you blindfold the ATO officer? What was the approach here?
Dr. Miller: No. So the way that we did it is Leo Garciga, our CIO – he and I have been battle buddies since he was the JIEDDO J6, like, a decade and a half ago. We put it on the network. We put it onto the DOD's enterprise. And then, through retrieval-augmented generation, RAG, you feed it the data that you want it to touch, and then it provides some recommendations. So it's not just, hey, all of wiki. It's actually: Hey, train on the things that give you context for the language, and then here are the things we actually want you to leverage against that.
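(Editor's note: a minimal sketch of the RAG pattern Dr. Miller describes – retrieve the documents relevant to a question, then hand only those to the language model. TF-IDF retrieval is used here for self-containment; production systems typically use learned embeddings, and call_llm is a hypothetical stand-in for an accredited model endpoint on the network.)

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
        """Return the k documents most similar to the query."""
        vectorizer = TfidfVectorizer()
        doc_vectors = vectorizer.fit_transform(documents)
        query_vector = vectorizer.transform([query])
        scores = cosine_similarity(query_vector, doc_vectors)[0]
        ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
        return [documents[i] for i in ranked[:k]]

    def call_llm(prompt: str) -> str:  # hypothetical accredited endpoint
        raise NotImplementedError("wire this to the on-network LLM")

    def answer(query: str, documents: list[str]) -> str:
        # Ground the model in retrieved context rather than the open internet.
        context = "\n---\n".join(retrieve(query, documents))
        prompt = (f"Using ONLY the context below, answer the question.\n"
                  f"Context:\n{context}\n\nQuestion: {query}")
        return call_llm(prompt)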
So I talked about the task management. We’re doing a very similar thing. We have this process called the AROC, the Army Requirements Oversight Council. The chief is the AROC chair. He approves requirements. Well, what we don’t do very often is decertify requirements. So since 1979, the Army has generated 1,905 requirements. That’s everything from the M67 frag grenade all the way up through the most sophisticated technologies that you’re seeing going into CENTCOM.
What the chief and General Rainey, the AFC commander, said was: Hey, how do we get rid of some of those and give the secretary and the chief decisions based on our resourcing? The easiest way to do that is to take all those requirements documents – which are just documents – and actually feed them to an LLM so you can start asking: Hey, what requirements are related here? Again, saving staff action officers' time. So that's back-end stuff.
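(Editor's note: a hypothetical sketch of that "what requirements are related here?" question – flag pairs of requirements documents that look similar so a staff officer can review candidates for consolidation or decertification. The similarity measure and threshold are illustrative.)

    from itertools import combinations
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def related_pairs(docs: dict[str, str], threshold: float = 0.6):
        """docs: requirement name -> full text. Returns similar-looking pairs."""
        names = list(docs)
        vectors = TfidfVectorizer(stop_words="english").fit_transform(docs.values())
        sims = cosine_similarity(vectors)
        return [(a, b, round(float(sims[i, j]), 2))
                for (i, a), (j, b) in combinations(enumerate(names), 2)
                if sims[i, j] >= threshold]  # a human makes the actual call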
On the warfighting side, one of the coolest applications of AI I saw recently: we went down to the Joint Readiness Training Center, JRTC, at Fort Johnson with Second Brigade, 101st Airborne. They are one of our transform-in-contact brigades. And they worked with our software factory and the Army Artificial Intelligence Center, and built an HLZ/PZ loadout app for the TAC. I said a whole bunch of acronyms, but what that really means is, if you're doing a long-range air assault with helicopters, you want to know exactly where all the stuff and people on those helicopters are, from the time they leave to the time they get to where they're going.
And right now, that is done in notebooks and Excel. (Laughter.) And I don't mean, like, Jupyter notebooks. I mean, like, green field notebooks. Well, wouldn't it be really cool if you loaded all of those things out, and then you knew exactly where everything was – if you wanted this radio, you go to the right pallet because the app knew where it was – and it automatically load-balanced all of the aircraft and all the pallets? It's not a sexy use case. But it saved those troopers time – and I mean lots of time. And it made their lives easier, because when their commander said, I need this kit, they went: It's on this pallet, on this helicopter. We're going to get it. Just one of the best – because not only were the soldiers excited to use it – and I mean visibly lighting up excited – the soldiers who built it were excited to support their peers. It was awesome.
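(Editor's note: a hypothetical sketch of the load-balancing idea in that app – greedily assign pallets to aircraft so weights stay roughly even, keeping a manifest of where each item ends up. A real planner would also respect center-of-gravity limits, hazmat rules, and chalk order.)

    import heapq

    def load_balance(pallets: dict[str, int], n_aircraft: int) -> dict[str, list[str]]:
        """pallets: name -> weight (lbs). Returns aircraft -> assigned pallets."""
        loads = [(0, f"aircraft-{i + 1}", []) for i in range(n_aircraft)]
        heapq.heapify(loads)
        # Heaviest-first greedy keeps the totals close to even.
        for name, weight in sorted(pallets.items(), key=lambda kv: -kv[1]):
            total, tail, assigned = heapq.heappop(loads)  # lightest aircraft so far
            assigned.append(name)
            heapq.heappush(loads, (total + weight, tail, assigned))
        return {tail: assigned for _, tail, assigned in loads}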
Mr. Allen: That’s a wonderful story.
Mr. Fanelli: And foot-stomping that, right: some of the best use cases of AI are more functional and outcomes-focused than they are sexy, right? We don't do AI so we can come talk at a panel like this, right? (Laughter.)
Mr. Allen: Yeah, what people want, right, is, like, the sort of AI technology that looks like –
Dr. Miller: Skynet.
Mr. Fanelli: Skynet.
Ms. Moore: Skynet. (Laughter.)
Mr. Allen: Well, I was going to say, it looks like a James Bond villain, right? Like, we’re going to have some kind of AI that’s a laser off the moon, and that’ll allow us to, you know.
Mr. Fanelli: And so we'll talk about that in the SCIF. (Laughter.) But the idea here is some portion of every person's day is spent suboptimized. And where we can remove that friction and unleash humans to do human things and perform at a higher level, we want to do that. So to answer your earlier question: We have a lot of data. We have exquisite sensors that kick off more data than we can handle. So in some of these cases we're using ML on those. We're trying to do more edge compute so that we can process locally, decentralize the aspects that can be decentralized, and then kick others back. We're doing some really interesting things with, like, CUI classification – controlled unclassified information.
Like, no one wants to be a human classifier. If we can do this, we have some RAG applications where: Here's the first 80 percent – the Pareto share of your brief or your report. We want to pilot that at larger scale. And then we're working with the Air Force and the Army on some of the GPT pilots that are, like, secured without external data.
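(Editor's note: a hypothetical sketch of that "first 80 percent" triage pass – pattern-match text that looks like it needs a CUI marking and route it to a human reviewer. The patterns are illustrative, and, as the panel stresses next, a person still checks every output.)

    import re

    CUI_PATTERNS = {
        "explicit marking": re.compile(r"\b(CUI|FOUO|Controlled Unclassified)\b", re.I),
        "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "distribution statement": re.compile(r"\bDistribution (Statement )?[B-F]\b", re.I),
    }

    def triage(paragraphs: list[str]) -> list[tuple[int, str]]:
        """Return (paragraph index, reason) pairs to route to a human reviewer."""
        hits = []
        for i, text in enumerate(paragraphs):
            for reason, pattern in CUI_PATTERNS.items():
                if pattern.search(text):
                    hits.append((i, reason))
        return hits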
Ms. Moore: I think the common thread across what you guys have described is treating AI in a fundamentally different way, where it is not handing you the answer; it is a first triage, or a cog in a much broader wheel, handing you a small piece of what ultimately needs to happen. And if we frame it that way for our users, it'll help us all mitigate risk, because they will understand that they continue to hold a responsibility to check the outputs. Because I worry, again, sometimes, that we say AI is going to arrive and we're going to hand it to you in a magic box, and it does your job and you can now leave the room. And the reality is that you must – must, not should, must – check the outputs every step of the way.
And so whether it is that first look at a document that someone in the Army is having to read, or whether it is looking at classification and giving that first cut of, I think classification might be this, I think declassification might be this – expressing that to the users and setting their expectations appropriately saves us an inordinate amount of time down the road, where otherwise we might build muscle memory where people say, well, this runs and I don't have to be involved.
Mr. Fanelli: And that's huge. I mean, we talked about responsible AI and trustworthiness, including over-trust, and what it does to outcomes. It means that we have a better sense of what the outcome is as a result of that human-machine teaming. If we can get smarter at that, then the effects on cognitive load and where we focus our attention become, again, a more data-driven decision – even if that data is feedback from users. We can do A/B testing, we can do side by side. Those loops are quicker.
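(Editor's note: a minimal sketch of the A/B comparison Mr. Fanelli mentions – did tool B's task-success rate beat tool A's by more than chance? A two-proportion z-test using only the standard library; the counts are made up, and a real evaluation would also weigh effect size and user feedback.)

    from math import erf, sqrt

    def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return z, p_value

    # e.g., 41 of 60 users succeeded with tool A vs. 52 of 60 with tool B
    z, p = two_proportion_z(41, 60, 52, 60)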
Mr. Allen: And I love that you're talking, you know, about productivity improvements, right – giving people back their hours – whether or not that's the use case that's going to go on the next recruitment commercial because it's so exciting. But, like, that's what really, actually is making service members' lives better.
Sorry, Alex. You were going to say something?
Dr. Miller: The notion of trust is interesting and weird, and it’s weird because lots of people have opinions. I am still the type of person that when I turn on a calculator I hit 2+2= just to make sure it’s still four, because it’s just a – it’s a –
Mr. Allen: It could have changed.
Dr. Miller: Yeah, exactly. But when we talk about trust, we act like we never trust systems ever, and that's simply not true. What we are trying to do is make sure that we don't break a user's fundamental trust in our ability to deliver up front. And Sky talked about computer vision. Computer vision is the hello, world of AI in the Department of Defense. But the first time you give a user not just a not-good answer but a really bad answer, they will never trust that system and that box – potentially, you as a deliverer – ever again. And it's not because you're bad at your job or the system's bad; it's because they don't want to spend more time being paranoid, double-checking the answers, because of one bad experience. So trust is not necessarily about providing a whole bunch of metrics; it's about that first time or second time or third time that a user gets something and they don't get crushed by their boss, they don't have a really bad output, somebody doesn't lose life or limb because of an output of a system. But we started talking about trust, and I just wanted to talk about that a little bit.
Mr. Allen: That’s great.
So I love that we have the three of you here. It’s a real reflection of the collaboration that is, you know, maturing between combatant commands and the services when it comes to the technology part of the story. I just want to, you know, ask, hypothetically, if there was a fourth or a fifth organization at this table, you know, who are your other big collaborators that are helping you, you know, advance the technologies that you’re trying to drive outcomes with?
Ms. Moore: We have to list CDAO first. The Chief Digital and AI Office has been an exceptional partner from start to finish. They have provided us a drumbeat for our exercises through their Global Information Dominance Experiment. They have provided us opportunities and consulting services, so to speak, about how to think about our own technology-integration journey and how we mature as a command. They have provided training opportunities. CDAO has been an exceptional partner.
There are also other services that are not seated at this table that involve themselves in Digital Falcon Oasis. The Marine Corps has been a really awesome partner for us. The Air Force is also heavily involved through our Air Force component. We really are very fortunate across the board.
And then NGA. NGA is so important to mention because, for computer vision, they hold one of the most significant programs in that space right now, Maven. And they have been excellent partners in feedback with us, because I think when we talk about Desert Sentry there is a use case for that low-code/no-code, really quickly iterating threat, but there's also an important use case for more sustained threats where you're looking for the same thing consistently. And the models that they are providing do that to an exceptional degree, and they have always been really open-minded to our feedback on where things could improve – on the speed and the way that users experience their tools.
Every partner that I'm describing here, and the ones at this table, we find valuable when they are open-minded to our feedback and treat this as an intellectual partnership. It is neither that they are handing man/train/equip to us and then backing away from it, nor that we are simply doing our operations in a vacuum; it's when we have a really constructive dialogue, which I think we've built over these exercises, that we all get better.
Mr. Fanelli: And I would say the Defense Innovation Unit and USD(R&E), so Research and Engineering, have allowed us to put more irons in the fire. We needed more shots on goal. Everyone says it’s OK to fail fast on these low-risk opportunities, but if you only take two shots a year you’re a zero. If we have 25, if we have 50 irons in the fire, then we can do portfolio management and pull the best things through.
And so specifically this year, DIU and then USD(R&E), through APFIT and some other programs, have allowed us to pull more forward for transition, or to get over the valley of death. So I'd say those battle buddies – if we're good at innovation but we struggle at innovation adoption, they are one way that we're paving the path.
Mr. Allen: Is there anything you’d specifically – like, because you talked about having, you know, more irons in the fire. Can you just sort of give an example of an area where you’ve collaborated with DIU, for example?
Mr. Fanelli: Sure. So the idea of how we're doing cyber has changed drastically for us over the last 18 months. And so we have some cyber anomaly detection, but we have gone to DIU and said: Hey, most people are fully tasked with their day job; what are some leap-ahead opportunities? We went out and contracted with the Defense Innovation Unit on some hybrid questions: How do we go from cloud to on-prem, what does the SIPR-to-NIPR transfer process look like, and can you help us envision that? They went out, they pulled in DISA, they pulled in the Army, and we did an evaluation. And we are looking at use cases right now where that applies to many different communities within the Navy, and then potentially has an enterprise stake.
There are other DIU projects that we're doing – what is it, mobile virtual network operations? And so within Guam we pulled in a technology that was below our cut line. They helped with funding and piloting of that. And this is something that is allowing us to connect with more partners and increasing our resilience there. So where there are things that aren't necessarily making it above our cut line, or they're on our radar but we can't get there, they have force-multiplied – specifically in cases that help the combatant commands and help us shape some of what could be within our PEOs and program offices and portfolio offices.
Mr. Allen: Alex, anything you’d like to add?
Dr. Miller: I actually have three organizations that I would bring.
First one is OSD’s Office of General Counsel, OGC.
Mr. Allen: Really? Not – you know, not many people are – (laughter) –
Dr. Miller: No, and it’s – and it’s –
Ms. Moore: A gun is to his head right now. (Laughter.)
Dr. Miller: And it’s because we do a lot of technology gatekeeping in the DOD based on the federal management regulations. And what I mean by that is –
Mr. Allen: Yeah. Some people – some people, like, look at the CIO organization like they’re the bad guy, but usually they’re following some random law written in the 1990s.
Dr. Miller: They’re following some regulation.
Mr. Allen: Yeah.
Dr. Miller: And what I mean by that is, if I said I need a cup and this cup existed, but I have never used it in combat or warfighting, somebody's going to go: That cup is TRL-4 – technology readiness level 4.
Mr. Allen: Prove it, yeah.
Dr. Miller: And I'm going to go: No, it's real. It's right there. I can use it. I drank out of it. But because of the way the rules are written, somebody's going to go: No, no, no. You need RDT&E funding to bring that cup to bear. Even though I'm going: No, no, no, I'm using it. (Laughter.) So we're just bringing OGC along with us.
The second organization –
Mr. Allen: Because they’re part of your ability to hack the bureaucracy at scale, right?
Dr. Miller: Right. Right, that’s exactly it.
Mr. Allen: So push back on regulations. OK.
Dr. Miller: And shame on us if we are not helping our colleagues, because not everyone's a technologist, and they shouldn't have to be to access technology. So shame on us if we are not going to them and saying: Hey, here's what's new, here's what's relevant, here's what we mean by these terms.
And I'll give you one: development as written in the FAR and software development are fundamentally different concepts; they just use the same term. So it's making sure that they understand what we're doing, and why, and the rationale – because they work on behalf of the secretary of defense, and they are protecting the secretary and the warfighters. So making sure that they're not taking undue risk is part of our job as well.
The second one is the Hill. Like, we have to continue to have conversations, because everything that we've described is about flexibility and adaptability – adaptability to the fight and flexibility to move technology to the warfighter. And right now, that is not how we are funded. That is not how the budget works. We build line items, we do a program objective memorandum five years out, and then we pretend that we're really good at guessing what's going to happen next. So we're making sure that our colleagues on the Hill – who think that we have the flexibility to do this, who are really turned on to say, hey, go forth and do it – understand that at the end of the day we're still bound by the interpretation of those rules.
And then the third one – and this may be a little bit cliché – is making sure industry knows that we want them at the table. They are our partners. We are not trying to shut anyone out. The government is really bad at developing most technology, and we are really bad at competing with industry, so we shouldn't do it. So it's making sure that they know: Hey, this is still a partnership; we are still Team America.
Mr. Fanelli: To that point, I mean, you have two sensational leaders here who are super-connectors. And so we've all found "yes, if" people throughout the community, and the challenge now is how do we institutionalize that and how do we build on it. I mean, there are compounding wins coming out of this – like Schrödinger's cat gets the mouse – but can we make this easier for everybody? And so those are the next steps: getting the operational wins, getting the outcomes, and then lowering the barrier to entry for winning.
Mr. Allen: I think that's a lovely point to end on. So I want to thank the three of you for coming to CSIS and sharing your insights and experiences – really moving the needle, as we said, on these incredible challenges. And I wish you all good luck with the three series of exercises in deserts and oases that we talked about, and with the broader challenge of accelerating technology adoption for mission impact.
So this concludes our event. The replay will be available on YouTube and on the CSIS website, CSIS.org. Thanks.
(END.)