Decisionmaking at the Speed of the Digital Era


The Issue

The United States has stated it is in a strategic competition with China. Analysts and practitioners alike identify the need to rapidly iterate through concepts and capabilities to develop new, more effective means of engaging in that competition. At present, however, the Department of Defense underutilizes publicly available data and the software development community to build tools that enable faster modeling, hypothesis testing, and variability analysis than traditional wargaming or modeling alone. This brief describes the speed and utility of developing a simple software tool to stress test a hypothetical People’s Republic of China (PRC) surprise attack against U.S. facilities in the Indo-Pacific.


Introduction

Russia’s invasion of Ukraine is reminding the world that missiles can help militaries achieve objectives even when ground and air forces are kept far away. The degree to which Russia’s strikes are steadily destroying Ukrainian targets highlights the importance of considering how such weapons may shape the conduct of future conflicts in Europe and in other regions.

The Biden administration’s national defense strategy, still classified and with only a brief summary available to the public, identifies China as the United States’ primary security concern. Yet even as the Department of Defense (DOD) talks the talk of preparing for future conflict, it remains far too resistant to adapting the systems that served it well in the 30 years since the end of the Cold War, or to picking up the pace.

The past 10 years have seen a steady cadence of reporting on highly classified and time-consuming wargames showing that the United States consistently “loses” to China. The results, easily summarized as “bad,” lack sufficient publicly available detail to enable informed debate on how best to resolve the potential shortcomings.

Wargames that are classified or complex can offer benefits to policymakers, though frequently only to a small number of highly technical individuals. Open-source analysis and DOD’s own publications, however, create a wealth of information which can—and should—be closely analyzed to encourage DOD leaders, lawmakers, and the public to consider how best to prioritize limited resources, including money, time, and personnel. Combined with simple and affordable modern software capabilities, this information should be leveraged to improve and focus more time-intensive wargaming efforts.


That is why, over a six-week period, the authors developed a relatively simple and low-cost tool to assess what might happen in the first hours of a potential future conflict in the Western Pacific. The model assumes China conducts a surprise missile attack using only its land-based People’s Liberation Army Rocket Forces (PLARF). Drawing on DOD’s annual China Military Power Reports and available data on PLARF operating locations, organization, and capabilities, the study team created an algorithm to compute the most likely U.S. and allied targets along with a rough assessment of the operational consequence of such strikes.

Despite an initial hypothesis that, “it won’t be that bad,” this analysis suggests that early phases of a conflict could be very bad for U.S. forces and facilities in the Western Pacific.

Methodology

Leveraging publicly available data on PLARF ranges, operating locations, and inventories, the study team created a model for PLARF capabilities able to engage in a first strike against U.S. and allied facilities in the Western Pacific and as far away as the continental United States. Those facilities were drawn from previous work and imagery analysis and included locations, runways, headquarters facilities, missile defenses, and estimates for other forward-deployed forces. First, the algorithm calculated which U.S. and allied bases were in range of a given launch site (the model does not currently account for the mobility of PLARF assets). Then, the team assessed a value for each site based on the total U.S. capability and capacity there as well as the likelihood of the strike disabling or destroying its target (mission kill). Finally, the algorithm iterated through launch sites and potential targets to determine the optimal combination of missiles and targets to maximize the PLARF’s ability to neutralize U.S. forces.
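The three-step targeting logic described above (range check, target valuation, then iterative assignment) can be sketched in simplified form. All launch sites, ranges, values, and kill probabilities below are invented for illustration; they are not the study team's data or actual code:

```python
# Illustrative sketch of the targeting logic: filter by range, value
# each target, then greedily assign missiles to the highest-value
# reachable targets. All numbers here are placeholders.

LAUNCH_SITES = {
    "site_a": {"range_km": 1500, "missiles": 40},
    "site_b": {"range_km": 4000, "missiles": 20},
}

TARGETS = {
    "base_x": {"distance_km": {"site_a": 900, "site_b": 2500},
               "value": 10, "p_kill": 1.0},
    "base_y": {"distance_km": {"site_a": 3200, "site_b": 3500},
               "value": 4, "p_kill": 1.0},
}

def in_range(site, target):
    """A launch site can engage a target only if it lies within missile range."""
    return TARGETS[target]["distance_km"][site] <= LAUNCH_SITES[site]["range_km"]

def plan_strike():
    """Greedily pair launch sites with targets, highest expected value first."""
    plan = []
    remaining = {s: d["missiles"] for s, d in LAUNCH_SITES.items()}
    ranked = sorted(TARGETS.items(),
                    key=lambda kv: -kv[1]["value"] * kv[1]["p_kill"])
    for target, _ in ranked:
        for site in remaining:
            if remaining[site] > 0 and in_range(site, target):
                plan.append((site, target))
                remaining[site] -= 1
                break
    return plan
```

A real version would assign salvos rather than single missiles and weigh mission-kill probability per missile type, but the structure — range filter, value ranking, iterative allocation — mirrors the approach described in the text.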

The program offers users four scenario choices for a PLARF first strike on U.S. and allied bases in the Pacific. From most to least expansive, scenarios would allow the PLARF to

  1. Strike at any base within range of PLARF missiles, including Guam, Hawaii, Alaska, and the lower 48 states;

  2. Strike bases in the Pacific, excluding the lower 48 states but including Alaska, Hawaii, and Guam;

  3. Strike bases including Guam but excluding Alaska and Hawaii; and

  4. Strike bases not in U.S. states or territories.
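The four scenarios amount to progressively tighter filters on the candidate target list. A minimal sketch, with invented base records and region labels (the actual program's data model is not public):

```python
# Sketch of the four scenario filters, from most to least expansive.
# Base records and region labels are illustrative placeholders.

BASES = [
    {"name": "kadena",    "region": "japan"},
    {"name": "andersen",  "region": "guam"},
    {"name": "hickam",    "region": "hawaii"},
    {"name": "elmendorf", "region": "alaska"},
    {"name": "lewis",     "region": "conus"},
]

# Regions excluded from targeting under each scenario.
EXCLUDED = {
    1: set(),                                  # any base in range
    2: {"conus"},                              # exclude lower 48
    3: {"conus", "alaska", "hawaii"},          # Guam still targetable
    4: {"conus", "alaska", "hawaii", "guam"},  # no U.S. states or territories
}

def candidate_targets(scenario):
    """Return the names of bases targetable under a given scenario."""
    return [b["name"] for b in BASES if b["region"] not in EXCLUDED[scenario]]
```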

Limits

This program and the resulting analysis are a proof of concept for digital decision modeling and are not intended to be a definitive answer to exactly how PLARF assets might operate. Several assumptions were necessary to develop this initial prototype, but they may limit the utility of the current program to assess risk. These assumptions include the following:

  • PLARF missiles are assumed to be 100 percent accurate.

  • The missiles target fixed U.S. facilities rather than U.S. assets.

  • PLARF assets are all assumed to be static.

  • U.S. missile defenses are accounted for in a very rudimentary way.

  • The model assumes no prior warning for U.S. or allied forces.

  • The United States receives no benefits from space or cyber capabilities.

In addition to addressing some or all of the above limitations, additional refinement—or built-in flexibility—could offer both greater confidence in assessments and means for conducting sensitivity analysis across a range of variables. For example, within the program’s constraints, the importance of a submarine base in the Second Island Chain is roughly 10 times higher than that of a command and control node in the same area. Such a value was analytically useful to ensure the code ran correctly. Refining those values or allowing the user to define those values, however, would enhance the exercise’s utility and increase decisionmaker confidence in the outputs. Similar assumptions are present at several points in the program.
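Allowing users to define those values could be as simple as exposing the weighting table as an input to each run. A hedged sketch — the default weights below are placeholders for illustration, not the values used in the study:

```python
# Sketch: exposing target-value weights as user-editable inputs so the
# same model can be rerun under different assumptions for sensitivity
# analysis. Default numbers are arbitrary placeholders.

DEFAULT_WEIGHTS = {
    "submarine_base": 10.0,
    "c2_node": 1.0,      # command and control
    "runway": 5.0,
}

def target_value(asset_type, weights=None):
    """Look up an asset's value, with user overrides taking precedence."""
    merged = {**DEFAULT_WEIGHTS, **(weights or {})}
    return merged.get(asset_type, 0.0)
```

An analyst skeptical of the 10-to-1 ratio could rerun the model with, say, `target_value("c2_node", {"c2_node": 4.0})` and observe how the resulting target list shifts — exactly the kind of sensitivity check the text describes.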

It is important to note that the program excludes PLARF locations believed to be nuclear sites (green stars) from the analysis. This was based on a judgment that a nuclear first strike—whether plausible or not—would lead to a different scale of U.S. response and so was not helpful to analyze. There is some uncertainty in the open-source literature about which PLARF missile locations are purely conventional or purely nuclear. Because of the ambiguity, there is a chance that the program assigns a nuclear site to strike a U.S. base. Dual-capable bases and assets are assumed in this model to fire conventional warheads as a “worst case of the best case.”

Results

Employing the scenarios described above, and subject to its limitations, the software provides results that can be reviewed and analyzed. The first (including the lower 48 states) and second (including Alaska, Hawaii, and Guam but excluding the lower 48 states) scenarios returned the same results, as the PLARF did not target bases in the continental United States. The third scenario (including Guam but excluding Alaska and Hawaii) resulted in a shift by the PLARF to attack U.S. facilities in Busan in lieu of attacking U.S. bases in Hawaii. The fourth scenario (excluding any U.S. states or territories) shifted PLARF missiles from striking Guam to striking U.S. facilities in Misawa and Yokota.

Validation

Despite the low-fidelity nature of the current model, it provided several results that demonstrate the utility of this approach. First, it provides detailed information on targeting choices, missiles fired by launch base, and the number of missiles fired by type. Second, it provides a way to assess, roughly, the effect on U.S. bases of such an attack. Third, it demonstrates that it is possible to leverage digital-age technology and open-source data to develop assessments that are somewhere between the “back of a napkin” and a highly classified combat simulation. Finally, this model was developed with a minimal amount of software developer time—less than 80 hours—and most of that time was spent gaining familiarity with the weapons systems and concepts of employment to understand how the logic of missile targeting could work. This final item is important because future iterations, built by a programmer familiar with the concepts or capabilities, could be developed very rapidly.

Figure 4 shows a portion of the targeting selection output with the number of missiles fired at a given target by a given base. This information is intended to avoid the black-box effect that is often, and rightly, a concern in computer-generated models. Bases are composed of a harbor, headquarters, and runway. Targeting results are printed for each.

Readers should note that the software returns values for a location’s harbor, headquarters, and runway even if it does not have such an asset. For example, an inland U.S. base may show that its harbor is being targeted with zero missiles in Figure 4, or that its harbor was destroyed in Figure 5. That is simply how the proof-of-concept software accounts for a location without a capability, not an actual use of missiles to destroy it.

Figure 5 shows the anticipated effect on U.S. locations for the hypothesized attack. Outcomes are presented for each of the three components (harbor, headquarters, and runway) unless all three components are disabled, in which case the base will return “base is disabled,” as seen in the Yokota example below. It is worth reiterating that the model likely overestimates the accuracy of PLARF missiles and may underestimate the effect of U.S. missile defense capabilities.
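The per-component reporting convention described above can be sketched as follows. The component names come from the text; the status representation is an assumption about how such a report might be generated:

```python
# Sketch of the per-base reporting logic: report each of the three
# components separately unless all are disabled, in which case the
# output collapses to a single "base is disabled" line.

COMPONENTS = ("harbor", "headquarters", "runway")

def report(base_name, status):
    """status maps each component to True (disabled) or False (intact)."""
    if all(status.get(c, False) for c in COMPONENTS):
        return f"{base_name}: base is disabled"
    lines = [f"{base_name}/{c}: {'disabled' if status.get(c) else 'intact'}"
             for c in COMPONENTS]
    return "\n".join(lines)
```

Note that, as the text cautions, a location without a harbor would still get a harbor line under this convention — an artifact of the proof-of-concept data model rather than a substantive result.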

Benefits

Software tools such as this, even when relatively simple, can add value by rapidly cycling through potential variations in strategies, capabilities, or operating environments. Decisionmakers are understandably concerned about turning over human judgment and experience to machine calculations. At the same time, many defense and military leaders speak about the pace at which the world is changing. To keep up with—and possibly even lead—that change, U.S. decisionmaking will need to be accelerated beyond what traditional methodologies can do alone. Many of these concerns could be addressed relatively simply in early-stage models in ways described in this brief.

Such software enables wargame developers to focus on a better-identified concept of operation and provide a range of alternative benefits to decisionmakers, including the ability to

  • rapidly test the impact of various capabilities or investments to assess them against each other;

  • target wargaming on better-identified questions or concepts;

  • test assumptions about operational benefits in one theater versus another;

  • incorporate complexities as constraints to be identified and worked through rather than hand-waved away; and

  • provide an unclassified way to talk through implications of major decisions that are currently constrained to highly classified engagements.

By incorporating a series of different assumptions—for example, by having the scenarios vary the capacity of missile defense or adjusting the hardening of U.S. facilities—the software could rapidly produce a range of different outcomes, allowing decisionmakers a clearer sense of the impacts of one decision versus another. A follow-on step could include assessing the cost, time, and political feasibility of the options.

By iterating through a range of potential decisions prior to beginning wargame development, wargaming professionals could design wargames and tabletop exercises to explore more targeted aspects of potential decisionmaking or operational engagement. In this way, software tools should be seen as complements to, and accelerators for, wargame development. Wargames are large efforts to develop and (as DOD often employs them) are time-consuming to execute. Ensuring that those efforts are focused on the highest-priority possibilities will allow faster, more informed decisionmaking.

Software tools can be rapidly adapted to explore similar situations in different theaters. For example, the software for this analysis takes structured data inputs on capabilities and locations and works through them based on identified criteria. Running a scenario from a different theater is as simple as using different capability and location inputs and adapting the targeting prioritization criteria. In practice, this suggests that a computer program could be rapidly repurposed from a threat in one theater to a threat in another in a matter of hours, not weeks.
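That data-driven design can be sketched as a model that reads its capability and location inputs from files and accepts theater-specific prioritization criteria as a parameter. File layout and field names here are hypothetical:

```python
# Sketch of a data-driven model: swapping theaters means swapping the
# structured input files and the prioritization function, not rewriting
# the program. File and field names are hypothetical.
import json

def load_scenario(launchers_path, targets_path):
    """Load launcher and target records from structured JSON inputs."""
    with open(launchers_path) as f:
        launchers = json.load(f)
    with open(targets_path) as f:
        targets = json.load(f)
    return launchers, targets

def run_model(launchers, targets, prioritize):
    """Rank targets by a theater-specific prioritization function."""
    return sorted(targets, key=prioritize, reverse=True)
```

Under this structure, moving from an Indo-Pacific to a European scenario would mean supplying new launcher and target files and a new `prioritize` function, leaving the core allocation logic untouched.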

The realities of conflict are often abstracted or “hand-waved” away in wargames to focus on higher-priority concerns. These issues can include challenges in communications, access, or logistics. Well before a crisis begins, software models provide the ability to identify the operational limitations of such constraints, imposed either by the operational environment, attrition, or capability or capacity shortfalls at the onset of a contingency.

Perhaps most helpfully, models built on unclassified data can provide a way to discuss real issues facing DOD and do so in a way that allows senior decisionmakers and congressional overseers to ask more informed questions—and be prepared to hear and better evaluate the responses—enabling a higher-paced and better-informed decisionmaking process. An additional possible benefit would be leveraging the open-source software community to review code—or portions of code—to refine and improve the software, allowing analysts to benefit from the current cutting edge without the cumbersome requirements-setting process.

Future Iterations

This analysis rests on a proof-of-concept piece of software. Developing the software further could provide more credible and plausible information on which to understand the future operating environment and how prepared DOD is to meet it. Several specific areas stand out for more detailed efforts:

  • Enhance the sophistication of the missile model. This includes incorporating mobility of assets, enabling variation in the assumption of PLARF missile accuracy, and employing more sophisticated U.S. missile defense capability modeling.

  • Explore the potential impacts on the survivability of a wider force. Rather than just considering a U.S. force during a surprise attack, a model could account for U.S. and allied forces across the theater over the initial stages of a conflict.

  • Adapt the model to explore current and expected future logistics force capability and capacity. This could include both organic military capabilities and possible civilian logistics capability, as well as the operation of both in contested environments.

  • Move from single-answer results to a range of probabilistic results. The current program returns a single scenario that it deems most likely to occur, but future versions could be modified to incorporate degrees of certainty in the results. This would make decisions more transparent and offer some accountability for the uncertainty present in war.

  • Use the model in new theaters. The analysis could be applied to new theaters of operation, including U.S. European Command and U.S. Central Command.
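The move from single-answer to probabilistic results could be done by sampling uncertain parameters, such as missile accuracy, across many runs. A minimal Monte Carlo sketch — the 0.7 to 0.95 accuracy band is an arbitrary placeholder, not an estimate of PLARF performance:

```python
# Monte Carlo sketch: instead of fixing accuracy at 100 percent, sample
# a hit probability per run and report the distribution of outcomes.
# The accuracy band is an arbitrary placeholder, not an estimate.
import random

def simulate_strike(shots, accuracy, rng):
    """Count hits for a salvo given a per-shot hit probability."""
    return sum(1 for _ in range(shots) if rng.random() < accuracy)

def outcome_distribution(shots=10, runs=1000, seed=0):
    """Run the strike many times under sampled accuracy assumptions."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        accuracy = rng.uniform(0.7, 0.95)  # sampled, not fixed at 1.0
        results.append(simulate_strike(shots, accuracy, rng))
    return results
```

Presenting a distribution of outcomes rather than a single answer would, as noted above, make the model's uncertainty visible to decisionmakers instead of burying it in a point estimate.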

In each of the above follow-on areas, the opportunity to rapidly identify and discuss limitations (and opportunities) for the current joint force, without being beholden to service-specific assumptions and models, would be a helpful overlay to existing classified work. Such steps could enable a greater ability to compare relative benefits of dissimilar investments for the overall joint force—for example, whether the next marginal investment would be better made on fighter aircraft, surface ships, missile defense, or logistics capabilities.

By enhancing DOD’s capability to make decisions at twenty-first-century speeds, the United States will be better positioned to make more informed investments in those areas that best enable competition. Tools of this nature offer analysis at the speed of relevance with the transparency needed to communicate in and across government.

Laura Bocek is a former intern with the Defense-Industrial Initiatives Group at the Center for Strategic and International Studies (CSIS) in Washington, D.C. John Schaus is a senior fellow with the International Security Program at CSIS.

This brief is made possible by general support to CSIS. No direct sponsorship contributed to this brief.

CSIS Briefs are produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2022 by the Center for Strategic and International Studies. All rights reserved.
