Managing Risk and Technology Insertion in Crewed Tactical Jet Engines


The Issue

China’s investments in its military turbofan industrial base may challenge U.S. dominance in fighter engine technology in the coming years. The question is whether the United States is making the necessary investments to ensure superiority, especially given its traditionally risk-cautious approach to military turbofan innovation and fielding. While the historic U.S. approach to innovation has been adequate to this point, policymakers may need to consider more assertive action to ensure that the capability gap between the United States and China is enduring. This brief examines four different case studies to understand the U.S. turbofan development history in the context of growing competition with China.


Innovations in military turbofan engines have enabled successive generations of U.S. combat aircraft to have a qualitative advantage over aircraft from peer competitors. As military planners look toward the next generation of combat aircraft, maintaining an edge in propulsion technology is vital to the ability of the aircraft to overmatch the competition.[1] Days after CSIS published a report titled Power Proliferation: The Global Engine Market and China’s Indigenization, a video emerged of Chinese officials from the state-owned Aero Engine Corporation of China claiming that the WS-15 engine had entered “mass production.”[2] While the truth of this assertion is not yet proven—and while that engine still trails the latest Western technology—it serves to underscore the threat that China’s aerospace sector may pose.

For the first time since the end of the Cold War, the United States is now facing a strategic competitor with a comparable or stronger industrial base in some key areas, including critical materials. China is making investments and embracing a whole-of-government approach to try to close the gap between Chinese and U.S. turbofan engine technology.[3] While China has met with mixed success, and still lags the United States in terms of capabilities, its level of effort is cause for concern.[4] While evidence suggests that U.S. engine technology leads China’s by more than a generation, strategic surprise on the part of China remains a possibility. To understand how best to maintain an enduring advantage, this brief examines historic U.S. approaches to turbofan innovation and procurement. After examining four case studies, the brief draws broad conclusions about whether past approaches to jet engine innovation have the potential to maintain U.S. leadership in the industry.

Case studies of past efforts can help to illuminate process and policy drivers that shape the Department of Defense (DOD)’s decisionmaking and posture for strategic industrial competition across the jet engine industry. To understand the DOD’s approach toward risk and innovation, this brief aligns the cases along two independent variables. The first variable under study is whether the engine was fitted to an entirely new airframe. The second variable is whether the engine marked an upgrade in performance or a generational leap in capability. While new aircraft and engine programs can yield substantial improvements in capability, the levels of risk and budgetary support may be prohibitive in some cases. Decisions on how to approach these investments are made at the most senior levels of the Pentagon, in consultation with Congress.

Table 1 lays out the innovation framework, with the selected cases used for examining engine innovation.[5] To capture approaches from across the DOD, these cases include U.S. Air Force (USAF) and U.S. Navy (USN) fighter aircraft programs. All of these programs developed and managed fourth- and fifth-generation aircraft, meaning that long-term changes in the structure of the industrial base, the DOD’s approach to acquisition, and technical capabilities should not have a significant impact across the cases. Finally, these cases cover products from the three major engine manufacturers: Pratt & Whitney, GE, and Rolls-Royce.

Table 1: Engine Case Studies

Each of these engines is the result of a distinct approach to innovation that incorporates numerous technologies. The engines are also evidence of different innovation and development approaches for fielding the best capabilities to advance the mission of a given aircraft. For a complete list of new technologies, and the evolution of jet engine technologies, see the appendix.

New Aircraft with a Clean Sheet Engine: F119

The F119 had its genesis in the early 1980s during the Reagan defense buildup. The DOD began a program for an Advanced Tactical Fighter (ATF), with the Advanced Tactical Engine (ATE) program proceeding in parallel.[6] These programs sought to develop a generational leap in fighter aircraft capability that integrated new technologies, including stealth technology, supercruise, and advanced maneuverability.[7] Test programs throughout the early 1980s led up to the awarding of contracts ($202 million each) to Pratt & Whitney and GE for ground demonstrator engines.[8] These engines did not have to meet the same size and weight requirements as flight prototypes, and the contractors could decide how much advanced technology to put in the engines in order to meet USAF operational and budgetary requirements.[9] By allowing the contractors to designate their own technical risk-reduction approaches, the air force left more room for companies to focus on more proven or novel technologies.[10]


The XF119, as it was known during development, built on roughly a decade of research and was far more capable than the F100 engine that preceded it.[11] A central requirement for the engine, and the requirement that drove the design approach, was the ability to power the airframe to supercruise—to fly without an afterburner above the speed of sound. Pratt & Whitney was able to design an engine with only 6 compressor stages (as opposed to 10 on the F100), which not only increased performance but reduced life-cycle costs and technical complexity.[12] The engine first ran in May 1987 and squared off against GE’s XF120, which featured a similar compressor design but a more complex, three-stream architecture.[13] A three-stream, or variable cycle, engine can be attractive when aircraft need to fly at a mix of subsonic and supersonic speeds. These engines work by adjusting airflow paths and bypass ratios to achieve more efficient thrust and cooling.[14] GE’s use of a relatively new approach to military engine design, and therefore an inherently riskier approach in the eyes of the USAF, provided for a clear contrast between the two designs.[15] Pratt & Whitney’s effort to integrate more proven technologies, and shy away from less proven approaches, would later pay dividends.[16]

Advances in computing power now allow for complete engine fluid dynamics modeling, which not only makes it easier to design new engines but also to test new engine architectures to find breakthroughs in engine performance.

Both engines were designed with ease of maintenance in mind and featured 40 percent fewer parts than their F100 and F110 predecessors.[17] When it came to the flight test program, both engines flew on the YF-23 and YF-22 test aircraft. The XF120 achieved slightly better supercruise performance, with Pratt & Whitney validating a similar capability in a ground test. Pratt & Whitney’s development and test strategy revolved around being seen as the more technically mature and low-risk option, which led to an engine with 50 percent more test time than its competitor.[18] In August 1991, the XF119 was selected as the winner of the program, became the F119, and was paired with the F-22 airframe.[19] Pratt & Whitney’s decision to reduce technical risk through leveraging more mature technologies was not for lack of potential technologies to adapt, given some of the complex systems the company had designed for the 1960s-vintage J58 engine that powered the SR-71.[20] Additionally, the F119 did not incorporate the variable-cycle engine technology seen on the XF120.[21] Pratt & Whitney engineers believed that the technology introduced too much added technical risk for the performance gains it offered.[22] Pratt & Whitney’s ability to accurately forecast the air force’s risk tolerance led to a design that was successful in winning the contract and supplying the program. This was a somewhat surprising result considering the revolutionary leap in capabilities that the F-22 promised to provide. This suggests that jet engine technologies must reach especially high technical readiness thresholds before they can be integrated into production engines at scale.

Ceramic matrix composites (CMCs) are lighter than metal and offer superior performance, enabling new engine architectures and greater thrust and fuel efficiency. CMCs are made of complex composite structures that are difficult to manufacture and require new supply chains.  

New Aircraft with a Derivative Engine Design: F414

During the mid-1980s, with the impending retirement of the A-6 Intruder fleet, the U.S. Navy began to develop a successor attack aircraft.[23] The Advanced Tactical Aircraft (ATA) program would eventually contract with McDonnell Douglas to develop the A-12 Avenger.[24] The A-12 was a blended-wing stealth attack aircraft that would have been among the largest in the history of naval aviation.[25] The complex composite design and cutting-edge stealth features meant that the aircraft faced considerable growth in costs while its performance continued to have unresolved challenges.[26] Due to such significant cost growth and a program timeline that continued to slip, Secretary of Defense Richard Cheney canceled the program, and the navy was left without a dedicated attack aircraft for the twenty-first century.[27]

Given the navy’s tumultuous experience with the A-12 program, it took a different approach to developing a requirement for its next attack aircraft program. The navy’s existing Hornet fleet had an attack capability, and to control costs, the navy decided to take advantage of that existing capability. The primary goals for the Super Hornet aircraft were to improve the Hornet’s range and bring-back capability and to provide additional growth potential.[28] The navy made the decision to treat the program as a major modification as opposed to a new aircraft program; however, because the Super Hornet shared extremely limited commonality with the legacy Hornet, this study treats it as a new aircraft. The navy treated this as a major modification in part because it was viewed as more likely to secure congressional funding and because it allowed for increased flexibility under the Defense Federal Acquisition Regulation Supplement.[29] The Super Hornet program office also relied on the same prime contractors as the navy had for the legacy Hornet fleet.

GE would be tasked with updating its F404 engine for this much larger and longer-range airframe. The navy’s risk-reduction and program approach emphasized leveraging proven technologies to achieve a modest increase in capabilities. This approach meant that GE could retain many of the core design elements that made the F404 successful while also making improvements to bolster fuel efficiency, achieving a roughly 20 percent increase in thrust to accommodate the Super Hornet’s larger airframe.

Relying on existing technologies and an incremental approach to system improvements were key components to making the program successful in managing both schedule and cost. Cost growth for the propulsion component of the program was roughly 10 percent over the program’s initial budget, far less than the F119 engines being developed at the time.[30] The F414 did, however, deliver important new capabilities to the aircraft, all while embracing a risk-averse design philosophy.

GE was able to integrate proven technologies into its existing engine architecture throughout the engine. GE also offered a staged development roadmap to incrementally introduce technical improvements into the platform.[31] First, the F414 intake featured a fixed blade and disk arrangement with special coatings to reduce the aircraft’s overall radar cross section.[32] Second, the fan blades were enlarged from the legacy F404 engine, which did not change the overall bypass ratio of the engine but did increase the mass of the airflow and helped to increase the engine’s overall thrust rating while driving requirements for a new engine inlet design on the Super Hornet.[33] GE also improved some of the materials in the core of the engine and was able to increase the turbine inlet temperature of the engine by between 231 and 332°F.[34] Along with an improved afterburner that captured some of the technologies from GE’s F120, these changes produced roughly 4,000 pounds of increased thrust while maintaining roughly consistent fuel burn and took the engine’s thrust-to-weight ratio from 6.02 on the F404 to 8.68 on the F414.[35]
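As a rough sanity check on the figures above, the cited thrust-to-weight ratios can be combined with publicly reported thrust values to back out the implied engine weights. The thrust values below are assumptions for illustration, not figures from this brief; only the 6.02 and 8.68 ratios come from the text. A minimal sketch in Python:

```python
# Back-of-the-envelope check of the F404 -> F414 figures cited above.
# The thrust values are commonly cited public numbers and are assumptions
# for illustration; only the thrust-to-weight ratios come from the brief.

F404_THRUST_LBF = 17_700   # assumed max afterburning thrust, F404 class
F414_THRUST_LBF = 22_000   # assumed max afterburning thrust, F414 class

F404_TW_RATIO = 6.02       # thrust-to-weight ratio cited for the F404
F414_TW_RATIO = 8.68       # thrust-to-weight ratio cited for the F414

def implied_dry_weight(thrust_lbf: float, tw_ratio: float) -> float:
    """Back out the engine dry weight implied by a thrust-to-weight ratio."""
    return thrust_lbf / tw_ratio

f404_weight = implied_dry_weight(F404_THRUST_LBF, F404_TW_RATIO)
f414_weight = implied_dry_weight(F414_THRUST_LBF, F414_TW_RATIO)

# The assumed thrust values reproduce the "roughly 4,000 pounds" gain.
print(f"Thrust gain: {F414_THRUST_LBF - F404_THRUST_LBF:,} lbf")
print(f"Implied F404 dry weight: {f404_weight:,.0f} lb")
print(f"Implied F414 dry weight: {f414_weight:,.0f} lb")
```

Under these assumed thrust figures, the F414 delivers more thrust from a lighter implied engine, consistent with the brief’s account of improved core materials.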

By developing a derivative engine for a new airframe, the U.S. Navy and GE were able to control cost and maintain schedule while adding new capabilities. The new technologies that were added to the engine had the technical maturity to not increase risk while also being advanced enough to offer new capabilities. This model could be useful for policymakers and the DOD, who may seek to minimize upfront ambitions as they approach developing derivative engines for a new generation of tactical aircraft.

Existing Aircraft with a New Engine: F110

The TF-30 was the original engine for the navy’s F-14 Tomcat. The F-14 was designed to be a carrier-capable, all-weather fleet defense and air superiority fighter. The F-14 program emerged from the navy’s decision to halt joint development of a carrier-capable version of the F-111, as that aircraft did not meet navy requirements. The F-14 shared the same engines and a similar, but scaled down, swept-wing design as the F-111. The TF-30 itself was prone to stall when the F-14 was aggressively maneuvered or during demanding low-altitude evolutions such as carrier landings.[36] Navy officials concluded that the TF-30 caused at least 24 F-14A accidents.[37] This accident rate, combined with significant maintenance costs, contributed to growing dissatisfaction among navy leadership.

Single-crystal blades are metal components, often made of nickel, designed to withstand the extreme heat of jet engines. Pioneered in the 1960s, they are the result of complex manufacturing techniques, including vacuum chamber furnace casting and highly controlled cooling approaches. 

Since fielding the F-14A, the navy had wanted to develop a better engine match for the airframe but had balked at the significant development cost.[38] GE decided to develop a scaled-down version of its F101 engine that would be suitable for tactical fighter aircraft.[39] The new engine, the F110, was initially designed as an alternative engine for the USAF’s F-16 but was quickly modified into the F110-400. The F110-400 was larger than the original F110 but shared key design features. GE then set out to convince the navy to repower the entire F-14 fleet.[40]

As early as 1983, the navy knew the TF-30 legacy engine presented serious safety problems. The navy’s F-14 program coordinator at the time recognized that the TF-30 generated increased risk for pilots due to the “very high probability of the engine stalling.”[41] These problems struck a chord with Secretary of the Navy John Lehman, who began to aggressively push Congress to turn away from the legacy engine on the aircraft.[42] Secretary Lehman was able to convince Congress that following the USAF’s lead on the F110 would drive down enough of the technical risk. Furthermore, because 28 percent of F-14 crashes were caused by the engines, the technical, budget, and schedule risk of a new engine program was worth the cost in order to operate a safer and more capable aircraft that could reach its full performance potential.[43]

Due to the considerable risk that the TF-30 generated and because of a strong champion at the top of the service, the navy decided to accept the technical, operational, and budgetary risk of a new engine. The navy chose to move forward with a repower of the existing F-14A fleet and introduced a new version, the F-14B, which also featured the new GE engine. The F110 introduced several new technologies that resulted in a roughly 450°F increase in rotor inlet temperature, fewer compressor stalls, and increased operational availability.[44] These technical improvements resulted in a 62 percent improvement in mission range as well as other increases in capability in the aircraft’s mission envelope. Additionally, the F110 did not have any observable stall risk, and the maintenance cost per flight hour was roughly halved between the two engines.[45]

The navy’s decision to accept increased technical and program risk by opting for a new engine and a fleet-wide repowering resulted in a significantly safer and more capable aircraft. The navy was compelled to make such a risky choice by the flaws of the incumbent engine; adopting a derivative of an already tested USAF engine reduced risk, but integrating an engine into a new aircraft remained a complex engineering and sustainment challenge.[46] This case is far more about the failings of the incumbent engine, and how they drove the navy’s appetite for greater risk, than about the success of the new engine. The F-14B and later F-14D became some of the most well-regarded platforms in naval aviation—the navy’s decision had clearly paid off, paving the way for the F-14 to remain in active service until 2006.

Existing Aircraft Powered by a Series of Derivative Engines: F100-229 and the F402

The F100 engine was originally developed as the power plant for the F-15 fighter and was later selected for the F-16, but it experienced extensive performance and reliability issues. In an effort to save the F-16 and F-15 fleets, the USAF launched what has been referred to as the Great Engine War.[47] The resulting competition led to an upgraded version of the F100, the F100-220, and the aforementioned F110.[48] The F100-220 was well liked by pilots and featured digital engine control, improved throttle responsiveness, increased reliability, and a steep decline in maintenance costs.[49] However, despite the F100-220 engine being an update to a previous engine, the new capabilities on the engine and the perceived technical risk meant that this engine was classified as a major capability upgrade on an existing platform for the purposes of this study.[50] This case study instead focuses on successor engines.

Roughly a decade after the introduction of the F100-220, Pratt & Whitney began to develop the F100-229 in response to investments GE was making in the F110 platform. This new generation would have to extend the engine’s lifespan while integrating digital engine controls and raising core temperatures.[51] The F100-229 engine brought the F100 line into the twenty-first century through incorporating innovations that had, for the most part, already been seen in other engines across the industry.

Full authority digital engine control (FADEC) is a digital interface between the pilot and the engine that increases engine performance by optimizing the operation of the engine up to 70 times per second while decreasing the parameters that must be monitored by the crew.  
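The idea behind FADEC’s fixed-rate optimization can be sketched in a few lines. This is a toy illustration, not actual FADEC logic: the 70-per-second rate comes from the brief, but the proportional fuel-flow trim, the gain, and the rpm values are invented for illustration.

```python
# Toy sketch of a fixed-rate digital engine control loop, illustrating the
# FADEC concept described above. This is NOT real FADEC logic: the 70 Hz
# rate is cited in the brief, but the proportional fuel-flow trim, gain,
# and rpm values are invented for illustration.

LOOP_HZ = 70  # control updates per second, per the rate cited in the brief

def run_control_loop(target_rpm: float, current_rpm: float,
                     seconds: float, gain: float = 0.05) -> float:
    """Trim engine speed toward the commanded setting once per tick."""
    for _ in range(int(seconds * LOOP_HZ)):
        error = target_rpm - current_rpm
        current_rpm += gain * error  # fuel-flow trim proportional to error
    return current_rpm

# Two simulated seconds of loop time close most of a 2,000 rpm error,
# with no crew monitoring required in between updates.
final_rpm = run_control_loop(target_rpm=9_000, current_rpm=7_000, seconds=2.0)
print(f"{final_rpm:,.0f} rpm")
```

The point of the sketch is the architecture, not the numbers: a digital controller iterating many times per second can hold the engine near its commanded state far more tightly than a pilot managing the same parameters by hand.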

The F100-229 continued to integrate legacy technologies in order to gain capability while limiting risk and cost. The digital engine control technologies first employed on the F100-220 were integrated into the upgraded engine to better manage engine wear and performance.[52] The F100-229 also incorporated materials that allowed the engine core temperature to be increased by 130°F and that extended the engine’s designed lifespan beyond 4,000 hours. The derivative nature of the engine meant Pratt & Whitney was able to follow an accelerated test schedule and only tested the engine for 3,000 hours before fielding, a sign of the USAF’s confidence in the derivative design.[53] Once the engine was fielded, Pratt & Whitney continued to make modifications to address technical issues, including to the turbine blade configuration.[54] The proven track record of the F100-220 engine and Pratt & Whitney’s iterative approach meant that the air force received an upgraded capability at comparatively low cost. Part of Pratt & Whitney’s incentive to improve its offering was competition from GE.[55] However, because of the similarity between the F100-229 and previous generations, the engine was not able to incorporate the most cutting-edge technologies.

Another example of modest growth in capability over time is the Rolls-Royce Pegasus engine family, referred to in the United States as the F402.[56] The Pegasus engine powers the British Harrier, the first mass-produced vertical/short takeoff and landing (V/STOL) fighter aircraft, a version of which entered service with the U.S. Marine Corps in 1985. The Pegasus 11 was the first Rolls-Royce engine that was equipped on U.S. aircraft, and Rolls-Royce would go on to upgrade this engine numerous times as requirements for the Harrier grew.[57] Initial incremental changes were driven largely by safety concerns, and proven materials, namely single-crystal metals, were integrated into the core of the engine.[58] Driven by a need to increase the aircraft’s bring-back weight as ordnance became increasingly expensive, Rolls-Royce developed its final version of the engine, which allowed for digital engine controls and 15 percent more thrust while still maintaining significant commonality between engines.[59] Unlike the F100-229 engine upgrade, the F402 program was not spurred by competition between engine contractors but was instead driven by operational requirements.

Observations in Relation to the Pacing Threat

Integrating new technologies into military aircraft is an innovation imperative for the DOD. However, declining investment in military engines as a share of the U.S. Air Force and navy research, development, testing, and evaluation (RDT&E) budgets over the last decade has called into question the DOD’s prioritization of jet engine technology.[60] The case studies examined in this brief present evidence that for aero-engines, the DOD has followed approaches that reduce risk. However, as China continues to make investments in its jet engine industry, it is worth examining whether this approach will yield the necessary improvements in capabilities to maintain technological dominance.

In cases where the DOD sought modest increases in engine performance or a reduction in maintenance costs, it embraced a lower-risk approach that integrated proven technologies. The F414, F100-229, and F402 adopted technologies that had been proven on different platforms before being integrated into these new engines. Given the technical complexity of engines and of the broader aircraft, this approach may have controlled topline costs and mitigated program risk. However, there may be some legitimate concerns that these more modest investments in improvements do not adequately incentivize innovation within the industrial base.

In cases where the DOD sought to capture significant increases in engine capability, it still favored approaches that featured more proven technologies. This reduced risk for the F110-400 program by embracing core technologies that had already been validated and deployed by the USAF.[61] Even in the case of what would become the F-22 program, in which the air force was explicitly trying to develop a generational leap in capabilities, the service eventually selected the more conventional engine.[62] While this tendency toward proven engine technologies and engine architectures may be a prudent step to manage risk in complex systems, it may have been a barrier to investments in useable innovations.

The engines examined in this brief were fielded during the Cold War, when the United States and the Soviet Union were investing heavily to advance propulsion technologies. Throughout this time frame, U.S. engines continued to be more reliable and more capable than their Soviet counterparts, suggesting that the DOD’s approach to managing risk while engaged in broader techno-economic competition was successful.

There are similarities and differences between historic strategic competition with the Soviet Union and the United States’ contemporary competition with China. China today, like the Soviet Union then, is a great power capable of marshaling significant resources toward centrally planned goals. Additionally, China, like the Soviet Union, suffers from systemic corruption, which can create challenges for investment strategies.[63] There are also differences. First, the Soviet Union had far more experienced aerospace engineers and technologists who worked throughout its aerospace sector.[64] This expertise may have meant that the Soviet Union had a greater capacity for indigenous innovation. Second, China is far more integrated into the global economy and enjoys access to Western machine tools and suppliers at a far higher rate than the Soviet Union ever did.[65] This may mean that China could have an easier time matching some Western manufacturing technologies, especially when innovation is not an imperative.

Innovation in the military engine market, especially in the tactical jet market, is tied closely with necessity. As China’s manufacturing capabilities continue to grow, new and more advanced U.S. turbofan engines for tactical aircraft may be one option to retain U.S. supremacy.[66] The DOD’s perception of China as a growing threat will be a key factor in determining future levels of investment in the turbofan market. When a generational leap in capabilities is required, as with the F119 or F110-400, risk tolerance grows and the window for innovation opens.[67]

The United States has two main options for strategies to safeguard its advantage in the aircraft engine sector.

The first strategy is to try to delay China’s ability to indigenously produce advanced fighter engines. This would require a whole-of-government effort, in consultation with allies and partners across the globe, to limit China’s import of advanced machine tools and spare parts and to ensure that China cannot access key manufacturing equipment. The challenges of managing transshipments and the dual-use nature of this technology both make this action complicated. However, it is one of the few tools at U.S. policymakers’ disposal to ensure that U.S. turbofan engines remain superior to their Chinese counterparts.

The second strategy involves proactively investing in new engine technologies to ensure the United States continues to outpace China, thereby making it harder for China to close the relative gap in engine capabilities. This could, in theory, maintain U.S. primacy in the industry. The DOD has already been making investments in new engine technology, but the future of some of these investments is uncertain. Additionally, the key performance parameters for tactical engines are likely to change with an increasing emphasis on the Pacific theater as factors such as maneuverability and speed become less important and overcoming the tyranny of distance becomes key. This may force the DOD and tactical turbofan manufacturers to look at new avenues for research and optimizing engine performance.

Ultimately, the DOD and the rest of the executive branch have a variety of tools to compete with China’s jet engine industry. The DOD could make additional investments in other key technologies, including submarines, electronic warfare, advanced missiles, and next-generation uncrewed systems, to better compete with China.[68] The four case studies in this piece examined the last time the United States faced a serious strategic competitor and how the U.S. acquisition and research and development systems prevailed to deliver capabilities that were superior to their Soviet counterparts. While China is making historic investments to close the technology gap, the DOD’s historic approach of balancing innovation and risk has proved successful; with careful stewardship, it may prove successful again.

Alexander Holderness is a research assistant with the Defense-Industrial Initiatives Group at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Gregory Sanders is deputy director and fellow with the Defense-Industrial Initiatives Group at CSIS. Nicholas Velazquez is a research intern with the Defense-Industrial Initiatives Group at CSIS.

The authors would like to acknowledge Jeremiah “J.J.” Gertler, Dr. Obaid Younossi, Dr. Larrie Ferreiro, and Dr. Frank Camm for thoughtful peer reviews, as well as Rose Butchart for her contributions to an earlier version of this brief.

This brief is made possible by support from GE Aerospace and general support to CSIS.

Please consult PDF for sources.


Alexander Holderness

Former Associate Fellow, Defense-Industrial Initiatives Group
Gregory Sanders
Deputy Director and Fellow, Defense-Industrial Initiatives Group

Nicholas Velazquez

Intern, Defense-Industrial Initiatives Group