Catastrophic Attack and “Reasonable Probability”
February 24, 2017
“The majority of people are timid by nature, and that is why they constantly exaggerate danger.”
Carl von Clausewitz
Technology creates both opportunity and risk, but it is the latter that often gets the most attention. This reflects weaknesses in the public process for analyzing risk. Research in the 1980s showed how public discourse and media misestimate and exaggerate risk. If anything, this tendency has become worse since then given the hyperactive news cycle. A more structured approach can help us more accurately evaluate the risks to public safety and national security created by new technologies.
The factors we should consider include vulnerability, attacker intent and capability, and the operational environment that shapes attacker calculation of the costs and benefits of using a new technology. The operational environment (and the limitations it imposes on potential attackers) is particularly important and helps explain why, despite more than a decade of warnings of various calamities, none have occurred. We should also look for precedents, and if there are none, ask why. This approach to assessing the risk from new technology works well for cyberattacks, dirty bombs, bio-weapons, and other potential threats.
Each factor requires a different weighting. Capability is not a good predictor of risk. Simply because someone has a capability does not mean that they will choose to use it. Improved capabilities offer increased “returns” to an attacker, but may not reduce the risks and constraints the attacker faces and do not change the larger calculation of effectiveness. For example, it may be easier to make biological weapons because of advances in technology, but unless a new design addresses the problems of dispersal and agent survivability (which are degraded by environmental conditions), a “better weapon” may still be unappealing to an attacker when compared to other modes of attack that have a greater probability of successful deployment. A competent terrorist can predict the effect of a conventional bomb much more accurately than that of some exotic weapon.
Similarly, vulnerability does not adequately predict risk. Vulnerabilities are common in modern society, but infrequently produce harm, much less mass effect. Mass effect (thousands or millions of casualties) is difficult to achieve. The only examples involve mass aerial bombardment or nuclear weapons, which were expensive and required complex delivery mechanisms. Vulnerability assessments are too often unidimensional and fail to take into account the “substitution effect,” where a vulnerability in one area can be mitigated by a strength in another.
Most industrial societies have developed agencies and rules that allow them to respond rapidly to contain and mitigate damage. For example, the prime minister of Norway recently pointed to the Black Death, which she said killed more than half her country's population. However, medical facilities have improved since the Middle Ages. When we look at the “pandemics” of the past few years, what is striking is how well they were managed and how few deaths resulted. The last major pandemic was the influenza epidemic of 1918 and health systems have improved vastly since then. In the last several decades, disease outbreaks have done the most harm in poorer countries with weak governance—and governance and wealth must be considered in any prediction of the risk from the outbreak of disease. This points to a serious shortcoming in most catastrophe scenarios, in that they underestimate the resilience of modern industrial societies.
The operational environment also imposes constraints and costs. An attacker has to build, test, transport, and deploy the weapon, using limited resources and operating covertly (if the program is to avoid an abrupt termination). Experienced commanders know that the simpler an operation's requirements, the more likely it is to succeed, and this predisposes decisionmaking away from exotic attacks. Field use is affected by a number of variables. Bio-weapons, for example, are affected by temperature, sunlight, rain, and wind, all of which can degrade a weapon's effectiveness. Dispersal, particularly if the goal is mass effect, remains a challenge. There are very few instances of successful biological weapons attacks, and while some produced panic among the target population, none produced mass effect.
Intent is the most important predictor of risk, but in thinking about intent it is useful to replicate, as far as we can, the decisionmaking process of terrorist commanders. They have limited resources and wish to use them effectively. They are impelled by motives that seek to express rage and produce dramatic political effects—in this, conventional bombs may be the most attractive weapon and gunfire the cheapest. These two attack modes are reliable and produce the desired political results. Terrorist decisionmakers also consider the probability of success in an operation, which often leads them to prefer tried and true weapons over attack modes where they have little or no experience. A general desire to do harm does not mean there is an equal probability that every possible avenue for attack will be taken.
There is also an unconscious tendency among analysts to emphasize risks from the technologies they study—usually accompanied by claims that governments are not paying enough attention. It is almost a contest to assert that one’s own threat is the most dangerous, and if we rely on ad hoc, anecdotal, or incomplete assessments these assertions can at first be hard to dismiss.
The one terrorist attack that produced mass casualties used airplanes. The one nonconventional attack mode that terrorist groups are known to have explored is the use of chemical weapons. This may, however, have been due to unique circumstances, in the ability of jihadi groups, after the messy American invasion of Iraq in 2003, to recruit scientists and officers from Iraq’s chemical weapons program. It was the training and experience gained in Iraqi programs that provided the expertise needed to pursue chemical weapons use, something not duplicated for other nonconventional weapons. Bio-terrorism, dirty bombs, and cyberattacks by terrorists are better left to the realm of fiction. The more exotic the weapon, the less likely it is to be used.
James Andrew Lewis is a senior vice president at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2017 by the Center for Strategic and International Studies. All rights reserved.