One Key Challenge for Diplomacy on AI: China’s Military Does Not Want to Talk

Over the past 10 years, artificial intelligence (AI) technology has become increasingly critical to scientific breakthroughs and technological innovation across an ever-widening set of fields, and warfare is no exception. In pursuit of new sources of competitive advantage, militaries around the world are working to accelerate the integration of AI into their capabilities and operations. However, the rise of military AI has brought with it fears of a new AI arms race and a potential new source of unintended conflict escalation. In the May/June 2022 issue of Foreign Affairs, Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin write:
The United States, then, faces dueling risks from AI. If it moves too slowly, Washington could be overtaken by its competitors, jeopardizing national security. But if it moves too fast, it may compromise on safety and build AI systems that breed deadly accidents. Although the former is a larger risk than the latter, it is critical that the United States take safety concerns seriously.
Such fears are not entirely unfounded. Machine learning, the technology paradigm at the heart of the modern AI revolution, brings with it not only opportunities for radically improved performance but also new failure modes. When it comes to traditional software, the U.S. military has decades of institutional muscle memory for preventing technical accidents, but building machine learning systems reliable enough to be trusted in safety-critical or use-of-force applications is a newer challenge. To its credit, the Department of Defense (DOD) has devoted significant resources and attention to the problem: partnering with industry to make commercial AI test and evaluation capabilities more widely available, announcing AI ethics principles and releasing new guidelines and governance processes to ensure their robust implementation, updating longstanding DOD system safety standards to pay extra attention to machine learning failure modes, and funding a host of AI reliability and trustworthiness research efforts through organizations like the Defense Advanced Research Projects Agency (DARPA).
However, even if the United States were somehow to eliminate the risk of AI accidents in its own military systems entirely (a bold and incredibly challenging goal, to be sure), it still would not have addressed the risks to the United States posed by technical failures in Russian and Chinese military AI systems. What if a Chinese AI-enabled early warning system erroneously announces that U.S. forces are launching a surprise attack? The resulting Chinese strike, wrongly believed to be a counterattack, could be the opening salvo of a new war.
In recognition of this risk, the National Security Commission on Artificial Intelligence recommended in its March 2021 final report that the DOD engage in diplomacy with the Chinese military to “discuss AI’s impact on crisis stability.” More recently, Ryan Fedasiuk wrote in last month’s Foreign Policy that “it is more important than ever that the United States and China take steps to mitigate existential threats posed by AI accidents.”
It is not only Americans who have written about the need for a diplomatic dialogue on this subject. In 2020, Zhou Bo, a senior colonel in the People’s Liberation Army (PLA), wrote an op-ed in the New York Times in which he argued,
As China’s military strength continues to grow, and it closes the gap with the United States, both sides will almost certainly need to put more rules in place, not only in areas like antipiracy or disaster relief—where the two countries already have been cooperating—but also regarding space exploration, cyberspace and artificial intelligence.
Other Chinese officials have published similar calls for U.S.-China diplomacy on AI risk reduction, including Fu Ying, the vice chair of China’s National People’s Congress Foreign Affairs Committee. Even the Global Times, a newspaper owned and published by the Chinese Communist Party, ran an English-language article in November 2021 with the headline “China urges regulating military use of AI, first time in UN history, showing global responsibility.” Clearly, China believes that calls for diplomacy on military AI are good for its global reputation.
Substantive diplomacy on this topic is worth pursuing and, if successful, could meaningfully contribute to reducing the risk of a future U.S.-China conflict. With such loud public support in prominent Chinese venues, one might think that the U.S. military need only ask in order to begin a dialogue on AI risk reduction with the Chinese military.
Alas, during my tenure as the Director of Strategy and Policy at the DOD Joint Artificial Intelligence Center, the DOD did just that, twice. Both times the Chinese military refused to allow the topic on the agenda.
Though the fact of the DOD’s request for a dialogue and of China’s refusal is unclassified, as is nearly everything that the United States says to China in formal channels, the U.S. government has not yet publicly acknowledged it. It is time for this telling detail to come to light.
China’s refusal was not the first time that its diplomatic strategy on military AI included a gap between words and actions. China’s 2016 and 2018 position papers to the United Nations discussions on lethal autonomous weapons supported a ban on the use of such weapons, but not on their development. That position raises the question of why Chinese weapons companies, including ones owned and controlled by the Chinese military, are building and exporting AI-enabled weapons that openly advertise lethal autonomous capabilities.
And it is important that such risk reduction dialogues occur bilaterally between the DOD and the PLA, not just via the Chinese Ministry of Foreign Affairs’ public proclamations at the United Nations. The Chinese Ministry of Foreign Affairs is not a direct analogue of the U.S. State Department, which complicates its ability to speak authoritatively on behalf of the PLA. In the Chinese system, the military is a part of the Chinese Communist Party, not the Chinese government, and it is the government that controls the Ministry of Foreign Affairs. Though both organizations ultimately have the same leader (Xi Jinping is both president of the People’s Republic of China and general secretary of the Chinese Communist Party), experience has shown that there is no substitute for direct DOD-PLA dialogue on military issues.
It is frustrating that China’s public calls for diplomatic dialogue and the cooperative development of new norms on military AI, which have continued even after the PLA’s multiple refusals to have such a dialogue, have attracted praise from those who are evidently not aware of the gap between public rhetoric and private reality. For example, Michael Wooldridge, an AI researcher at Oxford University, highlighted China’s public diplomacy on military AI in his recent book as encouraging evidence that China was seriously considering the concerns raised by both AI researchers and Chinese international relations scholars.
The truth, unfortunately, is that despite the United States’ efforts at transparency and its requests for dialogue, Washington knows very little about how seriously the Chinese military considers ethics in its use of AI, how robust Chinese test and evaluation processes are, and what governance structures and procedures exist to reduce the risk of military AI accidents. That secrecy is in and of itself a source of risk to international peace and security.
But, then again, what incentive does China have to substantively engage? The United States is already providing a great deal of transparency around its own risk reduction efforts, and China is already garnering many reputational benefits from calling for dialogue without any of the costs of substantively participating.
Perhaps neither the U.S. government nor the Chinese scholarly community can succeed in persuading the PLA that it is in everyone’s best interest for this dialogue to occur. At the very least, however, it should be clear to the international community that China is the one refusing to talk.
Gregory C. Allen is the director of the Artificial Intelligence (AI) Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2022 by the Center for Strategic and International Studies. All rights reserved.