AI and Rumors of Impending Doom

"If 50 million people say a foolish thing, it is still a foolish thing."

— Anatole France

Another week, another story about how artificial intelligence (AI) is an existential threat to the human race. This claim is nonsense with no basis in fact, so why do people make it? The quotation above, widely attributed to the French writer Anatole France, captures the conundrum well. AI is the latest in a series of panics, all of which have proven to be wrong. In 1964, the New Republic wrote that the world had entered the age of famine; instead, we have entered the age of obesity. Others wrote in the early 1970s that the population of the earth was unsustainable at 4 billion people; the current population is 8 billion. One pundit stated in 2019 that the next pandemic would “kill tens of millions of people in a short time.” Covid-19, while a horrible tragedy, killed 7 million people over three years. The population bomb, peak oil, the energy crisis, extinction rebellions, the zombie apocalypse: the list of imminent catastrophes is long, none of them ever occurred, and most have been discreetly forgotten, yet the appeal of such stories remains strong. (An excellent account of misplaced fears can be found in Paul Sabin’s book The Bet.)

It is discouraging to see so many pronouncements of AI’s existential threat to humanity by people who should know better. The question is not whether they are right (they are not) but why they make these charges at all. One possible answer lies in the one technology that does pose existential risk, and whose creation undermined the Enlightenment narrative of progress in science and technology improving human life. Global nuclear war is the progenitor of today’s imaginary catastrophes, since it alone posed a real existential threat. A few decades ago, a war using thousands of powerful weapons would have killed hundreds of millions of people and destroyed entire economies. Once these weapons were placed atop long-range missiles whose flight time to target was less than 40 minutes, societies faced existential risk.

Nuclear weapons are unique in being the only technology ever built with the potential for existential catastrophe. Dismantling nuclear arsenals has reduced this risk, since many hundreds of nuclear bombs are required to produce that effect, but nuclear war remains the template for the fears of catastrophe and extinction from technological progress that still shape popular culture. A long series of movies produced after the explosions of 1945, beginning with The Day the Earth Stood Still, On the Beach, Fail Safe, Dr. Strangelove, and even Godzilla, entertained us with apocalyptic tales and created a narrative whose theme, that humanity is at risk from uncontrolled science, continues to shape thinking in unhelpful ways.

Nuclear war is one ancestor of the clamor over the risks of AI. Another likely ancestor is a loss of faith in the ability of democratic societies to manage themselves. In the United States, a series of events, including the surprise attacks of 9/11, the overconfident decision to admit China to the World Trade Organization, the self-induced 2008 global financial panic, and defeat in various Middle Eastern wars, all accompanied by a host of seemingly intractable domestic issues, undercut the belief that change can be managed. Europe has a similar list of woes dating back more than 100 years. The perception of failure does not inspire confidence and undercuts the legitimacy of leaders and institutions.

Other factors contribute to the hysteria over AI. The internet has made discourse chaotic. An undercurrent of rumor has always existed in society, reflecting the anxieties of the time, but the internet puts rumor and conspiracy before a giant audience, and its openness has diluted any requirement for expertise. Competition for attention in an intensely commercial society inclines people to tell horror stories, reflecting an inherent bias in human cognition: a scary story commands a larger audience than a happy one. Companies and assorted commentators have discovered that they can tap into this penchant for apocalypse and use it to gain attention and audience share in what has become a self-reinforcing cycle of warnings.

If predictions of doom were only for entertainment and PR purposes, they would not be a problem, but exaggerated fear can lead to bad policy. At best, the dozens of norms created for responsible AI are unnecessary when not fatuous. At worst, particularly if they are translated into law and regulation, these norms will stifle economies, harming the less well-off more than elites. The best example of this is the fear of AI-created unemployment. The fear that automation will cause jobs to disappear goes back to the early nineteenth century and the Luddites. Writing in 1930, Keynes noted that fears of automation destroying jobs had proven wrong in every instance. AI is the latest phase in the automation of human activity that began in the eighteenth century, and automation creates wealth and innovation. Some jobs disappear; more jobs are created. With these new jobs will come increased wealth and leisure.

It would be better to reinterpret the challenge of AI as deciding how to allocate increased wealth and leisure, but income distribution has not been a shining success of social policy over the last 30 years. Determining how to distribute wealth and leisure equitably and responsibly, easing the transition from old jobs to new, and finding ways to minimize the turmoil that new technologies bring without sacrificing their potential to increase human welfare are the real problems posed by AI, and talking about apocalypse does not help solve them.

James A. Lewis is senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.
