A Real Risk for Artificial Intelligence


Artificial intelligence (AI) is not really new. The technology has been around since the 1950s, and AI tools are already essential for business. Manufacturers, financial firms, research organizations, and consumer-facing companies all use AI to manage customers and boost productivity. AI tools will reshape economies, but this will be an iterative process as innovators find better ways to use AI to gain a competitive advantage, much as it took years to reap the benefits of the internet. AI is a tool that will make other tools more productive.

The first wave of excitement over AI peaked in the 1980s. Now, “generative AI,” or GenAI, has triggered the latest round of attention. ChatGPT is a well-known example of generative AI. It is trained on huge databases and can produce text, software, and video without human intervention. AI’s reliance on scraping the web for data challenges existing laws regarding intellectual property, and some worry that there may not be enough new data left to scrape. Generative AI can seem human, since it can interact directly with people, but this is an illusion. Meta’s chief AI scientist has even said that AI is not as smart as a cat and cannot figure out how to load a dishwasher.

Paradoxically, AI generates both exuberance and gloom. There is an AI financial bubble, and it may be about to pop. Investments in AI technologies have quadrupled in the last year, but this exuberance has been accompanied by an anxious narrative.

One common fear is that AI will lead to mass unemployment. This is wrong. The worry that automation will make some jobs disappear goes back to the first days of industrialization in the early nineteenth century, when English workers attacked the machines and factories they believed threatened their livelihoods. People tend to confuse the risk of automation with a failure of social policy: an inability to distribute the gains of greater productivity equitably. Keynes wrote in 1930 that automation creates more jobs than it costs, and nearly a century later he is still right. Jobs will change, but more new jobs will be created, and with them, more wealth and leisure.

Another fear is that AI will “awaken,” become sentient, and take over the world, perhaps destroying all human life in the process. This has been the plot of many films over the last century, and one should regard these fears with a high degree of skepticism. Elon Musk said that AI is “one of the biggest risks to the future of civilization,” while physicist Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” The Center for AI Safety compared the risks of AI to those of pandemics or nuclear war. In fact, researchers are divided on whether AI will ever have the capacity for the planning, subterfuge, and independent thought needed to become an existential threat.

There are similar fears about “killer robots,” autonomous devices that can take independent action in combat without human supervision. Progress in the development of autonomous devices will naturally lend itself to military applications and make weapons more precise and agile. Air defense systems are an example of weapons automated because humans are too slow to react to an incoming hypersonic missile. Weapons will be automated, but killer robots do not exist and may never exist. They face the same problems as self-driving cars: an inability to deal with the complexity of physical environments.

There are already malicious uses for AI, although none are lethal. AI is used to amplify the effect of “fake news,” adding to the political turmoil the internet created. Societies are already seeing AI-generated “deepfakes” of world leaders saying things they never really said. President Biden joked about this in a speech about the new U.S. executive order on AI, which cautions about the possible misuse of AI to develop new bioweapons. But these fears reflect deeper social anxieties and resemble the seventeenth-century fears of witchcraft that elites and experts alike endorsed.

Improbable scenarios reflect misconceptions about warfare. The relationship between technological advantage and military success is complex, and other factors are more important than technology. Similar concerns about bias and social effects rest on beliefs in systemic patterns of behavior whose existence remains a subject of debate. The unspoken assumption in most cases is that societies will build tools and lose control of them.

Endorsement by elites does not guarantee veracity. The worthies of the 1600s were persuaded by flimsy evidence that old people had carnal relations with Satan and communicated with him using cats. One possible explanation is that periods of rapid, extensive, and apparently unmanageable change create anxiety, and some external phenomenon like witchcraft or AI provides an outlet for it. This explanation may be unsatisfying, but fears about AI safety, bias, or dominance are in fact cousins to conspiracy theories about vaccines, just more socially acceptable. Why democratic societies have become more risk averse in the last thirty years is a longer discussion, but risk-averse societies are prone to perceiving danger where there is none.

The discussion around AI has centered on the need for ethical guidelines to ensure that it is used in a principled manner. The United Kingdom hosted an AI Safety Summit. The European Union passed an AI act to regulate artificial intelligence (prompting even French President Macron to complain of overreach and risks to innovation), and the Organisation for Economic Co-operation and Development issued guidelines on managing the risks of AI. All of these guidelines recognize the need to balance any potential risk of AI systems against the risk of losing the economic and social benefits the new technology will bring.

However, a focus on potential dangers, and an attempt to define ethical use without the benefit of experience, risks harming AI’s potential for economic growth and the benefits to research and innovation it will bring. The trend is to put in place restrictive rules for AI based on predictions of potential harm, not actual harm. Many governments and researchers underestimate how easy it is to choke innovation, which requires an unusual willingness to take risks. By erecting obstacles and creating unreasonable compliance burdens (unreasonable when they address unobserved or hypothetical fears), governments will reduce the number of new products and services AI could help create and deter companies, especially small new entrants, from entering the field. Pledges by decisionmakers to respect the need for innovation do not compensate for regulatory risk.

Therein lies the dilemma. Since the 1990s, EU regulations for digital technologies have harmed Europe’s economic growth by creating bureaucratic obstacles for innovators and investors. The GDP of countries like France and Germany grew at a quarter of the rate of the United States’ and China’s because these regulations slowed economic digitization. This is why the European Union lacks both tech giants and unicorns. The new rules for AI could lock Europe out of further growth and innovation, and if other countries copy them, the United States and the rest of the world could join Europe in losing the benefits.

The motives behind these anxious predictions are open to question. One may be a desire for public attention through extreme statements. There could be commercial motivations, and there is speculation that the big companies that currently lead in AI want to use regulation to put potential competitors at a disadvantage. AI also comes at a time of anxiety over the worsening state of international affairs, and perspectives on technological progress have broadly changed. The internet, the last big technological change, arrived amid rosy millennial and utopian predictions about how it would transform the world into a place where war had ended and all countries were on the path to becoming market democracies. This was wishful thinking, and while the predictions for AI are gloomier, they are no more accurate.

A more accurate conclusion is that humans are bad at predicting technological futures. Predictions that automation would cost jobs date back two centuries to the start of the industrial era, predictions of killer robots and electronic brains dominating human life go back to the 1920s, and after 1945 there were predictions of computer intelligence creating existential threats. Nuclear weapons unleashed a torrent of predictions about the end of human life, which were emotionally satisfying but wrong. Why societies are impelled to make pronouncements of doom, and why they are wrong, requires a separate analysis. It may be that these societal fears reflect an ability to observe, but not control, powerful and seemingly impersonal trends created by technology.

AI is just another step in the process of replacing human labor with machines that began more than two centuries ago. Automation continues to improve standards of living and quality of life. Without dismissing these concerns, since they reflect deeper social fears, emphasizing risk misses the point. Societies that run away from technology will forfeit the chance for improvement. The challenge of AI for policymakers is finding how best to use it to accelerate innovation and productivity, and how to deliver the resulting improvements in the standard of living equitably to all citizens. This is the AI policy problem that needs to be solved.

James A. Lewis is the Pritzker Chair and director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.