Fear, Democracy, and the Future of Artificial Intelligence
October 11, 2017
“The only thing we have to fear is fear itself.”—Franklin Delano Roosevelt
FDR’s famous words have perhaps never been so true as they are for those of us who are enthusiastic about the potential of artificial intelligence (AI). Last week, the Pew Research Center released the results of a new poll on “Automation in Everyday Life,” which showed that a majority of Americans are more afraid of the transformative potential of AI than excited about it.
On one hand, this result is not a surprise. Major technological disruptions have always been met with fear and suspicion, particularly those that transform work, and fears of an AI “jobspocalypse” have been documented for years. The steam engine and the sewing machine, and later the automobile, were each presented as a menace to the working public, but in reality they created new opportunities and an unprecedented boom in growth and productivity that built the middle class.
The wording of this survey probably did not help either. The survey asked respondents if they are very enthusiastic, somewhat enthusiastic, somewhat worried, or very worried about “a future where robots and computers can do many human jobs,” framing the AI revolution as a battle for work between humans and machines. No surprise, then, that human respondents, faced with a rising tide of robots that don’t need raises, sick days, sleep, or pensions, fear the coming battle. Labor feared the Industrial Revolution too, but it still came, and workers still reaped its benefits. The same is true of computers.
What should concern us is the policy section of the survey. A whopping 85 percent of Americans surveyed supported a policy that “machines are limited to doing dangerous or unhealthy jobs.” Almost half strongly favored this policy! Just 14 percent of Americans oppose government limiting which businesses and sectors can take advantage of the automation revolution.
For those of us who live in democratic societies, that figure should be deeply disturbing. If the predominant view of AI among voters is fear, it makes it difficult for our elected leaders to support the technology and help us reap its benefits. How many members of Congress can resist supporting a policy that has the backing of 85 percent of Americans? How many presidential candidates are likely to stand up for the positive potential of AI with the support of just 14 percent of the electorate?
As much as we might enjoy the spectacle of Congress debating which jobs are “dangerous or unhealthy” enough to be automated, any laws or policies that constrain the development and deployment of these technologies in the United States will have far-reaching consequences. The United States cannot stop the automation revolution alone. Around the world, countries recognize that AI and robotics are the future. For countries like China or Russia, whose strong central governments do not have to manage political transitions every few years and are not directly accountable to their people, opposition to AI in the United States represents a huge opportunity. If we put the brakes on the development of AI in our country, they will forge ahead.
And it is not as simple as saying “Fine, let their workers lose their livelihoods!” In our ever-more-global economy, new business models developed overseas will displace Americans from jobs as quickly as those developed next door. If we allow ourselves to fall behind in the development and deployment of AI, the biggest impact on our job market will be that fewer AI-enabled jobs will become available to Americans, and fewer U.S. businesses will benefit from the efficiencies and new capabilities provided by AI.
More importantly, if we really care about freedom, opportunity, equality, and privacy, the single most important thing we can do is ensure that these technologies are developed in the United States and not in countries like China and Russia. Though consumers are rightly concerned about the huge quantities of data being gathered by Google and Facebook, the risk that those companies will use that knowledge to figure out what kinds of free services and discounted products we don’t even realize we want is surely less sinister than Tencent or Baidu providing that data to the Chinese government to use for espionage. And if Chinese companies are the main beneficiaries of the AI revolution, the new jobs they create will likely not meet U.S. labor standards.
So what can we do? The first thing is to educate the public and push back on unrealistic fears about AI. Humans fear the unknown, but when it comes to AI, we often overestimate how much is unknown. What does the economic impact of AI look like? As much as we see it as the wave of the future, commercial AI has been around for more than a decade and has transformed our lives in largely positive ways.
A great example is Google Search, one of the most pervasive and disruptive technologies of the modern era. Search made the Internet accessible to everyday users, and Google’s ability to identify and serve up the search results we want using machine learning, an early form of AI, led it to dominate the industry. Google’s ability to analyze our online behavior and target advertising using AI allowed it to provide this transformative service for free. And through the Internet, accessed via Google (how many of us don’t have Google as our home page?), the digital economy was born. Everything from the way we shop to the way we interact has been transformed. Most people do not fear the Internet and appreciate its benefits, but they do not recognize the role AI has played in enabling its development.
The second is to accelerate the deployment of AI across more applications and sectors. This strategy has risks—it could provoke a populist backlash against AI, particularly if there is a major accident—but perhaps the best defense against fear of the unknown is to give the average American the chance to become more familiar and comfortable with the technology. It is also harder to roll back a technology that has already been adopted than to prevent its adoption in the first place, so pushing early adoption of AI may make it harder for regulators to disrupt its spread.
Third, we need to recognize and proactively manage the legitimate risks associated with the adoption of AI. Though much of the concern around AI is hype or fear of the unknown, there are real risks around privacy, job displacement, inequality, and bias that need to be addressed. There will be accidents, disruptions, and unintended consequences, and if we do not have a strategy to deal with these challenges in advance, we are likely to fall into the trap of reactionary policymaking that poses the greatest risk to the U.S. AI revolution.
The biggest risk is that the benefits of AI will not be distributed evenly. Like free trade before it, AI promises real economic opportunity, but unlike a rising tide, it may not lift everyone up equally. Free trade has come under fire from both ends of the political spectrum because we failed to manage its short-term disruptive impacts. Policymakers failed to provide job training and retraining, a strong social safety net for people in transition, and freedom of movement and investment that helps to distribute the benefits of economic transformation.
We cannot make the same mistake with AI. When members of Congress get angry calls from constituents whose jobs have been displaced by AI, they should be able to point to the programs and institutions they have put in place to help those constituents build new careers and businesses. If opposing the growth of the AI economy is the most politically attractive option, the future of the economy and U.S. leadership will be at risk.
Fear of AI is the biggest risk to our future. Shedding light on the benefits of AI, showing voters its potential, and managing the risks of this transformative technology is the best way to ensure that the United States maintains its global leadership and that humanity can reap the full benefits of the technology of the future.
William A. Carter is a fellow and deputy director of the Technology Policy Program at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2017 by the Center for Strategic and International Studies. All rights reserved.