By Jeff Berkowitz
Gone are the days of political buttons, guessing about voter preferences, and the mass distribution of pamphlets about the positions of candidates for the highest offices in the country. The emergence of artificial intelligence (AI), machine learning (ML), and big data has fundamentally changed how politicians engage the American electorate and will continue to challenge centuries of political and interpersonal norms surrounding voter enfranchisement. Leveraging the expanding quantity and diversity of information about voters floating through cyberspace, politicians, governments, and social groups have more tools at their disposal than ever to push agendas and candidates. Leaders must take a critical look at the benefits, drawbacks, and possible risks of the mass deployment of emerging technologies in politics to ensure effective policymaking and, ultimately, protect democracy from potentially dangerous influences.
The presidential election of 2008 saw social media emerge as a central platform for political conversation, dissent, and strategic marketing. For the first time, “more than half the voting-age population used the internet to connect to the political process during an election cycle.” The Obama campaign was at the forefront of bringing advanced data analytics and targeted advertising into the political sphere through ML, creating “sophisticated analytic models that personalized social and e-mail messaging using data generated by social-media activity.” This commitment to data analytics paid off in 2008 and led the campaign to raise the bar in 2012, assembling a team of over one hundred data analysts who built a massively parallel processing tool for predictive modeling and strategic decision making. Novel in 2008 and 2012, this type of political activity has since become the norm, with AI embedded in most campaign data analysis tools, polling intelligence platforms, and political advertising strategies.
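To make the kind of predictive modeling described above more concrete, the sketch below trains a toy voter-turnout model on synthetic features and flags a “persuadable” band of voters for outreach. It is purely illustrative: the feature names, data, and thresholds are assumptions for demonstration, not a reconstruction of any campaign’s actual tooling.

```python
# Illustrative sketch only: a toy version of a campaign-style predictive voter
# model. All features and data below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_voters = 5_000

# Hypothetical voter-file and engagement features.
X = np.column_stack([
    rng.integers(18, 90, n_voters),   # age
    rng.integers(0, 2, n_voters),     # voted in last midterm (0/1)
    rng.poisson(3, n_voters),         # campaign emails opened
    rng.poisson(5, n_voters),         # social-media interactions with campaign
])

# Synthetic "turnout" labels loosely correlated with the features above.
logits = 0.02 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3] - 3.0
y = rng.random(n_voters) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple turnout model; real campaigns use far richer data and models.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score every voter with a turnout probability, then prioritize outreach
# toward a "persuadable" middle band (a common targeting heuristic).
scores = model.predict_proba(X_test)[:, 1]
persuadable = np.where((scores > 0.4) & (scores < 0.6))[0]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"Voters flagged for targeted outreach: {len(persuadable)}")
```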
There is no doubt that AI and ML in politics have progressed since the days of analytics-driven emailing and fundraising in 2008. These tools can be used to analyze patterns to guide targeted advertising, estimate the likelihood of legislation being passed, run bots that detect and counter fake news and disinformation, and educate voters about candidates and issues. As noted by Dr. Slava Polonski, a UX researcher at Google, AI can be leveraged to ensure politicians are listening to what people say in order to formulate representative policy, and to facilitate the deployment of micro-targeted campaigns that help voters make informed decisions.
However, there is also little doubt that emerging risks accompany the use of AI and ML, given the growing pool of malicious actors eager to influence American democracy. Among other things, these technologies can be used to spread fake news through bots, psychologically manipulate susceptible voters through targeted emotional appeals, and even deploy armies of bots to swarm social media and drown out dissent. In fact, in an analysis of the role of technology in political discourse entering the 2020 election, The Atlantic found that “about a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote.” While the efficacy of these techniques is hard to measure, it stands to reason that someone, somewhere, had their vote or opinion influenced in some way by a political bot; otherwise, campaigns and interest groups would not spend billions of dollars producing them, with political advertising expected to top out at around $10 billion for 2020, much of it driven by AI.
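Bot-share estimates like the one quoted above rest on automated account classification. As a rough illustration of the idea only, and not the methodology of any cited study, the sketch below scores hypothetical accounts on a few behavioral signals; the handles, features, and thresholds are invented for demonstration.

```python
# Illustrative sketch only: a toy heuristic for flagging likely bot accounts.
# Real detectors use learned models over far richer behavioral and content data.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float            # average posting rate
    account_age_days: int            # how long the account has existed
    follower_following_ratio: float  # followers divided by accounts followed

def bot_score(acct: Account) -> float:
    """Crude 0-1 score: very high posting rates, very young accounts, and
    lopsided follower ratios each add suspicion."""
    score = 0.0
    if acct.tweets_per_day > 50:
        score += 0.4
    if acct.account_age_days < 30:
        score += 0.3
    if acct.follower_following_ratio < 0.1:
        score += 0.3
    return score

accounts = [
    Account("@everyday_voter", tweets_per_day=3, account_age_days=2900,
            follower_following_ratio=1.2),
    Account("@patriot_eagle_88421", tweets_per_day=240, account_age_days=12,
            follower_following_ratio=0.02),
]

for acct in accounts:
    label = "likely bot" if bot_score(acct) >= 0.5 else "likely human"
    print(f"{acct.handle}: score={bot_score(acct):.1f} -> {label}")
```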
The shift in American politics toward campaigns driven by big data and highly specialized, ML-based voter analytics tools shows no sign of slowing down. Building on President Obama’s successful application of big data in his 2008 and 2012 campaigns, the 2016 campaigns of President Trump and Secretary Clinton, and the 2020 campaigns of Trump and Joe Biden, took the shift to new heights. Indeed, in a post-mortem of Hillary Clinton’s 2016 campaign, the Washington Post revealed that the campaign was driven almost entirely by an ML algorithm called “Ada.” More specifically, “the algorithm was said to play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads — as well as when it was safe to stay dark.” This case study underscores the risks of overreliance on AI and ML as a guide for human decision making, with some attributing the campaign’s failure to recognize the relative insecurity of the Midwestern “Blue Wall” to the program. That warning appeared to be heeded in 2020, when both the Trump and Biden campaigns used AI and ML primarily in the advertising space and less as a high-level strategic guide. Combined, the two campaigns spent $200 million on AI-driven Facebook advertising, a far cry from the $643,000 spent by President Obama’s 2008 campaign on the same medium.
As the United States navigates the murky waters of tech regulation and data privacy, especially in the context of electoral politics, or what Scientific American has called the “arms race to the unconscious mind,” the future remains unclear. Questions continue to bubble up about the ethics of using these technologies and the state, or lack, of regulatory policy. Partisan divisions, candidates’ political motivations, and public disagreement about the ethical principles underpinning AI and ML have prevented regulation from keeping pace with the usage and application of these technologies. We are only scratching the surface of AI and ML, with researchers at Oxford and Yale believing there is a 50% chance of AI outperforming humans in all tasks, campaigning included, within just 45 years. Unless the American public collectively decides to go dark in cyberspace, stop shopping online, and delete every social media and chatroom application, there is little stopping a tech-savvy software engineer from discreetly planting AI-enabled seeds into our everyday lives in any number of ways, with politics a ripe application. We must continue the dialogue about protecting data privacy, encourage the creation of standard regulations and ethical codes for AI and ML, develop legislative awareness of the aforementioned risks, and hold each other civically accountable as conscientious consumers of political information. The race is only beginning.
Jeff Berkowitz is a former research intern with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, DC.
The Technology Policy Blog is produced by the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).