Four Lessons from Historical Tech Regulation to Aid AI Policymaking

Since Senate Majority Leader Chuck Schumer announced the SAFE Innovation Framework for AI legislation in June, there has been a steady advance in U.S. artificial intelligence (AI) policy. Federal regulators have declared that automation is not an excuse to run afoul of existing rules. They are not waiting for new legislation—they are instead pursuing logical extensions of existing authorities to form a patchwork of new rules relevant to AI. State governments are considering new bills that would regulate AI training, through measures such as data protection and opt-out requirements, as well as use cases such as profiling and facial recognition. As federal rulemaking and Congress’ AI Insight Forums proceed, it is worth considering how the experience of regulating the early internet and social media could inform that work.

There are four lessons that could transcend the new AI era of technology policy, cutting across the regulation of many different sectors.

Lesson 1: The speech disputes of the social media era are unresolved, and content regulation is likely to shatter any bipartisan consensus on AI policy.

Social media regulation was not always partisan. Social media companies enjoyed broad bipartisan support in their early years as innovators and job creators of the new economy. In 2015, 74 percent of Democrats and 72 percent of Republicans believed technology companies had a positive effect on the country. The 2016 election was a major inflection point, after which liberal and conservative opinions on regulation started to diverge. In a 2018 survey, a majority of Republicans (64 percent) believed social media platforms favored liberal views over conservative ones. The points of no return were Twitter and Facebook’s 2021 decisions to ban Donald Trump and the proliferation of vaccine misinformation, which hardened a rigid dichotomy between the parties. Democrats’ interest in social media regulation now revolves around amending Section 230 to remove certain platform legal immunities related to advertising, civil rights, harassment, wrongful death, and human rights. Another effort from Senator Amy Klobuchar, introduced in 2021 but since stalled, would carve out an exception to Section 230 platform immunities for health misinformation. Those measures would likely result in broader content moderation. Republicans concerned about social media companies’ liberal bias would like to reform Section 230 in the opposite fashion, limiting or even removing liability protections, a change that could put content moderation in direct conflict with the First Amendment and create a host of other intractable problems.

Polling suggests the public’s views are less nakedly partisan than Congress’s, but they are rife with contradictions. There is broad agreement that policymakers should do something on social media regulation. According to Gallup and the Knight Foundation, only 3 percent of social media users agree with the statement “I trust the information I see on social media,” 53 percent believe social media has a negative impact on others like them, and 71 percent feel the internet divides the country.

Lesson 2: U.S. antitrust authorities are already empowered to promote competition in the AI market but may be reticent to exceed precedents established for consumer internet platform companies.

U.S. antitrust authorities have a robust non-price framework, backed by case law and statutory law, to pursue competition actions against ICT platform companies offering “free” products in nebulous markets. Section 7 of the Clayton Antitrust Act provides that antitrust authorities need only prove a merger may substantially lessen competition—a lower threshold than demonstrating actual anti-competitive outcomes. In response to the Microsoft antitrust case in the late 1990s and early 2000s, former Federal Trade Commission (FTC) commissioner Orson Swindle testified that antitrust prosecution rests on successful proof that a monopolist has abused market dominance to harm “consumer welfare in the form of higher prices, reduced output, and decreased innovation.” Regulators successfully applied that formula in ICT antitrust actions against AT&T and Microsoft. The 2023 updated Merger Guidelines from the Department of Justice and FTC state that non-price indicators may be useful in “free” product markets. Antitrust authorities can use non-price data to argue that antitrust targets understood how their actions would impact competition. A/B testing, the practice of running internal firm experiments that compare two versions of a product to determine the better course of action, is a common source of non-price evidence in antitrust cases. The FTC cited evidence from A/B testing in its successful 2023 action against Credit Karma, which was fined $3 million for misrepresenting the likelihood that customers would be approved for credit.

However, the merger guidelines’ non-price indicator framework is difficult to reconcile with recent unsuccessful antitrust actions against big tech. In February 2023, the FTC lost an antitrust lawsuit against Meta’s acquisition of Within Unlimited (a virtual reality startup), with U.S. District Judge Edward Davila ruling that the FTC had failed to prove the acquisition would harm competition. Five months later, the FTC lost a request for an injunction against Microsoft’s acquisition of Activision Blizzard on similar grounds. A case on abuse of market power against Amazon is looming in California, but considering that a Washington, D.C., judge threw out the same case, it is possible the FTC may suffer yet another loss. The Microsoft antitrust case of two decades ago may be viewed in retrospect as a useful precedent, but it was actually a long, drawn-out process that was far from an unequivocal success for the prosecution. The conclusion is that while antitrust authorities are confident the status quo will endure with AI, Congress must decide whether it is satisfied with antitrust regulators’ mixed record or whether it should amend the law to make it easier for them to win in court.

Lesson 3: Regulation of AI harms will encounter the same unresolved legal gaps in liability as earlier generations of software.

Victims of negligence and physical harms currently have limited legal options; clarifying reasonable care standards for software would remedy this. Software liability determines the risks that software producers assume for product misuse, vulnerabilities, and failures that result in physical harm arising from the use of their products. Innovation has outpaced regulation; in many instances, rules have not been updated since their initial promulgation during the internet industry’s infancy. Software liability remains poorly defined in legal proceedings and federal law.

Rather than legislation, executive rulemaking, or landmark judicial precedents, out-of-court settlements and contract law, in the form of end-user licensing agreements (EULAs) and private contracts, form the legal basis for many software liability disputes. EULAs are a common precondition for software use; they define terms and assign risk, and they determine the basis of most software disputes. The Society for Computers and Law launched an alternative dispute resolution forum for software disputes based on contract law in 2019.

EULAs, private contracts, and out-of-court settlements do not cover key issues directly related to software liability, such as physical damages and negligence. Prominent legal scholars Michael Rustad and Thomas Koenig argue that “to date, no court has held that a software engineer’s failure to develop reasonably secure software constituted professional negligence.” In the first U.S. case of an autonomous vehicle killing a pedestrian, prosecutors did not charge Uber, the firm responsible for the car, despite the algorithm’s failure to classify the victim as a pedestrian crossing the street. Instead, prosecutors pursued charges against the backup driver in the self-driving vehicle based on her negligence behind the wheel, while also faulting Uber for failing to establish a safety culture or oversee its drivers.

Lesson 4: Good-faith efforts of leading AI firms should not preclude the development of potentially binding requirements to align incentives on sharing risk information.

In many cases, firms have legal and market incentives to withhold security threat information from regulators and competitors, since sharing it could lead to government penalties or reputational damage. According to a 2020 Inspector General report, the Cybersecurity and Infrastructure Security Agency’s Automated Indicator Sharing initiative appeared, on the surface, to be a great success, with a significant number of businesses signing up to the program. However, few of those companies had actually shared any useful intelligence on cyber threats.

Effective structures for incident reporting exist outside of the technology sector and may be a better precedent for AI incident disclosures. In the aviation industry, the Mitre-led Aviation Safety Information Analysis and Sharing (ASIAS) system significantly improved commercial aviation safety practices and set new international risk management standards. ASIAS is a public-private partnership that convenes manufacturers, operators, and regulators to circulate anonymous incident reports. Private sector firms share information with Mitre, a federally funded research and development center, rather than directly with their competitors or regulators, helping to mitigate the fear of reputational risk or legal liability from incidents. A similar approach has been effective in other areas, including the Healthcare Fraud Prevention Partnership and National Patient Safety Partnership.

As federal and state governments contemplate new AI policy, there are useful lessons to learn from previous experiences regulating the early internet and social media. It is crucial to recognize that the nascent field of AI presents its own unique challenges, alongside opportunities for novel approaches to governance. Isolating the principles of effective tech regulation from the past and applying them to AI may seem like a daunting task. However, doing so may ease the burden on policymakers who feel the weight of their constituents’ expectations to strike a balance between AI’s harms and rewards. There are as many failures as successes in previous technology regulation, but both can prove insightful in the present moment.

Michael Frank is a senior fellow in the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies in Washington, D.C.
