Modernizing U.S. Commercial Privacy Standards for a Digital Economy

Earlier this month, the chairs of the Senate Commerce, Science, and Transportation Committee and the House Energy and Commerce Committee released a discussion draft of the American Privacy Rights Act (APRA), which would set basic boundaries on how most companies in the United States collect, process, retain, and transfer personal information. The discussion draft includes general data minimization standards that would require companies to process personal information only when necessary to provide a requested service, with limited exceptions for activities like market research and product improvement. In addition, the discussion draft grants individuals rights to access, correct, and delete the personal data that companies hold about them, consistent with widely accepted fair information practice principles rooted in a 1973 Department of Health, Education, and Welfare Advisory Committee report and the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.

As AI development and deployment continue to drive massive demand for sensitive personal information, federal privacy legislation can provide a necessary baseline for data collection and transfers. The draft APRA would require large companies to regularly assess algorithms that present a “consequential risk of a harm” for possible disparate impacts based on protected characteristics and to provide those results to the Federal Trade Commission (FTC) and Congress upon request. In addition, it grants individuals the right to opt out of targeted advertising and extends that opt-out right to algorithmic decisionmaking in “consequential” contexts like housing, employment, education enrollment, and public accommodations. This dual approach, holding businesses responsible for demonstrating algorithmic fairness while enabling user opt-outs, could provide stronger and more consistent protections against algorithmic profiling across all 50 states.

Although dozens of bills introduced in the two previous Congresses shared similar elements of data minimization, individual rights, and algorithmic impact assessments, this latest discussion draft is notable for its bipartisan, bicameral compromise on two major issues: preemption and a private right of action. Historically, some lawmakers have opposed preempting laws like the California Consumer Privacy Act (CCPA), arguing that state legislators must retain the ability to pass laws stronger than a federal standard. The draft APRA proposes overriding any state statutes that directly conflict with the federal privacy standard, which would presumably include the CCPA, while preserving other areas of state law like consumer protection and civil rights. Moreover, the APRA proposes enabling individuals to seek legal recourse against companies that breach specific provisions of the act, although it would first allow companies a 30-day window to cure the alleged violation and would limit the amount of damages recoverable.

However, the efficacy of the APRA’s data minimization provisions could depend on how companies and courts interpret the boundaries of its 15 “permitted purposes,” including market research and product development, in practice. Companies might analyze data points such as purchasing behavior, demographics, and location to predict future consumer trends, sometimes without the awareness of the individuals concerned. For example, the fast fashion company Shein analyzes past consumer behavior to predict future demand for products, which allows it to quickly adjust manufacturing decisions throughout its supply chain. While these exemptions could enable companies to preserve their existing business models or continue to innovate their offerings, it is not yet clear whether they would sufficiently protect individual privacy during behavioral market analysis that is not directly related to targeted advertising.

In addition, the APRA would not apply to private entities that analyze data “on behalf of” or act “as a service provider to [a] government entity.” Presumably, this provision could exempt a range of data brokers and other contractors that exclusively cater to government clients. In recent years, law enforcement and intelligence agencies have turned to data brokers and other contractors to acquire cellphone geolocation data, scan social media posts for keywords and phrases, and identify individuals through facial recognition. The expansion of data collection in the private sector has enlarged the possible scope of government surveillance, raising new questions about the third-party doctrine, under which individuals lack a reasonable expectation of privacy in data voluntarily shared with third parties. Congress could provide clarity on these legal questions both by limiting how all commercial surveillance vendors operate, no matter who their clients are, and by enacting separate processes that government agencies must follow to procure data from private companies.

The Need to Modernize Privacy Laws

An overarching goal of proposed comprehensive privacy bills and discussion drafts, including the APRA, is to modernize the current patchwork of federal and state laws that apply different privacy standards to companies depending on their relationship with the user, geographic location, type of device, and sector. For example, while schools and universities must comply with the Family Educational Rights and Privacy Act when handling records related to their own students, the same law does not apply to educational mobile apps or to data brokers that target online advertisements to prospective students. At the federal level, the U.S. privacy framework has not undergone comprehensive updates since the rise of the commercial internet and thus contains significant regulatory gaps that permit digital platforms to engage in excessive data collection, storage, and transfers for both beneficial and harmful purposes.

Privacy violations pose well-documented risks to civil rights in the United States, especially if companies process sensitive demographic information or related proxy variables in ways that reinforce existing societal biases. For example, U.S. data brokers have analyzed law enforcement records to predict future patterns of crime—but since historical policing patterns have disproportionately targeted individuals based on factors such as race or religion, algorithms can replicate such biases in their results. In October 2022, the White House published a Blueprint for an AI Bill of Rights that recognized the potential for automated decisionmaking to contribute to broader discrimination or exclusion and called for AI developers and deployers to implement default safeguards and user controls to conform data usage to reasonable expectations of privacy. In January 2023, the National Telecommunications and Information Administration issued a request for comment on the effects of commercial data collection on historically marginalized communities. In December 2023, the FTC reached a settlement agreement with Rite Aid for deploying facial recognition technologies to deter theft despite higher error rates for Black and Asian customers. These recent initiatives to promote AI equity build upon Obama-era efforts, including 2016 reports from the White House and FTC that recognized the risks of perpetuating systemic discrimination through big data analysis.

U.S. privacy law is not purely a domestic issue; the data practices of U.S. public and private entities carry global ramifications for cross-border data flows and digital trade, alignment with international human rights frameworks, and the national security risks of foreign adversaries accessing sensitive data on the open market. Over 100 countries have updated their data privacy frameworks for the digital age, leaving the United States an outlier among its global political and economic partners. In particular, the fragmented U.S. data privacy framework contrasts with that of the European Union, which enshrined data protection in the Charter of Fundamental Rights and enacted the General Data Protection Regulation (GDPR). Because the European Commission has not determined that U.S. data protection laws are “adequate” under Article 45 of the GDPR, the legality of cross-border data transfers between the two jurisdictions has been subject to significant uncertainty: the Court of Justice of the European Union invalidated two successive transatlantic data flow frameworks, Safe Harbor in 2015 and Privacy Shield in 2020, and in 2023 France banned apps including Netflix, Instagram, and Twitter from government devices. As a result, in addition to harming the United States’ global reputation and institutional trust, regulatory misalignment creates uncertainty for technology companies, especially small businesses and start-ups, that rely on international data flows.

How Congress Can Strengthen Executive Action

Although the Biden administration has announced multiple executive actions to promote global privacy over the past two years, their efficacy is limited absent major legislative changes regulating how all domestic private companies collect and share personal information. For example, in October 2022, the Biden administration issued Executive Order 14086, which implements the EU-U.S. Data Privacy Framework and limits U.S. signals intelligence collection to what is “necessary” and “proportionate” to advance 12 legitimate objectives. The framework also established a redress mechanism that allows qualifying foreign nationals to request review of potential surveillance by U.S. intelligence agencies, including before a new Data Protection Review Court within the Department of Justice. Although EO 14086 attempts to mitigate the concerns around U.S. government surveillance that formed the basis of the Court of Justice of the European Union’s 2015 and 2020 decisions invalidating the Safe Harbor and Privacy Shield frameworks, comprehensive commercial privacy legislation could support the long-term sustainability of the new framework or even a future adequacy decision under the GDPR.

Most recently, in February 2024, the administration issued Executive Order 14117 to prevent U.S. entities from exporting certain categories of bulk sensitive data to designated high-risk foreign governments. However, it is difficult to prevent U.S. technology companies from transferring sensitive personal information to foreign adversaries without enacting stronger legal boundaries on how those companies collect and store such data in the first place. The White House acknowledged this reality when announcing the executive order, stating: “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children.” The open market for commercial data is expansive and leaky, and personal information can change hands numerous times without notification to the public or to affected individuals. As long as U.S. technology companies can legally collect and transfer sensitive personal information for an almost unlimited range of purposes, prohibited foreign actors could access it through myriad channels, including front companies, hacking, or re-exports through third countries.

Ongoing Reassessment of Privacy Legislation

Although private companies need a uniform federal privacy standard, any forthcoming legislation should act as a baseline, not a ceiling. Congress should continue to consider open questions such as (a) what procedural guardrails U.S. government agencies should follow to procure data from private companies through both voluntary and compelled means, (b) how to ensure minimum privacy and security standards for small businesses without imposing unrealistic compliance costs that could act as a barrier to market entry, and (c) what boundaries, if any, private and public sector entities should follow when processing public-facing data, while preserving open data flows and free expression. Any modernization of federal privacy law should not be the last: Congress should regularly reexamine its standards to ensure they remain robust and complete as technology and data analytics continue to evolve.

Caitlin Chin-Rothmann is a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.
