The Right to Be Left Alone: Privacy in a Rapidly Changing World
Article 12 of the Universal Declaration of Human Rights (UDHR) states, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation.” While there is no single definition of privacy, it stems from the basic idea that individuals should be able to exercise autonomy and control over their images, experiences, and personal details. Privacy allows individuals to explore their intellectual interests and develop beliefs free from external interference or unwanted attention. As Samuel Warren and Louis Brandeis explained in their famous 1890 Harvard Law Review article, privacy is the general right “to be let alone.”
Privacy strengthens other core components of the UDHR, including Articles 19 and 20: expression, assembly, and association. In both democratic and authoritarian nations, overbroad surveillance of peaceful protests and online communications channels can deter free expression. In some cases, surveillance enables direct censorship: China, for example, has tracked critics on popular social media platforms, facilitating harassment and even incarceration. In others, surveillance can produce chilling effects and self-censorship: if people fear involuntary exposure, they may refrain from speaking at all. After Edward Snowden leaked information about the National Security Agency’s PRISM program in 2013, Wikipedia and Facebook experienced declines in user discussion of, and searches related to, certain national security topics. PEN America surveyed over 500 writers in the wake of Snowden’s revelations, finding that 40 percent had avoided, or seriously considered avoiding, posting on social media, and 27 percent had stopped, or seriously considered stopping, writing or speaking about certain issues.
Privacy is also intertwined with the rights to equality and nondiscrimination under Articles 2, 7, and 18. Historically marginalized communities have long experienced outsized surveillance based on factors like religion and race. For years after 9/11, the NYPD and CIA singled out Muslim neighborhoods and mosques for surveillance. As recently as 2020, U.S. military contractors collected individuals’ precise geolocation histories from prayer apps like Muslim Pro and Salaat First. The same year, the Department of Homeland Security surveilled Black Lives Matter activists with drones and helicopters in at least 15 cities, and the Los Angeles Police Department scanned millions of Twitter posts for words like “lives matter” and “protest.” In the absence of privacy, surveillance can disproportionately target individuals based on demographic attributes or related proxy variables in ways that reinforce historical biases.
Unfortunately, the number of truly private spaces has dramatically shrunk since the adoption of the UDHR in 1948. With the widespread proliferation of smartphones, cameras, and mobile apps over the past 75 years, commercial data collection is now ubiquitous. Artificial intelligence has accelerated this trend by creating enormous demand for sensitive data to train algorithms. Many digital services profit from aggregating personal information—browsing history, geolocation, financial transactions, biometrics, and more—and sharing it with advertisers and other entities. By combining datasets from multiple sources, technology companies can predict sensitive details like an individual’s political affiliation, health, or familial status. Because data collection is so ingrained in the modern digital economy, most individuals have few options other than to surrender personal information in order to participate in everyday activities like school, work, and social events.
As data collection expands in the private sector, it simultaneously bolsters government access to that information. Governments often conduct surveillance with the stated objective of serving the public good—including preventing cyberattacks, fraud, and acts of violence. But more surveillance is not inherently safer: while technology allows governments to easily collect data points on millions of people, the vast majority of those people pose no public safety risk. And the more parties that have access to a dataset, the higher the risk of unintended consequences such as blackmail, doxing, or harassment. To remain consistent with human rights values, governments need to tailor surveillance to what is necessary to achieve legitimate public interests without unduly infringing on the privacy rights of the broader population. However, there is no universal consensus on where that balance should lie—leading to ongoing policy debates over the United Kingdom’s Online Safety Act, Section 702 of the United States’ Foreign Intelligence Surveillance Act, government acquisitions of commercially available information, and more.
As technology continues to transform society, the global community will face more difficult questions over the application of traditional privacy standards to the new digital realm. As one example, for over half a century U.S. courts have distinguished between public areas (like sidewalks and roads) and private areas (like the interior of one’s home) to determine whether individuals maintain a reasonable “expectation of privacy.” But does that traditional line still make sense in a high-tech world where police officers can place facial recognition cameras on poles outside citizens’ homes, fly drones over sidewalks and roads, and automatically scrape public-facing websites? International organizations like the United Nations and the Organisation for Economic Co-operation and Development provide valuable forums for multilateral dialogues to answer questions like these. The world is changing, but the need for privacy and human rights in a democratic society remains constant.
Caitlin Chin-Rothmann is a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.