Regulating Facial Recognition Technology
The debate over facial recognition has been caught up in and shaped by the larger public debate over policing and race. This created powerful emotional concerns over the potential use of the new technology. However, some of the criticisms of facial recognition were based on a misunderstanding over the difference between facial recognition technology, which says that one picture looks like another, and facial characterization, which assigns attributes to individuals (such as gender or race) based on an image. The confusion helped build a narrative that exaggerates the risk of using facial recognition technology.
Two other misunderstandings also need to be addressed. The first concerns the rate of improvement in digital technology. Like other digital technologies, facial recognition is improving rapidly. A 2021 report by the National Institute of Standards and Technology (NIST) found that accuracy had improved dramatically and that more accurate systems were less likely to make errors based on race or gender, meaning that critiques from even a few years ago are already outdated. This kind of improvement is the norm for any new technology, where development is an iterative process and each generation improves on its predecessor. The rate of improvement in facial recognition should allay concerns about the probability of misidentification.
The second misunderstanding concerns what the software actually asserts. The output of facial recognition software essentially says that the face in one image is the same face in another image to which it is being compared. It makes no assertion about attributes associated with the person in the image. How probable a match is depends on deployment considerations and the generation of the technology. As facial recognition technology improves, the probability of accurate identification increases. The latest generation of facial recognition technology has a very high degree of accuracy, but deployment in the field can be less accurate, since many factors—lighting, movement, camera angle—affect performance. Facial recognition regulations will need to account for how the technology is deployed. While future facial recognition systems will be even more accurate, they will still need to be accompanied by rules designed for different deployment scenarios.
The debate over facial recognition technology also reflects the very weak state of privacy protections in the United States. Digital technologies create reams of data on individuals, including imagery, and there are relatively few constraints on how this data can be used, particularly for commercial use. Government use of data is more closely regulated, but these regulations often do not apply to new technologies. This is a long-standing tension that goes back to the days of copper-wire telephones.
Adopting or creating constraints on government use of facial recognition technology should parallel the development of constraints on government use of communications data. The Constitution forbids unreasonable searches but permits searches that are reasonable—for example, those subject to laws and oversight. Creating these rules and oversight mechanisms for facial recognition is a necessary task and should be approached in the same way, balancing privacy concerns with public safety. Facial recognition technology is another example of law and policy needing to catch up to technology if we are to safely reap its full benefit.
This need to modernize protections has led a number of state and local jurisdictions to create requirements for the use of facial recognition technology. These fall into three categories: restrictions on law enforcement use, restrictions on other government use, and restrictions on commercial use. The most draconian measure is a complete ban on facial recognition technology, a solution that is appealing in its simplicity but can actually do more harm than good for public safety and the efficient delivery of services. Banning cars would eliminate car accidents, but no one considers that a sensible solution; at the same time, driving without any traffic rules would be dangerous. Local bans will not stop the development of the technology or its wider use. The task for public policy is to define the rules needed to manage risk.
One way to think of this is to consider the general revulsion over the events at the Capitol on January 6, 2021. Without facial recognition technology, many of those who attempted to stop the electoral certification would have escaped without penalty. A lesson from January 6 is that facial recognition technology can be an irreplaceable tool for maintaining public order. That lesson is buttressed by other potential consumer and citizen benefits—most people would choose to wait 15 seconds at an airport rather than 30 minutes at a TSA entry point or an immigration booth on return.
This points to a hierarchy for regulation:
- Strict controls on use by law enforcement agencies should be similar to those used for communications data. These should include oversight and prior approval for programs, transparency in use, rules limiting secondary uses of collected data, and requirements for human review and rights for redress.
- Rules governing government uses other than law enforcement should be less restrictive. These should also include transparency and oversight, defining acceptable secondary uses, and providing processes for redress.
- Rules for commercial use should be linked to improved privacy protections. Rules for commercial use in public spaces may need to be more stringent than rules for on-premises use.
This is not the first time that the United States has needed to create a regulatory structure for new technologies. There are many precedents, including the laws and rules governing law enforcement access to communications, but applying these precedents requires many decisions about the elements of oversight, transparency, and use. It also ultimately requires progress in creating adequate privacy protections for all data generated by digital technologies. Creating these rules will be hard work. A longer CSIS report will examine both state and local legislation (proposed and in effect) on facial recognition to lay out the contours of what has been done and what still needs to be done.
James Andrew Lewis is senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2021 by the Center for Strategic and International Studies. All rights reserved.