Majority of Facial Recognition Systems Are Less Accurate For People of Color, Federal Study Finds

Native Americans had the highest rates of false positives, while African-American women were most likely to be misidentified in a law enforcement database.

A new study, released by the National Institute of Standards and Technology (NIST) on Thursday, finds that a majority of commercial facial recognition systems are less accurate when identifying people of color, particularly African-Americans, Native Americans and Asians.

The federal agency tested 189 facial recognition algorithms from 99 developers, including systems from Microsoft and Chinese artificial intelligence company Megvii. Systems from Amazon, Apple, Facebook and Google were not tested because none of those companies submitted algorithms to the study, according to The New York Times.

Algorithms developed in the U.S. showed higher rates of false positives for people of color relative to images of white people, with Native Americans experiencing the highest rates of false positives.

“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” Patrick Grother, a NIST computer scientist and the report’s primary author, said in a statement. “While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”

Notably, the study found that algorithms developed in Asia did not demonstrate the same “dramatic” difference in false positives between Asian and Caucasian faces. Grother said that although the study does not explore the causes behind the false positives, the issue could be that American algorithms are using data sets with primarily Caucasian faces to train their facial recognition systems, making it difficult for those algorithms to accurately identify people of color.

“These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” Grother said.

In an FBI database of 1.6 million domestic mugshots, the report found higher rates of false positives for African-American women. The accuracy issue particularly concerns civil liberties groups in the law enforcement context; they argue that facial recognition algorithms, still in their infancy, could lead to false accusations, arrests and potential imprisonment.

“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the F.B.I., Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”

The study was published as towns and states across the country consider issuing moratoriums on government use of facial recognition. California will implement a three-year moratorium starting in 2020, and towns in Massachusetts have banned law enforcement use of the systems.

Meanwhile, U.S. Customs and Border Protection dropped plans, under public pressure, to expand mandatory facial recognition scans to Americans entering and exiting the country. The practice is already standard for foreign travelers arriving in and leaving the U.S.

About the Author

Haley Samsel is an Associate Content Editor for the Infrastructure Solutions Group at 1105 Media.
