Majority of Facial Recognition Systems Are Less Accurate For People of Color, Federal Study Finds

Native Americans had the highest rates of false positives, while African-American women were most likely to be misidentified in a law enforcement database.

A new study, released by the National Institute of Standards and Technology (NIST) on Thursday, finds that a majority of commercial facial recognition systems are less accurate when identifying people of color, particularly African-Americans, Native Americans and Asians.

The federal agency tested 189 facial recognition algorithms from 99 developers, including systems from Microsoft and the Chinese artificial intelligence company Megvii. Systems from Amazon, Apple, Facebook and Google were not tested because none of those companies submitted algorithms to the study, according to The New York Times.

Algorithms developed in the U.S. showed higher rates of false positives for people of color than for white faces, with Native Americans having the highest rates of false positives.

“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” Patrick Grother, a NIST computer scientist and the report’s primary author, said in a statement. “While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”

Notably, the study found that algorithms developed in Asia did not demonstrate the same “dramatic” difference in false positives between Asian and Caucasian faces. Grother said that although the study does not explore the causes behind the false positives, the issue could be that American algorithms are using data sets with primarily Caucasian faces to train their facial recognition systems, making it difficult for those algorithms to accurately identify people of color.

“These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” Grother said.

In an FBI database of 1.6 million domestic mugshots, the report found higher rates of false positives for African-American women. The accuracy problem particularly concerns civil liberties groups, who argue that facial recognition algorithms, still in their infancy, could lead to false accusations, arrests and potential imprisonment when used by law enforcement.

“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the F.B.I., Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”

The study was published as towns and states across the country consider issuing moratoriums on government use of facial recognition. California will implement a three-year moratorium starting in 2020, and towns in Massachusetts have banned law enforcement use of the systems.

Meanwhile, U.S. Customs and Border Protection was pressured into dropping plans to expand mandatory facial recognition scans to Americans entering and exiting the country. The practice is already standard for foreign travelers arriving in and departing from the U.S.

About the Author

Haley Samsel is an Associate Content Editor for the Infrastructure Solutions Group at 1105 Media.
