Facial Recognition Database Facing Potential Legal Action For Using Photos, Many of Children, Without Permission
The massive MegaFace dataset may have violated the Illinois Biometric Information Privacy Act, a 2008 law that prohibits the use of residents' facial scans without their permission.

A facial recognition database holding more than 4 million photos of nearly 700,000 people is undergoing new scrutiny for its use of photos from Flickr without the express permission of users. 

A New York Times report explores the relationship between the progress of surveillance technology and the availability of huge databases of facial photos on the web, including MegaFace. 

MegaFace was developed by computer science professors at the University of Washington and consisted of downloaded versions of photos from the Yahoo Flickr Creative Commons 100 Million Dataset. The project was part of an effort to make it easier for smaller companies and researchers to further their development of facial recognition technology, among other goals. 

The Yahoo database did not distribute users’ photos directly. Rather, links to the photos were shared so that if a user deleted the posts or made them private, researchers would no longer have access to them. 

But MegaFace made the photo sets directly downloadable, making it easier for companies to obtain the data and use it for research purposes. The Flickr dataset was especially valuable because it contained many photos of children, whom facial recognition systems typically have a difficult time identifying accurately. 

The University of Washington went on to host the “MegaFace Challenge” in 2015 and 2016, asking companies working on facial recognition to use the data to test the accuracy of their systems. More than 100 organizations and companies participated, the Times reported, including Google, SenseTime and NtechLab. All companies were asked to agree to use it only for “noncommercial research and educational purposes,” and some businesses said they deleted the dataset after the challenge. 

Now, many of the people who posted photos of their children to the site say they were unaware that their children’s faces had been used to develop facial recognition technology. The data set was not anonymized, meaning that the Times was able to find people who had posted the photos through the links provided by Yahoo. 

“The reason I went to Flickr originally was that you could set the license to be noncommercial,” Nick Alt, an entrepreneur in Los Angeles, told the Times after finding out that photos he had taken of children were in the database. “Absolutely would I not have let my photos be used for machine-learning projects. I feel like such a schmuck for posting that picture. But I did it 13 years ago, before privacy was a thing.”

Most people included in the database were not legally required to grant permission to use their photos because they were licensed under Creative Commons. But residents of Illinois are protected under the Biometric Information Privacy Act, a 2008 law that imposes fines for using someone’s fingerprints or face scans without consent. 

The use of Illinois Flickr users’ photos could lead to legal implications if residents decide to pursue lawsuits. Photos themselves are not covered by the law, but scans of the photos should be, according to Faye Jones, a law professor at the University of Illinois. 

“Using that in an algorithmic contest when you haven’t notified people is a violation of the law,” Jones said, adding that people who had their faceprints used without permission have the right to sue and earn $1,000 per use. That fine could go up to $5,000 if the use was “reckless.” 

The combined liability could add up to more than a billion dollars, the Times reported. 

“The law’s been on the books in Illinois since 2008 but was basically ignored for a decade,” Jeffrey Widman, an attorney in Chicago, told the Times. “I guarantee you that in 2014 or 2015, this potential liability wasn’t on anyone’s radar. But the technology has now caught up with the law.”


About the Author

Haley Samsel is an Associate Content Editor for the Infrastructure Solutions Group at 1105 Media.
