Board Approves Detroit Police's Limited Use of Facial Recognition Software

Detroit police will continue to use the software to identify violent crime suspects but are prohibited from using it on live or recorded video.

Following months of controversy over the Detroit police department’s use of facial recognition software on residents, the Detroit Board of Police Commissioners approved a revised policy on Thursday that allows the department to continue using the technology with restrictions.

The policy, submitted by the police department to the advisory board, limits the use of facial recognition to still images of people suspected of violent crimes or home invasion. In addition, police cannot use the software on live or recorded video, or to determine a person’s immigration status.

Lisa Carter, the chairwoman of the board, said that the proposal incorporated at least 23 changes suggested by residents and critics of the department. Those changes include a prohibition on sharing photos in the facial recognition database with private companies and stronger penalties for officers found to have misused the technology.

“I believe the prohibitions contained in the revised directive address many of the concerns raised by the public,” Carter said, according to The Detroit Free Press.

"The revised directive is not a complete ban on the use of facial recognition,” she added. “The revised directive gives clear direction and lines of authority to the department as to when and how such technology can and cannot be used."

Critics say that any use of facial recognition tech by police is unacceptable given studies that show the software is less accurate in identifying people of color and women. The public did not have the chance to address the board before the vote, further angering attendees who were against the policy.

Willie Burton, a member of the Board of Police Commissioners who voted against the policy, said that Detroit should be the last city to implement facial recognition. About 80 percent of Detroit’s residents are black, a higher percentage than any other large city in the U.S., according to NBC News.

“I feel like the technology itself is techno-racism,” Burton said. “With this technology, everyone looks alike.”

Police Chief James Craig applauded the board for approving the policy and noted that the department has committed to not making arrests based solely on a facial recognition match. Instead, officers must have other evidence connecting the suspect to the crime in addition to the match.

“Just like a few nights ago, we had a robbery. It would’ve been a whodunnit, and had it not been for the use of that technology, an armed robbery, we would not have identified that suspect,” Craig told the Free Press. “We cannot arrest solely on the identification from facial recognition. We can’t do it, and we’re not going to do it.”

He said that the main purpose of the software is to support victims and their families by identifying criminals as fast as possible. But civil rights groups fear that the technology could eventually be used to surveil the public (which is banned in the revised policy) and have a disparate impact on people of color, particularly black residents.

"At its core, facial recognition is a way to do mass profiling, which is the last thing a majority black city needs," Amanda Alexander, Detroit Justice Center executive director, said in a statement. "Rather than investing millions of dollars in facial recognition technology that instills fear and targets communities of color, we should be investing in services and resources so that people can prosper."

The department had been using the software for about two years before submitting a policy to the oversight board. In 2017, Detroit police installed facial recognition for the city’s Project Greenlight system, a public-private initiative that encouraged businesses to purchase video surveillance cameras in an effort to deter crime.

After the vote, Craig said that the debate focuses too little on “violent, predatory suspects” who shoot and kill people.

“We use the technology constitutionally, effectively, and it aids us in identifying that violent suspect,” Craig said, according to Michigan Radio.

Amanda Hill, a 28-year-old member of Green Light Black Futures, an organization that aims to end the Greenlight system in the city, said that facial recognition does not make her community safer.

“Facial recognition technology does not keep us safe,” Hill told the Free Press. “Healthy communities require investment — not over-policing, not surveillance.”
