Survey: 62% of Respondents Don't Care, or Aren't Sure If They Care, If the AI Used in Their Video Security Is Biased
Pro-Vigil, a provider of remote video monitoring, management and crime deterrence solutions, recently published a research report finding that organizations are more concerned about their artificial intelligence (AI)-powered video surveillance systems' ability to deter crime than about any potential bias in those systems.
Pro-Vigil surveyed 100 users of digital video surveillance across a variety of commercial vertical markets to understand their knowledge of AI, how it is being used in their video surveillance systems, and their opinions about AI bias. The company found:
- 62% of respondents said they either don't care, or aren't sure whether they care, if their AI is biased.
- When asked what they would do if their AI video system were doing a good job deterring crime but was using unethical algorithms, more than one-third (37%) of respondents said they would do nothing.
- Most survey respondents knew whether their video surveillance systems were using AI: 64% indicated they weren't using AI and 21% said they were, while the remaining 15% were unsure.
- 26% indicated that someone in their organization is responsible for understanding how AI is used; the rest either didn't know or said there was no such person.
- Nearly 90% said they would not know how to check whether their AI video surveillance system is biased.
To download Pro-Vigil's research report, "Perceptions of Artificial Intelligence in Video Surveillance," please visit: https://pro-vigil.com/ai-survey/.