First, Do No Harm: Responsibly Applying Artificial Intelligence

In 2022, early Large Language Models (LLMs) brought the term “AI” into mainstream public consciousness, and since then, security corporations and integrators have raced to build their solutions and sales pitches around the biggest tech boom of the 21st century. However, not all “artificial intelligence” is equally suitable for security applications, and it is essential for end users to remain vigilant in understanding how their solutions are utilizing AI.

When a vendor raises the topic of their solution’s new AI capabilities, reactions from potential customers will likely be mixed, depending on their corporate policies, protocols, and personal experiences. Some end users are eager adopters, persuaded by the real benefits they’ve seen from other AI-based technologies, while others are hesitant, their minds filled with memories of hallucinated answers from LLMs. Most are somewhere in between: aware that AI is bringing real value to many, but nervous about its potential hazards. Few customers are more justified in this nervousness than those in the security industry, where hallucinations and bad data could lead to dangerous complacency, unpredictable performance, missed threats, and grave consequences. When trying to understand the potential risks of artificial intelligence for your security ecosystem, it may be helpful to evaluate it across three categories: the developmental, the surface-level, and the open-ended.

Developmental AI is the use of machine learning, deep learning, or other artificial intelligence technologies to build a solution before it reaches the end user. In R&D, AI is an incredible tool for speeding up and refining painstaking testing and data crunching. This application of AI is generally considered “safe and good,” and is nearly unanimously well-regarded in the security industry. We say “safe” because it is monitored and vetted by Subject Matter Experts (SMEs) who can confirm the reliability of its outputs, and “good” because it lets engineering teams iterate rapidly on otherwise painstaking tests and greatly expands the range of scenarios they can use to test their products. While products that use developmental AI can reasonably be called “AI-powered,” they run on steady, tested algorithms that are usually locked in to produce the consistent baseline that security customers expect from a product designed for reliable protection.

Surface-level AI, by contrast, operates on the surface of the technology in question. “Surface-level” does not mean unhelpful or unsophisticated; it simply means the functionality operates at the level of the user experience, away from the fundamental algorithms that form the baseline of the security system’s operational capabilities. This is the optimal home for the versions of AI that have captured the public imagination: LLMs and generative AI. These systems can be useful for reporting and many end-user-facing functions, serving as an instant-response help desk that makes technical interfaces easier to navigate. Even if this AI can take actions on behalf of the end user, it cannot alter the baseline or cause the effectiveness of the technology to drift over time. A safe and good surface-level AI in a security product heightens ease of use and accessibility without compromising the fundamentals of detection, deterrence, or data integrity.

Open-ended AI is where a security system can get into trouble. Open-ended AI is when the review and development process of an AI deployment extends into the world of the end user, effectively using them for ongoing refinement of baseline performance or data collection. End users, however, unlike the engineers who develop these systems, are not SMEs. They are experts in their own fields, but they are ill-equipped to manage data classification and analysis tasks. So when systems begin inviting them to classify detected events as threats or nonthreats on the promise of evolving security, the best outcome is that their inputs are discarded. One dangerous possibility is that the vendor uses the end user’s self-reports to alter that user’s detection algorithms, which can degrade system effectiveness over time as the user’s results drift further from the baseline. Another is that the vendor uses its customers’ self-reported data to train future products, which means poorly vetted data now threatens the baseline itself.
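The drift hazard described above can be illustrated with a toy simulation. Nothing here reflects any real vendor’s algorithm: the threshold, learning rate, and dismissal bias are all hypothetical values, chosen only to show how naive retraining on biased, unvetted end-user labels pulls a detector away from its vetted baseline.

```python
import random

random.seed(42)

BASELINE_THRESHOLD = 0.70  # hypothetical factory-vetted detection threshold
LEARNING_RATE = 0.001      # hypothetical per-report adjustment size
DISMISS_RATE = 0.80        # assumed share of alarms users dismiss as "nonthreat"
                           # (alarm fatigue makes dismissal the easy default)

def simulate_open_ended_drift(n_reports: int) -> float:
    """Naive online retraining on end-user labels: every 'nonthreat'
    report nudges the detection threshold up (desensitizing the system),
    and every confirmed threat nudges it back down."""
    threshold = BASELINE_THRESHOLD
    for _ in range(n_reports):
        if random.random() < DISMISS_RATE:
            threshold += LEARNING_RATE  # user dismissed the alarm
        else:
            threshold -= LEARNING_RATE  # user confirmed the threat
        threshold = min(max(threshold, 0.0), 1.0)  # keep in valid range
    return threshold

drifted = simulate_open_ended_drift(200)
print(f"baseline threshold: {BASELINE_THRESHOLD:.2f}")
print(f"after 200 user reports: {drifted:.2f}")
```

Because each dismissal desensitizes the detector a little further, the threshold creeps upward and events that would have triggered an alarm at the original baseline quietly stop doing so, exactly the kind of silent decay that a locked baseline prevents.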

The key to knowing whether the AI in your security systems is good is understanding how it affects the baseline. A system can have many adjustable parameters, but the baseline performance is the performance that was used to pass any security certifications the technology holds, the performance that was approved by the vendor’s engineers before it left the factory floor, and the performance on which all of the vendor’s promises to you at the point of sale are based. If it changes, you may have “evolved” your security, but you have lost the ability to trust it. The ramifications for safety and liability are obvious. When it comes to security, let the innovations of AI make your life easier and your technology better, but don’t let them undermine the consistency you depend on to keep your facility and people safe.
