First, Do No Harm: Responsibly Applying Artificial Intelligence

In 2022, early large language models (LLMs) brought the term “AI” into mainstream public consciousness, and since then security corporations and integrators have raced to build their solutions and sales pitches around the biggest tech boom of the 21st century. However, not all “artificial intelligence” is equally suitable for security applications, and end users must remain vigilant in understanding how their solutions use AI.

When a vendor raises the topic of their solution’s new AI capabilities, reactions from potential customers will likely be mixed, depending on their corporate policies, protocols, and personal experiences. Some end users are eager adopters, persuaded by the real benefits they have seen from other AI-based technologies, while others are hesitant, their minds filled with memories of hallucinated answers from LLMs. Most are somewhere in between, aware that AI is bringing real value to many but nervous about its potential hazards. Few customers are more justified in that nervousness than those in the security industry, where hallucinations and bad data can lead to dangerous complacency, unpredictable performance, missed threats, and grave consequences. When trying to understand the potential risks of artificial intelligence for your security ecosystem, it helps to evaluate it across three categories: the developmental, the surface-level, and the open-ended.

Developmental AI is the use of machine learning, deep learning, or other artificial intelligence technologies to build the solution before it reaches the end user. In R&D, AI is an incredible tool for speeding up and refining painstaking testing and data crunching. This application of AI is generally considered “safe and good,” and it is nearly unanimously well regarded in the security industry. We say “safe” because it is monitored and vetted by subject matter experts (SMEs) who can confirm the reliability of its outputs, and “good” because it lets engineering teams iterate rapidly on otherwise laborious tests and expands the number of scenarios they can use to test their products. While products that use developmental AI can reasonably be called “AI-powered,” they ship with stable, tested algorithms that are usually locked in to produce the consistent baseline that security customers expect from a product designed for reliable protection.

Surface-level AI, by contrast, operates on the surface of the technology in question. “Surface-level” does not mean unhelpful or unsophisticated; it means the functionality sits at the level of the user experience, apart from the fundamental algorithms that form the baseline of the system’s operational capabilities. This is the optimal home for the versions of AI that have captured the public imagination: LLMs and generative AI. These systems can be useful for reporting and many end-user-facing functions, serving as an instant-response help desk that makes technical interfaces easier to navigate. Even if this AI can take actions on behalf of the end user, it cannot alter the baseline or cause the technology’s effectiveness to drift over time. A safe and good surface-level AI in a security product heightens ease of use and accessibility without compromising the fundamentals of detection, deterrence, or data integrity.

Open-ended AI is where a security system can get into trouble. Open-ended AI extends the review and development process of AI deployment into the world of the end user, effectively using customers for ongoing refinement of baseline performance or data collection. End users, however, unlike the engineers who develop these systems, are not SMEs. They are experts in their own fields, but they are ill-equipped to manage data classification and analysis tasks. So when systems invite them to classify identified threats or nonthreats on the promise of evolving security, the best outcome is that customer inputs are discarded. One dangerous possibility is that the vendor uses the end user’s self-reports to alter that site’s detection algorithms, which can cause system effectiveness to decay over time as the user’s results drift further from the baseline. Another is that the vendor uses its customers’ self-reported data to train future products, which means poorly vetted data now threatens the baseline itself.
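The drift described above can be made concrete with a toy simulation. Everything here is hypothetical: the score distributions, the noise rates, and the simple threshold “detector” are illustrative stand-ins, not any real product’s algorithm. A detector calibrated on vetted, SME-confirmed labels holds a stable cutoff; refit the same detector on end-user feedback that dismisses most real threats as false alarms, and the cutoff drifts until almost nothing is flagged.

```python
import random

random.seed(7)

def make_events(n, threat_rate=0.3):
    # Toy sensor scores: real threats cluster high, benign events low.
    out = []
    for _ in range(n):
        threat = random.random() < threat_rate
        out.append((random.gauss(0.8 if threat else 0.3, 0.1), threat))
    return out

def fit_threshold(events):
    # Brute-force the cutoff that best fits whatever labels it is given;
    # a stand-in for a vendor retraining on the labels it has on hand.
    candidates = [i / 100 for i in range(101)]
    return min(candidates,
               key=lambda t: sum((s > t) != lab for s, lab in events))

def accuracy(threshold, events):
    return sum((s > threshold) == lab for s, lab in events) / len(events)

# Baseline: calibrated on vetted, SME-confirmed labels.
vetted = make_events(3000)
baseline = fit_threshold(vetted)

# Open-ended loop: end users relabel events, dismissing 60% of real
# threats as "nonthreats"; the vendor refits on that feedback.
feedback = [(s, lab and random.random() > 0.6)
            for s, lab in make_events(3000)]
drifted = fit_threshold(feedback)

held_out = make_events(3000)  # ground truth for evaluation
print(f"baseline cutoff {baseline:.2f}, accuracy {accuracy(baseline, held_out):.2f}")
print(f"drifted  cutoff {drifted:.2f}, accuracy {accuracy(drifted, held_out):.2f}")
```

Under these assumptions the drifted cutoff rises well above the baseline one, and measured against ground truth the retrained detector misses nearly every real threat, while the locked-in baseline keeps performing as certified.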

The key to knowing whether AI in your security systems is good is understanding how it affects the baseline. The baseline can have many adjustable parameters, but baseline performance is the performance that was used to pass any security certifications the technology holds, the performance the vendor’s engineers approved before it left the factory floor, and the performance on which every security promise made to you at the point of sale is based. If it changes, you may have “evolved” your security, but you will have lost the ability to trust it. The ramifications for safety and liability are obvious. When it comes to security, let the innovations of AI make your life easier and your technology better, but don’t let them undermine the consistency you depend on to keep your facility and people safe.

