A Dose of AI

Examining what type of AI exists today and what type we can expect moving forward

What is Artificial Intelligence? We’ve all heard the term, but what does it mean? For some it evokes imagery of a cinematic world’s end; for others, it is the Easy Button come to reality. Artificial Intelligence was originally defined by Stanford University Professor John McCarthy as the science and engineering of making intelligent machines.

The security industry finds itself at widely varying stages of adoption. Artificial Intelligence, or AI as it is commonly called, is not all the same. AI is found in almost every aspect of the security industry, from the physical devices to the software platforms that run them. AI is an umbrella term covering different approaches, including computer vision (CV), machine learning (ML), neural networks (NN) and deep learning (DL). These approaches produce different outcomes. A key point here is that not all AI is the same.

There are macro-category stages, and micro types of AI within each stage.

The Stages: There are three macro-category stages of AI.

Artificial Narrow Intelligence (ANI). Artificial narrow intelligence, also known as “Weak AI,” is AI that can perform a narrow, single-focused set of specific tasks. It does not think for itself; it responds to pre-defined training. A good example of this technology is AI-enabled object classification or identification analysis.

Artificial General Intelligence (AGI). Artificial general intelligence, also known as “Strong AI,” is AI that can think for itself and make decisions based on artificial thought, removing the need for a human to confirm its alerts. ChatGPT and similar AI-enabled bots are the closest AI has come to AGI to date.

Artificial Super Intelligence (ASI). Artificial super intelligence is computer intelligence surpassing human intelligence, so far seen only in cinematic magic: The Terminator, The Matrix, Ultron of The Avengers and I, Robot, to name just a few. The world may not be facing an AI apocalypse; however, AI has seen more innovation in the last 10 years than in the 50 years before.

AI continues to drive innovation, solving existing problems and ones the world has not yet thought of. The models the security industry has built, and continues to build, provide real-time and forensic solutions that protect people and assets while making life easier and better for users and their customers. But what type of AI exists today, and what type can we expect moving forward? The type of AI programmed into each of these categories will drive the next innovative step.

Types of AI
There are four types of AI that can be found in each of the three categories above.

Reactive Machine AI. Reactive Machine AI is just that: a reaction to the data presented, where the AI supplies a logical response as output. A famous example was IBM's Deep Blue chess computer, which beat Garry Kasparov.

Limited Memory AI. Limited Memory AI is where the AI starts to make informed decisions based on its training. The spectrum of decision-making ranges from basic tasks to improving on the last positive response it had to a task.

Theory of Mind AI. Theory of Mind AI would bring emotional intelligence into the scenario. This level of AI has not yet been developed.

Self-Aware AI. Self-Aware AI is where AI reaches sentience, with the capability to feel and to register experiences and feelings. Self-Aware AI is a higher-level AI than Theory of Mind and as such is currently only theorized.

Today, most AI models require rules-based input to create the desired output, falling into the category of Limited Memory AI. Yes, it reduces time and increases accuracy; however, this is a long way from sentient AI.

AI Training
All AI combines two key steps: training and inferencing. All AI must first be trained.

Depending on the model, this could be simple computer vision or complex deep-learning neural networks. Typically, the difference shows in the level of complexity: identification versus classification of items. Training accuracy is like training anything else: the quality of the data used, and the amount of time spent training the system, reflect directly on how well the AI will ultimately work. Bad data in equals bad data out, and AI trained for an insufficient amount of time will have a higher error rate, requiring added input once deployed.
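The "bad data in, bad data out" point can be shown with a deliberately tiny sketch. The "model" below is a toy that just memorizes the majority label per feature; the feature and label names are invented for illustration, not taken from any real training pipeline:

```python
from collections import Counter

# Toy "training": memorize the majority label seen for each feature value.
# Purely illustrative; real models and feature sets are far richer.
def train(samples):
    """samples: list of (feature, label) pairs -> feature-to-label lookup."""
    by_feature = {}
    for feature, label in samples:
        by_feature.setdefault(feature, []).append(label)
    return {f: Counter(labels).most_common(1)[0][0]
            for f, labels in by_feature.items()}

# Mostly correct labels: the model learns the right answer.
clean_model = train([("four_wheels", "car")] * 9 + [("four_wheels", "truck")])
print(clean_model["four_wheels"])   # car

# Majority-mislabeled data: the same training code learns the wrong answer.
noisy_model = train([("four_wheels", "car")] * 4 + [("four_wheels", "truck")] * 6)
print(noisy_model["four_wheels"])   # truck
```

The training code is identical in both runs; only the data quality changes the outcome.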

What happens when there is not enough training data to make a functional model? This is where companies, and not just start-ups, find themselves both when starting out and as they retrain their models over time for accuracy. The more data points, the more accurate the AI. To meet this requirement, these companies may look to open-source data sets or purchase data sources to train their models against.

Once the AI is trained, the next step is inference: using the trained model to make predictions on new data. Take video analysis AI as an example. The AI takes what it has been trained to do, applies logical rules to analyze the scene, and then decides, based on its training, what the analyzed scene should contain. If the model was trained to recognize cars, trucks, or bicycles, the inference will identify cars, trucks, or bicycles, or it will not identify an item because it does not fit the trained identification. The same concept works with AI trained for sounds, biometric data, and even computer logic.
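As a rough sketch of that inference step, assume a hypothetical model output of a label plus a confidence score; the function name, label set, and threshold below are invented for illustration, not taken from any product:

```python
from typing import Optional

TRAINED_LABELS = {"car", "truck", "bicycle"}   # what the model was trained on
CONFIDENCE_THRESHOLD = 0.8                     # below this, report no match

def classify_detection(label: str, confidence: float) -> Optional[str]:
    """Return the label only if it is one the model was trained to identify
    and the confidence clears the threshold; otherwise identify nothing."""
    if label in TRAINED_LABELS and confidence >= CONFIDENCE_THRESHOLD:
        return label
    return None  # the item does not fit the trained identification

print(classify_detection("truck", 0.92))       # truck
print(classify_detection("pedestrian", 0.95))  # None: never trained on this
print(classify_detection("car", 0.40))         # None: match too weak
```

Anything outside the trained labels, or below the confidence bar, simply goes unidentified, which is exactly the "does not fit the identification" behavior described above.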

AI ranges from rules-based logic, where virtual boundaries are set to guide the inference and a human is alerted to any anomalies, to deep-learning AI at the other end of the spectrum, which learns the scene, makes its own calculations, and grows more accurate over time with gradual human correction. With rules-based systems, unchecked anomalies can become learned behavior, requiring an understanding of the scene and diligence by the human to correct the action.
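The rules-based end of that spectrum can be sketched as a simple virtual-boundary check: a zone is drawn, and any detection inside it is flagged for a human to review. The zone format and detection fields below are hypothetical:

```python
def inside_zone(x, y, zone):
    """zone is an axis-aligned box: (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max

def check_anomalies(detections, zone):
    """Flag every detection that crosses the virtual boundary for human review."""
    return [d for d in detections if inside_zone(d["x"], d["y"], zone)]

restricted = (0.0, 0.0, 10.0, 10.0)             # the virtual boundary
detections = [{"id": 1, "x": 5.0, "y": 5.0},    # inside the zone
              {"id": 2, "x": 20.0, "y": 3.0}]   # outside the zone
alerts = check_anomalies(detections, restricted)
print([d["id"] for d in alerts])                # [1]
```

Note that the rule itself never changes: every judgment about whether a flagged detection matters is left to the human reviewer.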

The Error Rate Question
AI models carry a measure of bias, as real-world implementations are never the same as lab scenarios. This has led to deployments with unacceptable error rates post-implementation, forcing costly fixes or replacements. The error rate of AI has come into question with biometrics, license plate recognition, access control, and more.

Overselling and underperformance of solutions have produced implementations with unacceptable error rates, falling short of expectations, breaching the trust between customer and analytic, or creating other problems when the AI failed to work as sold. These are some of the reasons many analytic software packages have struggled to gain adoption in the market.

The Legal Question
As AI continues to get more intuitive, the legality of AI will also come into question. This is seen most in topics such as biometrics, but it reflects on the security industry, as well as the ethics of AI as a whole. The questions are not just about the ethics of AI but also about data privacy: who holds the data, where is it held, who has access to it, how will it be used, and is the data held a protectable interest? These questions are just the beginning of concerns about data privacy and the ethics of AI models.

Currently, there is no approved data set or standard for AI models that regulates bias in training data. There are a few specific testing activities; in the United States, for example, the National Institute of Standards and Technology (NIST) runs an ongoing facial biometric test of algorithm accuracy against a stationary face. Again, biometrics has seen some of the most controversial publicity, but it points to a much larger conversation: two different AI models and two different training sets will output different accuracy, variance, and acceptable thresholds for error.

It should come as no surprise that AI and its use are being considered for regulation. Such a framework is currently under consideration by the European Union (EU), which previously enacted the General Data Protection Regulation, known to most as GDPR.

The EU’s Artificial Intelligence Act follows a risk-based approach where legal intervention is based upon the level of risk and aims to explicitly ban harmful AI practices. The framework for this Act was originally introduced in April 2021, and in May 2023 the initial draft for the mandate was approved.

Once approved, these will be the world's first AI rules, with specific bans on biometric surveillance, emotion recognition and predictive policing AI systems. The draft calls for specific governance of AI models such as ChatGPT, and for transparency to ensure "AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly."

The draft calls for a uniform, technology-neutral definition of AI, so that it can apply to the AI systems of today and tomorrow. Governance will be administered through the European Union AI Office, and while this is aimed at protecting members of the EU, the regulation is expected to have far-reaching stipulations affecting AI globally and in every industry.

This article originally appeared in the July / August 2023 issue of Security Today.

