Three Critical Questions

Knowledge you need when evaluating an AI-driven video analytics solution

It was never really a question of “if” video analytics technology would live up to its promise as “the next big thing” in physical security; it was simply a matter of when the industry would start adopting the solution en masse.

Apparently, that time is very near. According to Deloitte’s State of AI in the Enterprise 5th Edition report, published in October, 94% of business leaders surveyed feel that AI is critical to the future success of their organizations. IBM’s Global AI Adoption Index 2022 further reports that 35% of companies surveyed are already using AI today, and that an additional 42% are exploring AI for future deployment.

With AI changing the way large-scale corporations implement and streamline their processes, measures, and protocols, it is evident that AI technology will continue to play a large role across the enterprise.

None of this comes as a surprise to security professionals who are actively exploring the far-reaching capabilities, accuracy, and performance of new Wave 2 video analytics, which deliver higher levels of analysis and understanding across event detection, classification, tracking, and forensics. As one would expect, such intense interest in video analytics brings with it heightened competition and performance claims, many of which are misleading at best and have the potential to reintroduce the same skepticism that dogged video analytics for years.

This makes the process of vetting the best possible video analytics solution a critical task, one that starts with asking the right questions. To get the process started, here are three fundamental questions you should ask every AI video analytics provider to help gain a better understanding of their specific solution.

Where does your analytics training data come from? AI-based analytics rely on models that learn patterns from training data, patterns then used to perform a number of different tasks, including image detection, recognition, classification and more. To ensure that systems are accurate and effective, these patterns must correlate strongly with the data analyzed in the real world. An analytics solution whose training data lacks a balanced distribution, in both the quantity and quality of these patterns, ultimately delivers suboptimal performance.
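To make the point concrete, the balance of a training set can be checked with a few lines of code. The sketch below is a minimal, hypothetical example (the class names and counts are invented) showing how a skewed label distribution surfaces immediately when you measure it:

```python
from collections import Counter

def class_distribution(labels):
    """Return each class's share of the dataset, largest first."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.most_common()}

def imbalance_ratio(labels):
    """Ratio of the most- to least-frequent class; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Hypothetical annotation labels from a training set
labels = ["person"] * 800 + ["vehicle"] * 150 + ["bicycle"] * 50
print(class_distribution(labels))  # "person" dominates at 80% of the data
print(imbalance_ratio(labels))     # 16.0
```

A ratio this lopsided is a warning sign: a model trained on it will likely underperform on the rare classes, which is exactly the kind of question a buyer should put to a vendor.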

One common issue in training video analytics to detect specific events stems from biased data sources. Reducing the effects of bias helps mitigate unnecessary negative consequences for the people affected by the AI technology itself. For example, models that use publicly available images to build their face recognition capabilities end up trained on thousands of shots of people in the public eye, such as sports stars, politicians, and actors, who may not represent the "average" human being.

Eliminating these potential analytics biases requires training AI algorithms in a way that minimizes human and systemic influences, which means developing algorithms that consider several factors beyond the scope of the technology itself. One approach is synthetic data training, which enables developers to generate any desired detection scenario, free from the distortions baked into real-world image collections.
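One practical advantage of synthetic data, alluded to above, is that ground-truth labels come for free: because the generator places the object, it knows the exact bounding box. The following is a minimal sketch of that idea under invented parameters (frame size, object size, and the `ultrasound_unit` label are all hypothetical); a real pipeline would render actual imagery rather than just coordinates:

```python
import random

def synthesize_sample(bg_w, bg_h, obj_w, obj_h, rng=None):
    """Place an object at a random position in a background frame.

    Returns the sample's ground-truth annotation. Because the generator
    did the placing, the label costs nothing to produce -- no manual
    annotation, and no dependence on a biased public image source.
    """
    rng = rng or random
    x = rng.randint(0, bg_w - obj_w)
    y = rng.randint(0, bg_h - obj_h)
    return {"bbox": (x, y, obj_w, obj_h), "label": "ultrasound_unit"}

# Generate 1,000 perfectly labeled samples for a 1080p scene
dataset = [synthesize_sample(1920, 1080, 120, 90, random.Random(i))
           for i in range(1000)]
```

Varying the placement, scale, lighting, or background in the generator is how a synthetic pipeline covers scenarios that are rare or absent in real footage.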

In addition, many open-source computer vision algorithms are designed for generic applications and cannot automatically identify when a very specific event takes place. A good example is detecting when an expensive piece of equipment, such as a neonatal ultrasound unit, is removed from a designated area. If the video analytics solution has been adequately “trained,” it will autonomously detect such instances and alert operations and/or security staff that the unit has been removed from a sanctioned area. The same analytics can then help locate the equipment so it can be retrieved and returned to where it belongs.
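The alerting logic that sits on top of such a detector is straightforward once the detector itself is reliable. This sketch (asset IDs, zone coordinates, and box values are all hypothetical) shows the geometric check: flag any tracked asset whose latest detection falls outside its sanctioned zone:

```python
def center(bbox):
    """Center point of an (x, y, width, height) bounding box."""
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

def inside(point, zone):
    """Is a point within an axis-aligned (x1, y1, x2, y2) rectangle?"""
    px, py = point
    x1, y1, x2, y2 = zone
    return x1 <= px <= x2 and y1 <= py <= y2

def check_removal(detections, zone):
    """Return the IDs of tracked assets detected outside the sanctioned zone."""
    return [asset_id for asset_id, bbox in detections.items()
            if not inside(center(bbox), zone)]

zone = (0, 0, 400, 300)  # sanctioned area, in pixel coordinates
detections = {"ultrasound-07": (500, 120, 60, 40),   # outside the zone
              "ultrasound-12": (100, 100, 60, 40)}   # inside the zone
print(check_removal(detections, zone))  # ['ultrasound-07']
```

In production this check would run per frame against the tracker's output, with debouncing so a single noisy detection does not trigger an alert.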

A more generic form of analytics, such as plain object detection, cannot efficiently be implemented for such a highly specific application; some form of synthetic analytics training is required for that level of specialization. Gartner projects that by 2024, 60% of the data used for the development and training of AI and analytics projects will be synthetically generated. Knowing the source of the training data driving a video analytics solution is therefore a critical criterion in evaluating its ability to detect specific anomalies.

Where did your model architecture come from – was it open-source code or written by the provider? A purpose-built video analytics solution designed around computational efficiency and accuracy provides the metrics and data needed to scale security applications quickly and easily. Computational efficiency and accuracy are measured using a number of standardized validation benchmarks, including the Common Objects in Context (COCO) and Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) datasets. Think of these benchmarks as the equivalent of checking the fuel-efficiency rating of a vehicle before buying it, or using Bayes’ theorem to assess the accuracy of a medical diagnostic test.

COCO is a large-scale object detection, segmentation, and captioning benchmark built on high-quality computer vision datasets created with the goal of advancing image recognition using state-of-the-art neural networks. Its datasets serve both as training data for deep learning models and as a benchmark for comparing the performance of real-time object detectors on tasks such as object detection, object instance segmentation, and “stuff” segmentation.
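At the heart of COCO-style detection scoring is intersection-over-union (IoU): how much a predicted box overlaps a ground-truth box. A minimal sketch of the computation, using (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes --
    the overlap criterion behind COCO-style detection benchmarks."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1) +
             (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

COCO’s headline average-precision figure is computed by counting a detection as correct only when its IoU with a ground-truth box clears a threshold, then averaging over thresholds from 0.50 to 0.95; this per-box overlap test is the building block.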

KITTI is another popular validation benchmark, widely used for mobile robotics and autonomous driving datasets. Comprising nearly 130,000 images, the KITTI benchmark spans tasks such as stereo vision, optical flow, and visual odometry, helping validate large datasets.

As gold standards of video analytics measurement, COCO and KITTI can be used to verify that a solution is efficient and accurate before it is scaled up. Purpose-built solutions validated against COCO and KITTI datasets can be scaled with confidence across various applications, and these same benchmarks are now being applied to validate new Wave 2 video analytics that employ synthetic, high-quality training data.

Such powerful new Wave 2 video analytics facilitate the deployment of accurate, efficient, and scalable AI algorithms for specific analytics applications. Consequently, they consistently outperform open-source models such as YOLO and SSD, providing faster, more accurate, and more scalable video analytics for specific security and business intelligence applications.

How do you measure video analytics performance? Performance is ultimately a question of accuracy: how many individuals and/or events were properly detected and identified over a specified time. This applies to both new and recurring events, so that, for example, a blue golf cart carrying two golfers is consistently recognized as exactly that from one detection to the next.
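The standard way to turn those detection counts into comparable figures is precision (how many alerts were real), recall (how many real events were caught), and their harmonic mean, F1. A minimal sketch with hypothetical tallies:

```python
def detection_metrics(true_pos, false_pos, false_neg):
    """Standard detection accuracy figures over an evaluation window."""
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical tallies from one day of footage:
# 90 events correctly detected, 10 false alarms, 5 missed events.
p, r, f = detection_metrics(90, 10, 5)
print(f"precision={p:.3f} recall={r:.3f} f1={f:.3f}")
```

Asking a vendor for these numbers, measured on footage resembling your own sites rather than on a benchmark alone, is a direct way to compare competing claims.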

Once a specific object has been detected, the video analytics machine learning architecture should then be able to provide more detail about what is going on in the scene. This includes extracting fine-grained attributes, such as an individual’s gender or a vehicle’s type and specific color, as well as tracking specific individuals and/or objects within a scene and across multiple scenes over time. This enables the creation of advanced knowledge graphs that correlate people with objects across space and time, providing a new level of insight and event analysis.

By associating a unique digital signature with each detected object, new Wave 2 video analytics employ a deep learning model trained to tolerate changes in illumination, angle, field of view, resolution, body position and pose, weather conditions, and so on. This means that two detections of the same object, person, or face captured by two different cameras can be correlated even though their raw signatures differ. It also allows the solution to fold new samples into its training without adding extra computational time, giving Wave 2 analytics a smarter approach to training data that is faster and more accurate for professional security and business intelligence applications.
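Correlating two signatures typically means comparing embedding vectors with a similarity measure such as cosine similarity, declaring a match above some threshold. The sketch below illustrates the idea; the three-dimensional vectors and the 0.85 threshold are invented for readability (real re-identification embeddings have hundreds of dimensions, and thresholds are tuned empirically):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

def same_identity(sig_a, sig_b, threshold=0.85):
    """Treat two detections as the same object when their signatures
    point in nearly the same direction in embedding space."""
    return cosine_similarity(sig_a, sig_b) >= threshold

cam1 = [0.90, 0.10, 0.40]  # hypothetical signature from camera 1
cam2 = [0.85, 0.15, 0.42]  # same person, different angle, from camera 2
print(same_identity(cam1, cam2))  # True
```

Because the comparison is a cheap vector operation, matching a new detection against a gallery of known signatures scales well, which is what makes cross-camera tracking practical in real time.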

Although the science of AI-driven video analytics has been around for many years, it continues to develop and mature rapidly for real-world applications, creating high demand, intense interest, and plenty of confusion. The three relatively simple questions raised here demand somewhat complex answers, but they set the stage for evaluating and comparing different solutions. The video analytics provider that takes the time to delve into these issues, with documentation and proof-of-performance examples, is the one you should trust to deliver the best return on your security investment.
