The Evolution of Scene Intelligence

How video analytics are becoming smarter and more accurate

Remember the early days of video analytics? All the alerts triggered by a passing shadow? Or leaves quivering in the breeze? Even a car with bright headlights driving by? Because these analytics were based solely on pixel changes, they tended to generate a lot of false alarms. In some cases, the false alarm rate of these first-generation analytics became so frustratingly high that users simply turned them off altogether.

Fast-forward to today and video analytics has come a long way, thanks to better image processing and deep-learning models trained to discern the differences between objects and people. A camera can now capture highly granular metadata – such as the color of clothing, the type of vehicle, or the direction an object is traveling – which makes it easier to locate and track people and objects through a scene, whether in real time or when searching archived footage.

Providing a Foundation for Deep Learning
How do developers train these more advanced analytics? They teach them by example. The more examples a model is given for comparison, the more accurate it becomes. For instance, for a search algorithm to successfully locate an object of a precise color, it needs to be taught to recognize color and distinguish between colors. If an algorithm is meant to trigger an alert on a particular type of vehicle, the model has to be trained to discern the differences between motorcycles and bicycles, trucks and cars, buses and mobility scooters. With further training, a model could even learn to recognize and classify specific vehicle models from a chosen manufacturer.
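To make the idea concrete, here is a minimal sketch of that teach-by-example loop: fine-tuning a pretrained image classifier (using PyTorch and torchvision purely as an illustrative toolchain) to tell vehicle classes apart. The class list, dataset folder, and training settings are assumptions for the example, not any vendor's actual pipeline.

```python
# A minimal sketch of "teaching by example": fine-tune a pretrained image
# classifier to distinguish vehicle types. Class names, dataset path, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

VEHICLE_CLASSES = ["bicycle", "motorcycle", "car", "truck", "bus", "scooter"]

# Labeled example images, organized one folder per class (assumed layout).
train_data = datasets.ImageFolder(
    "data/vehicles/train",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]),
)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a model pretrained on generic images and replace the final
# layer so it outputs one score per vehicle class.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(VEHICLE_CLASSES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                 # more examples and passes -> better accuracy
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The accuracy of a model like this rises with the breadth and quality of the labeled examples it sees, which is exactly why more ambitious classification tasks demand larger data sets and more training resources.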

Depending on the complexity of the application and the number of variables the algorithm needs to classify, developers might need to rely on large data sets to train their deep-learning models. The amount of data needed to support the analytics generally determines where the analytics should reside. If the data set is relatively small – such as detecting whether a person is loitering or crossing into a restricted zone – the analytic can reside in the camera.

Placing analytics at the edge reduces latency and delivers greater accuracy, since the video does not need to be compressed – and thus possibly lose critical detail – before being transmitted to a server for analysis. But the camera’s system-on-chip and deep-learning processing unit need to be sufficiently robust for the task.

For applications requiring larger data sets – like reading and classifying license plates from every state across the country – the analytics would likely reside on a server or in the cloud, where they could efficiently compare against a nationwide aggregate of motor vehicle data. Likewise, for facial recognition, the camera could capture an image of the face, but that image would likely need to be processed on a local or cloud server housing a large data set of facial images for comparison and ultimate identification.

Integrating Sensors Augments Scene Intelligence
While advanced analytics can greatly enhance situational awareness, integrating intelligent video cameras with other sensor technologies can take that scene intelligence to the next level. For instance, radar can add another layer of context to the visual data, such as the distance and speed of a person or object approaching or departing the scene.

Radar can also provide early detection of an event and direct a camera to automatically track the person or object. Audio detection devices add acoustic intelligence, such as the ability to recognize, categorize and alert on the sound of weapon fire, breaking glass, or an aggressive tone of voice. Like radar, audio detection can be used to direct cameras to the location of the sound to visually verify the event.

To achieve that interoperability, however, the physical security ecosystem needs to be built on an open-standards platform and support standardized interfaces between devices and analytics. Manufacturers of security equipment usually provide open application programming interfaces (APIs) and software development kits (SDKs) that let these devices and their different data types exchange information.
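As a rough illustration of what such an integration might look like in code, the sketch below forwards a radar detection to a camera so the camera can begin auto-tracking. The hostname, endpoint path, and payload fields are hypothetical stand-ins; a real deployment would use the manufacturer's documented API or a standards-based interface such as ONVIF.

```python
# A hedged sketch of cross-device data exchange over an open API.
# The URL, endpoint, and JSON fields below are hypothetical examples.
import json
import urllib.request

def on_radar_detection(event: dict) -> None:
    """Cue a camera to auto-track a target reported by a radar sensor."""
    payload = {
        "pan_deg": event["azimuth_deg"],                   # bearing from the radar
        "zoom": 2.0 if event["distance_m"] > 50 else 1.0,  # zoom in on distant targets
        "reason": "radar_track",
    }
    request = urllib.request.Request(
        "http://camera.example.local/api/ptz/track",       # hypothetical camera endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)

# Example radar event, shaped the way a sensor SDK callback might deliver it.
on_radar_detection({"azimuth_deg": 142.5, "distance_m": 80.0, "speed_mps": 1.4})
```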

Automating Alerts and Responses Based on Metadata
The ability to integrate scene intelligence from multiple devices and analytics not only helps to minimize false alarms but also generates a wealth of metadata that the system can use to trigger timely alerts and activate specific responses.

The decision tree for action can be programmed quite granularly based on the data the analytics are designed to detect and classify. The simplest action might be to send an event alert message to security operators or officers on patrol, or, depending on the decision tree, to place a 911 call to local responders via an integrated VoIP phone system. But the automated response could also involve triggering an action by other connected technologies in the ecosystem.

For instance, as mentioned earlier, a radar sensor could send a geolocation alert to a camera to track an intruder’s movements. A fence guard analytic could trigger floodlights or a siren if it detects a person or vehicle attempting to enter a restricted area late at night. An audio analytic could initiate an automatic lockdown of all doors when it detects the sound of gunfire. Or, an object analytic could trigger a network speaker to broadcast a pre-recorded message to move a vehicle detected blocking an emergency exit.
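A decision tree like this can be expressed as a compact set of rules over classified events. The sketch below is a minimal, vendor-neutral illustration; the event fields, labels, and action names are assumptions made up for the example.

```python
# A minimal sketch of a metadata-driven decision tree that maps classified
# events to automated responses. Event types, actions, and thresholds are
# illustrative assumptions, not a specific vendor's API.
from dataclasses import dataclass

@dataclass
class Event:
    source: str    # e.g. "radar", "fence_guard", "audio", "object"
    label: str     # what the analytic classified, e.g. "gunshot"
    location: str  # where in the scene or site the event occurred
    hour: int      # local hour of day, 0-23

def respond(event: Event) -> list:
    """Return the list of automated actions for a classified event."""
    actions = []
    if event.source == "audio" and event.label == "gunshot":
        actions += ["lock_all_doors", "notify_911", "alert_operators"]
    elif event.source == "fence_guard" and (event.hour >= 22 or event.hour < 6):
        actions += ["activate_floodlights", "sound_siren", "alert_operators"]
    elif event.source == "object" and event.label == "vehicle_blocking_exit":
        actions += ["play_speaker_message:move_vehicle"]
    elif event.source == "radar":
        actions += ["cue_camera_track:" + event.location]
    return actions

print(respond(Event("audio", "gunshot", "lobby", hour=14)))
# ['lock_all_doors', 'notify_911', 'alert_operators']
```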

Metadata also plays a key role in facilitating searches through live and archived video. In addition to basic timelines, the extensive data captured by intelligent video makes it possible to specify extremely granular parameters, such as a person wearing a blue baseball cap, a green shirt and white shorts, traveling right to left across the field of view. With the advent of natural language queries, these types of searches have become much easier to conduct and yield faster, more on-point results.
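Under the hood, such a forensic search amounts to filtering stored detection metadata against the attributes in the query. The record schema and field names below are illustrative assumptions; an actual video management system would expose this through its own search API, or through a natural-language front end that maps the phrase to the same kind of structured filter.

```python
# A hedged sketch of granular forensic search over stored video metadata.
# The field names and values are assumed for illustration.
detections = [
    {"time": "14:02:11", "type": "person", "headwear": "blue_cap",
     "upper": "green_shirt", "lower": "white_shorts", "direction": "right_to_left"},
    {"time": "14:05:40", "type": "person", "headwear": "none",
     "upper": "red_jacket", "lower": "jeans", "direction": "left_to_right"},
]

# The operator's query: blue cap, green shirt, white shorts, moving right to left.
query = {"type": "person", "headwear": "blue_cap", "upper": "green_shirt",
         "lower": "white_shorts", "direction": "right_to_left"}

matches = [d for d in detections
           if all(d.get(field) == value for field, value in query.items())]
print(matches)   # returns only the 14:02:11 detection
```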

Applying Scene Intelligence to Unique Applications
What many companies are beginning to realize is that combining intelligent video with other intelligent sensors can serve purposes beyond physical security. In fact, just about any industry – from manufacturing to retail to aviation and education – could glean benefits from more comprehensive scene intelligence.

For instance, video analytics could be linked with thermal camera analytics and industrial temperature gauges to detect and alert on combustible materials or overheating equipment on a factory floor. Video analytics linked with vape detectors can help schools detect and identify students who are vaping on campus.

Hospitals can tie video analytics to access control technology to verify who is accessing the pharmaceuticals in a drug cabinet and trigger an alarm if there’s an identity discrepancy. Weather stations can use video analytics in conjunction with anemometers to track the speed and trajectory of tornadoes and trigger emergency broadcasts through network speakers to communities in their path.

The flexibility of an open development environment makes it easier for developers to create, train and integrate advanced deep-learning modules into the ecosystem to meet customers’ unique needs. With a wealth of manufacturer-provided embedded development tools and third-party programming toolkits at their disposal, they can build their analytics from the ground up or use open-source libraries of deep-learning modules as building blocks for their customers’ specialized applications.
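As one example of that building-block approach, the sketch below wraps an open-source, pretrained object detector (torchvision's Faster R-CNN, chosen here purely for illustration) in a small application-specific rule: flagging a vehicle that overlaps an assumed emergency-exit zone. The zone coordinates, confidence threshold, and class IDs are assumptions for the example.

```python
# A minimal sketch of using an open-source, pretrained detection module as a
# building block for a specialized analytic: flagging vehicles overlapping an
# assumed emergency-exit zone. Zone coordinates and threshold are illustrative.
import torch
from torchvision import models, transforms
from PIL import Image

detector = models.detection.fasterrcnn_resnet50_fpn(
    weights=models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
)
detector.eval()

EXIT_ZONE = (100, 300, 400, 600)   # x1, y1, x2, y2 in pixels (assumed layout)
VEHICLE_LABELS = {3, 6, 8}         # COCO class ids for car, bus, truck

def vehicle_blocking_exit(frame: Image.Image) -> bool:
    """Return True if a detected vehicle overlaps the exit zone."""
    tensor = transforms.functional.to_tensor(frame)
    with torch.no_grad():
        result = detector([tensor])[0]
    for box, label, score in zip(result["boxes"], result["labels"], result["scores"]):
        if int(label) in VEHICLE_LABELS and float(score) > 0.6:
            x1, y1, x2, y2 = box.tolist()
            # Simple rectangle-overlap test against the exit zone.
            if (x1 < EXIT_ZONE[2] and x2 > EXIT_ZONE[0]
                    and y1 < EXIT_ZONE[3] and y2 > EXIT_ZONE[1]):
                return True
    return False
```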

Influencing Future Direction
While the fusion of advanced video analytics and other sensors can provide exceptional scene intelligence for real-time alerts and action, the wealth of metadata can also help organizations see where they have been and where they are going. The metadata generated by all these devices can be used to measure compliance, track the progress of operational optimization, or chart any other activity along a timeline.

And gaining a handle on where they’ve been and where they are now can help institutions better decide where they should be going.

This article originally appeared in the September / October 2023 issue of Security Today.
