Driving New Opportunities

The advantages of greater processing power at the edge and greater scalability in the cloud

When network video cameras first came on the market, they were chiefly bare-bones video streaming devices. Most of the intelligence and processing for the system was housed in the core server farm of the video management system. Within a few years, however, companies were manufacturing cameras with enough CPU power to perform simple analytics at the edge. As computing power continued to increase, so did the opportunity for companies to embed ever more sophisticated analytics in-camera.

There were several benefits that made edge-based analytics appealing:

Lower bandwidth consumption. Instead of streaming every frame of raw video to the server for analysis, the camera could pre-process the images and just send the event footage.

Lower storage requirements. With only content-rich video being sent to the server, there would be less footage to archive in the storage array.

Lower operating costs. Processing video in-camera was less expensive than monopolizing CPU cycles on the server.

The earliest algorithms brought into the camera were based on pixel changes in the field of view. When the changes reached a certain threshold, the analytics would conclude that motion had been detected and would send the video to the server. Building on that pixel threshold concept, other in-camera analytics like camera tampering and crossline detection soon followed.
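
As a rough illustration, that early pixel-threshold approach can be sketched in a few lines of Python with OpenCV. The threshold values and the send_event_footage() uplink are illustrative assumptions, not any manufacturer’s actual implementation:

    import cv2
    import numpy as np

    PIXEL_DIFF_THRESHOLD = 25    # per-pixel intensity change treated as "motion" (illustrative)
    CHANGED_PIXEL_RATIO = 0.02   # fraction of changed pixels that triggers an event (illustrative)

    def motion_detected(prev_frame, curr_frame):
        """Return True when enough pixels changed between two consecutive frames."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(prev_gray, curr_gray)
        changed = np.count_nonzero(diff > PIXEL_DIFF_THRESHOLD)
        return changed / diff.size > CHANGED_PIXEL_RATIO

    # In-camera loop: only forward footage to the server when motion is detected.
    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        if motion_detected(prev, curr):
            send_event_footage(curr)   # hypothetical uplink to the video management server
        prev = curr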

HOW MACHINE LEARNING IS IMPACTING ADVANCES IN VIDEO

Fast forward to 2020. Manufacturers are building cameras embedded with deep learning processing units (DLPUs), enabling software developers to integrate artificial intelligence (AI) into their video analytics algorithms. This has raised new hopes that machine learning and deep learning will be the silver bullet that the security industry has long been promising. Given the variability of surveillance environments, however, fulfilling that promise still has a way to go. That is because machine learning can consume an enormous amount of resources before a consistently accurate result can be achieved.

We’ll use the example of facial recognition. If you wanted to create the application using AI, you would need an iterative process to train the program to classify an image as a face. That would mean collecting and labeling thousands of images of faces, feeding them into the program, and then testing the application after each cycle of input until you determined that the program had learned “enough” about what characteristics comprise a face. At that point, the trained model would become the finished program. But after that, the AI wouldn’t learn anything new.
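
As a schematic sketch of that train-and-test cycle, here is what it might look like in PyTorch, assuming folders of labeled face/not-face crops. The network, data paths, and stopping accuracy are illustrative placeholders, not a production facial recognition model:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # Assumes folders of labeled crops ("face" / "not_face"); the paths are illustrative.
    tfm = transforms.Compose([transforms.Grayscale(), transforms.Resize((64, 64)), transforms.ToTensor()])
    train_loader = DataLoader(datasets.ImageFolder("data/train", tfm), batch_size=64, shuffle=True)
    val_loader = DataLoader(datasets.ImageFolder("data/val", tfm), batch_size=64)

    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):                               # each cycle of input
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
        # Test after every cycle until the model has learned "enough".
        hits, total = 0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                hits += (model(images).argmax(1) == labels).sum().item()
                total += len(labels)
        if hits / total > 0.95:                           # illustrative stopping criterion
            break

    torch.save(model.state_dict(), "face_classifier.pt")  # the frozen model learns nothing new after this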

Now consider the challenges of facial recognition from a surveillance camera perspective. Not only do you have to train the program to recognize a full-frontal image, but also images captured from multiple angles, images in shadow, bright sunlight, and variable weather conditions, and images with facial hair, hats, glasses, tattoos, and other distinguishing differences. And if the application comes across a novel image for which it has no reference data points, it could fail to recognize the image as a face.

That’s not to say significant strides haven’t been made since the early days of video analytics. Take, for example, video motion detection. We’ve come a long way from simply detecting pixel changes in the scene. Today’s motion detection analytics have been designed to recognize patterns. They’re able to filter out non-essential data like shadows, passing objects like cars and branches, the bloom from a headlight, even birds, which leads to significantly fewer false positive alerts.
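
One common open technique in this spirit is background subtraction with shadow suppression and blob-size filtering. The sketch below uses OpenCV’s MOG2 subtractor; the thresholds are illustrative assumptions rather than any vendor’s actual settings:

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    MIN_OBJECT_AREA = 500      # ignore tiny blobs such as birds or moving branches (illustrative)

    def significant_motion(frame):
        """Return True only when a sufficiently large foreground object is present."""
        mask = subtractor.apply(frame)
        # MOG2 marks shadow pixels as 127; keep only confident foreground (255).
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return any(cv2.contourArea(c) > MIN_OBJECT_AREA for c in contours)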

Other video analytics, such as license plate recognition and object classification (a car’s type, color, make, and model, for example), have also grown in sophistication over the years, with the ability to accurately discern and transmit essential data and ignore anything irrelevant to the specific task at hand.

HOW CLOUD COMPUTING EXPANDS POSSIBILITIES

The video analytics industry has burgeoned into a massive ecosystem of problem-solving tools. But to achieve more predictive intelligence, many of these algorithms rely on larger datasets and greater processing power to reach an acceptable level of accuracy. This has led many businesses to realize that the computing power and datasets available at the edge and in the core are insufficient to the task. So, they’re turning to a third option for their analytics operations: cloud computing. Using a cloud computing service offers certain advantages that neither the edge nor the core can provide:

Greater scalability. Moving to a cloud computing model offers almost unlimited processing power and gives users access to large datasets and images for training video analytics algorithms on targeted tasks.

Greater flexibility. Cloud computing is an elastic solution: businesses only use provider resources on an as-needed basis.

Lower upfront investment. Businesses don’t have to purchase, maintain and update local server resources, which makes it possible for companies with fewer financial means to access virtually unlimited advanced hardware and software resources without a huge capital investment. They can employ video analytics as a service and allocate the expense to their operating budget.

THE MOVE FROM PROPRIETARY TO OPEN STANDARDS DEVELOPMENT TOOLS

In addition to ever greater accuracy, one of the reasons that video analytics are gaining traction is that many of the newer algorithms are hardware agnostic. In the beginning, manufacturers only allowed analytics created by their own in-house software development teams to be embedded on their cameras. As the demand for customized solutions grew, manufacturers gradually began opening their products to third-party developers. But there was a caveat: for the applications to run on those cameras, these outside developers had to use the manufacturer’s own proprietary application development tools and platform. With few exceptions, this generally constrained an application’s usefulness to a single manufacturer’s product line.

With the rise of the Internet of Things and best-of-breed, mixed-vendor ecosystems, this position was no longer sustainable, as it was limiting users’ ability to grow their systems. Today there’s a big push for open source development tools based on industry-standard application programming interfaces. The goal is to create a common development framework that supports deploying video analytics across multiple tiers. In other words, any analytics software written within this framework would be interoperable with edge devices, on-premise servers, or cloud computing farms.
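
As a purely hypothetical illustration of what such a tier-agnostic contract could look like (not any published standard), an analytic might be written once against a small interface and then hosted by camera firmware, a VMS plug-in, or a cloud worker:

    from abc import ABC, abstractmethod
    from typing import Any, Dict
    import numpy as np

    class VideoAnalytic(ABC):
        """Hypothetical tier-agnostic contract: the same analytic runs on camera, server, or cloud."""

        @abstractmethod
        def process_frame(self, frame: np.ndarray) -> Dict[str, Any]:
            """Analyze one frame and return structured event metadata."""

    class LoiteringDetector(VideoAnalytic):
        def __init__(self, dwell_seconds: float = 60.0):
            self.dwell_seconds = dwell_seconds   # illustrative parameter

        def process_frame(self, frame: np.ndarray) -> Dict[str, Any]:
            # Real logic would track people over time; this stub only shows the shape of the API.
            return {"event": None}

    # The hosting runtime differs per tier (camera firmware, on-premise server, cloud worker),
    # but each one simply instantiates the analytic and feeds it frames:
    analytic = LoiteringDetector(dwell_seconds=90)
    result = analytic.process_frame(np.zeros((720, 1280, 3), dtype=np.uint8))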

The other rationale for taking this open source approach is to give developers access to a vast library of proven computer vision and machine learning software on which to build their source code. This would dramatically speed up software development and drive innovation, which would increase the value of all manufacturers’ cameras.

TRANSITIONING VIDEO ANALYTICS FROM SECURITY TO BUSINESS OPERATIONS

Many of the video analytics developed for surveillance and security have, over time, found their way into business operations, especially in retail and healthcare. For instance, loitering analytics are being used in stores to detect possible shoplifting or a customer needing help from service staff. In fact, some retailers are tying the video analytics into intelligent audio systems to trigger a message to the customer that assistance is on the way. This has proven to be an effective deterrent against theft and a way to recover otherwise lost sales opportunities.

In healthcare, some hospitals are using crossline detection analytics to trigger alerts when patients wander or try to get out of bed without assistance. Some hospitals are augmenting their video analytics with audio analytics (such as aggression and gunshot detection) and public address systems to reduce workplace violence.
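
At its core, crossline detection reduces to a simple geometric test: has a tracked point moved from one side of a virtual line to the other between frames? A minimal sketch follows; the bed-rail coordinates are illustrative, and a full implementation would also confirm that the crossing falls within the line segment and track people reliably:

    def side_of_line(point, line_start, line_end):
        """Sign of the cross product tells which side of the line the point is on."""
        (px, py), (ax, ay), (bx, by) = point, line_start, line_end
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def crossed_line(prev_point, curr_point, line_start, line_end):
        """True when a tracked point has moved from one side of the virtual line to the other."""
        before = side_of_line(prev_point, line_start, line_end)
        after = side_of_line(curr_point, line_start, line_end)
        return before * after < 0

    # Illustrative bed-rail line in pixel coordinates; an alert fires when a patient crosses it.
    BED_RAIL = ((100, 400), (600, 400))
    if crossed_line((350, 420), (350, 380), *BED_RAIL):
        print("alert: patient leaving bed without assistance")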

As a result of the COVID-19 pandemic, many establishments are finding novel ways to employ their video analytics. Facial recognition software is being modified to detect whether people are wearing masks to ensure compliance with health and safety protocols. Occupancy analytics are being deployed to alert management when the capacity designated by current municipal codes is reached. And many more innovations are in the pipeline.
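
Occupancy analytics of that kind can be as simple as maintaining an entry/exit counter per site and comparing it against the permitted maximum. A minimal sketch, with an illustrative capacity figure, might look like this:

    class OccupancyMonitor:
        """Counts entries and exits and raises an alert at the designated capacity."""

        def __init__(self, capacity: int):
            self.capacity = capacity   # limit set by the applicable municipal code (illustrative)
            self.count = 0

        def person_entered(self):
            self.count += 1
            if self.count >= self.capacity:
                self.notify_management()

        def person_exited(self):
            self.count = max(0, self.count - 1)

        def notify_management(self):
            print(f"Occupancy {self.count}/{self.capacity} reached; hold entry")

    # Entry/exit events would come from crossline or people-counting analytics at each door.
    monitor = OccupancyMonitor(capacity=50)
    monitor.person_entered()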

CREATING A MULTI-TIERED VIDEO ANALYTICS SOLUTION

As you can see, video analytics has come a long way since simple pixel change detection. Software developers are designing analytics as multi-tiered solutions that can run at the edge, in the core, and up in the cloud, giving users the flexibility to deploy and manage them wherever they are best suited and most economical. They are using open source tools to construct these applications to be hardware agnostic, giving users the freedom to choose best-of-breed components for their installations.

Going forward, application developers will continue striving for analytics able to detect and evaluate ever more subtle nuances in behavior and the environment. This goal will be achieved by building on the legacy of their predecessors and harnessing the power of AI and machine learning. This will lead to more accurate and predictive performance that can help customers meet the daily challenges they face in their security and business operations.

This article originally appeared in the October 2020 issue of Security Today.
