Driving New Opportunities

The advantages of greater processing power at the edge and greater scalability in the cloud

When network video cameras first came on the market, they were chiefly bare-bones video streaming devices. Most of the intelligence and processing for the system was housed in the core server farm of the video management system. Within a few years, however, companies were manufacturing cameras with enough CPU power to perform simple analytics at the edge. As computing power continued to increase, so did the opportunity for companies to embed ever more sophisticated analytics in-camera.

There were several benefits that made edge-based analytics appealing:

Lower bandwidth consumption. Instead of streaming every frame of raw video to the server for analysis, the camera could pre-process the images and just send the event footage.

Lower storage requirements. With only content-rich video being sent to the server, there would be less footage to archive in the storage array.

Lower operating costs. Processing video in-camera was less expensive than monopolizing CPU cycles on the server.

The earliest algorithms brought into the camera were based on pixel changes in the field of view. When the changes reached a certain threshold, the analytics would conclude that motion had been detected and would send the video to the server. Building on that pixel threshold concept, other in-camera analytics like camera tampering and crossline detection soon followed.
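That pixel-threshold approach can be sketched in a few lines. The sketch below is illustrative only, assuming grayscale frames as NumPy arrays; the function name and threshold values are hypothetical, not taken from any particular camera firmware:

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_delta=25, area_ratio=0.02):
    """Flag motion when enough pixels change by more than pixel_delta.

    prev_frame, curr_frame: 2-D uint8 grayscale arrays of equal shape.
    pixel_delta: per-pixel intensity change that counts as "changed".
    area_ratio: fraction of changed pixels that triggers a motion event.
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_delta)
    return changed / diff.size >= area_ratio
```

In a camera, a check like this would gate whether the current frames are streamed to the server; anything below the threshold is simply dropped.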


Fast forward to 2020. Manufacturers are building cameras embedded with deep learning processing units (DLPUs), enabling software developers to integrate artificial intelligence (AI) into their video analytics algorithms. This has raised new hopes that machine learning and deep learning will be the silver bullet that the security industry has long been promising. Given the variability of surveillance environments, however, fulfilling that promise still has a way to go. That is because machine learning can consume an enormous amount of resources before a consistently accurate result can be achieved.

We’ll use the example of facial recognition. If you wanted to create the application using AI, you would need an iterative process to train the program to classify an image as a face. That would mean collecting and labeling thousands of images of faces and feeding them into the program, then testing the application after each cycle of input until you determined that the program had learned “enough” about what characteristics comprise a face. At that point, the trained model would become the finished program. But after that, the AI wouldn’t learn anything new.
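That train-test-until-"enough" cycle can be illustrated with a toy sketch. Synthetic one-dimensional data stands in for real face images here, and a single-feature logistic classifier stands in for a deep network; every name and threshold below is illustrative, not part of any real facial recognition product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a labeled image set:
# one feature per sample, label 1 = "face", label 0 = "not a face".
X = np.concatenate([rng.normal(2.0, 1.0, 500), rng.normal(-2.0, 1.0, 500)])
y = np.concatenate([np.ones(500), np.zeros(500)])

w, b = 0.0, 0.0   # logistic-regression parameters
lr = 0.1          # learning rate

def accuracy(w, b):
    preds = 1 / (1 + np.exp(-(w * X + b))) > 0.5
    return np.mean(preds == y)

# Iterate: run a training cycle, test, stop once the model
# has learned "enough" (an accuracy threshold we chose).
for cycle in range(100):
    p = 1 / (1 + np.exp(-(w * X + b)))   # predicted probabilities
    w -= lr * np.mean((p - y) * X)       # gradient step on the weight
    b -= lr * np.mean(p - y)             # gradient step on the bias
    if accuracy(w, b) >= 0.95:
        break

# The frozen (w, b) is now the finished model; as the article notes,
# it will not learn from new images it encounters after this point.
```

The point of the sketch is the stopping rule: training ends when a tester decides accuracy is good enough, and everything the model will ever "know" is fixed at that moment.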

Now consider the challenges of facial recognition from a surveillance camera perspective. Not only do you have to train the program to recognize a full-frontal image, but also images captured from multiple angles; images in shadow, bright sunlight and variable weather conditions; and images with facial hair, hats, glasses, tattoos and other distinguishing differences. And if the application comes across a novel image for which it has no reference data points, it could fail to recognize the image as a face.

That’s not to say significant strides haven’t been made since the early days of video analytics. Take, for example, video motion detection. We’ve come a long way from simply detecting pixel changes in the scene. Today’s motion detection analytics have been designed to recognize patterns. They’re able to filter out non-essential data like shadows, passing objects like cars and branches, the bloom from a headlight, even birds – which leads to significantly fewer false positive alerts.

Other video analytics, such as license plate recognition and object classification (like type of car, color, make and model), have also grown in sophistication over the years, with the ability to accurately discern and transmit essential data and ignore anything irrelevant to the specific task at hand.


The video analytics industry has burgeoned into a massive ecosystem of problem-solving tools. But to achieve more predictive intelligence, many of these algorithms rely on larger datasets of information and greater processing power to reach an acceptable level of accuracy. This has led many businesses to realize that the computing power and datasets available at the edge and in the core are insufficient to the task. So, they’re turning to a third option for their analytics operations: cloud computing. Using a cloud computing service offers certain advantages that neither the edge nor the core can provide:

Greater scalability. Going to a cloud computing model offers almost unlimited processing power and gives users access to large datasets and images to train video analytics algorithms for targeted tasks.

Greater flexibility. A cloud computing service is an elastic solution. Businesses use provider resources only on an as-needed basis.

Lower upfront investment. Businesses don’t have to purchase, maintain and update local server resources, which makes it possible for companies with fewer financial means to access virtually unlimited advanced hardware and software resources without a huge capital investment. They can employ video analytics as a service and allocate the expense to their operating budget.


In addition to ever greater accuracy, one of the reasons that video analytics are gaining traction is that many of the newer algorithms are hardware agnostic. In the beginning, manufacturers only allowed analytics created by their own in-house software development teams to be embedded on their cameras. As the demand for customized solutions grew, manufacturers gradually began opening their products to third-party developers. But there was a caveat. For the applications to run on those cameras, these outside developers had to use the manufacturer’s own proprietary application development tools and platform. With few exceptions, this generally constrained an application’s usefulness to a single manufacturer’s product line.

With the rise of the Internet of Things and best-of-breed, mixed-vendor ecosystems, this position was no longer sustainable, as it was limiting users’ ability to grow their systems. Today there’s a big push for open source development tools based on industry-standard application programming interfaces. The goal would be to create a common development framework that would support deploying video analytics to multiple tiers. In other words, any analytics software written within this framework would be interoperable with edge devices, on-premises servers, or cloud computing farms.

The other rationale for taking this open source approach would be to give developers access to a vast library of proven computer vision and machine learning software on which to build their source code. This would dramatically speed up software development and drive innovation, which would increase the value of all manufacturers’ cameras.


Many of the video analytics developed for surveillance and security have, over time, found their way into business operations, especially retail and healthcare. For instance, loitering analytics are being used in stores to detect possible shoplifting or a customer needing help from service staff. In fact, some retailers are tying the video analytics into intelligent audio systems to trigger a message to the customer that assistance is on the way. This has proven to be a great deterrent against theft and lost sales opportunities.

In healthcare, some hospitals are using crossline detection analytics to trigger alerts when patients wander or try to get out of bed without assistance. Some hospitals are augmenting their video analytics with audio analytics (such as aggression and gunshot detection) and public address systems to reduce workplace violence.

As a result of the COVID-19 pandemic, many establishments are finding novel ways to employ their video analytics. Facial recognition software is being modified to detect whether people are wearing masks to ensure compliance with health and safety protocols. Occupancy analytics are deployed to alert management when occupancy reaches the capacity designated by current municipal codes. And many more innovations are in the pipeline.
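The occupancy use case reduces to a simple counter fed by entry/exit events from the camera's crossline analytic. The class below is a minimal sketch of that idea; the class name, event convention, and capacity value are all illustrative assumptions, not any vendor's API:

```python
class OccupancyMonitor:
    """Track entries/exits from crossline events and alert at capacity."""

    def __init__(self, capacity):
        self.capacity = capacity  # limit designated by municipal code
        self.count = 0            # current occupancy estimate

    def on_event(self, direction):
        """Handle one crossline event.

        direction: +1 for an entry event, -1 for an exit event.
        Returns True when management should be alerted.
        """
        self.count = max(0, self.count + direction)
        return self.count >= self.capacity
```

In practice the camera's line-crossing analytic would call `on_event` for each detected crossing, and a `True` return would drive a sign, audio message, or notification to staff.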


As you can see, video analytics has come a long way since simple pixel change detection. Software developers are designing them as multi-tiered solutions that can run at the edge, in the core and up in the cloud, giving users the flexibility to deploy and manage their analytics wherever they are best suited and most economical. They are using open source tools to construct these applications to be hardware agnostic, giving users the freedom to choose best-of-breed components for their installations.

Going forward, application developers will continue striving for analytics able to detect and evaluate ever more subtle nuances in behavior and the environment. This goal will be achieved by building on the legacy of their predecessors and harnessing the power of AI and machine learning. This will lead to more accurate and predictive performances that can help customers meet the daily challenges they face in their security and business operations.

This article originally appeared in the October 2020 issue of Security Today.

