AI on the Edge

Are AI-based analytics best processed in the cloud, on the edge or on a dedicated server? The answer is “It depends.”

Discussions about the merits of (and misgivings about) AI (artificial intelligence) are everywhere. In fact, you’d be hard-pressed to find an article or product literature in our industry without mention of it. And if you think you’re not using AI in some capacity by now, think again: most people use it in some form daily without even realizing it.

When it comes to security, you have probably heard that AI is here to stay. It is the perfect assistant to security teams that cannot possibly watch all the video streams being generated by an organization 24/7/365. And it is certainly the only thing that can stay awake doing it. When we think of AI in physical security cameras, we mainly think about its ability to recognize and describe known objects such as people and vehicles.

The ability to recognize and describe unique attributes of an object, such as a person’s shoe color or whether they are carrying a bag or wearing a hat, is extremely valuable for informing our analytics. The more a smart camera can tell the analytics algorithms about the characteristics of the person standing outside a loading dock at 4 a.m., the more those algorithms benefit. It is this marriage of AI-based object recognition and analytics that is revolutionizing our industry, helping security teams respond proactively to potential threats rather than simply reacting to events that have already happened.
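To make this concrete, here is a minimal sketch of the kind of attribute metadata a smart camera’s on-board AI might attach to a detection, and a simple analytics rule built on top of it. The field names and the event schema are hypothetical, not taken from any specific vendor API.

```python
# Hypothetical detection event as a smart camera might report it.
# All field names here are illustrative assumptions.
detection_event = {
    "timestamp": "2023-11-12T04:02:17Z",
    "object_type": "person",
    "confidence": 0.94,
    "attributes": {
        "shoe_color": "white",
        "carrying_bag": True,
        "wearing_hat": False,
    },
    "location": "loading_dock_east",
}

def is_after_hours_person(event, start_hour=22, end_hour=6):
    """Flag a person detected during after-hours (a toy analytics rule)."""
    hour = int(event["timestamp"][11:13])  # extract the hour from ISO-8601
    after_hours = hour >= start_hour or hour < end_hour
    return event["object_type"] == "person" and after_hours

print(is_after_hours_person(detection_event))  # a person at 04:02 -> True
```

The richer the attribute set the camera supplies, the more specific rules like this can become (for example, alerting only on a person carrying a bag).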

AI will soon be commonplace in nearly every surveillance camera model offered, for the simple reason that it turns cameras into smart IoT devices. It’s a value-added feature we’ll soon wonder how we ever lived without, since there are more cameras deployed than human operators can possibly monitor.

Not all AI is equal, however, because there are different methods and models available to sort through the information that is harvested. One of the biggest differences is where the AI processing is done. Is it in the camera itself (also known as the edge), or on a server on premises? Or is it not on site at all, being processed in the cloud? Where the information is processed can have a significant impact on the type of results obtained and the speed with which those results are available.

Edge, Cloud or Dedicated Servers? That is the question.
Running AI on the edge, in the cloud, or on dedicated servers each comes with its own set of advantages and considerations. Choosing between these options depends on the specific use case, the processing requirements, and the limitations of the infrastructure used to transport the data.

For example, it might be OK to send all your video streams to the cloud to run AI analytics for 10 cameras, but what about 100 or 500 cameras? The more raw video feeds travel over the wide area network, the costlier it becomes in terms of bandwidth and server load. Of course, the cloud is known for its scalability, but decompressing a compressed video stream and running it through AI-based analytics all takes time, which can introduce latency and delays when you need to react quickly to an important event where seconds count.
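A back-of-the-envelope calculation shows how quickly the uplink cost scales. The per-camera bitrate below is an assumption (a typical 1080p H.264 stream runs roughly 2–6 Mbps); substitute your own encoder settings.

```python
# Rough sustained-uplink estimate for streaming every camera to the cloud.
MBPS_PER_CAMERA = 4  # assumed average compressed bitrate per camera

def total_uplink_mbps(camera_count, mbps_per_camera=MBPS_PER_CAMERA):
    """Aggregate bandwidth needed to send all streams off-site."""
    return camera_count * mbps_per_camera

for n in (10, 100, 500):
    print(f"{n:>3} cameras -> {total_uplink_mbps(n):>5} Mbps sustained uplink")
# 10 cameras need 40 Mbps; 500 cameras need 2,000 Mbps (2 Gbps), sustained.
```

At 500 cameras the WAN link, cloud ingest, and decode/inference capacity must all absorb roughly 2 Gbps around the clock, which is where the cost argument for edge processing comes from.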

Benefits of Edge-based AI Analytics
Low latency. Being able to run AI-based analytics on the edge means analyzing footage the moment it hits the sensor, potentially even before it is compressed to a format like H.264 and sent to a VMS as a video stream. There is no faster way to detect a person or a vehicle and describe their behavior and attributes than doing it on the edge. If you have real-time applications where quick, proactive decision-making is necessary, processing on the edge is the answer. However, if you are only analyzing video footage post-event, then the delays inherent to cloud-based analytics might be acceptable.

Privacy and data security. Edge computing keeps sensitive data localized, enhancing privacy by minimizing the need to send data to external servers for processing. For example, it might not be legal to record audio along with the video surveillance in certain environments. Sound analytics can instantly notify operators of glass breaks, gun shots, and yells without recording any audio along with the video stream.

Bandwidth efficiency. As mentioned previously, processing data on the edge reduces the amount of data that needs to be transmitted to the cloud. Since the amount of data increases rapidly as cameras are added, edge-based analytics can be especially beneficial in scenarios where network bandwidth is limited or expensive.

Offline capabilities. Edge devices can continue functioning even when they are disconnected from the cloud. This is important in situations where a reliable internet connection cannot be guaranteed, such as in remote areas or during network outages.

Regulatory compliance. Some industries, like healthcare or finance, have strict regulations regarding data privacy and residency. Running AI on the edge can help organizations comply with these regulations by keeping data within certain geographical boundaries.

Enhanced reliability. As edge-based processing evolves, distributed edge architectures can enhance system reliability. Even if one edge device fails, others can continue to operate independently, reducing the risk of complete system failures.

The Case for the Cloud
It is important to acknowledge that there are also challenges to consider when deploying AI on the edge, such as limited computational resources, the potential difficulty in maintaining and updating edge devices, and the need to manage and secure a network of IoT-style devices.

Cloud-based and dedicated server solutions offer advantages like scalability, centralized management, and access to powerful hardware, making them well-suited for applications that require extensive computational resources and where low latency is not a critical factor.

The Case for Hybrid Deployments
Using the edge for AI-based object detection and attribute harvesting is hard to beat, but when it comes to comparing that data for use in business and operational intelligence analysis, we frequently need more power.

Hybrid deployments can represent the best of both worlds since edge AI processing can send the lightweight, low bandwidth, resultant data to a dedicated server or cloud-based compute engine for further processing and comparisons to existing databases of information. In this way, hybrid edge/cloud/server deployments represent a powerful combination with no compromises when it comes to crunching big data and finding trends.
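The "lightweight, low bandwidth" point is easy to quantify: in a hybrid deployment, what travels off-site is a small metadata event, not video. The sketch below compares the two; the event schema is hypothetical and the 4 Mbps video bitrate is an assumption.

```python
import json

# A hypothetical detection event as the edge device might forward it
# to a cloud or on-premises analytics back end.
event = {
    "camera_id": "cam-042",
    "timestamp": "2023-11-12T04:02:17Z",
    "object_type": "vehicle",
    "attributes": {"color": "red", "direction": "inbound"},
}
metadata_bytes = len(json.dumps(event).encode("utf-8"))

# One second of compressed 1080p video at an assumed 4 Mbps:
video_bytes_per_second = 4_000_000 // 8

print(f"metadata event:      {metadata_bytes} bytes")
print(f"one second of video: {video_bytes_per_second} bytes")
print(f"the video is roughly {video_bytes_per_second // metadata_bytes}x larger")
```

Sending events instead of frames is what lets a single server or cloud tenant aggregate detections from hundreds of cameras and run trend analysis against existing databases without the raw-video ingest burden.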

Let Your Unique Security Needs Dictate How You Use AI
Ultimately, the decision between edge, cloud, or hybrid deployments depends on factors like your unique latency requirements for real-time alerts, data privacy concerns, available network bandwidth, and the trade-offs between processing power and cost.

One answer seems common to all use cases: at minimum, use edge AI processing as much as possible. If more AI processing is required, consider sending the lighter weight results from the edge to a dedicated server in the cloud or on the ground. Edge-based computing will only get more powerful, but there will always be a limit to how much information the edge can hold when crunching through piles of big data.

Let your unique requirements be your guide.

This article originally appeared in the November / December 2023 issue of Security Today.

