Drivers and Implications

Innovations in hardware have bolstered compute power

Artificial Intelligence (AI) has been around since the 1950s, when scientists and mathematicians first set out to see whether machines could be made to think like humans. Since those early notions of AI, the technology advanced only gradually. Within the last decade, however, significant breakthroughs have occurred, accelerated by digitalization, which has produced far more data to analyze and improved outcomes.

It is fair to say that as the technology continues to advance, the impact of AI will be felt in every industry, particularly the security industry, and will offer unprecedented opportunities to address real-world challenges.

HARDWARE EQUALS MORE COMPUTE POWER

Most recently, innovations in hardware have bolstered compute power and generated more AI-related applications. Think about it: the transition from Central Processing Units (CPUs) to Graphics Processing Units (GPUs), and now Application Specific Integrated Circuits (ASICs), is well underway and rapidly evolving.

The shift from CPUs to GPUs resulted in efficiencies and advancements in parallel processing, and the transition to custom ASICs—specifically designed to accelerate AI techniques in Deep Learning (DL)—has opened the door for on-premise and edge device solutions. As a result, many industries are now starting to realize the significance of both hardware and software when applying AI to more real-world use cases.

From CPUs, GPUs and ASICs to DLPUs and SOCs (System on a Chip), AI is changing the way many device manufacturers approach future device design and functionality. Even though AI has been around for decades, it is recent advancements that have allowed the tech community to optimize the compute power required for AI and its techniques, including:

• Machine Learning (ML), a subset of AI that leverages algorithms to solve problems by identifying patterns in data and making high-confidence predictions, enabling decision making with minimal human interaction.

• Deep Learning (DL), a subset of ML that uses algorithms based on simulated neural networks, inspired by the way humans learn and trained on massive amounts of input data, to produce more accurate outcomes.

• Neural Networks (NNETs), or Artificial Neural Networks (ANNs), the core of DL algorithms, whose structure is designed to simulate the way the human brain and its neurons process information and recognize relationships in data.
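As a rough illustration of the neural-network idea described above, here is a minimal sketch in Python using NumPy. Everything in it (the layer sizes, the random weights, the ReLU nonlinearity) is invented for illustration; real networks learn their weights from labeled data rather than drawing them at random.

```python
import numpy as np

def relu(x):
    # A common neuron nonlinearity: pass positives, zero out negatives
    return np.maximum(0, x)

def forward(x, w_hidden, w_out):
    # Each "neuron" is a weighted sum of its inputs followed by relu;
    # each column of w_hidden holds the weights of one hidden neuron
    hidden = relu(x @ w_hidden)
    return hidden @ w_out  # linear output layer producing a score

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # one input sample with 4 features
w_hidden = rng.normal(size=(4, 8))   # 8 simulated hidden neurons
w_out = rng.normal(size=(8, 1))      # single output score

score = forward(x, w_hidden, w_out)
print(score.shape)  # (1, 1)
```

Stacking many such layers, and training the weights on massive datasets, is what distinguishes Deep Learning from this toy forward pass.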

REAL-WORLD OPPORTUNITIES

So what is the next step for AI? The common goal is the commercialization of AI technology. The data required for AI originates at the edge, with devices that collect raw data and process it into information.

Billions of interconnected devices already exist on private and public networks, and more are added every day, which presents immense opportunity for the development of on-premise and edge-based commercial products. That said, to be successful, companies will need to adapt to the ever-evolving AI landscape. The challenge for most companies is how to apply AI in a real-world environment to solve a specific problem. Resolving real-world problems, moreover, requires a lot of data, and above all quality data.

The approach to acquiring quality data must be methodical and meaningful; it is a walk-before-you-can-run process. In its initial stages, it requires an expert who can examine a problem, ask the right questions and get to the root cause before designing a solution around an AI framework. The visual data from IP cameras is, of course, essential for AI to learn from. Once a solid methodology is determined and quality visual data is collected, there is still the substantial task of organizing and labeling the data for ML and DL techniques. Compute demands increase further when shifting from ML to DL techniques during training.

Once an ML/DL model is trained and ready for execution, compute power at the edge also plays an important role. Deep Learning Processing Units (DLPUs) in today's high-performance cameras provide great advantages in the leap from Machine Learning to Deep Learning.

MODELLING, QUALITY DATA DRIVE RESULTS

It is important to bear in mind that Machine and Deep Learning require hundreds of thousands, if not millions, of data samples to learn. Ultimately, the output of a DL model is only as good as the data it is trained on. Training an AI model to produce accurate, efficient results is tedious and requires a great deal of human interaction to test and retest the outcomes. In fact, real-world situations are essential to training, so these exercises cannot be performed in a vacuum. Public safety cameras are ideal inputs and offer valuable data, since they provide varying perspectives, unique environments and new unstructured data sets that many existing AI models were not trained on.
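The train-and-retest cycle described above can be sketched on toy data. Everything here (the synthetic data, the simple logistic model, the learning rate) is an invented stand-in for the far larger labeled datasets and human review that real deployments require; the point is only the shape of the loop: train, then judge the model on data it never saw.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy ground-truth labels

# Train/validation split: the model is always judged on held-out data
X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

w = np.zeros(2)
for _ in range(500):
    # Gradient-descent training loop for a tiny logistic model
    p = 1 / (1 + np.exp(-(X_train @ w)))
    w -= 0.1 * (X_train.T @ (p - y_train)) / len(y_train)

# The "retest" step: measure accuracy on data the model never saw
val_pred = (X_val @ w > 0).astype(float)
accuracy = (val_pred == y_val).mean()
print(round(accuracy, 2))
```

In practice this loop repeats many times, with humans inspecting failures, relabeling data and retraining, which is why quality data and real-world inputs matter so much.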

While Machine Learning is efficient because its algorithms are good at analyzing structured data, it is ineffective at processing unstructured data. As AI is asked to perform more complex analysis of unstructured data, Deep Learning, with its algorithms based on simulated neural networks, is more capable. Visual data, including raw visual data in computer vision and encoded images or video in JPEG and H.264/H.265, is unstructured and incredibly valuable to Deep Learning. The security industry as a whole generates an abundance of visual data in real-world use cases, data that will undoubtedly help drive advancements in Deep Learning over the next few years.

SETTING EXPECTATIONS

Despite the promising advancements in AI, it is important to set expectations around what AI can and cannot do. For example, many analytics use image classification to detect people and vehicles, but that does not equate to actually understanding a scene. Visual understanding is still very challenging, and there is currently not enough real-world data and applicable training to allow an AI-based solution to fully understand a scene. Furthermore, even the best AI-based analytics cannot read a person's behavior. Emotional nuance such as humor is something an AI-based solution cannot determine or infer. In a scene where crowds gather, AI-based analytics cannot tell whether the event is an altercation or a celebration.

Clearly, some tough questions still face our industry when it comes to real-world applications and possible AI solutions for our customers. For these reasons, analytics used in the security industry require some degree of human interaction and judgement. In addition, neural networks are vulnerable to data manipulation, which can cause AI to output inaccurate results. A scene cannot be fully understood at the single-pixel level, so there is still work to be done from a technological standpoint.

This is also illustrated by the dynamic nature of images captured on an IP camera: in a scene where lighting is inconsistent, harsh shadows can cause per-pixel changes that affect the classification of an image or object. All that said, the community of AI developers is growing, and they, together with their partners, are making great strides.

OPPORTUNITIES FOR TOMORROW

There is no doubt that image classification within security applications is evolving with AI. Moving from pixel-based algorithms in video motion detection to ML and DL models that can classify people and vehicles is a start. What’s more, a reduction in false positives can be attributed to the improvement of many DL models through real-world data.
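The contrast above can be illustrated with a minimal sketch of a pixel-based motion detector (the frame sizes and thresholds are invented for illustration). Simple frame differencing flags genuine motion, but a global lighting change trips it just as easily, which is exactly the kind of false positive that ML/DL classification of people and vehicles helps reduce.

```python
import numpy as np

def motion_detected(prev, curr, pixel_thresh=20, frac_thresh=0.01):
    # Pixel-based detection: flag motion when enough pixels change
    changed = np.abs(curr.astype(int) - prev.astype(int)) > pixel_thresh
    return changed.mean() > frac_thresh

frame = np.full((120, 160), 100, dtype=np.uint8)    # flat gray scene

moving = frame.copy()
moving[40:60, 70:90] = 200                          # a bright object appears
print(motion_detected(frame, moving))               # True: real motion

brighter = (frame + 30).astype(np.uint8)            # global lighting change only
print(motion_detected(frame, brighter))             # True as well: a false positive
```

A DL model classifying the scene content, rather than comparing raw pixels, can tell the second case contains no person or vehicle at all.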

Devices with a custom ASIC, DLPU or SOC designed and optimized for DL will provide advantages at the edge. Edge devices with hardware acceleration for ML or DL will offer better performance and efficiency. As AI becomes more mainstream, open-source projects, along with some proprietary Deep Learning technologies, will fuel the growth of edge-based processing. For example, Google's Tensor Processing Unit (TPU) is an AI accelerator ASIC developed in 2015 specifically for neural-network Machine Learning.

Google opened licensing of the TPU to third parties in 2018 to further the adoption of DL among other hardware manufacturers. The Edge TPU was designed around a low power draw of 2W, compared to Google's server-based TPUs. In its current generation, the Edge TPU can process 4 trillion operations per second and offers an alternative to GPU-accelerated Machine Learning. This is just one example of the innovations in DL hardware acceleration that can lead to breakthroughs in AI and edge compute devices that process images in real time. The future of DL on edge devices will depend on how efficiently an ASIC, DLPU or SOC design is implemented.
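As a quick back-of-the-envelope check on the figures quoted above (4 trillion operations per second at a roughly 2 W draw), the resulting efficiency works out to:

```python
# Efficiency figures from the Edge TPU numbers quoted in the text
ops_per_second = 4e12   # 4 trillion operations per second (4 TOPS)
power_watts = 2         # stated power draw of the Edge TPU

ops_per_watt = ops_per_second / power_watts
print(f"{ops_per_watt:.0e} ops per watt")  # 2e+12
```

It is this operations-per-watt figure, rather than raw throughput, that makes such chips attractive for always-on edge devices like cameras.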

REDEFINING THE FUTURE

Artificial intelligence has already begun to impact the security industry, and it has promising and exciting implications. Intelligence is transitioning to a distributed architecture that puts processing on edge devices, directly where data is collected. Increasingly, AI-experienced companies are collaborating with customers and partners in our industry. Many companies are investing in and exploring AI-centric solutions and are looking for partners to work with in the process. AI-based solutions in our industry will not be one-size-fits-all and will require teams well-versed in AI frameworks.

These teams must be willing to challenge conventions and ask hard questions in order to get to the root of a problem before architecting a solution around AI. With recent advancements and new opportunities, there’s no doubt that innovations in AI will grow exponentially in the coming years—and these innovations will transform our industry and redefine the future of public safety, operational efficiency and business intelligence.

This article originally appeared in the September / October 2021 issue of Security Today.
