Drivers and Implications

Innovations in hardware have bolstered compute power

Artificial Intelligence (AI) has been around since the 1950s, when scientists and mathematicians set out to see whether they could make machines think like humans. Since those early notions of AI, the technology has advanced gradually. Significant breakthroughs have occurred within the last decade, however, accelerated by digitalization, which has produced more data to analyze and improved outcomes.

It is fair to say that as technology continues to advance, the impacts of AI will be felt in every industry—particularly the security industry—offering unprecedented opportunities to address real-world challenges.

HARDWARE EQUALS MORE COMPUTE POWER

Most recently, innovations in hardware have bolstered compute power and generated more AI-related applications. Think about it: the transition from Central Processing Units (CPUs) to Graphics Processing Units (GPUs), and now to Application-Specific Integrated Circuits (ASICs), is well underway and rapidly evolving.

The shift from CPUs to GPUs resulted in efficiencies and advancements in parallel processing, and the transition to custom ASICs—specifically designed to accelerate AI techniques in Deep Learning (DL)—has opened the door for on-premise and edge device solutions. As a result, many industries are now starting to realize the significance of both hardware and software when applying AI to more real-world use cases.

From CPUs, GPUs and ASICs to DLPUs and SoCs (Systems on a Chip), AI is changing the way many device manufacturers approach future device design and functionality. Even though AI has been around for many decades, it is recent advancements that have allowed the tech community to optimize the compute power required for AI and its techniques, including:

  • Machine Learning (ML), the subset of AI that leverages algorithms to solve problems by identifying patterns and making high-confidence predictions, resulting in decision making with minimal human interaction.

  • Deep Learning (DL), the subset of ML that uses algorithms based on simulated neural networks, inspired by the way humans learn and trained on massive amounts of input data, to provide more accurate outcomes.

  • Neural Networks (NNET), or Artificial Neural Networks (ANNs), the core of DL algorithms, whose structure is designed to simulate the way the human brain and its neurons process and recognize relationships between data.
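The artificial neuron at the heart of these neural networks can be sketched in a few lines: a weighted sum of inputs plus a bias, passed through an activation function, with deeper networks simply stacking layers of such neurons. This is an illustrative toy in plain Python, not any vendor's implementation; the weights are hypothetical, hand-picked values rather than trained ones.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation to yield a value in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def tiny_network(inputs):
    """Two hidden neurons feeding one output neuron -- the smallest
    possible 'deep' structure. Weights here are hypothetical."""
    h1 = neuron(inputs, [0.5, -0.6], bias=0.1)
    h2 = neuron(inputs, [-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], [1.2, -0.7], bias=0.2)

score = tiny_network([0.9, 0.4])
print(round(score, 3))
```

In real DL systems, training adjusts these weights automatically across millions of neurons, which is exactly where the compute demands discussed above come from.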

REAL-WORLD OPPORTUNITIES

So what is the next step for AI? The common goal is the commercialization of AI technology. The data AI requires originates at the edge, with devices that collect that data and process it into information.

Billions of interconnected devices already exist in private and public networks, with more added every day—an immense opportunity for the development of on-premise and edge-based commercial products. That said, to be successful, companies will need to adapt to the ever-evolving AI framework. The challenge for most companies is how to apply AI in a real-world environment to solve a problem. Furthermore, resolving real-world problems requires a lot of data—quality data.

The approach toward acquiring quality data must be methodical and meaningful—a walk-before-you-run process. Accordingly, in its initial stages, it requires an expert who can examine a problem, ask the right questions and get to the root of the problem before properly designing a solution around an AI framework. Of course, the visual data from IP cameras is essential for AI to learn from. Once a solid methodology is determined and quality visual data is collected, organizing and labeling that data for ML and DL techniques remains a huge task. Compute power demands will increase, especially when shifting from ML to DL techniques during the training process.
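The organize-and-label step above can be sketched as a toy dataset pipeline: annotated frames are shuffled reproducibly and split so the model is never evaluated on frames it trained on. Filenames and labels here are hypothetical.

```python
import random

# Hypothetical labeled frames: each raw capture is paired with a class
# label assigned during the (largely manual) annotation pass.
labeled_frames = [
    ("cam01_0001.jpg", "person"),
    ("cam01_0002.jpg", "vehicle"),
    ("cam02_0001.jpg", "background"),
    ("cam02_0002.jpg", "person"),
    ("cam03_0001.jpg", "vehicle"),
    ("cam03_0002.jpg", "background"),
]

def split_dataset(samples, train_fraction=0.8, seed=42):
    """Shuffle with a fixed seed for reproducibility, then hold out a
    slice for evaluation so training and test frames never overlap."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, holdout = split_dataset(labeled_frames)
print(len(train), len(holdout))
```

At production scale the same bookkeeping applies to hundreds of thousands of frames, which is why the labeling stage dominates so much of the effort.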

Once an ML/DL model is trained and ready for execution, compute power at the edge also plays an important role. The Deep Learning Processing Units (DLPUs) in today's high-performance cameras provide great advantages in the leap from Machine Learning to Deep Learning.

MODELING, QUALITY DATA DRIVE RESULTS

It is important to bear in mind that Machine and Deep Learning require hundreds of thousands, if not millions, of data sets to learn from. Ultimately, the output of DL is only as good as the data the algorithm is taught with. Training an AI model to correctly output an efficient result is tedious and requires a lot of human interaction to test and retest the results. In fact, real-world situations are essential to training, so these exercises cannot be performed in a vacuum. Public safety cameras are ideal inputs and offer valuable data, since they provide varying perspectives, unique environments and new unstructured data sets that many existing AI models are not based upon.

While Machine Learning is efficient because its algorithms are good at analyzing structured data, it is ineffective at processing unstructured data. Therefore, as AI looks to perform more complex analysis of unstructured data, Deep Learning, with its algorithms based on simulated neural networks, is more capable. Visual data—including raw visual data in computer vision and encoded images or videos in JPEG and H.264/265—is unstructured data and incredibly valuable to Deep Learning. As we know, the security industry as a whole presents an abundance of visual data in real-world use cases—data that will undoubtedly help drive advancements in Deep Learning over the next few years.
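The structured/unstructured distinction can be made concrete with a toy comparison (all values are made up). A structured record carries named fields with fixed meaning that a classic ML rule can key off directly; a raw grayscale frame is just a grid of intensities, and nothing in the numbers themselves says "person" or "vehicle."

```python
# Structured data: named fields with fixed meaning -- easy for classic ML.
access_log_row = {"door_id": 12, "hour": 23, "badge_valid": False}

# Unstructured data: a raw grayscale frame is just a grid of pixel
# intensities (0-255) with no inherent semantics.
frame = [
    [12, 14, 200, 198],
    [11, 13, 205, 201],
    [10, 12, 199, 197],
]

# Classic ML can operate directly on the named fields...
suspicious = (not access_log_row["badge_valid"]) and access_log_row["hour"] >= 22

# ...but summarizing pixels discards almost all of the spatial structure;
# extracting meaning from them requires learned features, i.e. DL.
mean_intensity = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
print(suspicious, mean_intensity)
```

The mean intensity tells us nothing about what is in the frame, which is precisely why unstructured visual data calls for Deep Learning rather than hand-written rules.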

SETTING EXPECTATIONS

Despite the promising advancements in AI, it is important to set expectations around what AI can and cannot do. For example, many analytics use image classification to detect people and vehicles, but that does not equate to actually understanding a scene. Visual understanding is still very challenging, and currently there is not enough real-world data and applicable training to allow an AI-based solution to fully understand a scene. Furthermore, even the best AI-based analytics are not able to read a person's behavior. Emotional differentiation, such as humor, is something that an AI-based solution cannot determine or infer. In a scene where crowds gather, AI-based analytics cannot understand whether the event is an altercation or a celebration.

Clearly, there are still some tough questions facing our industry when it comes to real-world applications and possible AI solutions for our customers. For these reasons, analytics used in the security industry require some degree of human interaction and judgement. In addition, vulnerabilities exist in the data manipulation of neural networks, which can cause AI to output inaccurate results. For instance, a scene cannot be fully understood at the single-pixel level, so there is still work to be done from a technological standpoint.

This fact is also illustrated by the dynamic nature of images captured by an IP camera—in a scene where lighting is inconsistent, harsh shadows can cause changes at the per-pixel level that affect the classification of an image or object. All that said, the community of AI developers is growing, and they, together with their partners, are making great strides.
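The per-pixel effect of a lighting change can be sketched with a toy fixed-threshold classifier of the kind pixel-based analytics rely on (intensity values and the threshold are hypothetical). A uniform dimming from a shadow flips pixel classifications even though the scene content is unchanged, which is exactly the brittleness DL models are trained to overcome.

```python
def naive_foreground_mask(frame, threshold=128):
    """Classify each pixel as foreground (True) if its intensity exceeds
    a fixed threshold -- the kind of rule harsh shadows easily defeat."""
    return [[p > threshold for p in row] for row in frame]

frame = [[120, 140], [125, 150]]                       # hypothetical 2x2 crop
shadowed = [[p - 30 for p in row] for row in frame]    # a shadow dims every pixel

before = naive_foreground_mask(frame)
after = naive_foreground_mask(shadowed)
# Same scene, different lighting: the mask changes completely.
print(before, after)
```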

OPPORTUNITIES FOR TOMORROW

There is no doubt that image classification within security applications is evolving with AI. Moving from pixel-based algorithms in video motion detection to ML and DL models that can classify people and vehicles is a start. What's more, the improvement of many DL models through real-world data has driven a reduction in false positives.
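That reduction in false positives is measurable. A minimal sketch of the bookkeeping, with made-up per-frame detections comparing a pixel-motion analytic against a DL model:

```python
def false_positive_rate(detections, ground_truth):
    """Fraction of event-free frames that the analytic wrongly flagged."""
    negatives = [i for i, truth in enumerate(ground_truth) if not truth]
    if not negatives:
        return 0.0
    false_alarms = sum(1 for i in negatives if detections[i])
    return false_alarms / len(negatives)

# Hypothetical frame-by-frame data: True = event / alarm on that frame.
ground_truth = [True, False, False, True, False, False]  # real events
motion_based = [True, True, False, True, True, False]    # pixel-motion analytic
dl_based     = [True, False, False, True, True, False]   # DL model, fewer alarms

print(false_positive_rate(motion_based, ground_truth))
print(false_positive_rate(dl_based, ground_truth))
```

On this toy data the DL model halves the false-positive rate; real deployments track the same metric over far larger frame counts.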

Devices with a custom ASIC, DLPU or SoC designed and optimized for DL will provide advantages at the edge. Edge devices with hardware acceleration for ML or DL will offer better performance and efficiencies. As AI becomes more mainstream, open-source projects will fuel the growth in edge-based processing, along with some proprietary technologies around Deep Learning. For example, Google's Tensor Processing Unit (TPU) is an AI-accelerator ASIC developed in 2015 specifically for neural network Machine Learning.

Google opened licensing of the TPU to third parties in 2018 to further advance the adoption of DL among other hardware manufacturers. Their Edge TPU was designed around a low power draw of 2W, compared to their server-based TPUs. The current generation of the Edge TPU can process 4 trillion operations per second and offers an alternative to GPU-accelerated Machine Learning. This is just one example of the innovations in DL hardware acceleration that can lead to breakthroughs in AI and edge compute devices that process images in real time. The future of DL on edge devices will depend on how efficiently an ASIC, DLPU or SoC design is implemented.

REDEFINING THE FUTURE

Artificial intelligence has already begun to impact the security industry, and it has promising and exciting implications. Intelligence is transitioning to a distributed architecture that puts processing in edge devices, directly where data is collected. Increasingly, AI-experienced companies are collaborating with customers and partners in our industry. Many companies are investing in and exploring AI-centric solutions and are looking for partners to work with in the process. AI-based solutions in our industry will not be one-size-fits-all and will require teams well versed in AI frameworks.

These teams must be willing to challenge conventions and ask hard questions in order to get to the root of a problem before architecting a solution around AI. With recent advancements and new opportunities, there’s no doubt that innovations in AI will grow exponentially in the coming years—and these innovations will transform our industry and redefine the future of public safety, operational efficiency and business intelligence.

This article originally appeared in the September / October 2021 issue of Security Today.
