Clearing Up Confusion

Taking a moment to clear up any misconceptions about AI

The confusion I hear in the industry starts with the definitions of these terms: artificial intelligence (AI), machine learning (deep and shallow), and analytics. Some believe these things are all the same and use them interchangeably; at the other end of the spectrum are those using the terms accurately; and there is everything in between.

This creates market confusion even down to the one-on-one conversation level. So, for clarity:

  • AI (artificial intelligence) at its most basic is the ability for a machine to learn on its own;
  • Machine learning typically references how the AI is being applied (shallow/deep evaluation of data at different levels); and
  • Analytics is typically a catch-all term for the results presented back to the user (and it is also used for non-AI analytics).

The most basic definition of AI is the ability for a machine to learn on its own. The expectation is that it will provide actionable results and potentially even take intelligent action based on those results.

Manufacturers in our industry are fairly astute and aware of AI, its subtleties, and its applications. After all, they need to be thinking not about today's technology, but about innovating the technology of tomorrow. Milestone believes intelligence has a huge role to play in that.

So, What Does AI NOT Mean?

It is easy to jump on the bandwagon and consider anything “smart” to be AI. To date, in our industry, most analytics are smart, not intelligent, meaning they can analyze video and conclude some fairly amazing things. However, most are simply algorithmic and are not necessarily learning anything new over time.

The relationship of machine learning and deep learning to AI illustrates how these terms differ. Machine learning usually references how the AI is being applied (shallow/deep evaluation of data at different levels). Shallow and deep learning are the mechanisms by which machine learning takes place.

Due to processing limitations, learning has typically taken place in a “shallow” way (i.e. by looking at only a few levels or dimensions). However, with the significant advances in processing power gained through the development of graphics processing units (GPUs), we can now look at data in a “deep” way (i.e. by looking at many more levels).
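As a loose illustration (not any particular vendor's implementation), the shallow/deep distinction is simply the number of processing layers the data passes through. The sketch below builds two toy networks with randomly initialized weights; the layer widths and sizes are arbitrary, chosen only to show that a "deep" model stacks many more transformation stages than a "shallow" one:

```python
import numpy as np

def make_layers(sizes, rng):
    """One random weight matrix per pair of consecutive layer widths."""
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    """Pass the input through every layer with a ReLU nonlinearity."""
    for w in layers:
        x = np.maximum(0.0, x @ w)
    return x

rng = np.random.default_rng(0)

# "Shallow": a single hidden layer between input and output.
shallow = make_layers([8, 16, 4], rng)

# "Deep": the same input/output widths, but many stacked hidden layers.
deep = make_layers([8, 32, 32, 32, 32, 4], rng)

x = rng.standard_normal(8)
print(len(shallow), len(deep))  # 2 vs. 5 weight matrices
```

The GPU advances mentioned above are what make the deep variant practical: each extra layer is another large matrix multiplication, which GPUs execute in parallel far faster than general-purpose CPUs.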

What I think is reasonable to expect AI to accomplish in its applications is augmentation. It will be quite some time before AI has the potential to replace the capabilities of the average security industry end user. The more likely scenario is that AI will be leveraged to process much more data in much less time, empowering end users to make better decisions more quickly.

In a post-event scenario (i.e. investigative forensics), speed has the potential to matter substantially. In a pre-event scenario (i.e. prevention), more data from more sources provides more intelligent decision-making regarding potential events. We still need people in the AI equation; we just want to give them better data with which to make their decisions.

AI can definitely enhance traditional security through augmentation of the tasks at hand: more data from more sources enables more intelligent decision making.

Is it Really a Learning Situation?

For those who are looking to implement AI, there are some things they should be aware of.

There is currently a trend of assuming that an AI-driven solution is one that analyzes video (either live or recorded) and learns over time, so that the system becomes steadily more accurate in its assessments. However, that constant learning is not actually happening in many solutions today, so it is important to truly understand the application of AI in the solution you are evaluating.

There are few truly AI-at-deployment solutions in the security industry today. Many solutions are “AI-trained,” meaning that back in the lab their algorithms are trained using AI capabilities; but once an algorithm is developed, it is deployed as just a smart algorithm, with no further learning occurring. The only time these algorithms will improve is when they are updated with the results of additional training.

There are cloud-based AI solutions today that can be leveraged to augment your security solutions. As time moves forward, the cloud seems to be part of most people’s conversations when it comes to analytic processing (whether AI-driven or not). In the cloud, there is a consumption-cost model built around processing, so choosing the cloud over local servers comes down to a decision of ROI based on length of time and usage.

This article originally appeared in the April 2019 issue of Security Today.

About the Author

Brad Eck is a strategic alliances program owner – Americas, Milestone Systems.
