On the Fast Track
Data has been with humankind since the dawn of time
- By Brian McIlravey
- Feb 01, 2015
Today’s integrated systems are generating
more data than ever before. For some
perspective on just how much data we’re
talking about, consider that from the dawn
of civilization to 2003, humankind created
two exabytes of data. Since 2012, it’s
estimated that five exabytes of data are
being generated every single day by internet-connected
devices, and that number continues to grow. In addition to
computers, tablets and smartphones, we’re surrounded
by a wide variety of other internet-connected objects,
including refrigerators that can alert us when we’re almost
out of milk and cars that can send an email when the windshield
washer fluid is low.
These are just a few examples of the
expanding number of objects in our daily
lives that are equipped with an IP address
and integrated sensors that allow them to
communicate. Technologies like GPS and
RFID also help to connect these objects in
an expanding network known as the Internet
of Things (IoT).
Security technologies are also part of
the IoT and contribute to the amount of
data that’s created. A typical video management system (VMS), for example,
may record 200 frames per second
of surveillance video, while a large, distributed
access control system may record
thousands of transactions per minute during
certain times of day. This is excellent
data to have available, but the challenge
is the tremendous quantity of data to be
processed. In some ways, the term big data
doesn’t come close to describing just how
much is out there.
For an incident management system,
the key is sifting through all the data
produced by these connections and identifying
the most relevant information for
analysis. However, given the almost incomprehensible
amount of data that’s
created every day, the task of identifying,
extracting, and analyzing the right data is
a tremendous challenge.
In addition to massive data sets, how
systems integrate with each other has also
changed drastically in recent years. At one
time, the paradigm was that one system
would connect to another in a linear fashion
to facilitate a simple data exchange.
Today, these integrations are much
more complex and flow in multiple directions
simultaneously as systems constantly
communicate and share data with each
other. This new data model can be termed
HV3, which stands for high volume, high
velocity, and high variety. All these systems
and source points generate huge numbers
of transactions and other data-creation events,
at high speed and on a constant basis. As more and more
devices are connected to the internet, those
five exabytes of data we currently generate
per day will only continue to grow.
It’s important to note that data is not
information. Whether big or small, data
on its own is just bits. To understand
it and make knowledge-based decisions,
data must be extracted, analyzed and
visualized, much like assembling the pieces of a puzzle. Under the linear
paradigm, data produced by machines
and systems would be seen and evaluated
by a human, who would draw conclusions
based on what they saw and understood
within the data. This meant the greatest
challenge was to identify and collect relevant
data for human analysis. What hasn’t
changed is security practitioners’ need to
proactively analyze information streams
to detect, prevent, and solve issues. That is
made more difficult with the HV3 model
because there is so much data available
that it’s no longer possible to process it using
traditional methods and applications.
Fortunately, innovative new tools are now
available that extract and analyze incident-related
data more effectively and efficiently,
turning it into usable intelligence
that helps organizations predict vulnerabilities
and mitigate or eliminate threats.
During the response and recovery
phase, an organization collects data about
an incident from multiple systems and
sources and funnels it into an incident
management solution, where it is analyzed
for indicators or anomalies that help determine
why the incident occurred in the
first place.
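As a rough illustration of that funneling step, the sketch below shows how records from several source systems might be normalized into one common event shape for analysis. It is purely hypothetical: the system names, fields and record formats are assumptions for the sake of example, not a depiction of any particular product.

```python
# Illustrative sketch of the "funnel": per-system adapters map each
# system's native records into one common event shape that an incident
# management solution could analyze. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    source: str          # originating system, e.g. "access_control", "vms"
    timestamp: datetime  # when the event occurred
    subject: str         # badge ID, camera ID, sensor ID, etc.
    kind: str            # event type, e.g. "door_forced", "motion"

def from_access_control(raw: dict) -> Event:
    """Adapter for a hypothetical access control record format."""
    return Event(
        source="access_control",
        timestamp=datetime.fromisoformat(raw["time"]),
        subject=raw["badge_id"],
        kind=raw["event_type"],
    )

# One adapter per connected system; downstream analysis then works
# against a single consistent schema instead of many proprietary ones.
incident_feed = [from_access_control({"time": "2015-02-01T03:12:00",
                                      "badge_id": "badge-4711",
                                      "event_type": "door_forced"})]
```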
The intelligence gleaned from the data
is then shared with departments within
an organization, management, and even
outside organizations like public safety
entities. Using the intelligence generated
by data analysis, an organization can then
implement protocols, change processes
and procedures, and educate employees to
help prevent similar incidents from occurring
in the future.
Incident management comprises four
steps: plan and prepare, identify and respond,
document and collaborate, and
analyze. For every incident, there are patterns
and points of reference that precipitate
the actual event. These may include
someone suddenly coming in to work earlier
and staying later, accessing particular
information more frequently and for longer
than normal, or something as seemingly
simple as a door being propped open.
Advanced incident management tools bring together
all the incident-related data collected
from multiple sources and systems,
then review the datasets to find
commonalities and identify
relationships between occurrences.
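The snippet below sketches what that correlation step might look like in code. It is illustrative only, operating on plain records with assumed "subject" and "source" fields, so that the same badge, vehicle or person appearing across different systems or sites is surfaced automatically.

```python
# Sketch of the correlation step: group normalized event records by a
# shared attribute and surface subjects that span multiple systems.
# Field names ("subject", "source") are assumptions for illustration.
from collections import defaultdict

def cross_source_links(events):
    """Return subjects whose events come from more than one source system."""
    sources_by_subject = defaultdict(set)
    for ev in events:
        sources_by_subject[ev["subject"]].add(ev["source"])
    return {subj: srcs for subj, srcs in sources_by_subject.items()
            if len(srcs) > 1}

# Example: badge 4711 shows up in both access control and video records.
events = [
    {"subject": "badge-4711", "source": "access_control"},
    {"subject": "badge-4711", "source": "vms"},
    {"subject": "badge-9001", "source": "access_control"},
]
print(cross_source_links(events))  # {'badge-4711': {'access_control', 'vms'}}
```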
Those relationships are not always obvious
to a human and may not even occur
at a single site, but they are discoverable by
the algorithms within the software solution.
By analyzing data on a global scale,
incident management solutions identify
specific events, activities and occurrences
that have something in common. Those
commonalities paint a picture of a potential
threat, hazard, or vulnerability. That
information is then used to identify the patterns
and anomalies that normally precipitate
an incident, making it possible to institute
processes that allow incident
management tools to predict a potential
incident before it occurs, rather than
detecting an event as it is happening.
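To illustrate the kind of precipitating pattern described above, here is a toy baseline-and-anomaly check applied to badge-in hours, flagging the "suddenly coming in to work earlier" precursor. The z-score test and its threshold are simplifying assumptions, not a description of any vendor's algorithm.

```python
# Toy anomaly check: build a per-person baseline of badge-in hours and
# flag arrivals far outside it. Threshold chosen purely for illustration.
from statistics import mean, stdev

def unusual_arrival(history_hours, new_hour, z_threshold=3.0):
    """Flag an arrival hour more than z_threshold std devs from baseline."""
    if len(history_hours) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > z_threshold

# A badge holder who normally arrives between 8 and 9 a.m.
baseline = [8.5, 8.7, 8.4, 9.0, 8.6, 8.8]
print(unusual_arrival(baseline, 5.0))  # True  -- a 5 a.m. arrival stands out
print(unusual_arrival(baseline, 8.6))  # False -- within the normal pattern
```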
Incident management also automates
these tasks, allowing intelligence to be
developed much more efficiently and effectively
than is possible through human analysis alone. Data is collected
immediately, creating a record of
an event, such as an unauthorized access
attempt, that may signal the beginning
of the type of pattern that could lead to
a threat. Relying on human analysis alone,
this kind of needle in the haystack could easily be missed.
With automated solutions, the
software does the work of gathering data
from any system or sensor for later use.
For security practitioners, awareness is
the key to incident management and
risk mitigation. Despite the sheer amount
of data available today, collecting and analyzing
the relevant information remains crucial
for managing threats and vulnerabilities,
so it’s important not only to understand
but to embrace the HV3 data model created
by the complex integration and data flow
among people and the growing number of
internet-connected devices and systems.
There is no need to fear the mountain
of data, but it is important to understand
that these functions cannot be adequately
performed by humans alone. Having an advanced
incident management system that
automates data management and analysis
in your security toolbox ensures that raw
data is transformed into the actionable
intelligence needed to mitigate and prevent
incidents in the first
place, delivering a higher level of safety and
security for organizations.
This article originally appeared in the February 2015 issue of Security Today.