Always On

Assessing physical security and availability via IIoT

The momentum the industrial Internet of Things (IIoT) has gained in recent years has led to an increased awareness of security threats. As a direct result, we’re seeing more connected devices deployed alongside our access control, cameras, alarms and other perimeter security investments to mitigate this risk.

One example of this melding of physical security and building automation is the merger last September between Johnson Controls, a top provider of building efficiency solutions, and Tyco, a key provider of fire and security solutions. This merger opens the door to future technological innovations in smart buildings that can bring the real value of the IIoT to life.

However, for most companies in the building security industry, the full transition to the IIoT is a long road ahead, and many are still dependent on their existing perimeter security systems.

Every company takes a different approach to securing its systems and, more often than not, has opened itself up to risk along the way in some form or another. On one end of the spectrum, a company could have a single server supporting its security system tucked away in an equipment closet somewhere. On the other, a company may have updated its technology to deploy virtualized servers that efficiently support a range of physical security or other building systems. Both carry risks, and if a company considers the IIoT and expanded connectivity to be on its horizon, it needs to understand where those risks exist.

What Are the Risks?

Imagine an access control system that relies on a dedicated server that may be a decade or more old. It has been out of sight and out of mind until the day it reaches its inevitable end of life, which is when the more significant problems begin. It may deny you access. It may create a lapse in security with a lost or corrupted database that supports perimeter card readers. It may even require certain databases to be rebuilt manually. Ironically, virtualization can compound the risk further by creating a single point of failure where a range of critically important security systems can be taken down all at once.

A major U.S. international airport offers a glimpse into just how far the effects of downtime on security systems can reach. This particular airport maintains an extensive automated infrastructure but was experiencing too much unplanned downtime with two key systems: the physical badge tracking/door access security system and the baggage handling system used for security screening, storage, sorting and transportation of baggage.

Outages of these systems required costly human intervention to maintain customer service levels, minimize safety risks and ensure compliance with federal Transportation Security Administration (TSA) requirements. The airport was forced to deploy staff to manually monitor every door within its secure areas, incurring additional labor costs and risking TSA fines or, worse, the potential shutdown of airport operations and significant lost revenue.

These effects may be magnified even further in buildings without on-site IT staff available to move quickly to deal with server failure in an emergency. So if the 24/7/365 availability of such physical security systems has become absolutely critical in an IIoT world, what are the best approaches to maintaining server availability? I’ve outlined the three most common below:

Data backups and restores. Perhaps the most basic approach to server availability is to have backup, data-replication and failover procedures in place. In particular, these help speed the restoration of an application and preserve data following a server failure.

If backups occur only daily, however, you may be guaranteeing only 99 percent availability, and significant amounts of data can be lost. Considering that this equates to roughly 87.6 hours of downtime per year, or more than 90 minutes of unplanned downtime per week, most businesses cannot tolerate losing critical building security and life-safety applications for that long.
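To make the arithmetic behind these figures concrete, here is a back-of-the-envelope Python sketch (assuming a 365-day year) that converts an availability percentage into an annual downtime budget; running it reproduces roughly the numbers cited in this article, from about 87.6 hours per year at 99 percent availability down to minutes per year at five nines.

    # Back-of-the-envelope conversion from an availability percentage
    # to a yearly downtime budget. Assumes a 365-day year.

    def downtime_hours_per_year(availability_pct: float) -> float:
        """Hours of allowable downtime per year at a given availability level."""
        minutes_per_year = 365 * 24 * 60
        down_minutes = minutes_per_year * (1 - availability_pct / 100)
        return down_minutes / 60

    for pct in (99.0, 99.95, 99.99, 99.999):
        hours = downtime_hours_per_year(pct)
        print(f"{pct}% availability -> ~{hours:.1f} hours (~{hours * 60:.0f} minutes) of downtime per year")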

High availability (HA). HA includes both hardware- and software-based approaches to reducing downtime. HA clusters combine two or more servers running with an identical configuration and use software to keep application data synchronized on all servers. If there is a single failure, another server takes over with little to no disruption. These can be complex to deploy and manage, however, and require that you license software on all cluster servers, which is an added cost.

HA software, on the other hand, is designed to detect evolving problems and proactively prevent downtime. Using predictive analytics to identify, report and handle faults before an outage occurs, this software can run on low-cost commodity hardware and still offer a proactive advantage over HA clusters. HA provides 99.95 percent to 99.99 percent ("four nines") uptime, which equates on average to between 4.5 hours and 52 minutes of downtime per year, significantly better than basic backup.
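To illustrate the failover mechanics an HA pair depends on, here is a minimal sketch that polls a primary node with a heartbeat check and promotes a standby after several consecutive misses. The address, port, interval and promote_standby() hook are hypothetical placeholders, not any vendor's API.

    # A rough sketch of the heartbeat/failover loop behind an HA pair.
    # The node address, check interval and promote_standby() hook are
    # illustrative assumptions, not any particular product's interface.

    import socket
    import time

    PRIMARY = ("primary.example.local", 9000)  # hypothetical primary node
    HEARTBEAT_INTERVAL = 2                     # seconds between checks
    MISSED_LIMIT = 3                           # consecutive misses before failover

    def heartbeat_ok(addr, timeout=1.0) -> bool:
        """Return True if the primary answers a TCP health-check connection."""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def promote_standby() -> None:
        """Placeholder for the real promotion step (start services, claim the shared address)."""
        print("Primary unreachable: promoting standby to active")

    missed = 0
    while True:
        if heartbeat_ok(PRIMARY):
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                promote_standby()
                break
        time.sleep(HEARTBEAT_INTERVAL)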

Continuous availability (CA). Finally, through the use of sophisticated software or specialized servers, “always on” solutions aim to reduce downtime to its lowest practical level. With the software approach, each application lives on two virtual machines that mirror all data in real time. If one machine fails, the application keeps running on the other with no interruption or data loss; if a single component fails, a healthy component from the second system automatically takes over.

CA software can also facilitate disaster recovery with multi-site capabilities. If a server is destroyed by fire or sprinklers, for instance, the machine at the other location takes over seamlessly. This software-based approach prevents data loss, is simple to configure and manage, requires no special IT skills and delivers upwards of 99.999 percent availability (just a few minutes of downtime a year), all on standard hardware.

CA server systems, by contrast, rely on specialized servers purpose-built to prevent failures from happening. They integrate hardware, software and services for simplified management and feature both redundant components and error-detection software running in a virtualized environment.
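As a rough illustration of the synchronous mirroring idea behind these “always on” approaches, the sketch below acknowledges a write only after every healthy copy (for example, one at each site) has stored it, so losing a single machine loses no data. The Replica and MirroredStore classes are illustrative, not the interface of any real CA product.

    # A minimal sketch of synchronous mirroring: a write is acknowledged only
    # after every healthy copy has stored it, so losing one machine loses no data.
    # Replica and MirroredStore are illustrative, not a real CA product's API.

    class Replica:
        def __init__(self, name):
            self.name = name
            self.store = {}
            self.healthy = True

        def write(self, key, value):
            if not self.healthy:
                return False
            self.store[key] = value
            return True

    class MirroredStore:
        """Writes go to every healthy replica; reads are served by any healthy one."""
        def __init__(self, replicas):
            self.replicas = replicas

        def write(self, key, value):
            acks = [r.write(key, value) for r in self.replicas]
            if not any(acks):
                raise RuntimeError("no healthy replica accepted the write")

        def read(self, key):
            for r in self.replicas:
                if r.healthy:
                    return r.store[key]
            raise RuntimeError("no healthy replica available")

    site_a, site_b = Replica("site-a"), Replica("site-b")
    store = MirroredStore([site_a, site_b])
    store.write("badge:1042", "authorized")
    site_a.healthy = False               # simulate losing one data center
    print(store.read("badge:1042"))      # still served from site-b, no data loss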

Vulnerability of an Operation

Of the three availability approaches listed above, the one that is the best fit for your building security applications will depend on a range of factors. First, it’s important to determine the state of your current security automation infrastructure. While your system architecture may be billed as “high availability,” this term is often used to describe a wide range of failover strategies—some more fault-tolerant than others.

In the event of a server failure, will there be a lapse in security? Will critical data be lost? Is failover automatic or does it require manual intervention? Assessing the potential vulnerabilities of your infrastructure can help you avoid a false sense of security that could come back to haunt you. This insight will help define your needs and guide you toward the most appropriate availability strategies for your security environment.

How Much Availability Do You Need?

Deploying the highest level of CA for all of your security applications across the enterprise would obviously be ideal, but the cost may not make sense in every instance, and not all security applications require the highest level of uptime. Some applications, for instance, may work best in a multi-tiered approach. This could involve a centrally located “master server” controlling a network of site servers that regularly cache data back to the master.

Here, you might configure the master server as CA, but decide that HA is adequate for the site servers given their workloads. The criticality of each server’s function within the security automation architecture will ultimately inform this decision, and carefully assessing your requirements for each will help balance real-world needs with the realities of your budget.
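As a rough sketch of this tiered layout, the example below shows a site server caching access events locally and flushing them to a central master on a schedule, so local operation continues even when the master is unreachable. The class and event names are hypothetical.

    # A minimal sketch of the tiered layout: site servers cache access events
    # locally and periodically flush them to a central master, so a master
    # outage never blocks local doors. Class and event names are hypothetical.

    from collections import deque

    class MasterServer:
        def __init__(self):
            self.events = []

        def ingest(self, batch):
            self.events.extend(batch)

    class SiteServer:
        def __init__(self, site_id, master):
            self.site_id = site_id
            self.master = master
            self.pending = deque()

        def record_event(self, event):
            # Always record locally first; the door keeps working even if
            # the master is temporarily unreachable.
            self.pending.append((self.site_id, event))

        def sync(self):
            # Flush cached events up to the master on a schedule.
            self.master.ingest(list(self.pending))
            self.pending.clear()

    master = MasterServer()
    lobby = SiteServer("lobby", master)
    lobby.record_event("badge 1042 granted at door 3")
    lobby.sync()
    print(len(master.events))  # 1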

The Airport’s Solution

To wrap up the airport example from above, the airport determined that its security implementation was critical enough, and carried enough business impact, to require a fully fault-tolerant solution that ensured continuous availability. The CA solution needed to be deployed across multiple physical servers geographically separated by about a mile. After installing CA software, the airport experienced zero unplanned downtime, was able to scale its systems after opening another terminal serving 55 million more passengers annually, and even maintained seamless operations after a major water leak flooded one of its data centers. In the end, performing a comprehensive assessment of availability needs saved the airport from a variety of complicated security issues down the line.

Putting Your Strategy in Place

Whether you are expanding or upgrading existing building security infrastructure to support an IIoT environment, or building a new infrastructure from the ground up, consider these tips.

  • Think about server availability as a core requirement—planning early can help you avoid problems that crop up when trying to “tack on” an availability solution later in the architecture and deployment cycle.
  • Carefully assess the availability requirements of all your security applications and determine how much downtime you can afford for each. This will help guide you to the appropriate availability solution needed for each application.
  • Be wary of classic, non-virtualized cluster systems that require many interactions between the security application and cluster software, increasing complexity and making management more challenging. Solutions that minimize intrusion into the application space are more flexible and easier to manage.
  • Work with building automation vendors that are familiar with availability and have the knowledge to guide you to solutions that are suitable for your unique deployment.

Server availability needs to be the cornerstone of any perimeter security strategy and will alleviate a variety of concerns for operators, both in the day-to-day management of security operations and when emergency situations arise that affect security. Ultimately, having a clear idea of what your perimeter security system needs to keep critical applications available is the most important step to maintaining security in an increasingly connected, “always on” world.

This article originally appeared in the March 2017 issue of Security Today.
