5 Top Keys to Success for a Strong Network Security Plan
While IT management in most organizations understands and takes seriously the need to protect sensitive data and other logical assets on the network, executives and financial professionals approach the challenge with varying attitudes. Some are in lock-step with IT, but others fall closer to one of two extremes: the alarming and dangerous “head-in-the-sand” posture, or the impossible-to-achieve goal of eliminating every vulnerability. Given the high-profile data breaches of recent years, both attitudes are misguided at best, and both create an organizational climate that increases the risk of an information security breach.
Thankfully, some companies recognize the reality of the threats and have created more sensible and realistic plans that balance and align security measures with budgetary constraints and real business objectives. From the security standpoint, access to information and software must be controlled and restricted to avoid the crippling consequences of a security breach. On the business side, however, distributed workforces need the access and tools that let the firm plan and decide quickly enough to take advantage of new business opportunities.
Because every business is unique, the details of determining the best security plan vary, but our research in this area has identified five common keys to success that apply to all organizations. These keys provide the overall framework for each company to create and implement the most appropriate security infrastructure that will meet its specific needs.
These five keys are:
1. Separate Networks
Many organizations place a high priority on protecting their networks from the Internet but neglect to apply equally strict standards internally, leaving the most sensitive areas of the network exposed to segments that are far more vulnerable.
Many well-publicized major breaches began with an attacker finding a vulnerability in a workstation, application, or other outdated system and exploiting it as a gateway into unrelated, and highly sensitive, systems. Those systems were well protected from direct outside attack, but they sat on one big “flat” network with no internal security perimeters. Had internal perimeters been in place, these attacks would have been far more likely to be detected and halted early on, before the attacker had a chance to exfiltrate sensitive data.
It is imperative that organizations implement a network structure that allows different segments to enforce unique data and access requirements and to ensure appropriate scrutiny of data flowing between those areas. Networks should have internal perimeters that align with their functional areas and reflect the data sensitivity and access requirements of those areas. For example, employees in human resources do not need to access research and development environments, so this access should be restricted using access control lists (ACLs) and functional segmentation between network segments.
In addition to network segmentation, organizations should ensure that system administration functions are conducted from specific subnets and segregated networks. This allows more granular control over who may perform administrative activities and from which network segments those activities may be performed. Administrators are a favorite target of attackers because of the level of access their accounts provide.
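The default-deny, segment-aware access control described above can be sketched in a few lines. The segment names, subnets, and allowed flows below are purely illustrative (real rules would live on firewalls and routers, not in application code); the sketch just shows the logic of mapping addresses to segments and permitting only explicitly listed inter-segment flows.

```python
import ipaddress

# Hypothetical segment map: names and subnets are illustrative only.
SEGMENTS = {
    "hr": ipaddress.ip_network("10.1.0.0/24"),
    "rnd": ipaddress.ip_network("10.2.0.0/24"),
    "admin": ipaddress.ip_network("10.9.0.0/28"),
}

# Allowed (source, destination) segment pairs; everything else is denied.
ACL = {
    ("admin", "hr"),
    ("admin", "rnd"),
}

def segment_of(ip):
    """Return the name of the segment containing this IP, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def flow_allowed(src_ip, dst_ip):
    """Deny by default; permit only flows explicitly listed in the ACL."""
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    if src is None or dst is None:
        return False
    if src == dst:
        return True  # intra-segment traffic stays within one perimeter
    return (src, dst) in ACL

print(flow_allowed("10.9.0.5", "10.2.0.10"))  # admin -> rnd: True
print(flow_allowed("10.1.0.7", "10.2.0.10"))  # hr -> rnd: False
```

Note the key design choice: the HR-to-R&D flow is denied not because it was blacklisted, but because it was never whitelisted. Internal perimeters work best when everything not explicitly required is refused.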
2. More than Anti-Virus
Signature-based anti-virus solutions are an essential first-line defense but are not sufficient to stop real-world malware threats on their own. Based on analysis of the past few years, we estimate that these solutions catch only 46 percent of viruses in the wild.
This is significant because attackers often use malware in their initial assault on a network, leveraging a combination of technical and human vulnerabilities. Worse, malware often disables anti-virus solutions to increase its own survivability.
Organizations must also monitor network and email traffic for behavioral signs of malicious activity, such as malware command-and-control communications. Investing in both host- and network-based detection and quarantine capabilities provides multiple points of detection and visibility, which significantly increases the chance of detecting an intrusion.
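One behavioral signal commonly associated with command-and-control traffic is "beaconing": an infected host checking in with its controller at near-regular intervals. The following is a minimal sketch of that idea only; the jitter threshold and minimum event count are illustrative placeholders, not tuned detection values, and a production system would combine many such signals.

```python
from statistics import pstdev

def looks_like_beaconing(timestamps, max_jitter=2.0, min_events=5):
    """Flag a connection series whose inter-arrival times are nearly
    constant -- a crude heuristic for malware C2 check-ins.
    Thresholds are illustrative, not tuned values."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Low spread in the gaps means machine-like regularity.
    return pstdev(gaps) <= max_jitter

# A host checking in roughly every 60 seconds vs. bursty human browsing.
beacon = [0, 60, 121, 180, 241, 300]
normal = [0, 3, 4, 95, 96, 300]
print(looks_like_beaconing(beacon))  # True
print(looks_like_beaconing(normal))  # False
```

Real malware adds random jitter to evade exactly this check, which is why layered detection (host and network, signature and behavioral) matters more than any single heuristic.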
Additionally, for an anti-virus solution to be even 46 percent effective, it must be installed on servers, workstations, and mobile endpoints, be updated regularly, and employ constant scanning. In the course of our incident response engagements, we often find systems or endpoints with outdated anti-virus software, or none at all. This leads us to the third key for strong data security.
3. Update Systems and Software
In 2014, most breaches NTT Group analyzed involved systems that were missing important security patches or lacked common security hardening. That same year, 76 percent of the unpatched vulnerabilities for which a patch was available were more than two years old. More startlingly, almost 9 percent were more than 10 years old. Malicious attackers seek out systems with unpatched vulnerabilities as a means of gaining initial entry into a system or network. A lot can change in 10, or even two, years of technology, and attackers have become much more sophisticated in the last decade, so systems with long-unpatched vulnerabilities are essentially unsecured.
Configuration and patch management are not new concepts, but most organizations are still doing poorly in both areas. While many place attention on patching critical and public-facing servers, most of today’s attacks are focused on workstation applications. These include exploiting desktop software like document viewers, web browsers and their plug-ins, which many organizations patch less frequently.
This underscores the importance of organizations extending their attention beyond centralized server resources and protecting every distributed user device that could give an attacker access to the network. Implementing an active, aggressive patch management program can greatly reduce the risk of these common vulnerabilities.
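The core of a patch management program is an inventory comparison: what is installed versus what the vendor has most recently patched. The sketch below illustrates that comparison; the package names and version numbers are entirely hypothetical, and real tooling would pull both sides from an asset inventory and vendor advisory feeds.

```python
def parse_version(v):
    """Turn a dotted version string like '9.0.1' into (9, 0, 1)
    so versions compare numerically rather than lexically."""
    return tuple(int(part) for part in v.split("."))

def stale_packages(installed, latest):
    """Return names of packages whose installed version trails the
    latest patched release. All names/versions are hypothetical."""
    return sorted(
        name for name, ver in installed.items()
        if name in latest and parse_version(ver) < parse_version(latest[name])
    )

# Workstation software -- note the focus on desktop applications,
# which are patched less frequently than public-facing servers.
installed = {"doc-viewer": "9.0.1", "browser": "47.0.2", "plugin": "11.2.0"}
latest = {"doc-viewer": "9.3.0", "browser": "47.0.2", "plugin": "21.0.0"}
print(stale_packages(installed, latest))  # ['doc-viewer', 'plugin']
```

Numeric tuple comparison matters here: comparing raw strings would rank "11.2.0" above "21.0.0" alphabetically in some edge cases ("9.x" vs "10.x"), silently hiding stale software.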
4. Ongoing Vigilance
Our analysis of breaches in 2014 found that some had been in progress for months or longer, discovered well after the initial compromise and after data had already been lost. This points to two things: the drawn-out, patient campaigns attackers conduct to avoid detection as they extend their initial access into broader control of the victim’s environment, and the fact that most organizations are not effectively monitoring their networks. In some cases, anti-malware and IDS systems had reported these breaches, but the alerts either went unnoticed by security personnel or were dismissed as false positives and ignored.
As stated earlier, a determined hacker can usually breach a network’s perimeter, meaning detection and immediate action are essential. The only way to accomplish this is with an ongoing monitoring program that detects anomalous activities indicating an active breach. To be effective, monitoring needs to include not only system logs and alerts but also go further, to ongoing behavioral analysis and detection of anomalous activity in the environment. For example, if large amounts of encrypted data are suddenly being exchanged between systems that had never communicated before, this could indicate a breach.
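The "systems that had never communicated before" example can be expressed as a simple baseline comparison: flag any large transfer between a host pair absent from the historical baseline. The hostnames, byte counts, and threshold below are illustrative placeholders; a real deployment would build the baseline from weeks of flow records and tune the threshold per environment.

```python
def flag_anomalous_flows(flows, baseline_pairs, byte_threshold=50_000_000):
    """Return large transfers between host pairs never seen during
    the baseline period. Hosts and threshold are illustrative."""
    return [
        (src, dst, nbytes)
        for src, dst, nbytes in flows
        if (src, dst) not in baseline_pairs and nbytes >= byte_threshold
    ]

# Historical communication pairs observed for this workstation.
baseline = {("ws-12", "fileserver"), ("ws-12", "mail")}

today = [
    ("ws-12", "fileserver", 10_000_000),   # known pair, normal volume
    ("ws-12", "db-finance", 900_000_000),  # new pair, huge transfer
]
print(flag_anomalous_flows(today, baseline))
# [('ws-12', 'db-finance', 900000000)]
```

A workstation suddenly pulling hundreds of megabytes from a finance database it has never touched is exactly the kind of staging-before-exfiltration behavior that signature-based tools miss but flow-level baselining can catch.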
Security engineers can help organizations identify the logs, devices and systems that provide the most value and context for this ongoing monitoring. One consideration is to log at both the network layer and the application layer, including externally facing IDS/IPS, firewalls and WAFs. Organizations should also consider directory services, anti-virus, file integrity monitoring, databases, web applications, proxies and DLP. Outbound traffic is often overlooked in monitoring, but because the goal of most attackers is to exfiltrate data from a compromised network, it can serve as a key indicator of a breach.
5. Planned Response
Surveys indicate that most organizations have no functional plan for responding to incidents. In the event of an attack, this lack of an actionable plan actually extends the duration and associated losses, putting the organization at much greater risk.
Because attackers are patient and determined to gain access to any network they can, companies need to prepare an incident response plan in advance, so that common questions do not have to be worked out mid-incident. The plan should establish:
- How to determine whether the organization is actually under attack, and whether alerts indicate a genuine breach
- The person or people within the organization responsible for responding
- The organization’s response priorities: preserving evidence of the breach, restoring service, protecting data, or some other objective
- The systems and/or data that should receive the highest-priority response
- Third-party vendors, such as the organization’s ISP, along with contact names and phone numbers for each provider
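The factors above amount to a runbook that can be written down and kept current. As a minimal sketch, assuming an organization wanted to capture its plan in a machine-readable form, the structure might look like the following; every responder name, priority ordering, system, and phone number here is a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class ResponsePlan:
    """Minimal incident-response runbook capturing the factors above.
    All names, systems, and numbers are hypothetical placeholders."""
    responders: list                 # who acts when an alert fires
    priorities: list                 # ordered response objectives
    critical_systems: list           # highest-priority systems/data
    vendor_contacts: dict = field(default_factory=dict)

plan = ResponsePlan(
    responders=["on-call security engineer", "IT operations lead"],
    priorities=["preserve evidence", "protect data", "restore service"],
    critical_systems=["customer database", "payment gateway"],
    vendor_contacts={"ISP NOC": "+1-555-0100"},
)
print(plan.priorities[0])  # preserve evidence
```

The ordering of `priorities` is the important design decision: an organization that values evidence preservation over immediate service restoration responds very differently in the first hour of an incident than one with the reverse ordering, and that choice should be made before the incident, not during it.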
By taking these five keys into account, an organization can build a strong security foundation that protects against data breach and loss, while keeping those efforts aligned with overall business objectives. In the event of an attack, which has become more a question of “when” than “if”, the organization will be well positioned to detect a breach and prepared to take the appropriate steps to mitigate its effects and recover quickly, ensuring a much more favorable outcome in the long term.
Posted by Christopher Camejo on Jun 28, 2016