Upgrading To A New Era
Keeping the ever-increasing demand for anytime, anywhere data under control

By Terence Martin Breslin
Mar 01, 2012
The recent explosion of mobile applications, coupled with increasing
  IP-based data traffic on mobile devices, is fueling the uptake of 4G
  technologies and driving the migration to faster data rates. To enable
  an all-IP services platform, service providers are upgrading existing
  networks and revising their migration strategies. Both handset vendors
  and carriers are busy rolling out application portals to differentiate
  their offerings and improve monetization and ARPU. The growing demand
  for “anywhere, anytime” data is driven by user mobility and subscriber
  expectations.
  
The move to mobile connectivity and mobile broadband, and the growth in overall
  data traffic, is powering this expansion. Subscribers are dictating which applications
  they want to use and where they want to use them. This is pushing operators toward
  an all-IP core, which reduces network complexity and lowers costs.
  
With this much network transformation, the migration won’t happen overnight.
  Network operators still need to support a hybrid network for the foreseeable
  future, interconnecting next-generation systems and devices with the various types
  of existing platforms. The future of the network is becoming more complex, and
  the journey toward a converged all-IP network brings a whole new set of network
  performance and management guidelines to be implemented by IT organizations.
  Real-time network troubleshooting, monitoring and provisioning must be implemented
  strategically, as they are driven by the ever-important need to maintain
  and manage the subscriber experience.
Real-time monitoring of network traffic has proven to be particularly important
  for analyzing and diagnosing network performance and, consequently, the
  subscriber’s quality of experience (QoE).
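As a minimal sketch of what such real-time traffic monitoring involves, the following Python example computes rolling packet and throughput rates over a sliding time window from timestamped packet metadata. The class name, interface, and the simulated packet feed are illustrative assumptions, not a description of any particular vendor's product.

```python
from collections import deque

class RollingTrafficMonitor:
    """Sliding-window monitor for packet rate and throughput.

    Feed it (timestamp_seconds, packet_bytes) observations from a
    capture source; it reports packets/sec and bytes/sec over the
    last `window` seconds. Hypothetical interface for illustration.
    """

    def __init__(self, window=1.0):
        self.window = window
        self.samples = deque()   # (timestamp, nbytes) pairs
        self.total_bytes = 0

    def record(self, ts, nbytes):
        self.samples.append((ts, nbytes))
        self.total_bytes += nbytes
        # Evict samples that have aged out of the window.
        while self.samples and ts - self.samples[0][0] > self.window:
            _, old = self.samples.popleft()
            self.total_bytes -= old

    def rates(self):
        """Return (packets_per_sec, bytes_per_sec) over the window."""
        return (len(self.samples) / self.window,
                self.total_bytes / self.window)


# Simulated feed: ten 100-byte packets spread over one second.
mon = RollingTrafficMonitor(window=1.0)
for i in range(10):
    mon.record(i * 0.1, 100)
print(mon.rates())  # (10.0, 1000.0) — 10 packets/s, 1000 bytes/s
```

In a real deployment the observations would come from a hardware capture layer rather than a simulated loop, and the rolling rates would feed dashboards or alerting thresholds tied to subscriber QoE.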
Legacy Tools Fall Short of Real-Time Monitoring Needs
Performance and complexity problems are only made worse by fragmented monitoring
  approaches. The constant push for more efficient connectivity is leaving traditional
  approaches toward network monitoring incapable of managing network
  components on service provider and enterprise infrastructures. The accumulation
  of outdated network monitoring components coupled with the growing complexity
  of data on the network is causing several major problems.
Traditionally, the solution to improving visibility into network performance was
  to place a host of tools into the network. While this strategy does solve some
  problems, it introduces others. The inability to access a particular point in the network
  with multiple tools is often considered the biggest challenge IT managers
  face. This limitation, combined with the management overhead of legacy
  monitoring schemes, creates network “blind spots” and makes troubleshooting
  inefficient: different sets of tools are scattered across the network in different
  physical locations, each with its own management software that does not
  interoperate with other vendors’ software.
Monitoring costs rise as network management becomes less efficient and network
  engineers, despite limited access to certain points in the network, must still
  manage an immense overflow of data. Reduced ROI and increased costs from the
  lack of fast, efficient troubleshooting are impacting revenues across the
  board, and adding performance and complexity problems.
Smarter Solutions: The Economics of Network Intelligence Optimization
Network operators, especially those in the telecom, enterprise or government
  sectors, must carefully weigh the price-performance, agility, diversity and
  intelligent capabilities of a traffic-capture solution before making a decision. They
  must develop a complete and forward-looking strategy for network monitoring
  and management. A growing number of macro trends should, depending on
  future requirements, inform operators’ assessment of their network monitoring
  needs; technology development, “flattening” the network and purchasing
  economics are examples.
The continued expansion of IP will only accelerate the need to displace
  legacy systems with next-generation networks. As the network “flattens,”
  IP components become more widely distributed, effectively creating more
  potential points of failure. A broader range of IP services will be rolled out
  as a result, further increasing the complexity of the network. Added
  complexity creates more points that need monitoring; the monitoring
  infrastructure should therefore be “flat” and flexible across the whole network.
The Network Intelligence Optimization framework lays the foundation
  for a smarter network monitoring solution. To withstand the increase in
  speed and complexity, the traffic-capture layer must remain implemented in
  hardware, because deeper awareness of packets and applications, along with
  more dynamic handling of them, is required.
With the need to improve service delivery while having tighter budget control,
  it is no surprise that network managers must now do more with less. However,
  the network monitoring optimization framework enables an organization to shift
  from a high initial CAPEX business model to a lower and variable CAPEX model
  when looking at the network monitoring component of the budget.
With less to worry about, network managers can do more in other areas such as
  network forensics, lawful intercept and behavioral analysis. With managed
  service providers (MSPs) now mainstream and focused primarily on monetizing
  QoS/QoE rather than on monitoring network elements and packets, a layered
  approach to network monitoring is essential to enabling this business model
  and differentiation in such network environments.
        This article originally appeared in the March 2012 issue of Security Today.