What's New At The Edge
How network cameras are reshaping the surveillance landscape

By Fredrik Nilsson
Jun 04, 2012
Even though the first network camera launched in 1996, it wasn't until a little more than a decade ago that the device began appearing in the physical security arena. Crude by today's standards, the pioneering technology could stream video at only a sluggish one frame per second. What's more, the cameras required a minimum of 20 lux to deliver any sort of image clarity. Nowadays, network cameras stream HDTV-quality video at speeds up to 60 fps, can operate in lighting conditions as low as 0.008 lux and offer a long list of other features.
  
So, how did IP-based cameras achieve such a quantum leap in
  performance in such a short time frame? Like most other computer
  technology, network camera performance follows Moore’s
Law, which observes that the power and speed of digital electronics roughly double every 18 months. In the case of network
  cameras, this trend specifically encompasses processing
  performance, image sensors and pixel count. With so much computing
  power now residing in cameras, manufacturers have been
  able to push more processing out to the edge of the surveillance
  network to provide better image quality, better scalability and
  greater functionality at an overall lower system cost.
  But how have these performance advances affected surveillance
  in the real world? Let’s look at six key features.
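First, for a sense of scale: an 18-month doubling period compounds to roughly a hundredfold increase over a decade, which is what makes that quantum leap plausible. A quick back-of-the-envelope sketch (illustrative figures only):

```python
# Back-of-the-envelope: what an 18-month doubling period compounds to
# over roughly a decade. Illustrative only.
months = 10 * 12                  # about a decade, in months
doubling_period = 18              # months per doubling, as cited above
doublings = months / doubling_period
growth = 2 ** doublings

print(f"{doublings:.1f} doublings -> roughly {growth:.0f}x the processing power")
# 6.7 doublings -> roughly 102x the processing power
```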
  
Higher Resolution and Frame Rate
  
Higher resolution means capturing more details in the image area
  and therefore increasing the forensic value and usability of video.
  True analog systems are limited to the NTSC/PAL standard,
  which means their maximum resolution is 720x480—corresponding
  to 0.4 megapixels—and often only a quarter of that resolution
  is recorded. However, in the IP world, camera resolution has
undergone an exponential evolution to megapixel and HDTV-quality image clarity even faster than Moore's Law predicted.
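For a sense of the scale involved, the short sketch below compares the pixel counts behind the formats discussed in this section; the figures follow directly from the resolutions named here.

```python
# Pixel counts behind the video formats discussed in this section.
formats = {
    "Analog NTSC (720x480)": 720 * 480,
    "Analog PAL (720x576)": 720 * 576,
    "HDTV 720p (1280x720)": 1280 * 720,
    "HDTV 1080p (1920x1080)": 1920 * 1080,
    "5-megapixel (2560x2048)": 2560 * 2048,
}

for name, pixels in formats.items():
    print(f"{name}: {pixels / 1e6:.2f} megapixels")
# A 1080p frame carries roughly five to six times as many pixels
# as a full-resolution analog frame.
```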
  
Megapixel-resolution cameras first appeared around 2005,
  providing more detail for identifying people and objects and covering
  a larger field of view. The first megapixel cameras offered
1280x1024-pixel resolution, essentially a scaled-up VGA image, though with a slightly squarer 5:4 aspect ratio. As the technology improved from 1.3 to 5.2 megapixels, resolution jumped to a 2560x2048-pixel format. The drawbacks came in two forms: the aspect ratio didn't always match new 16:9 monitors, and the frame rate was limited to around 10 fps at 5 megapixels and even lower for the 8- and 10-megapixel products now on the market.
  
Like megapixel cameras, HDTV cameras deliver much higher resolution than VGA cameras. Unlike the loosely defined megapixel label, however, a camera can only call itself HDTV if it strictly adheres to SMPTE standards for resolution, frame rate (30 fps), color fidelity and 16:9 aspect ratio. As with the TVs we buy for our homes, this ensures that every camera classified as HDTV delivers consistent performance no matter who manufactures it.
  
In addition, HDTV-compatible cameras support advanced
  H.264 compression technology, which drastically reduces bandwidth
  consumption and storage requirements. HDTV network
  cameras come in the same three formats as flatscreen TVs today:
  720p (1280x720 pixels), 1080p (progressive, 1920x1080 pixels)
  and 1080i (interlaced, 1920x1080 pixels).
  
Better Video Compression
  
Advances in compression standards, along with improved processing power at the edge for real-time compression, have also significantly reduced video file sizes, and therefore bandwidth consumption and video storage, without adversely affecting visual quality. In other words, without H.264 compression, HDTV-quality video wouldn't be possible in the surveillance world. Successive compression standards have gained efficiency by exploiting redundancy in the video: reducing color nuances and color resolution the eye cannot distinguish, removing small, visually imperceptible parts of the picture, and comparing adjacent frames so that details unchanged from one frame to the next need not be encoded again:
  
Motion JPEG treated each frame as a still JPEG picture. It prevented dropped frames during transmission, but the compression ratio was low for video sequences because it made no use of inter-frame compression.
  
MPEG-1 used a more efficient coding of video sequences, but
  the focus was on compression ratio rather than picture quality.
  
MPEG-2 employed more advanced techniques to enhance
  video quality through resolution and frame rate, but it was done
  at the expense of higher bandwidth usage. MPEG-2 is used for
  standard-definition DVD movies.
  
MPEG-4 accommodated both ends of the spectrum, streaming
  lower-quality video to mobile devices requiring lower bandwidth
  consumption and streaming extremely high quality for
  applications with almost unlimited bandwidth. The MPEG-4
  standard has multiple parts.
  
H.264, aka MPEG-4 Part 10, is the newest video compression
  technology in IP video, representing a huge step forward for video
  surveillance applications. Without compromising image quality,
  H.264 can reduce the size of a digital video file by more than
  80 percent compared with Motion JPEG compression and as
  much as 50 percent compared with the MPEG-4 Part 2 standard.
  With far less network bandwidth and storage space required for
  a video file, users save money and achieve a much higher video
  quality for a given bit rate. This advanced compression standard
  is being used in the entertainment industry for Blu-ray movies
  and online video. 
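To put those percentages in concrete terms, the sketch below applies them to a hypothetical continuous stream; the 8 Mbit/s Motion JPEG baseline and the intermediate MPEG-4 Part 2 figure are assumptions for illustration, not numbers from the article.

```python
# Applying the compression savings described above to a hypothetical
# continuous stream. The 8 Mbit/s Motion JPEG baseline and the MPEG-4
# Part 2 figure are assumptions for illustration only.
def gigabytes_per_day(bitrate_mbps: float) -> float:
    """Storage consumed per day of continuous video at a constant bit rate."""
    bits_per_day = bitrate_mbps * 1e6 * 60 * 60 * 24
    return bits_per_day / 8 / 1e9   # bits -> bytes -> gigabytes

mjpeg = 8.0                # assumed Motion JPEG bit rate, Mbit/s
mpeg4_part2 = mjpeg * 0.4  # illustrative intermediate figure
h264 = mjpeg * 0.2         # "more than 80 percent" smaller than Motion JPEG

for name, rate in [("Motion JPEG", mjpeg),
                   ("MPEG-4 Part 2", mpeg4_part2),
                   ("H.264", h264)]:
    print(f"{name}: {rate:.1f} Mbit/s -> about {gigabytes_per_day(rate):.0f} GB per day")
```

The same savings can instead be spent on higher resolution or frame rate at a fixed bit rate, which is the trade-off described above.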
Greater Light Sensitivity
While higher resolution and more effective compression have a
  major impact on image quality streaming from the camera, image
  processing technology also plays an important role, especially
  in difficult lighting conditions. Network camera manufacturers
  today have greatly improved a camera’s ability to capture quality
  images in fairly complex lighting conditions—from very low light
  to wide variations in light throughout the day or within a single
  scene—and have surpassed analog in light performance.
Low light. In the past, manufacturers have addressed low-light
  problems by integrating more light-sensitive sensors, day/night
  filters, IR illuminators and thermal imaging into their cameras.
  As new cameras have come on the market with higher processing
  power, manufacturers can employ even more advanced filtering
  techniques to further improve light sensitivity.
Lightfinder technology is the latest innovation in extremely
  low-light surveillance. It works in concert with a network camera’s
  sensor and lens to find light in a scene that it can use to
  stream color video even at night. Sophisticated image processing
  software sets the degree of filtering and sharpening to capture
  the best image possible. Highly sensitive to low light, a network
  camera enhanced with Lightfinder can maintain tight focus with
  minimal noise and lifelike color fidelity from dusk to dawn as well
  as in full sunlight.
Wide dynamic range. WDR incorporates techniques for handling a wide range of lighting conditions within a single scene, such as a frame containing both very bright areas and deep shadows, or a backlit situation where a person is standing in front of a sunlit window. A standard surveillance camera would inevitably produce barely visible images of objects in the dark areas. A network camera equipped with WDR, on the other hand, combines multiple exposures of the same scene, weighted according to the prevailing light, to ensure nearly uniform visibility across the field of view.
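In simplified terms, the exposure-combining idea is a pixel-by-pixel blend of a short exposure (which preserves bright areas) and a long exposure (which lifts shadow detail). The sketch below illustrates the general principle only; it is not a description of any particular manufacturer's WDR implementation.

```python
import numpy as np

def fuse_exposures(short_exp: np.ndarray, long_exp: np.ndarray) -> np.ndarray:
    """Blend a short and a long exposure of the same scene, pixel by pixel.

    Both inputs are grayscale frames scaled to [0, 1]. Bright regions take
    their value mostly from the short exposure (which avoids clipping);
    dark regions take theirs from the long exposure (which lifts shadow
    detail). Simplified illustration only.
    """
    weight = np.clip(long_exp, 0.0, 1.0)   # brightness-based blend weight
    return weight * short_exp + (1.0 - weight) * long_exp

# Synthetic example: a scene that is half deep shadow, half backlit window.
scene = np.concatenate([np.full((4, 4), 0.05), np.full((4, 4), 0.90)], axis=1)
short = np.clip(scene * 0.5, 0.0, 1.0)     # underexposed: shadows crushed
long_ = np.clip(scene * 2.0, 0.0, 1.0)     # overexposed: window clipped
fused = fuse_exposures(short, long_)
print(fused.round(2))                      # usable detail in both halves
```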
In-camera Intelligence
With the convergence of improved image quality and sufficient
  processing power, manufacturers have started to incorporate intelligent
  algorithms in-camera to push video analytics to the edge.
The power of the latest chipsets has made it possible for network cameras to detect motion, sound and tampering attempts, such as blocking or spray painting the lens; recognize license plates; and identify objects crossing a virtual line.
Intelligent network cameras also can count people, perform
  dwell time analysis for retailers and even track customer flow
  through the aisles of a store. This is all being done in the camera
  today. Some of the more advanced motion detection analytics
  can also filter out the natural rustling of leaves and the swaying
  of branches for better success rates.
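One common way to implement that kind of filtering is background subtraction followed by an area threshold, so that small, scattered changes such as rustling leaves are ignored. The sketch below uses OpenCV as a stand-in for the camera's own processing; the input file name and the threshold value are illustrative assumptions, and actual in-camera analytics are proprietary.

```python
import cv2

# Minimal motion-detection sketch with simple nuisance filtering: motion
# regions smaller than MIN_AREA (e.g., rustling leaves) are discarded.
MIN_AREA = 500  # minimum contour area, in pixels, to count as real motion

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
capture = cv2.VideoCapture("camera_stream.mp4")  # hypothetical input file

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove speckle noise before looking for moving objects.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) >= MIN_AREA]
    if moving:
        print(f"motion detected: {len(moving)} object(s)")

capture.release()
```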
Because video analytics often require very specific knowledge
  about the surveillance application, camera manufacturers typically
  partner with expert software companies. The additional processing
  power built into the camera makes the edge an attractive and robust
  platform for third parties to develop any number of custom analytics
  applications—think of it like an App Store for surveillance.
There are several advantages to performing analytics at the
  edge. First, raw and uncompressed video contains more information
  that can be used in an analysis. Second, analyzing the video
  before compressing it and sending it over the network reduces
  bandwidth consumption. Third, in-camera analytics provide better
  system scalability because they avoid overloading a central
  server with too many video streams requiring analysis.
Local Storage Option
Advances in SD memory card technology, formerly found only
  in consumer electronics, have created new possibilities for storage
  at the edge.
A few short years ago, a 1 GB card could cost upward of $100.
  Today, a 32 GB card can be purchased for less than $50. SD cards
  with the potential to hold upward of 2 TB of storage are already
  on the horizon, which could equate to years’ worth of video storage
  at the edge.
Applying H.264 video compression, a customer can now record high-quality 1080p HDTV video at 15 frames per second for days and even weeks on a single card. By recording locally, a network camera can offer a level of fault tolerance against network outages, even for security applications that require high resolution and real-time recording rates.
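A rough calculation shows how that works out. The 2 Mbit/s average assumed below for a 1080p, 15 fps H.264 stream is an illustrative figure, not one from the article, and real bit rates vary with scene activity.

```python
# Rough recording-duration estimate for edge storage on an SD card.
# The 2 Mbit/s average bit rate is an assumption for illustration.
def recording_days(card_gb: float, bitrate_mbps: float,
                   duty_cycle: float = 1.0) -> float:
    """Days of video a card can hold, given an average bit rate and the
    fraction of time the camera is actually recording (1.0 = continuous)."""
    card_bits = card_gb * 1e9 * 8
    bits_per_day = bitrate_mbps * 1e6 * 60 * 60 * 24 * duty_cycle
    return card_bits / bits_per_day

print(recording_days(32, 2.0))        # ~1.5 days, continuous, on a 32 GB card
print(recording_days(64, 2.0, 0.25))  # ~12 days if motion triggers recording 25% of the time
print(recording_days(2000, 2.0))      # ~3 months, continuous, on a future 2 TB card
```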
The industry is taking this one step further. While SD cards
  traditionally were used for redundant storage in critical surveillance
  applications, this year at ISC West we saw manufacturers
  and software developers leveraging the IP camera as the recorder.
  Individual cameras with a single switch or router can now become
  complete security systems unto themselves without the need
  for central storage or even a computer running the system. This
  camera-as-the-recorder model will be a major trend for moving
  IP video into small-camera-count installations by eliminating the
  cost for recording hardware.
Smaller Form Factor
The miniaturization of integrated circuit technology has allowed
  manufacturers to deliver more processing power in a smaller
  chipset. The smaller chips generate less heat, a primary culprit
  in picture noise.
But better image quality is only part of the story. Smaller,
  more powerful chips allow network camera manufacturers to
  downsize their camera form factors while maintaining the same
  capabilities as their larger cousins. Today, a palm-sized PTZ IP
camera can discreetly monitor a retail store, bank or hotel lobby without intruding on the aesthetics of the space, yet it delivers the same HDTV image quality and intelligent features as the largest cameras on the market today.
Where Moore’s Law Will Lead Us
Based on past progress, industry experts foresee network cameras
  maintaining the same forward trajectory as other computer
technology. Light sensitivity will become even more acute, resolution and compression efficiency will continue to improve, and cameras will handle an ever-wider dynamic range.
In addition, with advances in chip technology and processing power, third-party development of video analytics applications will become even more prevalent. If we accept
  Moore’s Law as an accurate predictor, improvements at the edge
will continue to grow exponentially for the foreseeable future.
        This article originally appeared in the June 2012 issue of Security Today.