Giving Your Surveillance Network a Workout
Gauging the strength of your network will answer many questions
- By James Marcella
- Nov 03, 2014
With so many applications
riding on today’s
networks, bandwidth
consumption on Wide
Area Network (WAN)
connections, particularly
over the Internet, creates real challenges
for network professionals on a
budget. Add network video to the pipeline
and it makes you wonder whether
the infrastructure will eventually collapse
under the increased traffic. If you
look at current fiber optic technology, however, you're bound to feel much
more confident that today’s networks
will be able to handle the load.
It is common knowledge that transmitting
data via light allows fiber optics to
deliver data speeds several hundred times
faster and over much longer distances
than conventional copper cable. There
have even been demonstrations in which data was pushed through a single strand of fiber, over a long-haul link, at an amazing 186 Gbps.
In comparison, the fastest residential, copper-based Internet service available in the country tops out at around 50 Mbps. Furthermore, unlike copper, which needs repeated amplification to function over long runs, the light that passes through fiber optic cable barely diminishes, even over many miles.
This is great news for network surveillance
integrators who are building solutions that
often include streaming high-definition video
to and from remote locations.
LAN solutions, on the other hand, generally call for 100 Mbps to the edge with multi-gigabit backbones. So, streaming the video locally doesn't create bandwidth consumption concerns, especially when you can leverage technologies like VLANs to segment surveillance traffic.
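To put rough numbers on that, here is a back-of-envelope sketch in Python. The per-stream bitrate and the utilization headroom are illustrative assumptions, not measurements, but they show why a 100 Mbps edge port with a gigabit uplink comfortably carries local video.

# Back-of-envelope check: how many 1080p H.264 streams fit on a 100 Mbps
# edge link and a 1 Gbps backbone. Bitrates are rough planning assumptions.

STREAM_MBPS = 4.0        # assumed average bitrate of one 1080p/30 H.264 stream
EDGE_LINK_MBPS = 100.0   # typical access-layer port
BACKBONE_MBPS = 1000.0   # single gigabit uplink
HEADROOM = 0.75          # keep links below ~75% utilization

edge_streams = int(EDGE_LINK_MBPS * HEADROOM / STREAM_MBPS)
backbone_streams = int(BACKBONE_MBPS * HEADROOM / STREAM_MBPS)

print(f"~{edge_streams} cameras per 100 Mbps edge port")
print(f"~{backbone_streams} cameras per 1 Gbps backbone uplink")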
Supplanting 1080p with 4K
Network capacity will be further challenged as 4K technology gains momentum in the coming year. This will be the most anticipated advancement that camera manufacturers deliver. There is a lot to look forward to: resolution increasing to 8.3 megapixels (four times that of 1080p), greater color fidelity and a potential quadrupling of frame rates to 120 fps. The unprecedented image
detail, however, will mandate more advanced
compression technology in order
to avoid clogging the pipeline or compromising
frame rates and resolution.
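A quick, illustrative calculation shows why: at 8.3 megapixels, even a modest 30 fps stream produces roughly 6 Gbps of raw data, so the encoder must compress it by several hundred to one to reach a bitrate the network can carry. The figures below are assumptions for illustration only.

# Rough arithmetic: raw (uncompressed) 4K data rate versus a plausible
# compressed stream. All figures are illustrative assumptions.

width, height = 3840, 2160          # ~8.3 megapixels
bits_per_pixel = 24                 # 8-bit RGB, before compression
fps = 30

raw_bps = width * height * bits_per_pixel * fps
print(f"Raw 4K @ {fps} fps: {raw_bps / 1e9:.1f} Gbps")   # ~6.0 Gbps

compressed_mbps = 15                # assumed compressed target bitrate
ratio = raw_bps / (compressed_mbps * 1e6)
print(f"Compression must achieve roughly {ratio:.0f}:1 to hit {compressed_mbps} Mbps")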
While it may take a while for 4K cameras
to permeate the market, the more immediate
impact for surveillance will be in
the displays used to view video. At 8 MP, security professionals will be able to view four 1080p images at native resolution, or feeds from eight 720p cameras. The current
price of 4K displays is still a barrier to
entry. But, with the price of new television technology dropping year over year, I anticipate we will see affordable displays for professional security installations sometime in 2015.
Adding More Intelligence
to the Edge
Image processing chips (application-specific integrated circuits, or ASICs) continue to increase in power roughly every 18 months. The
latest generation will be available in network
cameras by late 2014. In addition
to significantly enhancing image usability,
these chips deliver excess computing capacity
that analytics programs are leveraging.
Whether it be cross-line detection,
people counting, queue management or
simply video motion detection, expect to
see greater reliability with the next generation
of in-camera analytics.
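The practical payoff, as the next paragraph notes, is that these analytics can drive recording and transmission rather than simply raise alerts. Below is a minimal, hypothetical sketch of that event-driven pattern; get_frame(), detect_motion() and upload_clip() are placeholder functions for illustration, not a real camera SDK.

# Minimal sketch of event-driven edge recording: frames are only kept and
# uploaded when an in-camera analytic fires. The functions get_frame(),
# detect_motion() and upload_clip() are hypothetical placeholders.

from collections import deque

PRE_EVENT_FRAMES = 90          # ~3 s of pre-event buffer at 30 fps
POST_EVENT_FRAMES = 150        # ~5 s of post-event footage

def run_camera(get_frame, detect_motion, upload_clip):
    pre_buffer = deque(maxlen=PRE_EVENT_FRAMES)  # rolling, in-camera only
    post_remaining = 0
    clip = []

    while True:
        frame = get_frame()
        pre_buffer.append(frame)
        if detect_motion(frame):
            if not clip:                    # event just started
                clip = list(pre_buffer)     # include pre-event context
            else:
                clip.append(frame)
            post_remaining = POST_EVENT_FRAMES
        elif post_remaining > 0:
            clip.append(frame)
            post_remaining -= 1
            if post_remaining == 0:         # event window closed
                upload_clip(clip)           # only now does video leave the camera
                clip = []
        # otherwise nothing is stored or transmitted beyond the rolling buffer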
As integrators place more reliable intelligence
at the edge, bandwidth consumption
actually drops since video doesn’t get
recorded or transmitted unless an event
takes place. The following are examples of
such intelligence:
Adding hardware intelligence. Software
applications aren’t the only area of
intelligence being introduced at the edge.
Smart hardware advancements, such as
Optimized IR, Auto Rotation and leveling
assistants, are also helping integrators
simplify and expedite installation.
Optimized infrared cameras. These
cameras automatically adjust the IR illumination angle to match the camera's focal range. Once a two-step process involving separate products,
optimized IR cameras are able to adjust
both the illumination angle and the focal
range through the same HTML interface,
saving time and ensuring the accuracy of
the install.
With better IR illumination, there is less noise in the image, which significantly reduces file size and, therefore, bandwidth consumption.
Auto rotation. This feature involves placing
a small accelerometer in the camera so
that it knows which orientation to use when
delivering the image for viewing. Similar to
the way a smartphone rotates the image as
it is moved, the camera will automatically
detect if you are hanging it upside down
or placing it on a table. These cameras even recognize when they are being put into Corridor Format mode, a specialized orientation in which the 16:9 aspect ratio is turned on its side to monitor long hallways without wasting pixels on the walls.
In the past, configuring corridor format
required logging into the administration
pages of the camera and manually
selecting the correct image orientation.
Depending on the scenario, this might increase
bandwidth slightly as more of the
activity in the hallway is being captured,
rather than the blank walls.
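A simplified sketch of the underlying idea follows: read the gravity vector from the accelerometer and map it to one of four image rotations, with the sideways cases corresponding to corridor-format mounting. The axis conventions and thresholds here are assumptions for illustration, not any particular camera's firmware.

# Simplified sketch: map an accelerometer gravity vector to an image
# rotation (0/90/180/270 degrees). Axis conventions are assumptions.

def orientation_from_gravity(gx, gy):
    """gx, gy: gravity components in the image plane, in g."""
    if abs(gx) > abs(gy):
        # Gravity points along the image's horizontal axis: the camera is
        # on its side, i.e. corridor-format (9:16) mounting.
        return 90 if gx > 0 else 270
    # Gravity points along the vertical axis: normal or upside-down mount.
    return 0 if gy > 0 else 180

print(orientation_from_gravity(0.0, 1.0))    # 0   -> normal landscape
print(orientation_from_gravity(0.0, -0.98))  # 180 -> mounted upside down
print(orientation_from_gravity(1.0, 0.05))   # 90  -> corridor format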
Level assistant. Leveraging the same in-camera accelerometer, the level assistant combines flashing LEDs with an audible beep to notify the installer when the camera is level. Think of
it like the backup assistant in a car. As the
car gets closer to an object, the interval between
beeps gets shorter until it becomes
one long sound when the car is right on
top of something.
In the case of the level assistant, the
camera is level when the LEDs stop flashing
and become solid green, and the beep
is one continuous sound. When remote focus and remote zoom are combined with the leveling assistant and auto rotation, integrators no longer need to carry a portable monitor to the installation site.
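As a concrete illustration of that feedback loop, here is a small sketch in the same spirit as the backup-assistant analogy; the tilt-to-interval mapping and tolerance values are assumptions, not a vendor specification.

# Sketch of the level-assistant feedback loop: the further the camera is
# from level, the longer the gap between beeps; at level the tone (and LED)
# goes solid. The mapping below is an illustrative assumption.

def beep_interval_ms(tilt_degrees, level_tolerance=0.5, max_tilt=45.0):
    """Return the pause between beeps in ms, or 0 for a continuous tone."""
    tilt = abs(tilt_degrees)
    if tilt <= level_tolerance:
        return 0                       # level: continuous beep, solid LED
    # Scale linearly between a short and a long pause.
    frac = min(tilt, max_tilt) / max_tilt
    return int(100 + frac * 900)       # 100 ms (nearly level) .. 1000 ms

for tilt in (30, 10, 2, 0.3):
    print(tilt, "deg ->", beep_interval_ms(tilt), "ms")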
Improving Image Usability
The increased performance of ASICs has direct implications for improving image usability as much as it does for analytics. Image usability, rather than image quality (something that wedding and portrait photographers tend to focus on), defines the parameters security professionals need in order to meet the operational requirements of identification, recognition or detection.
When combined with environmental
factors, the following criteria dictate
which camera technologies to use:
Handling contrasting lighting conditions.
Starkly contrasting lighting conditions in
a single field-of-view—bright sunlight and
deep shadows—demand sophisticated algorithms
for the camera to transmit an image
with any usable detail. The increased
processing power in today’s cameras means
that they can leverage wide dynamic range
features to take multiple exposures, combining
them into one image for each frame
recorded. They are also able to draw out
significant details in both the light and
dark areas of the image by applying local
tone mapping to each exposure, instead of
just the final frame. These improved images
provide a higher degree of forensic value
should an incident need to be reviewed.
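As a toy illustration of the multiple-exposure idea, the sketch below fuses a short and a long exposure by weighting each pixel according to how well exposed it is; real cameras apply far more sophisticated, local tone mapping per exposure, so treat this only as a conceptual example.

# Toy illustration of wide-dynamic-range capture: fuse a short and a long
# exposure of the same scene, weighting each pixel by how well exposed it is.

import numpy as np

def well_exposedness(img):
    """Weight pixels near mid-gray higher than blown-out or crushed ones."""
    return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))

def fuse_exposures(short_exp, long_exp):
    """short_exp, long_exp: float arrays in [0, 1] of the same shape."""
    w_short = well_exposedness(short_exp)
    w_long = well_exposedness(long_exp)
    total = w_short + w_long + 1e-6
    return (w_short * short_exp + w_long * long_exp) / total

# Synthetic example: one pixel in deep shadow, one in bright sunlight.
short_exp = np.array([[0.01, 0.55]])   # shadows crushed, highlights usable
long_exp = np.array([[0.40, 0.99]])    # shadows usable, highlights clipped
print(fuse_exposures(short_exp, long_exp))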
Dealing with vibrations. Oftentimes
when cameras are mounted on poles or
subject to other environmental factors
that induce vibration, the movement of the camera is translated directly into shaky video. At the very least, the wobbly
images are distracting. At worst, they
might mask activities that would otherwise
cause an operator to take action.
Vibration in the video also wreaks havoc on analytics, which can be falsely triggered, creating false alarms. The latest
generation of image stabilization technology
combines advanced algorithms and
cropped images from higher resolution
sensors to deliver stable video for both
viewing and recording.
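The crop-from-a-larger-sensor approach can be sketched in a few lines: capture a frame larger than the delivered resolution, estimate the frame-to-frame shake (a placeholder here), and shift the output window to cancel it. This is a simplified illustration, not any vendor's algorithm.

# Simplified sketch of electronic image stabilization: the sensor captures a
# larger frame than is delivered, and a crop window is shifted each frame to
# cancel the estimated camera shake. Motion estimation is a placeholder.

import numpy as np

def stabilize(frame, shake_dx, shake_dy, out_w, out_h):
    """Crop an out_w x out_h window, offset to counteract the measured shake."""
    sensor_h, sensor_w = frame.shape[:2]
    margin_x = (sensor_w - out_w) // 2
    margin_y = (sensor_h - out_h) // 2
    # Shift the crop opposite to the shake, clamped to the available margin.
    x = int(np.clip(margin_x - shake_dx, 0, sensor_w - out_w))
    y = int(np.clip(margin_y - shake_dy, 0, sensor_h - out_h))
    return frame[y:y + out_h, x:x + out_w]

# A 4MP-class sensor frame (2560x1440) delivering a stabilized 1080p crop.
sensor_frame = np.zeros((1440, 2560), dtype=np.uint8)
stable = stabilize(sensor_frame, shake_dx=12, shake_dy=-8, out_w=1920, out_h=1080)
print(stable.shape)   # (1080, 1920)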
Ask your camera manufacturer to
show you their latest offerings and compare
that to what you have become used
to with previous generations. You’re going
to be surprised at the results. Not only will the video be much smoother, it will also consume considerably less bandwidth and storage.
Eliminating “barrel distortion.” This
common video challenge appears when
using a wide angle lens. In order to cover
a wider field-of-view while still maintaining
the necessary “pixels on target,” users
were forced to accept images that bent at
the edges. Specialized lenses could correct the problem, but they were more expensive than traditional wide-angle lenses. Today, however,
wide angle distortion can be corrected
by in-camera software before the video is
streamed to the recording device, so there is
no need to budget for more advanced lenses.
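Conceptually, in-camera dewarping is a remapping step: for each pixel of the corrected output, look up where it came from in the distorted source, with the displacement growing toward the edges. The one-parameter radial model below is a minimal illustration; real products use calibrated, lens-specific profiles.

# Minimal sketch of dewarping with a one-parameter radial model: for each
# output pixel, sample the distorted source at a radius scaled by
# (1 + k * r^2). A negative k pulls edge content outward, correcting barrel
# distortion in this convention. Values here are illustrative assumptions.

import numpy as np

def undistort(img, k=-0.15):
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    # Normalized coordinates relative to the image center.
    nx, ny = (xs - cx) / cx, (ys - cy) / cy
    r2 = nx ** 2 + ny ** 2
    # Where in the distorted source each corrected pixel comes from.
    src_x = np.clip((nx * (1 + k * r2)) * cx + cx, 0, w - 1).astype(int)
    src_y = np.clip((ny * (1 + k * r2)) * cy + cy, 0, h - 1).astype(int)
    return img[src_y, src_x]

frame = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
corrected = undistort(frame)
print(corrected.shape)   # (720, 1280)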
The money spent by the IT industry on
making networks bigger, faster and more
secure easily dwarfs whatever the physical
security industry spends on creating
the latest advances in camera technology.
Therefore, one can say with confidence
that whatever demands surveillance video
places on the network, whether full frame
HD or 4K resolution, IT will find a way
to handle it. Basically, if you’re willing to
pay for the extra bandwidth and storage,
just about any solution is possible. But, as
tempting as it is to acquire the latest and
greatest security solutions, before investing
in any advanced technology, take time
to determine whether your security needs
really warrant the added expense.
This article originally appeared in the November 2014 issue of Security Today.