Monday, April 28, 2014

The limits of optical zoom


How image quality is affected by optical zoom
To record video, the image must be projected onto the camera's image sensor. Early cameras used electron tubes; today's cameras use semiconductor-based sensors such as CMOS chips.
The optical zoom is basically a telescope mounted in front of the camera. Typical magnifications are 25x for TV cameras, 50x for sports lenses and 100x for telescopes, and in principle there is no upper limit to optical zoom.
However, image quality decreases as the zoom level increases. Two problems appear when zooming in: chromatic aberration and blur.
Chromatic aberration splits the image into its component colors, comparable to the colors of a rainbow. Because the lens has a different index of refraction for each wavelength, each color channel of the image is projected to a slightly different position. For video cameras only red, green and blue matter, so chromatic aberration makes the three color channels appear at slightly different positions on screen.
There are three ways to reduce chromatic aberration: shifting the color channels back into place in software, using longer focal lengths, which makes the camera bigger, or using so-called "optical glass" for the lens, which is lighter than normal glass and therefore refracts less. Pure quartz glass is a very good optical glass; non-optical glasses are also made of quartz glass, but with additives that lower the production cost.
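As a sketch of the first approach, the color channels can be realigned in software once the per-channel offsets are known, for example from a calibration shot. The offsets and helper function below are illustrative assumptions, not taken from any particular camera:

```python
import numpy as np

def shift_channel(channel, dx, dy):
    """Shift a 2D color channel by (dx, dy) pixels, filling borders with zeros."""
    h, w = channel.shape
    shifted = np.zeros_like(channel)
    src_y = slice(max(0, -dy), min(h, h - dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    shifted[dst_y, dst_x] = channel[src_y, src_x]
    return shifted

def correct_chromatic_aberration(image, red_offset=(1, 0), blue_offset=(-1, 0)):
    """Realign red and blue against green; offsets are hypothetical calibration values."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    r = shift_channel(r, -red_offset[0], -red_offset[1])
    b = shift_channel(b, -blue_offset[0], -blue_offset[1])
    return np.dstack([r, g, b])
```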
Blur is the other problem that occurs when zooming in. It has to do with the wave nature of light: light passing through a small opening is diffracted. When the diameter of the aperture is too small, the diffracted light can no longer form a sharp image. Photographers might experience the opposite effect, because a small aperture also enhances depth of field. When the aperture is too big, too much light enters, which also does not produce a sharp image.
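How large this diffraction blur is can be estimated with the Rayleigh criterion; the post gives no numbers, so the wavelength and aperture below are illustrative:

```python
# Rayleigh criterion: smallest resolvable angle is about 1.22 * wavelength / D.
wavelength = 550e-9            # green light, in meters
aperture = 0.025               # 25 mm aperture diameter, in meters

theta = 1.22 * wavelength / aperture        # radians
print(theta * 206265)                       # about 5.5 arc seconds
```

Halving the aperture diameter doubles the blur angle, which is why strong zoom lenses need large front elements.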
A shorter exposure time can compensate for too much light, but a longer exposure time cannot compensate for a too-small aperture and may add noise to the image. For RGB cameras, orange objects cause the most image noise, while white objects such as walls and buildings cause the least.
Finally, it is hard to tell which zoom lens to use for a certain purpose. Trying before buying is highly recommended, because there are other quality aspects and design issues to consider.


Saturday, April 26, 2014

How does video image stabilization work?

Video Image Stabilization explained

Video image stabilization removes undesired vibrations from a video recording. There are two types of stabilizers: hardware based and software based. Hardware based stabilizers use electromagnets to stabilize the image by moving optical lenses and prisms. Software based stabilizers detect image features, such as object contours, highlights and shadows, and track their movement. This article explains software based stabilization.

Software based video image stabilization takes place in three steps: feature detection, movement calculation and movement correction.

First, notable features of an image are detected. Features are regions of the image that catch the feature detector's attention. Several feature detectors have been publicly available since the 1980s, for example "good features to track".
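A minimal sketch of this step, using OpenCV's implementation of the "good features to track" detector (the file name and parameter values are illustrative assumptions):

```python
import cv2

frame = cv2.imread("frame0.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect up to 200 strong corners that are at least 10 pixels apart.
features = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)
```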

The movement detector compares two or more images and calculates the movement of each feature.
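Continuing the sketch above, the per-feature movement can be calculated with pyramidal Lucas-Kanade optical flow; taking the median displacement as the global image motion is a simplifying assumption:

```python
import numpy as np

next_frame = cv2.imread("frame1.png")
next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

# Track each feature into the next frame; status marks successful tracks.
new_pts, status, _err = cv2.calcOpticalFlowPyrLK(gray, next_gray, features, None)
ok = status.ravel() == 1

# Median feature displacement as a robust estimate of the global motion.
dx, dy = np.median(new_pts[ok] - features[ok], axis=0).ravel()
```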

Then the movement correction uses the movement information from the detector to stabilize the image, simply by moving it in the opposite direction.
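In the sketch, this amounts to shifting the frame by the negated motion estimate; borders exposed by the shift are simply left black:

```python
# Translate the frame by (-dx, -dy) to cancel the measured motion.
h, w = next_gray.shape
M = np.float32([[1, 0, -dx],
                [0, 1, -dy]])
stabilized = cv2.warpAffine(next_frame, M, (w, h))
```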

But if every movement is corrected, the image drifts out of the screen and disappears after some time. Unless you want to do photo stitching, such drift is unwanted when recording videos. The measured image movement can, however, be used to measure camera rotation: the resolution of video cameras is far higher than the accuracy of potentiometers or acceleration measurement chips, and movements of just a few arc seconds can be detected.

To avoid moving the image out of the visible screen, the movement detector has to distinguish between deliberate camera movement and vibrations. This is done by statistical analysis of the motion, comparable to separating the volatility of a stock chart from its moving average. Sophisticated image stabilizers use the fast Fourier or cosine transform to move the image into the right position before a shock occurs.
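A sketch of such an analysis: accumulate the per-frame motion into a camera path, smooth it with a moving average (the intended movement), and correct only the residual (the vibration). The window size is an assumption:

```python
import numpy as np

def correction_offsets(per_frame_motion, window=15):
    """per_frame_motion: (n_frames, 2) array of (dx, dy) per frame."""
    trajectory = np.cumsum(per_frame_motion, axis=0)   # raw camera path
    kernel = np.ones(window) / window
    smooth = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same")
                              for i in range(2)])      # intended camera path
    return smooth - trajectory                         # per-frame correction shift
```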

Moving objects confuse the image stabilizer, so they have to be excluded from the stabilization process. By discriminating regions with different movement directions, the stabilizer can detect moving objects such as cars, clouds and flocks of birds and recognize them even in front of a moving background.
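One common way to achieve this exclusion, shown here as an assumption rather than what any particular stabilizer does, is to fit a single global motion model with RANSAC, so that feature tracks on independently moving objects are rejected as outliers:

```python
# Fit translation + rotation + scale robustly; outlier tracks belong to
# moving objects and are ignored when estimating the camera motion.
model, inliers = cv2.estimateAffinePartial2D(
    features[ok], new_pts[ok],
    method=cv2.RANSAC, ransacReprojThreshold=3.0)
```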

Camera rotation also confuses the image stabilizer and is far more difficult to exclude from the stabilization process than the problems mentioned above. Because the center of rotation may lie outside the field of view and the background may move while the camera rotates, additional statistical analysis is required, which can slow the stabilizer down. When computing power is insufficient, the software can only make a guess.

Finally, stabilized video images are much easier on the eye and increase the compression rate of video streams and files. Watching stabilized videos can reduce stress, and the better compression helps lower the cost of disk storage and data bandwidth.