Saturday, April 26, 2014

How does video image stabilization work?

Video Image Stabilization explained

Video image stabilization removes undesired vibrations from a video recording. There are two types of stabilizers: hardware based and software based. Hardware based stabilizers use electromagnets to move optical lenses and prisms and thereby steady the image. Software based stabilization detects image features, such as object contours, highlights and shadows, and tracks their movement. This article explains software based stabilization.

Software based video image stabilization takes place in three steps: feature detection, movement calculation and movement correction.

First, notable features of the image are detected. Features are regions of an image that catch the feature detector's attention. Several feature detectors, known since the 1980s, are publicly available, for example “Good Features to Track”.
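
As a rough illustration, this is how such a detector can be called from OpenCV in Python; the file name and parameter values are placeholders, not taken from the article:

```python
import cv2

frame = cv2.imread("frame_000.png")                  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect up to 200 strong corner features, spread out across the image.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
print(0 if corners is None else len(corners), "features detected")
```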

Next, the movement detector compares two or more images and calculates the movement of each feature.
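
A minimal sketch of this step, assuming the features were found as shown above and that two consecutive grayscale frames are available; the function name and parameters are my own, and pyramidal Lucas-Kanade optical flow is just one possible tracker:

```python
import cv2

def estimate_shift(prev_gray, curr_gray, prev_pts):
    """Track features from the previous into the current frame and
    return the average (dx, dy) movement of the whole image."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None)
    ok = status.flatten() == 1                      # keep successfully tracked points
    motion = (next_pts[ok] - prev_pts[ok]).reshape(-1, 2)
    return motion.mean(axis=0)
```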

Then movement correction uses the movement information from the detector to stabilize the image by shifting it in the opposite direction.
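
The simplest possible correction, again sketched with assumed OpenCV calls, shifts each frame by the negative of the estimated movement:

```python
import cv2
import numpy as np

def correct_shift(frame, dx, dy):
    """Shift the frame by the opposite of the estimated movement (dx, dy)."""
    h, w = frame.shape[:2]
    correction = np.float32([[1, 0, -dx],
                             [0, 1, -dy]])
    return cv2.warpAffine(frame, correction, (w, h))
```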

Corrected this way, however, the image gradually moves out of the visible frame and disappears. Unless you want to do photo stitching, such image movement is unwanted when recording videos. The image movement can, however, be used to measure camera rotation: the resolution of video cameras is far higher than the accuracy of potentiometers or acceleration measurement chips, and a video camera can detect movement of just a few arc seconds.

To avoid moving the image out of the visible frame, the movement detector has to distinguish between deliberate camera movement and vibrations. This is done by statistical analysis of the movement, comparable to separating the volatility of a stock chart from its moving average. Sophisticated image stabilizers use a fast Fourier or cosine transform to move the image into the right position before a shock occurs.
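
One way to perform such an analysis, sketched here under the assumption that the per-frame shifts have already been estimated, is to treat their cumulative sum as the camera trajectory, smooth it with a moving average (the window length is an arbitrary choice), and correct only the deviation from that smooth path:

```python
import numpy as np

def vibration_correction(shifts, window=15):
    """shifts: (N, 2) array of per-frame (dx, dy) estimates."""
    trajectory = np.cumsum(shifts, axis=0)          # accumulated camera path
    kernel = np.ones(window) / window
    smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same")
                                for i in range(2)])
    # Only the gap between the actual and the smoothed path is vibration;
    # the smoothed part is treated as deliberate camera movement and kept.
    return smoothed - trajectory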

Moving objects confuse the image stabilizer, so they have to be excluded from the stabilization process. By discriminating between regions with different movement directions, the image stabilizer can detect moving objects such as cars, clouds and swarms of birds and recognize them even in front of a moving background.
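
One common way to do this kind of discrimination (not necessarily the method the author had in mind) is to fit a single global transform to all tracked features with RANSAC; features sitting on independently moving objects do not fit the transform and are discarded as outliers:

```python
import cv2

def background_motion(prev_pts, next_pts):
    """Fit a global similarity transform to the tracked features;
    features on independently moving objects become RANSAC outliers."""
    matrix, inliers = cv2.estimateAffinePartial2D(prev_pts, next_pts,
                                                  method=cv2.RANSAC,
                                                  ransacReprojThreshold=3.0)
    ok = inliers.flatten() == 1
    return matrix, prev_pts[ok], next_pts[ok]
```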

Camera rotation also confuses the image stabilizer and is far more difficult to exclude from the stabilization process than the problems mentioned above. Because the center of rotation may lie outside the field of view and the background may move while the camera is rotating, additional statistical analysis is required, which can slow the image stabilizer down. When there is not enough computing power, the software can only make a guess.
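
If a global transform like the one above has been estimated, the in-plane rotation can at least be read out of it directly; this is only a sketch of the geometry, not the additional statistical analysis the article refers to:

```python
import math

def rotation_angle(matrix):
    """In-plane rotation, in degrees, of a 2x3 similarity transform
    [[s*cos(a), -s*sin(a), tx], [s*sin(a), s*cos(a), ty]]."""
    return math.degrees(math.atan2(matrix[1, 0], matrix[0, 0]))
```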

Finally, stabilized video images are much easier on the eye and compress better, because consecutive frames differ less. Watching stabilized videos can reduce stress, and the improved compression lowers the cost of disk storage and data bandwidth.
