Implementation and Validation of Video Stabilization using Simulink
A fast video stabilization technique based on Gray-coded bit-plane (GCBP) matching for translational motion is implemented and tested on various image sequences. The technique performs motion estimation on the GCBP of the image sequence, which greatly reduces the computational load. To further improve computational efficiency, a three-step search (TSS) is used together with GCBP matching to perform an efficient search during the correlation measure calculation. The entire technique has been implemented in Simulink to run in real time.
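A minimal sketch of the underlying idea, assuming 8-bit grayscale frames held in NumPy arrays and a single global translation per frame; the function names, the chosen bit plane and the initial step size are illustrative assumptions, not the authors' Simulink implementation:

import numpy as np

def gray_coded_bit_plane(frame, k=4):
    # Gray code of an 8-bit frame: g = b XOR (b >> 1); keep only bit plane k,
    # so that block matching reduces to cheap Boolean XOR operations.
    gray = frame ^ (frame >> 1)
    return ((gray >> k) & 1).astype(np.uint8)

def mismatch(ref_bp, cur_bp, dx, dy):
    # Normalized count of differing bits when the current bit plane is shifted
    # by (dx, dy) relative to the reference (only the overlapping region counts).
    h, w = ref_bp.shape
    ref = ref_bp[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    cur = cur_bp[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    return np.count_nonzero(ref ^ cur) / ref.size

def three_step_search(ref_bp, cur_bp, step=4):
    # Classic TSS: evaluate the 8 neighbours of the current best shift at the
    # current step size, keep the best, and halve the step until it reaches 1.
    best = (0, 0)
    while step >= 1:
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates, key=lambda d: mismatch(ref_bp, cur_bp, d[0], d[1]))
        step //= 2
    return best   # estimated global translation (dx, dy) used to compensate the frame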
A low bit-rate video-coding algorithm based upon variable pattern selection
Recent research into pattern representation of moving regions in block-based motion estimation and compensation of video sequences has focused mainly on using a fixed number of regular-shaped patterns. These are used to match macroblocks in a frame that contain two distinct regions: static background and moving objects. In this paper a new Variable Pattern Selection (VPS) algorithm is presented which selects a preset number of best-matched patterns from a pattern codebook of regular-shaped patterns. Although more patterns are used than in previous work, the VPS algorithm's use of variable-length coding, exploiting the frequency of the best-matched patterns, leads to a higher compression ratio without degrading overall image quality.
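A rough sketch of the pattern-selection step, assuming 16x16 macroblocks as NumPy arrays; the toy codebook, the moving-region detector and all parameter values are placeholders chosen for illustration, not the codebook or matching criterion defined in the paper:

import numpy as np

def quadrant_codebook(size=16):
    # Toy codebook of regular-shaped binary patterns (here just the four
    # quadrants of a macroblock); the paper's codebook is richer.
    patterns = []
    half = size // 2
    for r in (0, half):
        for c in (0, half):
            p = np.zeros((size, size), np.uint8)
            p[r:r + half, c:c + half] = 1
            patterns.append(p)
    return patterns

def moving_region(block, ref_block, thresh=15):
    # Binary map of the moving region in a macroblock, obtained here by simply
    # thresholding the inter-frame difference.
    return (np.abs(block.astype(int) - ref_block.astype(int)) > thresh).astype(np.uint8)

def select_patterns(mask, codebook, n_best=4):
    # Rank the patterns by how many pixels they classify correctly (static vs
    # moving) and keep the n_best; the indices of the selected patterns are what
    # gets variable-length coded, with shorter codes for the more frequent ones.
    scores = [np.count_nonzero(mask == p) for p in codebook]
    return list(np.argsort(scores)[::-1][:n_best])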
Motion and disparity estimation with self-adapted evolutionary strategy in 3D video coding
Real-world information obtained by humans is three-dimensional (3-D). In experimental user trials, subjective assessments have clearly demonstrated the increased impact of 3-D pictures compared to conventional flat-picture techniques. It is reasonable, therefore, that we humans want an imaging system that produces pictures that are as natural and real as the things we see and experience every day. Three-dimensional imaging and hence 3-D television (3DTV) are very promising approaches expected to satisfy these desires. Integral imaging, which can capture true 3D color images with only one camera, has been seen as the right technology to offer stress-free viewing to audiences of more than one person. In this paper, we propose a novel approach that uses an Evolutionary Strategy (ES) for joint motion and disparity estimation to compress 3D integral video sequences. We propose to decompose the integral video sequence into viewpoint video sequences and jointly exploit motion and disparity redundancies to maximize compression using a self-adapted ES. A half-pixel refinement algorithm is then applied by interpolating macroblocks in the previous frame to further improve the video quality. Experimental results demonstrate that the proposed adaptable ES with half-pixel joint motion and disparity estimation can achieve up to 1.5 dB objective quality gain without any additional computational cost over our previous algorithm. Furthermore, the proposed technique achieves similar objective quality to the full search algorithm while reducing the computational cost by up to 90%.
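As a very rough illustration of how a self-adapted ES can drive joint motion and disparity estimation, the sketch below runs a (1+1)-ES with a simple success-based step-size rule on the combined SAD of the motion prediction (previous frame, same viewpoint) and the disparity prediction (neighbouring viewpoint); the structure, the parameter values and the omission of half-pixel refinement are simplifications, not the authors' algorithm:

import numpy as np

def sad(block, frame, x, y):
    # Sum of absolute differences against the candidate block at (x, y).
    h, w = block.shape
    return np.abs(block.astype(int) - frame[y:y + h, x:x + w].astype(int)).sum()

def es_joint_search(block, prev_frame, neigh_view, x0, y0, iters=60, sigma=4.0):
    # (1+1) evolutionary strategy with a self-adapted mutation step: one vector
    # holds the motion displacement and the disparity displacement; the cost
    # couples both predictions, so the two are estimated jointly.
    rng = np.random.default_rng(0)
    h, w = prev_frame.shape
    bh, bw = block.shape

    def cost(v):
        mx, my, dx, dy = np.round(v).astype(int)
        px = int(np.clip(x0 + mx, 0, w - bw)); py = int(np.clip(y0 + my, 0, h - bh))
        qx = int(np.clip(x0 + dx, 0, w - bw)); qy = int(np.clip(y0 + dy, 0, h - bh))
        return sad(block, prev_frame, px, py) + sad(block, neigh_view, qx, qy)

    vec, best = np.zeros(4), cost(np.zeros(4))
    for _ in range(iters):
        child = vec + rng.normal(0.0, sigma, 4)
        c = cost(child)
        if c < best:
            vec, best, sigma = child, c, sigma * 1.2   # success: widen the search
        else:
            sigma = max(0.5, sigma * 0.85)             # failure: contract it
    return np.round(vec).astype(int), best             # (motion dx, dy, disparity dx, dy)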
Depth Superresolution using Motion Adaptive Regularization
Spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving the resolution of depth using higher-resolution intensity as side information. In this paper, we demonstrate that further incorporating temporal information in videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can be a first component in systems using vision techniques that rely on high-resolution depth information.
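The core regularizer can be sketched as follows, assuming the depth video is a (frames, H, W) NumPy array and that a motion trajectory (one patch corner per frame) is already available for each patch group; this is a schematic reading of "motion-adaptive low-rank regularization", with all names and parameters chosen for illustration rather than taken from the paper:

import numpy as np

def svt(matrix, tau):
    # Singular value thresholding: the proximal operator of the nuclear norm,
    # i.e. the standard way to pull a matrix towards low rank.
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def motion_adaptive_lowrank_step(depth_video, trajectories, patch=8, tau=0.1):
    # One regularization pass: depth patches tracked along a motion trajectory
    # are stacked as columns, shrunk towards low rank with SVT, and written
    # back. The full method would alternate this prior with a data-fidelity
    # step against the low-resolution depth and the high-resolution intensity.
    out = depth_video.astype(float).copy()
    for traj in trajectories:
        cols = [depth_video[t, r:r + patch, c:c + patch].ravel()
                for t, (r, c) in enumerate(traj)]
        low_rank = svt(np.stack(cols, axis=1), tau)
        for t, (r, c) in enumerate(traj):
            out[t, r:r + patch, c:c + patch] = low_rank[:, t].reshape(patch, patch)
    return out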
A Bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images
In this paper we bring the tools of the Simultaneous Localization and Map Building (SLAM) problem from a rigid to a deformable domain and use them to simultaneously recover the 3D shape of non-rigid surfaces and the sequence of poses of a moving camera. Under the assumption that the surface shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses can be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. In addition, the probabilistic formulation we propose is very general and allows introducing different constraints without requiring any extra complexity. As a proof of concept, we show that local inextensibility constraints that prevent the surface from stretching can be easily integrated.
An extensive evaluation on synthetic and real data demonstrates that our method has several advantages over current non-rigid shape from motion approaches. In particular, we show that our solution is robust to large amounts of noise and outliers and that it does not need to track points over the whole sequence nor to use an initialization close to the ground truth.
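To make the "maximum a posteriori estimate solved by iterative least squares" concrete, here is a minimal single-frame Gauss-Newton sketch, assuming a pinhole camera, a known mean shape and deformation modes, and a simple Gaussian prior on the modal weights; the parameterization, focal length and helper names are assumptions for illustration, not the paper's formulation:

import numpy as np

def project(pts3d, rvec, tvec, f=500.0):
    # Pinhole projection after a Rodrigues rotation and a translation.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = pts3d @ R.T + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def map_refine(obs2d, mean_shape, modes, params0, prior_w=1e-2, iters=20):
    # Iterative least squares (Gauss-Newton) on the MAP objective for one frame:
    # reprojection error of the modal-deformation shape plus a Gaussian prior on
    # the modal weights, appended as extra residuals. params = [rvec, tvec, weights].
    params = params0.astype(float).copy()

    def residual(p):
        rvec, tvec, w = p[:3], p[3:6], p[6:]
        shape = mean_shape + np.tensordot(w, modes, axes=1)   # weighted sum of modes
        return np.concatenate([(project(shape, rvec, tvec) - obs2d).ravel(),
                               np.sqrt(prior_w) * w])

    for _ in range(iters):
        r0 = residual(params)
        J = np.empty((r0.size, params.size))
        for j in range(params.size):          # numerical Jacobian, for brevity
            dp = np.zeros_like(params); dp[j] = 1e-6
            J[:, j] = (residual(params + dp) - r0) / 1e-6
        params -= np.linalg.lstsq(J, r0, rcond=None)[0]
    return params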
Cellular neural networks for motion estimation and obstacle detection
Obstacle detection is an important part of video processing because it is indispensable for collision prevention by autonomously navigating moving objects. For example, vehicles driving without human guidance need a robust prediction of potential obstacles, such as other vehicles or pedestrians. Most common approaches to obstacle detection so far use analytical and statistical methods such as motion estimation or the generation of maps. In the first part of this contribution a statistical algorithm for obstacle detection in monocular video sequences is presented. The proposed procedure is based on motion estimation and a planar world model, which is appropriate for traffic scenes. The processing steps of the statistical procedure are feature extraction, a subsequent displacement vector estimation and a robust estimation of the motion parameters. Since the proposed procedure is composed of several processing steps, the error propagation of the successive steps often leads to inaccurate results. In the second part of this contribution it is demonstrated that these problems can be efficiently overcome by using Cellular Neural Networks (CNN). It is shown that a direct obstacle detection algorithm can be easily performed based only on CNN processing of the input images. Besides the enormous computing power of programmable CNN-based devices, the proposed method is also very robust in comparison to the statistical method, because it shows much less sensitivity to noisy inputs. Using the proposed approach to obstacle detection in planar worlds, real-time processing of large input images has been made possible.
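As a sketch of what direct CNN processing of the input images means mechanically, the lines below integrate the standard Chua-Yang cellular neural network state equation with NumPy; the template values, the step count and the idea of feeding the scaled inter-frame difference as input are illustrative assumptions, not the templates used in this work:

import numpy as np

def conv2(img, template):
    # Same-size 2-D correlation with a 3x3 template and zero padding: each cell
    # collects a weighted sum over its 3x3 neighbourhood.
    padded = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            out += template[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def cnn_run(u, A, B, z, steps=50, dt=0.1):
    # Euler integration of the Chua-Yang CNN dynamics: x' = -x + A*y + B*u + z,
    # with the piecewise-linear output y = 0.5 * (|x + 1| - |x - 1|);
    # A is the feedback template, B the control template and z the bias.
    x = np.zeros_like(u, dtype=float)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x = x + dt * (-x + conv2(y, A) + conv2(u, B) + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Hypothetical usage: feed the scaled inter-frame difference as the CNN input u
# and threshold the settled output to flag candidate moving obstacles.
A = np.zeros((3, 3)); A[1, 1] = 2.0            # illustrative template values only
B = np.full((3, 3), -1.0); B[1, 1] = 8.0
z = -1.0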