Motion estimation through efficient matching of a reduced number of reliable singular points
Motion estimation in video sequences is a classical, computationally intensive task required for a wide range of applications. Many methods have been proposed to reduce the computational complexity, but the achieved reduction is not enough to allow real-time operation on non-specialized hardware. In this paper, an efficient selection of singular points for fast matching between consecutive images is presented, which allows real-time operation. The selection of singular points consists in finding the image points that are robust to noise and the aperture problem. This is accomplished by imposing restrictions related to the gradient magnitude and the cornerness. The neighborhood of each singular point is characterized by a complex descriptor vector, which presents high robustness to illumination changes and small variations in the 3D camera viewpoint. The matching between singular points of consecutive images is performed by maximizing a similarity measure based on this descriptor vector. The set of correspondences yields a sparse motion vector field that accurately outlines the image motion. To demonstrate the efficiency of this approach, a video stabilization application has been developed, which uses the sparse motion vector field as input. Excellent results have been obtained on synthetic and real sequences, demonstrating the efficiency of the proposed motion estimation technique.
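The selection criterion described in the abstract (a gradient-magnitude restriction plus a cornerness restriction) can be illustrated with a toy sketch. The function name, window size, and thresholds below are illustrative choices, not the paper's actual values:

```python
# Toy sketch of singular-point selection: keep only pixels whose gradient
# magnitude and Harris-style cornerness both exceed thresholds.
# Thresholds and the Harris constant k are illustrative, not the paper's.

def select_singular_points(img, grad_thresh=0.5, corner_thresh=0.001, k=0.04):
    h, w = len(img), len(img[0])
    points = []
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            # Reject points prone to noise: weak central gradient
            gx0 = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy0 = (img[y + 1][x] - img[y - 1][x]) / 2.0
            if (gx0 * gx0 + gy0 * gy0) ** 0.5 < grad_thresh:
                continue
            # Structure matrix accumulated over a 3x3 window
            sxx = sxy = syy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx = (img[y + dy][x + dx + 1] - img[y + dy][x + dx - 1]) / 2.0
                    gy = (img[y + dy + 1][x + dx] - img[y + dy - 1][x + dx]) / 2.0
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            # Harris response: large only when both eigenvalues are large,
            # which rejects edge points subject to the aperture problem
            r = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
            if r > corner_thresh:
                points.append((x, y))
    return points

# A bright 5x5 square on a dark background: corners qualify,
# edge midpoints and the flat interior do not
img = [[1.0 if 4 <= x <= 8 and 4 <= y <= 8 else 0.0 for x in range(13)]
       for y in range(13)]
print(select_singular_points(img))
```

A real implementation would add non-maximum suppression so that each corner yields a single singular point rather than a small cluster.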
Automatic Feature-Based Stabilization of Video with Intentional Motion through a Particle Filter
Video sequences acquired by a camera mounted on a hand-held device or a mobile platform are affected by unwanted shakes and jitters. In this situation, the performance of video applications, such as motion segmentation and tracking, may decrease dramatically. Several digital video stabilization approaches have been proposed to overcome this problem. However, they are mainly based on motion estimation techniques that are prone to errors, which affect the stabilization performance. Moreover, these techniques can only achieve successful stabilization if the intentional camera motion is smooth, since they incorrectly filter abrupt changes in the intentional motion. In this paper, a novel video stabilization technique that overcomes these problems is presented. The motion is estimated by means of a sophisticated feature-based technique that is robust to errors which could otherwise bias the estimation. The unwanted camera motion is filtered, while the intentional motion is preserved thanks to a Particle Filter framework that is able to deal with abrupt changes in the intentional motion. The obtained results confirm the effectiveness of the proposed algorithm.
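The core idea, a Particle Filter whose transition model tolerates abrupt intentional motion, can be sketched in one dimension. Everything below (the noise levels, the jump mixture, the function name `stabilize`) is an illustrative stand-in for the paper's actual model:

```python
import math
import random

# Bootstrap particle filter sketch: the intentional camera position is the
# state, the jittery measured displacement is the observation, and a
# heavy-tailed transition (occasional large jumps) lets the filter follow
# abrupt intentional motion instead of smoothing it away.

def stabilize(observations, n=500, seed=0):
    rng = random.Random(seed)
    particles = [0.0] * n
    path = []
    for z in observations:
        # Propagate: small drift most of the time, occasionally a large jump
        particles = [p + (rng.gauss(0.0, 0.05) if rng.random() < 0.8
                          else rng.gauss(0.0, 3.0)) for p in particles]
        # Weight each particle by the likelihood of the jittery observation
        weights = [math.exp(-((z - p) ** 2) / (2 * 0.5 ** 2)) for p in particles]
        s = sum(weights) or 1e-300
        weights = [w / s for w in weights]
        # Posterior mean = current estimate of the intentional position
        path.append(sum(w * p for w, p in zip(weights, particles)))
        # Systematic resampling to avoid weight degeneracy
        cumsum, c = [], 0.0
        for w in weights:
            c += w
            cumsum.append(c)
        u, idx, resampled = rng.random() / n, 0, []
        for _ in range(n):
            while idx < n - 1 and u > cumsum[idx]:
                idx += 1
            resampled.append(particles[idx])
            u += 1.0 / n
        particles = resampled
    return path

# Jittery observations around a path with one abrupt intentional jump
true_path = [0.0] * 20 + [5.0] * 20
noise = random.Random(1)
obs = [p + noise.gauss(0.0, 0.3) for p in true_path]
est = stabilize(obs)
```

Note how the filter holds near zero through the jitter, yet locks onto the new level after the abrupt jump; a plain low-pass filter would lag far behind it.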
Trifocal Relative Pose from Lines at Points and its Efficient Solution
We present a new minimal problem for relative pose estimation mixing point
features with lines incident at points observed in three views and its
efficient homotopy continuation solver. We demonstrate the generality of the
approach by analyzing and solving an additional problem with mixed point and
line correspondences in three views. The minimal problems include
correspondences of (i) three points and one line and (ii) three points and two
lines through two of the points, which is reported and analyzed here for the
first time. These are difficult to solve, as they have 216 and - as shown here
- 312 solutions, but cover important practical situations when line and point
features appear together, e.g., in urban scenes or when observing curves. We
demonstrate that even such difficult problems can be solved robustly using a
suitable homotopy continuation technique and we provide an implementation
optimized for minimal problems that can be integrated into engineering
applications. Our simulated and real experiments demonstrate our solvers in the
camera geometry computation task in structure from motion. We show that new
solvers allow for reconstructing challenging scenes where the standard two-view
initialization of structure from motion fails.
Comment: This material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while most authors were in residence at Brown University's Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, RI.
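Homotopy continuation itself can be illustrated on a toy univariate example, far simpler than the 216- and 312-solution systems above: deform an easy start system with known roots into the target system, tracking each root with a predictor-corrector loop. The polynomials, step count, and function name below are illustrative:

```python
# Toy homotopy continuation (not the authors' solver): track the roots of
# the start system g(x) = x**2 - 1 to the target f(x) = x**2 - 4 along
# H(x, t) = (1 - t) * g(x) + t * f(x), using an Euler predictor on the
# Davidenko ODE and a short Newton corrector at each step.

def track(x, steps=100):
    f = lambda x: x * x - 4.0
    g = lambda x: x * x - 1.0
    dt = 1.0 / steps
    for k in range(steps):
        t = k / steps
        # dH/dt = f(x) - g(x); dH/dx = 2x (the same for f, g, and H here)
        # Euler predictor: dx/dt = -(dH/dt) / (dH/dx)
        x = x - dt * (f(x) - g(x)) / (2.0 * x)
        # Newton corrector at t + dt pulls the predicted point back onto
        # the solution path H(x, t + dt) = 0
        t2 = t + dt
        for _ in range(3):
            h = (1 - t2) * g(x) + t2 * f(x)
            x = x - h / (2.0 * x)
    return x

print(track(1.0), track(-1.0))  # the two paths end near the target roots 2, -2
```

Real solvers add adaptive step control, a random complex "gamma trick" to avoid singular intermediate systems, and simultaneous tracking of hundreds of paths; none of that is needed for this toy path.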
Single and multiple stereo view navigation for planetary rovers
© Cranfield University. This thesis deals with the challenge of autonomous navigation of the ExoMars rover.
The absence of global positioning systems (GPS) in space, together with the limitations
of wheel odometry, makes autonomous navigation based on these two techniques, as
done in the literature, unviable and necessitates the use of other approaches.
That, among other reasons, motivates this work to use solely visual data to solve the
robot's egomotion problem.
The homogeneity of Mars' terrain makes the robustness of the low-level image
processing technique a critical requirement. In the first part of the thesis, novel solutions
are presented to tackle this specific problem. The detection of features that are robust
to illumination changes, and their unique matching and association, is a sought-after
capability. A solution for feature robustness against illumination variation is proposed,
combining Harris corner detection with moment image representation.
Whereas the former provides efficient feature detection, the moment images
add the necessary brightness invariance. Moreover, a bucketing strategy is used
to guarantee that features are homogeneously distributed within the images. Then, the
addition of local feature descriptors guarantees the unique identification of image cues.
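The bucketing strategy can be sketched as follows; the grid size, per-cell quota, and function name are illustrative assumptions, not the thesis' parameters:

```python
# Sketch of bucketing: partition the image into a fixed grid and keep at
# most k of the strongest features per cell, so matches are spread over the
# whole frame instead of clustering on a single textured region.

def bucket_features(features, width, height, rows=4, cols=4, k=2):
    # features: list of (x, y, score), e.g. Harris corner responses
    buckets = {}
    for x, y, score in features:
        cell = (min(int(y * rows / height), rows - 1),
                min(int(x * cols / width), cols - 1))
        buckets.setdefault(cell, []).append((x, y, score))
    kept = []
    for cell_feats in buckets.values():
        cell_feats.sort(key=lambda f: f[2], reverse=True)  # strongest first
        kept.extend(cell_feats[:k])
    return kept

# 10 features crowded into one corner plus one isolated feature: bucketing
# keeps only the 2 strongest of the crowd but preserves the lone feature
crowd = [(5 + i, 5, float(i)) for i in range(10)]
lone = [(90, 90, 0.1)]
print(bucket_features(crowd + lone, 100, 100))
```

The quota forces a homogeneous spatial distribution: a weak feature in an empty region survives while strong but redundant neighbours are discarded, which is exactly what stabilises egomotion estimates on low-texture terrain.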
In the second part, reliable and precise motion estimation for the Mars rover is
studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous
Localisation And Mapping (VSLAM) is investigated, proposing enhancements
and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation
techniques are explored. Alternative photogrammetry reprojection concepts
are tested. Lastly, data fusion techniques are proposed to deal with the integration of
multiple stereo view data.
Our robust visual scheme allows good feature repeatability. Because of this,
dimensionality reduction of the feature data can be used without compromising the
overall performance of the proposed solutions for motion estimation. Moreover, the developed
egomotion techniques have been extensively validated using both simulated and
real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot
motion estimation are introduced, presenting interesting benefits. The obtained results
prove that the innovative methods presented here are accurate and reliable approaches,
capable of solving the egomotion problem in a Mars environment.
Finite Element Based Tracking of Deforming Surfaces
We present an approach to robustly track the geometry of an object that
deforms over time from a set of input point clouds captured from a single
viewpoint. The deformations we consider are caused by applying forces to known
locations on the object's surface. Our method combines the use of prior
information on the geometry of the object modeled by a smooth template and the
use of a linear finite element method to predict the deformation. This allows
the accurate reconstruction of both the observed and the unobserved sides of
the object. We present tracking results for noisy low-quality point clouds
acquired by either a stereo camera or a depth camera, and simulations with
point clouds corrupted by different error terms. We show that our method is
also applicable to large non-linear deformations.
Comment: additional experiment
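The prediction step, solving a linear finite element system K u = f for the deformation caused by a known applied force, can be illustrated on the simplest possible case, a 1D elastic bar fixed at one end. The paper's solver works on a 3D surface template, so this is only a sketch of the idea, with illustrative material constants:

```python
# Minimal 1D linear FEM: a bar discretised into line elements, clamped at
# node 0, with a known point load at the free end. Solving K u = f
# predicts the nodal displacements. For this constant-strain problem the
# linear-element solution matches the exact answer u(x) = F*x/(EA).

def predict_deformation(n_elems=4, ea=100.0, length=1.0, tip_force=10.0):
    n = n_elems + 1                      # number of nodes
    h = length / n_elems                 # element length
    k = ea / h                           # element stiffness EA/h
    # Assemble the global stiffness matrix from 2x2 element matrices
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n
    f[-1] = tip_force                    # point load at the free end
    # Dirichlet condition u[0] = 0: eliminate the first row and column
    A = [row[1:] for row in K[1:]]
    b = f[1:]
    m = len(b)
    # Gaussian elimination (the reduced K is SPD, so no pivoting needed)
    for i in range(m):
        for j in range(i + 1, m):
            r = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= r * A[i][c]
            b[j] -= r * b[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):
        u[i] = (b[i] - sum(A[i][c] * u[c] for c in range(i + 1, m))) / A[i][i]
    return [0.0] + u                     # prepend the clamped node

print(predict_deformation())
```

The same K u = f structure, with a surface mesh in place of the bar, is what lets a tracker predict the unobserved back side of the object from forces applied to the front.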
A stochastic large deformation model for computational anatomy
In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework. By accounting for randomness in a particular setup that is crafted to fit the geometrical properties of LDDMM, we formulate the template estimation problem for landmarks with noise and give two methods for efficiently estimating the parameters of the noise fields from a prescribed data set. One method directly approximates the time evolution of the variance of each landmark by a finite set of differential equations, and the other is based on an Expectation-Maximisation algorithm. In the second method, the evaluation of the data likelihood is achieved without registering the landmarks, by applying bridge sampling using a stochastically perturbed version of the large deformation gradient flow algorithm. The method and the estimation algorithms are experimentally validated on synthetic examples and shape data of human corpora callosa.
Video analysis based vehicle detection and tracking using an MCMC sampling framework
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
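The contrast with importance sampling can be illustrated by a minimal Metropolis-Hastings chain. The 1D Gaussian prior and likelihood below are an illustrative stand-in for the paper's appearance-plus-parallax observation model and MRF motion model; the function name and all constants are assumptions:

```python
import math
import random

# Minimal MCMC sketch: a Metropolis-Hastings random walk whose stationary
# distribution is the posterior over a vehicle's (1D) position, given a
# Gaussian prior and independent Gaussian position observations.

def mh_posterior_mean(obs, prior_mean=0.0, prior_std=5.0, obs_std=1.0,
                      n_samples=5000, burn_in=500, seed=0):
    rng = random.Random(seed)

    def log_post(x):
        # log prior + log likelihood, up to an additive constant
        lp = -((x - prior_mean) ** 2) / (2 * prior_std ** 2)
        lp += sum(-((z - x) ** 2) / (2 * obs_std ** 2) for z in obs)
        return lp

    x, lp = prior_mean, log_post(prior_mean)
    samples = []
    for i in range(n_samples):
        cand = x + rng.gauss(0.0, 0.5)            # symmetric random-walk proposal
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # MH acceptance rule
            x, lp = cand, lp_cand
        if i >= burn_in:
            samples.append(x)
    return sum(samples) / len(samples)

# Noisy position observations of a vehicle near x = 3
print(mh_posterior_mean([2.8, 3.1, 3.2, 2.9]))
```

Because each move only perturbs the current state locally, the chain's cost per step does not blow up with dimensionality the way importance-sampling weights degenerate, which is the scaling argument the abstract makes for multi-vehicle tracking.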