2 research outputs found

    An Efficient Point-Matching Method Based on Multiple Geometrical Hypotheses

    Point matching across multiple images is an open problem in computer vision because of the numerous geometric transformations and photometric conditions that a pixel or point might exhibit across the set of images. Over the last two decades, different techniques have been proposed to address this problem, the most relevant being those that explore the analysis of invariant features. Nonetheless, their main limitation is that invariant analysis on its own cannot reduce false alarms. This paper introduces an efficient point-matching method for two and three views, based on the combined use of two techniques: (1) correspondence analysis extracted from the similarity of invariant features and (2) the integration of multiple partial solutions obtained from 2D and 3D geometry. The main strength and novelty of this method is the determination of point-to-point geometric correspondence through the intersection of multiple geometrical hypotheses weighted by the maximum likelihood estimation sample consensus (MLESAC) algorithm. The proposal not only extends methods based on invariant descriptors but also generalizes the correspondence problem to a perspective projection model in multiple views. The developed method has been evaluated on three types of image sequences: outdoor, indoor, and industrial. Our strategy discards most of the wrong matches and achieves remarkable F-scores of 97%, 87%, and 97% for the outdoor, indoor, and industrial sequences, respectively.
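    The sketch below illustrates the MLESAC weighting step mentioned in the abstract: each candidate geometric hypothesis is scored by a mixture likelihood in which inlier residuals are Gaussian and outlier residuals uniform, with the inlier fraction refined by a short EM loop (the standard MLESAC formulation of Torr and Zisserman). It assumes per-match geometric residuals (e.g. Sampson distances to a candidate fundamental matrix) have already been computed; it is an illustration of the scoring idea, not the authors' implementation, and all function names are hypothetical.

```python
# Minimal MLESAC-style scoring sketch (illustrative, not the paper's code).
# Inlier residuals ~ N(0, sigma^2), outlier residuals ~ Uniform(0, nu);
# the inlier fraction gamma is refined by EM, and the hypothesis with the
# lowest negative log-likelihood is kept.
import numpy as np

def mlesac_score(residuals, sigma=1.0, nu=10.0, em_iters=5):
    """Negative log-likelihood of one hypothesis (lower is better)."""
    gamma = 0.5                                              # initial inlier fraction
    p_in_shape = np.exp(-residuals**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    p_out = 1.0 / nu                                         # uniform outlier density
    for _ in range(em_iters):                                # EM refinement of gamma
        p_in = gamma * p_in_shape
        z = p_in / (p_in + (1.0 - gamma) * p_out)            # inlier responsibilities
        gamma = z.mean()
    mixture = gamma * p_in_shape + (1.0 - gamma) * p_out
    return -np.sum(np.log(mixture + 1e-12))

def pick_best_hypothesis(residuals_per_hypothesis):
    """Select the hypothesis whose residuals maximise the MLESAC likelihood."""
    scores = [mlesac_score(r) for r in residuals_per_hypothesis]
    return int(np.argmin(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy hypotheses: the first explains 80% of the matches, the second only 30%.
    r1 = np.concatenate([np.abs(rng.normal(0, 1.0, 80)), rng.uniform(0, 10, 20)])
    r2 = np.concatenate([np.abs(rng.normal(0, 1.0, 30)), rng.uniform(0, 10, 70)])
    best, scores = pick_best_hypothesis([r1, r2])
    print("best hypothesis:", best, "scores:", [round(s, 1) for s in scores])
```

    In the paper's setting, matches surviving the intersection of several such weighted hypotheses would be retained; here the toy residuals simply show that the better-supported hypothesis receives the lower cost.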

    Integrating Visual and Geometric Consistency for Pose Estimation

    In this work, we tackle the problem of estimating the relative pose between two cameras in urban environments in the presence of additional information provided by low-quality localization and orientation sensors. An M-estimator-based approach provides an elegant solution for the fusion of inertial and vision data, but it is sensitive to the prior importance given to the visual matches between the two views. In addition to cues extracted from local visual similarity, we propose to rely at the same time on learned associations provided by global geometrical coherence. A conservative weighting scheme for combining the two types of cues is proposed and validated successfully on an urban dataset.
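    The following sketch illustrates the weighting idea described above under stated assumptions: per-match confidences from local visual similarity and from a learned geometric-coherence score are assumed given, a conservative fusion (here, the minimum of the two cues) scales the per-match weights, and those weights enter an iteratively reweighted least-squares (IRLS) M-estimator step with a Huber loss. A toy 1-D regression stands in for the pose residuals; the `min` rule, the Huber loss, and all names are illustrative choices, not necessarily the paper's exact scheme.

```python
# Hedged sketch: conservative fusion of visual and geometric cues feeding an
# IRLS M-estimator (Huber loss) on a toy regression problem.
import numpy as np

def huber_weight(r, delta=1.0):
    """Standard Huber IRLS weight: 1 in the quadratic zone, delta/|r| outside."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def conservative_prior(w_visual, w_geometric):
    """Conservative fusion: trust a match only if both cues trust it."""
    return np.minimum(w_visual, w_geometric)

def irls_fit(x, y, w_prior, iters=10, delta=1.0):
    """Robust fit of y ~ a*x + b; the fused prior scales the Huber weights."""
    A = np.column_stack([x, np.ones_like(x)])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]      # ordinary LS initialisation
    for _ in range(iters):
        r = y - A @ theta
        w = w_prior * huber_weight(r, delta)          # prior-scaled robust weights
        W = np.diag(w)
        theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 100)
    y[:20] += rng.uniform(5, 15, 20)                  # gross outliers (wrong matches)
    w_vis = rng.uniform(0.5, 1.0, 100)                # local visual-similarity cue
    w_geo = np.ones(100); w_geo[:20] = 0.1            # learned coherence flags outliers
    print(irls_fit(x, y, conservative_prior(w_vis, w_geo)))   # approx. [2.0, 1.0]
```

    Taking the minimum of the two cues is one way to be "conservative": a match is down-weighted as soon as either the visual or the geometric evidence is weak, which limits the damage a confident-looking but geometrically inconsistent match can do to the estimate.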
