
    Dictionary learning in stereo imaging

    This paper presents a new method for learning overcomplete dictionaries adapted to the efficient joint representation of stereo images. We first formulate a sparse stereo image model in which the multi-view correlation is described by local geometric transforms of dictionary atoms in the two stereo views. A maximum-likelihood method for learning stereo dictionaries is then proposed, which includes a multi-view geometry constraint in the probabilistic modeling in order to obtain dictionaries optimized for the joint representation of stereo images. The dictionaries are learned by optimizing the maximum-likelihood objective function with the expectation-maximization algorithm. We illustrate the learning algorithm in the case of omnidirectional images, where we learn the scales of atoms in a parametric dictionary. The resulting dictionaries provide both better performance in the joint representation of stereo omnidirectional images and improved multi-view feature matching. We finally discuss and demonstrate the benefits of dictionary learning for distributed scene representation and camera pose estimation.
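    The alternating scheme underlying such maximum-likelihood dictionary learning can be sketched in a few lines: sparse-code each training patch against the current dictionary, then update the dictionary to increase the data likelihood, and repeat. The Python sketch below is a generic single-view illustration of that loop (orthogonal matching pursuit for the coding step, a MOD-style least-squares update for the dictionary step); the paper's multi-view geometry constraint and parametric omnidirectional atoms are not modeled here, and all names are illustrative.

    import numpy as np

    def omp(D, x, k):
        """Orthogonal matching pursuit: greedy k-sparse code of x over D."""
        residual, idx = x.copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(D.T @ residual))))
            coeffs, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
            residual = x - D[:, idx] @ coeffs
        code = np.zeros(D.shape[1])
        code[idx] = coeffs
        return code

    def learn_dictionary(X, n_atoms=64, k=3, n_iter=20):
        """Alternate sparse coding and a least-squares dictionary update
        (maximizes a Gaussian likelihood of the patches X, one per column)."""
        rng = np.random.default_rng(0)
        D = rng.standard_normal((X.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(n_iter):
            A = np.stack([omp(D, x, k) for x in X.T], axis=1)  # codes per patch
            D = X @ np.linalg.pinv(A)                          # dictionary update
            D /= np.linalg.norm(D, axis=0) + 1e-12             # renormalize atoms
        return D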

    Implementation and experimental verification of a 3D-to-2D Visual Odometry Algorithm for real-time measurements of a vehicle pose

    This work describes the implementation of software for real-time vehicle pose reconstruction using stereo visual odometry algorithms. We first obtain an initial motion estimate with a linear 3D-to-2D method, solving a PnP problem embedded within a RANSAC process to remove outliers. A maximum-likelihood motion estimate is then obtained by solving a non-linear minimization problem. GPU implementations of the feature extraction and matching algorithms are used. We demonstrate an accuracy of better than 1% over 1350 mm of travel.
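    A minimal version of this two-stage pipeline can be sketched with OpenCV, assuming the 3D landmarks and their 2D projections have already been matched (the GPU feature extraction and matching stages are omitted): solvePnPRansac provides a robust initial pose and inlier set, and solvePnPRefineLM performs the non-linear refinement. This is a sketch of the general approach, not the paper's implementation.

    import cv2
    import numpy as np

    def estimate_pose(pts3d, pts2d, K, dist=None):
        """Pose from matched 3D points (Nx3) and 2D projections (Nx2):
        PnP inside RANSAC, then Levenberg-Marquardt refinement on inliers."""
        dist = np.zeros(5) if dist is None else dist
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d, pts2d, K, dist, reprojectionError=2.0, iterationsCount=200)
        if not ok:
            raise RuntimeError("PnP failed")
        idx = inliers.ravel()
        rvec, tvec = cv2.solvePnPRefineLM(pts3d[idx], pts2d[idx], K, dist, rvec, tvec)
        return rvec, tvec  # axis-angle rotation and translation of the camera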

    Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixel Measurement Models

    This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model of ToF depth measurement that also accounts for depth-discontinuity artifacts due to the mixed-pixel effect. This model is exploited within both ML and MAP-MRF frameworks for ToF and stereo data fusion. The proposed MAP-MRF framework is characterized by site-dependent range values, an important feature since it can be used both to improve the accuracy and to decrease the computational complexity of standard MAP-MRF approaches. In order to optimize the site-dependent global cost function of the proposed MAP-MRF approach, the paper also introduces an extension of Loopy Belief Propagation that can be used in other contexts. Experimental data validate the proposed ToF measurement model and the effectiveness of the proposed fusion techniques.
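    In its simplest form, the ML part of such a fusion reduces to a per-pixel product of Gaussian likelihoods: each sensor contributes a depth estimate and a variance, and the fused depth is their inverse-variance weighted average. The sketch below shows only this baseline; the paper's mixed-pixel measurement model and the MAP-MRF regularization with site-dependent range values are not reproduced here.

    import numpy as np

    def fuse_ml(z_tof, var_tof, z_stereo, var_stereo):
        """Per-pixel ML fusion of two depth maps under independent Gaussian
        noise: the inverse-variance weighted average of the measurements."""
        w_t, w_s = 1.0 / var_tof, 1.0 / var_stereo
        z = (w_t * z_tof + w_s * z_stereo) / (w_t + w_s)
        var = 1.0 / (w_t + w_s)  # fused variance is smaller than either input
        return z, var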

    Maximum likelihood estimation of cloud height from multi-angle satellite imagery

    We develop a new estimation technique for recovering depth-of-field from multiple stereo images. Depth-of-field is estimated by determining the shift in image location resulting from different camera viewpoints. When this shift is not divisible by the pixel width, the multiple stereo images can be combined to form a super-resolution image. By modeling this super-resolution image as a realization of a random field, one can view the recovery of depth as a likelihood estimation problem. We apply these modeling techniques to the recovery of cloud height from the multiple viewing angles provided by the MISR instrument on the Terra satellite. Our efforts are focused on a two-layer cloud ensemble where both layers are relatively planar, the bottom layer is optically thick and textured, and the top layer is optically thin. Our results demonstrate that, with relative ease, we obtain estimates comparable to those of the M2 stereo matcher, the algorithm used in the current MISR standard product (details can be found in [IEEE Transactions on Geoscience and Remote Sensing 40 (2002) 1547-1559]). Moreover, our techniques offer the possibility of modeling all of the MISR data in a unified way for cloud height estimation. Research is underway to extend this framework to fast, high-quality global estimates of cloud height. Comment: Published at http://dx.doi.org/10.1214/09-AOAS243 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
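    The geometric core of the method is that a layer at height h appears shifted between two view angles by roughly h * (tan(theta_a) - tan(theta_b)) along the track, so height estimation can be posed as a likelihood search over candidate heights. The toy sketch below scores whole-image integer shifts under a Gaussian noise model; the paper's super-resolution random-field formulation is considerably richer, and all parameter names are illustrative.

    import numpy as np

    def height_mle(img_a, img_b, theta_a, theta_b, pix_size, heights):
        """Grid-search ML estimate of a single cloud-layer height from the
        along-track parallax between two view angles (toy, integer shifts)."""
        best_h, best_ll = None, -np.inf
        for h in heights:
            shift = int(round(h * (np.tan(theta_a) - np.tan(theta_b)) / pix_size))
            ll = -np.sum((img_a - np.roll(img_b, shift, axis=0)) ** 2)
            if ll > best_ll:                  # Gaussian log-likelihood, up to scale
                best_h, best_ll = h, ll
        return best_h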

    Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System

    Under the "Books" tab at http://intechweb.org/, search for the title "Stereo Vision"; the work is Chapter 1 of that book.

    Dense Depth Maps from Epipolar Images

    Recovering three-dimensional information from two-dimensional images is the fundamental goal of stereo techniques. The problem of recovering depth (three-dimensional information) from a set of images is essentially the correspondence problem: given a point in one image, find the corresponding point in each of the other images. Finding potential correspondences usually involves matching some image property. If the images are from nearby positions, they will vary only slightly, simplifying the matching process. Once a correspondence is known, solving for the depth is simply a matter of geometry. Real images are composed of noisy, discrete samples; therefore the calculated depth will contain error. This error is a function of the baseline, or distance between the images: longer baselines result in more precise depths. This leads to a conflict: short baselines simplify the matching process but produce imprecise results; long baselines produce precise results but complicate the matching process. In this paper, we present a method for generating dense depth maps from large sets (thousands) of images taken from arbitrary positions. Long-baseline images improve the accuracy; short-baseline images and the large number of images greatly simplify the correspondence problem, removing nearly all ambiguity. The algorithm presented is completely local and, for each pixel, generates a distribution of evidence over depth and surface normal. In many cases, the distribution contains a clear and distinct global maximum: the location of this peak determines the depth, and its shape can be used to estimate the error. The distribution can also be used to perform a maximum-likelihood fit of models directly to the images. We anticipate that the ability to perform maximum-likelihood estimation from purely local calculations will prove extremely useful in constructing three-dimensional models from large sets of images.
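    The local evidence computation can be sketched as a plane sweep: for each depth hypothesis, warp every other image toward the reference, accumulate a photo-consistency score per pixel, and take the depth at the peak of the accumulated evidence. The fragment below assumes rectified images with purely horizontal epipolar lines and integer disparities, a simplification of the arbitrary-position setting of the paper (surface normals and error estimation from the peak shape are omitted).

    import numpy as np

    def depth_from_evidence(ref, others, baselines, focal, depths):
        """Accumulate photo-consistency evidence over depth hypotheses from
        many rectified views; disparity of view i at depth z is focal*b_i/z."""
        depths = np.asarray(depths)
        ev = np.zeros((len(depths),) + ref.shape)
        for k, z in enumerate(depths):
            for img, b in zip(others, baselines):
                d = int(round(focal * b / z))      # integer disparity (toy)
                ev[k] -= (ref - np.roll(img, d, axis=1)) ** 2
        return depths[np.argmax(ev, axis=0)]       # per-pixel depth at the peak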

    Quick and energy-efficient Bayesian computing of binocular disparity using stochastic digital signals

    Reconstruction of the three-dimensional geometry of a visual scene from binocular disparity information is an important problem in computer vision and mobile robotics, and one that can be formulated as Bayesian inference. However, computing the full disparity distribution with an advanced Bayesian model is usually intractable, and proves computationally challenging even with a simple model. In this paper, we show how probabilistic hardware using distributed memory and an alternate representation of data as stochastic bitstreams can solve that problem with high performance and energy efficiency. We put forward a way to express discrete probability distributions using stochastic data representations and to perform Bayesian fusion on those representations, and we show how that approach can be applied to disparity computation. We evaluate the system using a simulated stochastic implementation and discuss possible hardware implementations of such architectures and their potential for sensorimotor processing and robotics. Comment: Preprint of an article submitted for publication in the International Journal of Approximate Reasoning and accepted pending minor revision.
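    The bitstream idea is straightforward to emulate in software: encode each probability as a Bernoulli bitstream whose density equals its value, implement the products required by Bayesian fusion as bitwise ANDs of independent streams, and read a probability back by averaging bits. The following is a minimal software emulation under those assumptions, not a hardware design.

    import numpy as np

    rng = np.random.default_rng(1)

    def to_stream(p, n=10_000):
        """Encode probability p as a stochastic bitstream of density p."""
        return rng.random(n) < p

    def fuse(streams_per_hypothesis):
        """Bayesian fusion: AND the bitstreams of each hypothesis (a product
        of probabilities), read back densities, normalize over hypotheses."""
        posts = np.array([np.logical_and.reduce(s).mean()
                          for s in streams_per_hypothesis])
        return posts / posts.sum()

    # Two likelihood terms per disparity hypothesis, fused stochastically.
    likelihoods = [(0.8, 0.7), (0.3, 0.4), (0.1, 0.2)]
    posterior = fuse([[to_stream(p) for p in ls] for ls in likelihoods])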

    Near real-time stereo vision system

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system comprises two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built, and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention, with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
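    The per-level computation at the heart of such a system (band-pass filter both images, then pick for each pixel the disparity that minimizes a windowed sum of squared differences) can be sketched as follows for rectified inputs. The pyramid construction, subpixel refinement, and Bayesian confidence ranges of the full system are omitted, and the filter sizes are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def bandpass(img, sigma=1.0):
        """Difference-of-Gaussians approximation to a Laplacian pyramid level."""
        return gaussian_filter(img, sigma) - gaussian_filter(img, 2 * sigma)

    def ssd_disparity(left, right, max_disp=16, win=5):
        """Winner-take-all disparity from windowed SSD on band-passed images."""
        L, R = bandpass(left), bandpass(right)
        h, w = L.shape
        cost = np.empty((max_disp + 1, h, w))
        for d in range(max_disp + 1):
            diff = np.full_like(L, 1e6)           # large penalty outside overlap
            diff[:, d:] = (L[:, d:] - R[:, :w - d]) ** 2
            cost[d] = uniform_filter(diff, win)   # windowed SSD via box filter
        return np.argmin(cost, axis=0)            # per-pixel disparity map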