
    Cross-Scale Cost Aggregation for Stereo Matching

    Human beings process stereoscopic correspondence across multiple scales. However, this bio-inspiration is ignored by state-of-the-art cost aggregation methods for dense stereo correspondence. In this paper, a generic cross-scale cost aggregation framework is proposed to allow multi-scale interaction in cost aggregation. We first reformulate cost aggregation from a unified optimization perspective and show that different cost aggregation methods essentially differ in their choices of similarity kernels. Then, an inter-scale regularizer is introduced into the optimization, and solving the new optimization problem leads to the proposed framework. Since the regularization term is independent of the similarity kernel, various cost aggregation methods can be integrated into the proposed general framework. We show that the cross-scale framework is important, as it effectively and efficiently expands state-of-the-art cost aggregation methods and leads to significant improvements when evaluated on the Middlebury, KITTI and New Tsukuba datasets.
    Comment: To appear in the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014), poster, 29.88%
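    A minimal sketch of the core idea may help here: several aggregated cost volumes are fused by solving the L2 inter-scale regularized least-squares problem, whose optimum is one small tridiagonal system shared by every pixel. This is not the authors' implementation; coarser "scales" are emulated with progressively larger box-filter windows on a full-resolution cost volume rather than true image pyramids, and the weight `lam` and all function names are illustrative.

```python
# Hedged sketch of cross-scale cost aggregation.  Coarser "scales" are emulated
# with larger box-filter windows on a single full-resolution cost volume; the
# paper instead uses image pyramids.  `lam` and all names are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def raw_cost_volume(left, right, max_disp):
    """Per-pixel absolute-difference matching cost, shape (H, W, D)."""
    H, W = left.shape
    cost = np.zeros((H, W, max_disp), dtype=np.float32)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)        # crude border handling
        cost[:, :, d] = np.abs(left - shifted)
    return cost

def cross_scale_aggregate(cost, radii=(1, 3, 7, 15), lam=0.3):
    """Aggregate at several 'scales' and fuse them by solving
    min_z sum_s ||z_s - c_s||^2 + lam * sum_s ||z_s - z_{s+1}||^2.
    The optimum is a per-pixel tridiagonal solve that is identical for every
    pixel, so a single SxS system is solved for all pixels at once."""
    S = len(radii)
    # c_s: cost aggregated with a box kernel of the given radius (the similarity-kernel choice).
    c = np.stack([uniform_filter(cost, size=(2*r + 1, 2*r + 1, 1)) for r in radii])
    # A = I + lam * L, where L is the chain-graph Laplacian over scales.
    L = np.diag([1.] + [2.] * (S - 2) + [1.]) - np.eye(S, k=1) - np.eye(S, k=-1)
    A = np.eye(S) + lam * L
    z = np.linalg.solve(A, c.reshape(S, -1)).reshape(c.shape)
    return z[0]                                    # fused finest-scale cost volume

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((60, 80)).astype(np.float32)
    right = np.roll(left, -3, axis=1)              # synthetic 3-pixel disparity
    fused = cross_scale_aggregate(raw_cost_volume(left, right, 8))
    disparity = fused.argmin(axis=2)               # winner-take-all
    print("median disparity:", np.median(disparity))
```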

    Vergence control system for stereo depth recovery

    This paper describes a vergence control algorithm for a 3D stereo recovery system. This work has been developed within the framework of the ROBTET project, whose purpose is to design a teleoperated robotic system for live power line maintenance. The tasks involved require the automatic calculation of paths for standard tasks, collision detection to avoid electrical shocks, force feedback and accurate visual data, and the generation of collision-free real paths. To accomplish these tasks the system needs an exact model of the environment, which is acquired through an active stereoscopic head. A cooperative algorithm using vergence and stereo correlation is presented. The proposed system relies on an algorithm based on phase correlation that tries to keep the vergence on the object of interest. The sharp vergence changes produced by changes of the object of interest are controlled through an estimate of the depth generated by a stereo correspondence system. For some elements of the scene, those aligned with the epipolar plane, large errors arise in both the depth estimation and the phase correlation. To minimize these errors, a laser lighting system is used to help fixation, assuring adequate vergence and depth extraction. The work presented in this paper has been supported by the electric utility IBERDROLA, S.A. under project PIE No. 132.198.
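    As a rough illustration (not the ROBTET implementation), the sketch below estimates the residual horizontal disparity of the fixated patch by phase correlation and converts it into a proportional vergence correction; the focal length, controller gain, and function names are assumptions made for the example.

```python
# Hedged sketch of phase-correlation-based vergence control.  The baseline-free
# proportional controller, focal length, and gain are illustrative; only the
# image patch around the fixation point would normally be used.
import numpy as np

def phase_correlation_shift(patch_l, patch_r):
    """Horizontal shift (pixels) between two patches via the cross-power spectrum."""
    F_l, F_r = np.fft.fft2(patch_l), np.fft.fft2(patch_r)
    cross = F_l * np.conj(F_r)
    cross /= np.abs(cross) + 1e-12             # normalise -> phase-only correlation
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    dx = peak[1]
    if dx > patch_l.shape[1] // 2:             # wrap negative shifts
        dx -= patch_l.shape[1]
    return dx

def vergence_correction(dx_pixels, focal_px=800.0, gain=0.8):
    """Angle increment (radians) that drives the residual disparity of the
    fixated object towards zero; a simple proportional controller is assumed."""
    return gain * np.arctan2(dx_pixels, focal_px)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    left = rng.random((64, 64))
    right = np.roll(left, -5, axis=1)          # fixated object 5 px off vergence
    dx = phase_correlation_shift(left, right)
    print("residual disparity:", dx, "px; vergence step:",
          vergence_correction(dx), "rad")
```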

    Learning sparse representations of depth

    This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of a stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, typically obtained by laser range scanners or structured-light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement the smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm and then held fixed while the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving the solutions of state-of-the-art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using the learned representations leads to state-of-the-art denoising of depth maps obtained from laser range scanners and a time-of-flight camera. Furthermore, we show that adding the sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm of Boykov et al. and the more recent algorithm of Woodford et al.
    Comment: 12 pages
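    A minimal sketch of the non-stationary idea, assuming a fixed random dictionary and a per-element precision weight in the data term so that uncertain depth samples contribute less to the reconstruction; the paper's dictionary learning and two-layer MRF inference are not reproduced, and `weighted_ista`, `lam`, and the noise profile are illustrative.

```python
# Hedged sketch of sparse inference under a spatially varying (non-stationary)
# noise model: each element of the patch gets its own precision weight.
import numpy as np

def weighted_ista(x, D, w, lam=0.1, n_iter=200):
    """Minimise 0.5 * || sqrt(w) * (x - D a) ||^2 + lam * ||a||_1 via ISTA.
    x: patch (n,), D: dictionary (n, k), w: per-element precision weights (n,)."""
    Dw = D * w[:, None]                            # precision-weighted dictionary (for step size)
    step = 1.0 / np.linalg.norm(Dw.T @ D, 2)       # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (w * (D @ a - x))             # gradient of the weighted data term
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)   # soft threshold
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, k = 64, 128
    D = rng.standard_normal((n, k))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    a_true = np.zeros(k)
    a_true[rng.choice(k, 5, replace=False)] = 1.0
    clean = D @ a_true
    noise_std = np.where(np.arange(n) < n // 2, 0.05, 0.5)   # spatially varying noise
    x = clean + noise_std * rng.standard_normal(n)
    a = weighted_ista(x, D, w=1.0 / noise_std**2, lam=0.1)
    print("reconstruction error:", np.linalg.norm(D @ a - clean))
```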

    Pedestrian detection in uncontrolled environments using stereo and biometric information

    A method for pedestrian detection from challenging real-world outdoor scenes is presented in this paper. The technique is able to extract multiple pedestrians, of varying orientations and appearances, from a scene even when faced with large and multiple occlusions. The technique is also robust to changing background lighting conditions and effects such as shadows. It applies an enhanced method from which reliable disparity information can be obtained even from untextured homogeneous areas within a scene. This is used in conjunction with ground plane estimation and biometric information to obtain reliable pedestrian regions. These regions are robust to erroneous areas of disparity data and also to severe pedestrian occlusion, which often occurs in unconstrained scenarios.
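    As a loose illustration of combining disparity, ground plane estimation, and a biometric height constraint (not the authors' pipeline), the sketch below back-projects a disparity map, fits a ground plane to the lower image rows, and keeps connected regions whose height above the plane lies in a plausible human range; all camera parameters, thresholds, and the synthetic scene are assumptions.

```python
# Hedged sketch of disparity + ground-plane + height ("biometric") filtering for
# pedestrian candidate regions.  The disparity estimation, occlusion handling,
# and biometric model of the paper are not reproduced.
import numpy as np
from scipy.ndimage import label

def backproject(disparity, f=500.0, baseline=0.5, cx=160, cy=120):
    """Disparity (pixels) -> camera-frame points (X right, Y down, Z forward)."""
    v, u = np.indices(disparity.shape)
    Z = f * baseline / np.maximum(disparity, 1e-3)
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)

def fit_ground_plane(points, mask):
    """Least-squares plane Y = a*X + b*Z + c fitted to assumed ground pixels."""
    P = points[mask]
    A = np.column_stack([P[:, 0], P[:, 2], np.ones(len(P))])
    coeff, *_ = np.linalg.lstsq(A, P[:, 1], rcond=None)
    return coeff                                   # (a, b, c)

def pedestrian_candidates(disparity, h_min=0.5, h_max=2.0):
    pts = backproject(disparity)
    ground_mask = np.zeros(disparity.shape, bool)
    ground_mask[-40:, :] = True                    # assume the bottom rows see ground
    a, b, c = fit_ground_plane(pts, ground_mask)
    # Height above the plane (Y points down, hence the sign flip).
    height = (a * pts[..., 0] + b * pts[..., 2] + c) - pts[..., 1]
    candidate = (height > h_min) & (height < h_max)
    regions, n = label(candidate)                  # connected candidate regions
    return regions, n

if __name__ == "__main__":
    H, W = 240, 320
    v = np.arange(H)[:, None].astype(float)
    disparity = np.maximum(0.2 * (v - 100), 0.01) * np.ones((H, W))  # sloping ground
    disparity[60:180, 140:170] = 18.0              # an upright foreground object
    regions, n = pedestrian_candidates(disparity)
    print("candidate regions:", n)
```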