Stereo and ToF Data Fusion by Learning from Synthetic Data
Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper we present a novel framework for data fusion where the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused enforcing the local consistency of depth data, taking into account the estimated confidence information. The deep network is trained using a synthetic dataset and we show how the classifier is able to generalize to different data, obtaining reliable estimations not only on synthetic data but also on real world scenes. Experimental results show that the proposed approach increases the accuracy of the depth estimation on both synthetic and real data and that it is able to outperform state-of-the-art methods.
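The core idea of the abstract above can be sketched as a per-pixel blend of the two depth sources, weighted by their confidences. This is a minimal illustration only: the paper's actual method estimates the confidences with a CNN and enforces local consistency, whereas the function below simply takes confidence maps as given.

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, eps=1e-6):
    """Per-pixel convex combination of two depth maps.

    d_tof, d_stereo : (H, W) depth maps from the ToF sensor and stereo
    c_tof, c_stereo : (H, W) confidence maps in [0, 1]
    """
    # Normalize by the total confidence so the result stays a weighted
    # average; eps avoids division by zero where both confidences vanish.
    w = c_tof + c_stereo + eps
    return (c_tof * d_tof + c_stereo * d_stereo) / w

# Toy example: ToF reports depth 1.0 with high confidence, stereo
# reports 3.0 with low confidence, so fusion leans toward ToF.
d_tof = np.full((2, 2), 1.0)
d_stereo = np.full((2, 2), 3.0)
c_tof = np.full((2, 2), 0.75)
c_stereo = np.full((2, 2), 0.25)
fused = fuse_depth(d_tof, d_stereo, c_tof, c_stereo)
# fused is pulled toward the ToF value (about 1.5 here).
```

In the paper this weighting is not independent per pixel; the learned confidences interact with a local-consistency term over neighborhoods, which the sketch omits.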
On the confidence of stereo matching in a deep-learning era: a quantitative evaluation
Stereo matching is one of the most popular techniques to estimate dense depth
maps by finding the disparity between matching pixels on two, synchronized and
rectified images. Alongside with the development of more accurate algorithms,
the research community focused on finding good strategies to estimate the
reliability, i.e. the confidence, of estimated disparity maps. This information
proves to be a powerful cue to naively find wrong matches as well as to improve
the overall effectiveness of a variety of stereo algorithms according to
different strategies. In this paper, we review more than ten years of
developments in the field of confidence estimation for stereo matching. We
extensively discuss and evaluate existing confidence measures and their
variants, from hand-crafted ones to the most recent, state-of-the-art learning
based methods. We study the different behaviors of each measure when applied to
a pool of different stereo algorithms and, for the first time in literature,
when paired with a state-of-the-art deep stereo network. Our experiments,
carried out on five different standard datasets, provide a comprehensive
overview of the field, highlighting in particular both strengths and
limitations of learning-based strategies.
Comment: TPAMI final version
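Among the hand-crafted confidence measures surveyed in work like the above, one classic family compares the best matching cost against the runner-up. The sketch below implements a naive peak-ratio measure over a cost volume; the function name and normalization are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def peak_ratio_naive(cost_volume, eps=1e-6):
    """Naive peak-ratio confidence for stereo matching.

    cost_volume : (H, W, D) matching costs over D disparity hypotheses.
    Returns a (H, W) map: second-lowest cost divided by lowest cost.
    A distinct cost minimum yields a large ratio (reliable match);
    an ambiguous minimum yields a ratio near 1 (unreliable match).
    """
    sorted_costs = np.sort(cost_volume, axis=-1)
    c1 = sorted_costs[..., 0]  # best (lowest) matching cost
    c2 = sorted_costs[..., 1]  # second-best matching cost
    return c2 / (c1 + eps)

# Two pixels: the first has a clear winner among its disparity costs,
# the second is ambiguous, so the first gets higher confidence.
costs = np.array([[[0.1, 0.9, 0.8],
                   [0.4, 0.45, 0.5]]])
conf = peak_ratio_naive(costs)
```

Measures of this kind can be thresholded to flag likely-wrong matches, which is the "powerful cue" role the abstract describes.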
PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Many algorithms in computer vision and robotics make strong assumptions about
uncertainty, and rely on the validity of these assumptions to produce accurate
and consistent state estimates. In practice, dynamic environments may degrade
sensor performance in predictable ways that cannot be captured with static
uncertainty parameters. In this paper, we employ fast nonparametric Bayesian
inference techniques to more accurately model sensor uncertainty. By setting a
prior on observation uncertainty, we derive a predictive robust estimator, and
show how our model can be learned from sample images, both with and without
knowledge of the motion used to generate the data. We validate our approach
through Monte Carlo simulations, and report significant improvements in
localization accuracy relative to a fixed noise model in several settings,
including on synthetic data, the KITTI dataset, and our own experimental
platform.
Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'16), Stockholm, Sweden, May 16-21, 2016
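The gain described above comes from replacing a fixed noise model with per-observation variances. A minimal sketch of that idea, assuming a plain weighted least-squares solver in place of the paper's nonparametric Bayesian machinery (and with the variance "predictions" supplied directly rather than learned):

```python
import numpy as np

def weighted_least_squares(A, b, sigma2):
    """Solve min_x sum_i (a_i . x - b_i)^2 / sigma2_i.

    A      : (N, M) design matrix
    b      : (N,) observations
    sigma2 : (N,) predicted per-observation noise variances
    """
    # Inverse-variance weighting: noisy observations contribute less.
    W = np.diag(1.0 / sigma2)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Simulate 100 observations of a 2-parameter model where the last 20
# come from a degraded sensor (much larger noise variance).
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A = rng.normal(size=(100, 2))
sigma2 = np.where(np.arange(100) < 80, 0.01, 4.0)
b = A @ x_true + rng.normal(size=100) * np.sqrt(sigma2)

# Down-weighting the degraded observations keeps the estimate close
# to x_true, which a fixed noise model cannot guarantee.
x_wls = weighted_least_squares(A, b, sigma2)
```

The paper's contribution is in *predicting* those variances from sample images with nonparametric Bayesian inference; this sketch only shows why accurate variance predictions pay off in the estimator.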