Using Self-Contradiction to Learn Confidence Measures in Stereo Vision
Learned confidence measures gain increasing importance for outlier removal
and quality improvement in stereo vision. However, acquiring the necessary
training data is typically a tedious and time-consuming task that involves
manual interaction, active sensing devices and/or synthetic scenes. To overcome
this problem, we propose a new, flexible, and scalable way for generating
training data that only requires a set of stereo images as input. The key idea
of our approach is to use different viewpoints for reasoning about
contradictions and consistencies between multiple depth maps generated with the
same stereo algorithm. This enables us to generate a huge amount of training
data in a fully automated manner. Among other experiments, we demonstrate the
potential of our approach by boosting the performance of three learned
confidence measures on the KITTI2012 dataset by simply training them on a vast
amount of automatically generated training data rather than a limited amount of
laser ground truth data.
Comment: This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2016. The copyright was transferred to IEEE
(https://www.ieee.org). The official version of the paper will be made
available on IEEE Xplore (http://ieeexplore.ieee.org). This version of
the paper also contains the supplementary material, which will not appear
on IEEE Xplore.
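The contradiction-and-consistency idea above can be sketched as a check between the disparity maps a stereo algorithm produces for different views, turning agreement into automatic confidence labels. The sketch below uses a simple two-view left-right check; the function name, the threshold `tau`, and the reduction to two views are assumptions for illustration, since the paper reasons over multiple viewpoints.

```python
import numpy as np

def consistency_labels(disp_left, disp_right, tau=1.0):
    """Label each left-image pixel as confident (1) or outlier (0) by
    checking whether the right view's disparity map agrees with the
    left view's prediction at the matching pixel (threshold tau)."""
    h, w = disp_left.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = int(round(x - d))  # matching column in the right view
            if 0 <= xr < w and abs(d - disp_right[y, xr]) <= tau:
                labels[y, x] = 1  # the two views are consistent here
    return labels
```

Labels produced this way require no laser ground truth, which is what lets the approach scale to arbitrarily many stereo pairs.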
Guided Stereo Matching
Stereo is a prominent technique to infer dense depth maps from images, and
deep learning further pushed forward the state-of-the-art, making end-to-end
architectures unrivaled when enough data is available for training. However,
deep networks suffer from significant drops in accuracy when dealing with new
environments. Therefore, in this paper, we introduce Guided Stereo Matching, a
novel paradigm that leverages a small amount of sparse yet reliable depth
measurements retrieved from an external source to mitigate this
weakness. The additional sparse cues required by our method can be obtained
with any strategy (e.g., a LiDAR) and used to enhance features linked to
corresponding disparity hypotheses. Our formulation is general and fully
differentiable, thus allowing the additional sparse inputs to be exploited
both in pre-trained deep stereo networks and when training a new instance from
scratch. Extensive experiments on three standard datasets and two
state-of-the-art deep architectures show that even with a small set of sparse
input cues, i) the proposed paradigm enables significant improvements to
pre-trained networks. Moreover, ii) training from scratch notably increases
accuracy and robustness to domain shifts. Finally, iii) it proves
effective even with traditional stereo algorithms such as SGM.
Comment: Accepted at CVPR 2019.
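The "enhance features linked to corresponding disparity hypotheses" step can be sketched as modulating a cost/feature volume with a peak centered at each hinted disparity. In the sketch below, the Gaussian form and the gain/width parameters `k` and `c` are assumptions made for illustration based on the abstract's description.

```python
import numpy as np

def guide_cost_volume(features, hints, valid, k=10.0, c=1.0):
    """Amplify entries of a (D, H, W) feature/cost volume near each
    hinted disparity with a Gaussian bump; pixels without a sparse
    hint (valid == False) are left untouched."""
    D, H, W = features.shape
    d = np.arange(D, dtype=float).reshape(D, 1, 1)
    # multiplicative modulation peaked at the hinted disparity
    gauss = 1.0 + k * np.exp(-((d - hints[None]) ** 2) / (2 * c ** 2))
    mod = np.where(valid[None], gauss, 1.0)
    return features * mod
```

Because the modulation is a smooth, differentiable function of the volume, it can be dropped into a pre-trained network at inference time or used during training from scratch, matching the "general and fully differentiable" claim.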
Real-time self-adaptive deep stereo
Deep convolutional neural networks trained end-to-end are the
state-of-the-art methods to regress dense disparity maps from stereo pairs.
These models, however, suffer from a notable decrease in accuracy when exposed
to scenarios significantly different from the training set (e.g., real vs.
synthetic images). We argue that it is extremely unlikely to gather
enough samples to achieve effective training/tuning in any target domain, thus
making this setup impractical for many applications. Instead, we propose to
perform unsupervised and continuous online adaptation of a deep stereo network,
which allows for preserving its accuracy in any environment. However, this
strategy is extremely computationally demanding and thus prevents real-time
inference. We address this issue introducing a new lightweight, yet effective,
deep stereo architecture, Modularly ADaptive Network (MADNet) and developing a
Modular ADaptation (MAD) algorithm, which independently trains sub-portions of
the network. By deploying MADNet together with MAD we introduce the first
real-time self-adaptive deep stereo system enabling competitive performance on
heterogeneous datasets.
Comment: Accepted at CVPR 2019 as an oral presentation. Code available at
https://github.com/CVLAB-Unibo/Real-time-self-adaptive-deep-stere
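The Modular ADaptation idea of independently training sub-portions of the network can be sketched as a scheduler that, at each online step, picks one module to update based on how much previous updates to it reduced the unsupervised loss. The class name, the softmax selection rule, and the reward formula below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

class ModularAdapter:
    """Sketch of MAD-style online adaptation: the network is split into
    independently trainable modules, and each step updates only one,
    sampled with probability proportional to its recent usefulness."""

    def __init__(self, n_modules, seed=0):
        self.scores = np.zeros(n_modules)     # running reward per module
        self.rng = np.random.default_rng(seed)

    def pick(self):
        # softmax over scores: more useful modules are picked more often
        p = np.exp(self.scores - self.scores.max())
        p /= p.sum()
        return int(self.rng.choice(len(self.scores), p=p))

    def record(self, idx, loss_before, loss_after, decay=0.9):
        # reward modules whose update reduced the unsupervised loss
        self.scores[idx] = decay * self.scores[idx] + (loss_before - loss_after)
```

Updating a single module per frame is what keeps the per-step cost low enough for real-time inference while the network slowly adapts to the new domain.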