Cascade Residual Learning: A Two-stage Convolutional Neural Network for Stereo Matching
Leveraging recent developments in convolutional neural networks (CNNs),
dense correspondence matching from a stereo pair has been cast as a
learning problem, with performance exceeding that of traditional approaches. However,
it remains challenging to generate high-quality disparities for the inherently
ill-posed regions. To tackle this problem, we propose a novel cascade CNN
architecture composed of two stages. The first stage advances the recently
proposed DispNet by equipping it with extra up-convolution modules, leading to
disparity images with more details. The second stage explicitly rectifies the
disparity initialized by the first stage; it couples with the first stage and
generates residual signals across multiple scales. The summation of the outputs
from the two stages gives the final disparity. As opposed to directly learning
the disparity at the second stage, we show that residual learning provides more
effective refinement. Moreover, it benefits the training of the overall
cascade network. Experiments show that our cascade residual learning
scheme provides state-of-the-art performance for matching stereo
correspondence. At the time of submission of this paper, our method ranks
first on the KITTI 2015 stereo benchmark, surpassing prior works by a
noteworthy margin.
Comment: Accepted at ICCVW 2017. The first two authors contributed equally to this paper.
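The coarse-to-fine residual idea described above can be illustrated with a minimal sketch. This is a hypothetical helper, not the authors' implementation: starting from a coarse first-stage disparity, each finer scale upsamples the current estimate by 2x (doubling disparity values to keep them in pixel units) and adds the residual predicted by the second stage at that scale.

```python
import numpy as np

def multiscale_residual_refinement(coarse_disparity, residuals):
    """Sketch of multi-scale residual refinement (hypothetical, for illustration).

    coarse_disparity: (H, W) first-stage disparity at the coarsest scale.
    residuals: list of second-stage residual maps, ordered coarse -> fine,
               where each map is twice the resolution of the previous estimate.
    """
    disp = coarse_disparity
    for res in residuals:
        # Nearest-neighbour 2x upsampling; disparity values are doubled
        # because horizontal offsets scale with image width.
        disp = 2.0 * np.kron(disp, np.ones((2, 2)))
        disp = disp + res  # second stage corrects, rather than re-predicts
    return disp
```

The point of learning a residual rather than a fresh disparity is that the second stage only has to model the (small) correction signal, which the abstract argues is easier to optimize.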
Semantic Visual Localization
Robust visual localization under a wide range of viewing conditions is a
fundamental problem in computer vision. Handling the difficult cases of this
problem is not only very challenging but also of high practical relevance,
e.g., in the context of life-long localization for augmented reality or
autonomous robots. In this paper, we propose a novel approach based on a joint
3D geometric and semantic understanding of the world, enabling it to succeed
under conditions where previous approaches failed. Our method leverages a novel
generative model for descriptor learning, trained on semantic scene completion
as an auxiliary task. The resulting 3D descriptors are robust to missing
observations by encoding high-level 3D geometric and semantic information.
Experiments on several challenging large-scale localization datasets
demonstrate reliable localization under extreme viewpoint, illumination, and
geometry changes.
D2-Net: A Trainable CNN for Joint Detection and Description of Local Features
In this work we address the problem of finding reliable pixel-level
correspondences under difficult imaging conditions. We propose an approach
where a single convolutional neural network plays a dual role: It is
simultaneously a dense feature descriptor and a feature detector. By postponing
the detection to a later stage, the obtained keypoints are more stable than
their traditional counterparts based on early detection of low-level
structures. We show that this model can be trained using pixel correspondences
extracted from readily available large-scale SfM reconstructions, without any
further annotations. The proposed method obtains state-of-the-art performance
on both the difficult Aachen Day-Night localization dataset and the InLoc
indoor localization benchmark, as well as competitive performance on other
benchmarks for image matching and 3D reconstruction.
Comment: Accepted at CVPR 2019.
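The "postponed detection" idea above can be sketched in a few lines. This is a simplified, hypothetical illustration (not the D2-Net code): given a dense CNN feature map, a pixel is kept as a keypoint if its strongest channel response is also a spatial local maximum, and the descriptor is simply the L2-normalised feature vector at that location, so detection and description share the same network output.

```python
import numpy as np

def detect_and_describe(feature_map):
    """Toy detect-and-describe on a dense feature map (illustrative only).

    feature_map: (H, W, C) array of CNN activations.
    Returns keypoint coordinates and their L2-normalised descriptors.
    """
    H, W, C = feature_map.shape
    response = feature_map.max(axis=2)  # per-pixel detection score
    keypoints, descriptors = [], []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = response[y - 1:y + 2, x - 1:x + 2]
            # Keep spatial local maxima with non-trivial response.
            if response[y, x] == patch.max() and response[y, x] > 0:
                d = feature_map[y, x]
                keypoints.append((y, x))
                descriptors.append(d / (np.linalg.norm(d) + 1e-8))
    return keypoints, np.array(descriptors)
```

Because the detection score is computed on learned features rather than raw intensities, keypoints inherit the invariances of the network, which is the stability argument the abstract makes against early low-level detectors.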