Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth
and camera motion estimation from unstructured video sequences. We achieve this
by simultaneously training depth and camera pose estimation networks using the
task of view synthesis as the supervisory signal. The networks are thus coupled
via the view synthesis objective during training, but can be applied
independently at test time. Empirical evaluation on the KITTI dataset
demonstrates the effectiveness of our approach: 1) monocular depth performing
comparably with supervised methods that use either ground-truth pose or depth
for training, and 2) pose estimation performing favorably with established SLAM
systems under comparable input settings.
Comment: Accepted to CVPR 2017. Project webpage:
https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner
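The view-synthesis supervision described above can be sketched as a photometric reconstruction loss: predicted depth and relative camera pose warp a source frame into the target view, and the photometric difference supervises both networks. Below is a minimal NumPy illustration, not the paper's implementation; nearest-neighbour sampling stands in for the differentiable bilinear sampler, and the intrinsics and image shapes are illustrative assumptions.

```python
import numpy as np

def view_synthesis_loss(target, source, depth, K, R, t):
    # Warp the source frame into the target view using predicted depth
    # and relative camera motion, then compare photometrically (L1).
    h, w = target.shape
    K_inv = np.linalg.inv(K)
    # Pixel grid of the target frame in homogeneous coordinates (3 x N).
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Back-project to 3D with the predicted depth, move into the source
    # camera with the predicted pose, and project back onto the image.
    cam = (K_inv @ pix) * depth.ravel()
    proj = K @ (R @ cam + t[:, None])
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    # Keep only projections that land in front of and inside the source image.
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    warped = source[v[valid], u[valid]]
    return np.abs(warped - target.ravel()[valid]).mean()
```

With identity motion and the source equal to the target, the warp reproduces the target pixels and the loss vanishes, which is the fixed point the training objective pulls toward.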
DeepSLAM: A Robust Monocular SLAM System with Unsupervised Deep Learning
In this paper, we propose DeepSLAM, a novel unsupervised deep learning-based visual Simultaneous Localization and Mapping (SLAM) system. DeepSLAM training is fully unsupervised since it only requires stereo imagery rather than annotated ground-truth poses; at test time it takes a monocular image sequence as input, making it a monocular SLAM paradigm. DeepSLAM consists of several essential components, including Mapping-Net, Tracking-Net, Loop-Net and a graph optimization unit. Specifically, the Mapping-Net is an encoder-decoder architecture for describing the 3D structure of the environment, while the Tracking-Net is a Recurrent Convolutional Neural Network (RCNN) architecture for capturing the camera motion. The Loop-Net is a pre-trained binary classifier for detecting loop closures. DeepSLAM can simultaneously generate pose estimates, depth maps and outlier rejection masks. We evaluate its performance on various datasets and find that DeepSLAM achieves good pose estimation accuracy and is robust in some challenging scenes.
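The Loop-Net above is a pre-trained binary classifier over image pairs. As a minimal stand-in for that idea, loop-closure candidates can be found by thresholding the cosine similarity between learned frame embeddings; the embedding dimensionality and the threshold below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def detect_loop(emb_query, emb_db, threshold=0.9):
    # Normalize the query and database embeddings to unit length.
    q = emb_query / np.linalg.norm(emb_query)
    db = emb_db / np.linalg.norm(emb_db, axis=1, keepdims=True)
    # Cosine similarity of the query frame against every stored frame.
    sims = db @ q
    # Frames scoring above the threshold are loop-closure candidates,
    # to be verified and passed to the graph optimization unit.
    return np.where(sims > threshold)[0]
```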
GANVO: Unsupervised Deep Monocular Visual Odometry and Depth Estimation with Generative Adversarial Networks
In the last decade, supervised deep learning approaches have been extensively
employed in visual odometry (VO) applications, but they are not feasible in
environments where labelled data is scarce. On the other hand,
unsupervised deep learning approaches for localization and mapping in unknown
environments from unlabelled data have received comparatively less attention in
VO research. In this study, we propose a generative unsupervised learning
framework that predicts 6-DoF pose camera motion and monocular depth map of the
scene from unlabelled RGB image sequences, using deep convolutional Generative
Adversarial Networks (GANs). We create a supervisory signal by warping view
sequences and adopting re-projection error minimization as the objective loss
function for the multi-view pose estimation and single-view depth
generation networks. Detailed quantitative and qualitative evaluations of the
proposed framework on the KITTI and Cityscapes datasets show that the proposed
method outperforms both existing traditional and unsupervised deep VO methods
providing better results for both pose estimation and depth recovery.
Comment: Accepted to ICRA 2019.
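The combined objective described above pairs a re-projection term with an adversarial term on the synthesized view. The sketch below assumes an LSGAN-style (least-squares) generator loss and an illustrative weighting `lam`; neither detail is confirmed by the abstract.

```python
import numpy as np

def ganvo_objective(target, synthesized, d_fake, lam=0.01):
    # L1 re-projection error between the real target view and the view
    # synthesized from the predicted depth map and 6-DoF pose.
    l_reproj = np.abs(target - synthesized).mean()
    # LSGAN-style generator term: the discriminator's scores on the
    # synthesized view (d_fake) are pushed toward the "real" label (1).
    l_adv = ((d_fake - 1.0) ** 2).mean()
    return l_reproj + lam * l_adv
```

A perfect reconstruction that also fools the discriminator drives both terms, and hence the whole objective, to zero.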
Pose Graph Optimization for Unsupervised Monocular Visual Odometry
Unsupervised learning based monocular visual odometry (VO) has lately drawn
significant attention for its label-free learning ability and its
robustness to camera parameters and environmental variations. However,
partially due to the lack of drift correction techniques, these methods are
still far less accurate than geometric approaches for large-scale odometry
estimation. In this paper, we propose to leverage graph optimization and loop
closure detection to overcome limitations of unsupervised learning based
monocular visual odometry. To this end, we propose a hybrid VO system which
combines an unsupervised monocular VO called NeuralBundler with a pose graph
optimization back-end. NeuralBundler is a neural network architecture that uses
temporal and spatial photometric losses as the main supervision and generates a
windowed pose graph consisting of multi-view 6DoF constraints. We propose a novel
pose cycle consistency loss to relieve the tensions in the windowed pose graph,
leading to improved performance and robustness. In the back-end, a global pose
graph is built from local and loop 6DoF constraints estimated by NeuralBundler
and is optimized over SE(3). Empirical evaluation on the KITTI odometry dataset
demonstrates that 1) NeuralBundler achieves state-of-the-art performance on
unsupervised monocular VO estimation, and 2) our whole approach can achieve
efficient loop closing and show favorable overall translational accuracy
compared to established monocular SLAM systems.
Comment: Accepted to ICRA 2019.
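The pose cycle consistency idea can be illustrated directly: composing the relative SE(3) transforms around a cycle in the windowed pose graph should return the identity, and the deviation from identity gives a loss that "relieves tension" in the graph. Below is a minimal sketch with 4x4 homogeneous matrices; the Frobenius-norm penalty is an illustrative choice, not necessarily the loss used in the paper.

```python
import numpy as np

def pose_cycle_consistency(relative_poses):
    # Compose the relative SE(3) transforms (4x4 homogeneous matrices)
    # around one cycle of the windowed pose graph.
    T = np.eye(4)
    for T_rel in relative_poses:
        T = T @ T_rel
    # A consistent cycle returns to the identity; penalize the deviation.
    return np.linalg.norm(T - np.eye(4))
```

In the full system, the analogous residuals over local and loop 6DoF constraints are what the SE(3) pose graph optimization back-end minimizes globally.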