GANVO: Unsupervised Deep Monocular Visual Odometry and Depth Estimation with Generative Adversarial Networks
In the last decade, supervised deep learning approaches have been extensively
employed in visual odometry (VO) applications, but such approaches are not
feasible in environments where labelled data are scarce. On the other hand,
unsupervised deep learning approaches for localization and mapping in unknown
environments from unlabelled data have received comparatively less attention in
VO research. In this study, we propose a generative unsupervised learning
framework that predicts the 6-DoF camera pose and a monocular depth map of the
scene from unlabelled RGB image sequences, using deep convolutional Generative
Adversarial Networks (GANs). We create a supervisory signal by warping view
sequences and using the re-projection error as the objective loss function for
both the multi-view pose estimation network and the single-view depth
generation network. Detailed quantitative and qualitative evaluations of the
proposed framework on the KITTI and Cityscapes datasets show that the proposed
method outperforms both existing traditional and unsupervised deep VO methods,
providing better results for both pose estimation and depth recovery.
Comment: ICRA 2019 - accepted
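The warping-based supervisory signal described above can be sketched in a few lines. This is a minimal numpy illustration under an assumed pinhole camera model with nearest-neighbour sampling (the paper's networks use differentiable warping inside GAN training); all function and variable names here are illustrative, not the authors' code.

```python
import numpy as np

# A target pixel is back-projected with the predicted depth, moved by the
# predicted pose (R, t), and re-projected into the source view; the
# photometric re-projection error then supervises both the single-view
# depth network and the multi-view pose network without labels.

def reprojection_loss(I_t, I_s, depth, K, R, t):
    H, W = I_t.shape
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # homogeneous pixels
    cam = (K_inv @ pix) * depth.ravel()                     # back-project to 3D
    proj = K @ (R @ cam + t[:, None])                       # rigid motion + project
    us = np.round(proj[0] / proj[2]).astype(int)            # nearest-neighbour sample
    vs = np.round(proj[1] / proj[2]).astype(int)
    valid = (us >= 0) & (us < W) & (vs >= 0) & (vs < H)
    warped = I_s[vs[valid], us[valid]]
    return np.abs(I_t.ravel()[valid] - warped).mean()       # L1 photometric error

# Sanity check: identity motion with any depth should give zero error.
K = np.array([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
I = np.random.default_rng(0).random((64, 64))
loss = reprojection_loss(I, I, np.full((64, 64), 2.0), K, np.eye(3), np.zeros(3))
print(loss)  # 0.0 for identity motion
```

Minimizing this error over the depth and pose predictions is what lets view warping stand in for ground-truth labels.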
Addressing Challenging Place Recognition Tasks using Generative Adversarial Networks
Place recognition is an essential component of Simultaneous Localization And
Mapping (SLAM). Under severe appearance change, reliable place recognition is a
difficult perception task since the same place is perceptually very different
in the morning, at night, or over different seasons. This work addresses place
recognition as a domain translation task. Using a pair of coupled Generative
Adversarial Networks (GANs), we show that it is possible to generate the
appearance of one domain (such as summer) from another (such as winter) without
requiring image-to-image correspondences across the domains. Mapping between
domains is learned from sets of images in each domain without knowing the
instance-to-instance correspondence by enforcing a cyclic consistency
constraint. In the process, meaningful feature spaces are learned for each
domain, the distances in which can be used for the task of place recognition.
Experiments show that the learned features correspond to visual similarity and
can be effectively used for place recognition across seasons.
Comment: Accepted for publication in IEEE International Conference on Robotics
and Automation (ICRA), 201
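The cyclic consistency constraint described above can be illustrated with a toy sketch. The stand-in generators below are trivially invertible functions rather than trained networks, and the adversarial terms of the coupled GANs are omitted; everything here is an illustrative assumption.

```python
import numpy as np

# Without paired images across domains, the mapping is constrained so that
# translating summer -> winter -> summer (and winter -> summer -> winter)
# reproduces the input; this L1 round-trip error is the cycle loss.

def cycle_consistency_loss(x_a, x_b, G_ab, G_ba):
    return (np.abs(G_ba(G_ab(x_a)) - x_a).mean()
            + np.abs(G_ab(G_ba(x_b)) - x_b).mean())

# Toy "generators" that exactly invert each other: the cycle closes,
# so the loss is (numerically) zero.
G_ab = lambda x: x + 1.0
G_ba = lambda x: x - 1.0
rng = np.random.default_rng(0)
loss = cycle_consistency_loss(rng.random((8, 8)), rng.random((8, 8)), G_ab, G_ba)
print(loss)  # ~0: the cycle closes
```

In training, this term is added to the adversarial losses of both GANs, which is what lets the mapping be learned from unpaired sets of images.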
Lagrangian Duality in 3D SLAM: Verification Techniques and Optimal Solutions
State-of-the-art techniques for simultaneous localization and mapping (SLAM)
employ iterative nonlinear optimization methods to compute an estimate for
robot poses. While these techniques often work well in practice, they do not
provide guarantees on the quality of the estimate. This paper shows that
Lagrangian duality is a powerful tool to assess the quality of a given
candidate solution. Our contribution is threefold. First, we discuss a revised
formulation of the SLAM inference problem. We show that this formulation is
probabilistically grounded and has the advantage of leading to an optimization
problem with a quadratic objective. The second contribution is the derivation of
the corresponding Lagrangian dual problem. The SLAM dual problem is a (convex)
semidefinite program, which can be solved reliably and globally by
off-the-shelf solvers. The third contribution is to discuss the relation
between the original SLAM problem and its dual. We show that from the dual
problem, one can evaluate the quality (i.e., the suboptimality gap) of a
candidate SLAM solution, and ultimately provide a certificate of optimality.
Moreover, when the duality gap is zero, one can compute a guaranteed optimal
SLAM solution from the dual problem, circumventing non-convex optimization. We
present extensive (real and simulated) experiments supporting our claims and
discuss practical relevance and open problems.
Comment: 10 pages, 4 figures
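The verification idea can be sketched on a toy quadratic problem rather than the paper's full SLAM dual: for min_x xᵀQx subject to ‖x‖² = n, Lagrangian duality yields the lower bound n·λ_min(Q), so comparing a candidate's cost against this bound gives a suboptimality gap, and a (near-)zero gap certifies global optimality. The code below is a hedged illustration of that mechanism, not the paper's semidefinite program.

```python
import numpy as np

def duality_gap(Q, x_hat):
    """Suboptimality gap of a feasible candidate x_hat.

    By weak duality the gap is non-negative; a zero gap is a
    certificate that x_hat is globally optimal.
    """
    primal_cost = float(x_hat @ Q @ x_hat)
    dual_bound = len(x_hat) * np.linalg.eigvalsh(Q)[0]  # n * lambda_min(Q)
    return primal_cost - dual_bound

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = A @ A.T                      # symmetric positive semidefinite
n = 4

w, V = np.linalg.eigh(Q)         # eigenvalues in ascending order
x_opt = np.sqrt(n) * V[:, 0]     # globally optimal: scaled minimal eigenvector
x_bad = np.sqrt(n) * V[:, -1]    # feasible but suboptimal candidate

gap_opt = duality_gap(Q, x_opt)
gap_bad = duality_gap(Q, x_bad)
print(gap_opt)  # ~0: certificate of optimality
print(gap_bad)  # strictly positive gap
```

The paper's contribution is an analogous construction for the SLAM objective, where the dual is a convex semidefinite program solvable by off-the-shelf solvers.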
Radar-on-Lidar: Metric Radar Localization on Prior Lidar Maps
Radar and lidar are two different range sensors, each with its own pros and
cons for various perception tasks on mobile robots or in autonomous driving. In
this paper, a Monte Carlo system is used to localize a robot carrying a
rotating radar sensor on 2D lidar maps. We first train a conditional generative
adversarial network to transfer raw radar data to lidar data and obtain
reliable radar points from the generator. An efficient radar odometry is then
included in the Monte Carlo system. Combining the initial guess from odometry,
a measurement model is proposed to match the radar data against prior lidar
maps for final 2D positioning. We demonstrate the effectiveness of the proposed
localization framework on a public multi-session dataset. The experimental
results show that our system can achieve high accuracy for long-term
localization in outdoor scenes.
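The measurement step of such a Monte Carlo system can be sketched as follows. The occupancy-map layout and the hit-fraction sensor model below are illustrative assumptions, not the paper's exact model: each particle pose transforms the generator's radar points into the map frame, and its weight is the fraction of points that land on occupied cells of the prior 2D lidar map.

```python
import numpy as np

def measurement_weight(particle, points, occ_map, resolution=0.5):
    """Weight one particle by matching radar points against a 2D lidar map."""
    x, y, theta = particle
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    world = points @ R.T + np.array([x, y])        # sensor frame -> map frame
    cells = np.floor(world / resolution).astype(int)
    H, W = occ_map.shape
    valid = ((cells[:, 0] >= 0) & (cells[:, 0] < W)
             & (cells[:, 1] >= 0) & (cells[:, 1] < H))
    hits = occ_map[cells[valid, 1], cells[valid, 0]]
    return hits.mean() if valid.any() else 0.0     # fraction of points on occupied cells

# A wall along x = 5 m in a 20x20-cell map (0.5 m cells); points observed
# from the true pose should all hit occupied cells.
occ = np.zeros((20, 20))
occ[:, 10] = 1.0                                   # column 10 covers x in [5.0, 5.5)
pts = np.column_stack([np.full(20, 5.2), np.linspace(0.1, 9.9, 20)])
w = measurement_weight((0.0, 0.0, 0.0), pts, occ)
print(w)  # 1.0 at the true pose
```

In a full particle filter, these weights (seeded by the radar odometry's initial guess) would drive the resampling step toward the true 2D pose.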