The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots
Deep networks have brought significant advances in robot perception, making it
possible to improve robot capabilities in several visual tasks, ranging from
object detection and recognition to pose estimation, semantic scene
segmentation and many others. Still, most approaches address visual
tasks in isolation, resulting in overspecialized models which achieve strong
performance in specific applications but work poorly in other (often related)
tasks. This is clearly sub-optimal for a robot, which is often required to
perform multiple visual recognition tasks simultaneously in order to properly
act on and interact with its environment. This problem is exacerbated by the
limited computational and memory resources typically available onboard a
robotic platform. The problem of learning flexible models which can handle
multiple tasks in a lightweight manner has recently gained attention in the
computer vision community and benchmarks supporting this research have been
proposed. In this work we study this problem in the robot vision context,
proposing a new benchmark, the RGB-D Triathlon, and evaluating state-of-the-art
algorithms in this novel, challenging scenario. We also define a new evaluation
protocol better suited to the robot vision setting. Results shed light on the
strengths and weaknesses of existing approaches and on open issues, suggesting
directions for future research.

Comment: This work has been submitted to IROS/RAL 201
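The memory argument above can be made concrete with a back-of-the-envelope
parameter count: a shared backbone with lightweight task-specific heads grows
far more slowly with the number of tasks than one overspecialized network per
task. The sizes below are hypothetical round numbers for illustration, not
figures from the paper:

```python
# Illustrative parameter budget: one shared feature extractor with
# task-specific heads versus one full network per task.
# All sizes are hypothetical, chosen only to show the scaling.
backbone_params = 2_000_000   # shared feature extractor
head_params = 50_000          # lightweight per-task output head
num_tasks = 3                 # e.g. recognition, pose estimation, segmentation

independent = num_tasks * (backbone_params + head_params)   # one model per task
shared = backbone_params + num_tasks * head_params          # multi-task model

print(independent, shared)  # prints 6150000 2150000
```

Each extra task costs only one more head in the shared design, which is why
lightweight multi-task models matter on resource-constrained robots.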
How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change
Direct visual localization has recently enjoyed a resurgence in popularity
with the increasing availability of cheap mobile computing power. The
competitive accuracy and robustness of these algorithms compared to
state-of-the-art feature-based methods, together with their natural ability to
yield dense maps, make them an appealing choice for a variety of mobile
robotics applications. However, direct methods remain brittle in the face of
appearance change due to their underlying assumption of photometric
consistency, which is commonly violated in practice. In this paper, we propose
to mitigate this problem by training deep convolutional encoder-decoder models
to transform images of a scene such that they correspond to a previously-seen
canonical appearance. We validate our method in multiple environments and
illumination conditions using high-fidelity synthetic RGB-D datasets, and
integrate the trained models into a direct visual localization pipeline,
yielding improvements in visual odometry (VO) accuracy through time-varying
illumination conditions, as well as improved metric relocalization performance
under illumination change, where conventional methods normally fail. We further
provide a preliminary investigation of transfer learning from synthetic to real
environments in a localization context. An open-source implementation of our
method using PyTorch is available at https://github.com/utiasSTARS/cat-net.

Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the
IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane,
Australia, May 21-25, 2018
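The photometric-consistency assumption this abstract refers to can be sketched
numerically: direct methods minimize per-pixel intensity residuals, which blow
up under an illumination change even when the scene geometry is identical. The
snippet below is a minimal numpy illustration, not the paper's method; the
`canonical` normalization is only an idealized stand-in for the learned
encoder-decoder transformation, and all image data is synthetic:

```python
import numpy as np

def photometric_error(ref, cur):
    """Mean absolute per-pixel intensity difference: the residual that
    direct methods minimize under the photometric-consistency assumption."""
    return float(np.mean(np.abs(ref.astype(np.float64) - cur.astype(np.float64))))

# Same synthetic "scene" under a global illumination change (gain and bias).
ref = np.random.default_rng(0).uniform(0.0, 1.0, size=(48, 64))
cur = np.clip(0.6 * ref + 0.2, 0.0, 1.0)  # darker, biased view of the same scene

err_raw = photometric_error(ref, cur)  # large: consistency is violated

def canonical(img):
    # Idealized stand-in for a learned canonical appearance transformation:
    # map each image to zero mean and unit variance before comparison.
    return (img - img.mean()) / (img.std() + 1e-8)

err_canonical = photometric_error(canonical(ref), canonical(cur))  # near zero

print(err_raw > err_canonical)  # prints True
```

Because the illumination change here is exactly affine, the normalization
removes it entirely; the point of learning the transformation is to handle the
non-affine, spatially varying appearance changes real scenes exhibit.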