Multi-Task Domain Adaptation for Deep Learning of Instance Grasping from Simulation
Learning-based approaches to robotic manipulation are limited by the
scalability of data collection and accessibility of labels. In this paper, we
present a multi-task domain adaptation framework for instance grasping in
cluttered scenes by utilizing simulated robot experiments. Our neural network
takes monocular RGB images and the instance segmentation mask of a specified
target object as inputs, and predicts the probability of successfully grasping
the specified object for each candidate motor command. The proposed transfer
learning framework trains a model for instance grasping in simulation and uses
a domain-adversarial loss to transfer the trained model to real robots using
indiscriminate grasping data, which is available both in simulation and the
real world. We evaluate our model in real-world robot experiments, comparing it
with alternative model architectures as well as an indiscriminate grasping
baseline.
Comment: ICRA 201
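For context, the domain-adversarial loss described above is commonly implemented with a gradient-reversal layer in the DANN style. The following is a minimal PyTorch sketch of that general recipe, not the paper's actual architecture: the layer sizes, the 4-channel RGB-plus-mask input, and the 7-dimensional motor command are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; negates and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class GraspDANN(nn.Module):
    """Hypothetical sketch: shared conv features feed a grasp-success head and,
    through gradient reversal, a sim-vs-real domain classifier."""
    def __init__(self, cmd_dim=7, lam=1.0):
        super().__init__()
        self.lam = lam
        # 4 input channels: monocular RGB + instance segmentation mask
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # predicts a success logit for one candidate motor command
        self.grasp_head = nn.Sequential(
            nn.Linear(64 + cmd_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        # classifies whether the features came from simulation or the real robot
        self.domain_head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, rgb_and_mask, motor_cmd):
        f = self.features(rgb_and_mask)
        grasp_logit = self.grasp_head(torch.cat([f, motor_cmd], dim=1))
        domain_logit = self.domain_head(GradReverse.apply(f, self.lam))
        return grasp_logit, domain_logit
```

In this standard recipe, the grasp head would train on simulated instance grasps plus indiscriminate grasps from both domains, while the reversed domain loss pushes the shared features to be indistinguishable across simulation and reality; the paper's exact losses and inputs may differ.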
The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots
Deep networks have brought significant advances in robot perception, improving
the capabilities of robots in several visual tasks, ranging from object
detection and recognition to pose estimation, semantic scene segmentation and
many others. Still, most approaches typically address visual tasks in
isolation, resulting in overspecialized models which achieve strong performance
in specific applications but work poorly in other (often related) tasks. This
is clearly sub-optimal for a robot, which is often required to perform multiple
visual recognition tasks simultaneously in order to properly act and interact
with the environment. This problem is exacerbated by the limited computational
and memory resources typically available onboard a robotic platform. The
problem of learning flexible models which can handle
multiple tasks in a lightweight manner has recently gained attention in the
computer vision community and benchmarks supporting this research have been
proposed. In this work we study this problem in the robot vision context,
proposing a new benchmark, the RGB-D Triathlon, and evaluating state-of-the-art
algorithms in this novel, challenging scenario. We also define a new evaluation
protocol, better suited to the robot vision setting. Results shed light on the
strengths and weaknesses of existing approaches and on open issues, suggesting
directions for future research.
Comment: This work has been submitted to IROS/RAL 201
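The flexible, lightweight multi-task setting this benchmark targets is typically approached with one shared backbone and small per-task heads, so that most parameters are reused across tasks. A hypothetical PyTorch sketch of that pattern follows; the task names, class counts, and layer sizes are invented for illustration and are not the benchmark's definition.

```python
import torch.nn as nn

class MultiTaskRGBD(nn.Module):
    """Hypothetical sketch: one shared encoder over 4-channel RGB-D input,
    with lightweight task-specific classification heads."""
    def __init__(self, n_objects=50, n_scenes=10, n_categories=20):
        super().__init__()
        # shared feature extractor: this is where most parameters live
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one small linear head per task; adding a task adds little memory
        self.heads = nn.ModuleDict({
            "object": nn.Linear(64, n_objects),
            "scene": nn.Linear(64, n_scenes),
            "category": nn.Linear(64, n_categories),
        })

    def forward(self, rgbd, task):
        return self.heads[task](self.backbone(rgbd))
```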
Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images
We propose a simple and efficient method for exploiting synthetic images when
training a Deep Network to predict a 3D pose from an image. The ability to use
synthetic images for training a Deep Network is extremely valuable, as it is
easy to create a virtually infinite training set of such images, while
capturing and annotating real images can be very cumbersome. However, synthetic
images do not resemble real images exactly, and using them for training can
result in suboptimal performance. It was recently shown that for exemplar-based
approaches, it is possible to learn a mapping from the exemplar representations
of real images to the exemplar representations of synthetic images. In this
paper, we show that this approach is more general, and that a network can also
be applied after the mapping to infer a 3D pose: At run time, given a real
image of the target object, we first compute the features for the image, map
them to the feature space of synthetic images, and finally use the resulting
features as input to another network which predicts the 3D pose. Since this
network can be trained very effectively by using synthetic images, it performs
very well in practice, and inference is faster and more accurate than with an
exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for
3D object pose estimation from color images, and the NYU dataset for 3D hand
pose estimation from depth maps. We show that it allows us to outperform the
state of the art on both datasets.
Comment: CVPR 201
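The run-time pipeline described above (extract features from the real image, map them into the synthetic feature space, then regress the 3D pose from the mapped features) could look roughly like the following PyTorch sketch. The residual form of the mapping, the layer sizes, and the 6-dimensional pose output are assumptions for illustration, not the paper's exact design.

```python
import torch.nn as nn

class FeatureMappingPose(nn.Module):
    """Hypothetical sketch: real-image features are mapped into the synthetic
    feature space, then a pose network trained on synthetic features predicts
    the 3D pose."""
    def __init__(self, feat_dim=128, pose_dim=6):
        super().__init__()
        # feature extractor applied to the input image (sizes assumed)
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # residual MLP mapping real features toward the synthetic feature space
        self.mapper = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # pose regressor, trainable cheaply on abundant synthetic features
        self.pose_net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, pose_dim))

    def forward(self, real_image):
        f_real = self.extractor(real_image)
        f_mapped = f_real + self.mapper(f_real)  # map into synthetic space
        return self.pose_net(f_mapped)
```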