Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar, or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 Figures, submitted to IEEE Transactions on Robotics
Implicit 3D Orientation Learning for 6D Object Detection from RGB Images
We propose a real-time RGB-based pipeline for object detection and 6D pose
estimation. Our novel 3D orientation estimation is based on a variant of the
Denoising Autoencoder that is trained on simulated views of a 3D model using
Domain Randomization. This so-called Augmented Autoencoder has several
advantages over existing methods: It does not require real, pose-annotated
training data, generalizes to various test sensors and inherently handles
object and view symmetries. Instead of learning an explicit mapping from input
images to object poses, it provides an implicit representation of object
orientations defined by samples in a latent space. Our pipeline achieves
state-of-the-art performance on the T-LESS dataset both in the RGB and RGB-D
domain. We also evaluate on the LineMOD dataset where we can compete with other
synthetically trained approaches. We further increase performance by correcting
3D orientation estimates to account for perspective errors when the object
deviates from the image center and show extended results.
Comment: Code available at: https://github.com/DLR-RM/AugmentedAutoencode
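The implicit-orientation idea in this abstract can be illustrated with a codebook lookup: latent codes of rendered model views are stored, and the code of a test crop is matched against them by cosine similarity. A minimal sketch, not the authors' implementation — the `encoder`, the rendered views, and the rotation list are placeholders:

```python
import numpy as np

def build_codebook(encoder, rendered_views, rotations):
    """Encode synthetic renderings of the 3D model into unit-norm latent
    codes. rendered_views: iterable of view images (placeholders here);
    rotations: the rotation associated with each view."""
    codes = np.stack([encoder(v) for v in rendered_views])
    codes /= np.linalg.norm(codes, axis=1, keepdims=True)
    return codes, rotations

def estimate_orientation(encoder, crop, codes, rotations):
    """Return the rotation whose latent code has the highest cosine
    similarity to the code of the detected object crop."""
    z = encoder(crop)
    z = z / np.linalg.norm(z)
    sims = codes @ z  # cosine similarities against the whole codebook
    return rotations[int(np.argmax(sims))]
```

The orientation is thus never regressed explicitly; it is read off from the nearest sample in latent space, which is what lets the representation absorb object and view symmetries.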
Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation
Interacting with the environment using hands is one of the distinctive
abilities of humans with respect to other species. This aptitude is reflected
in the crucial role played by object manipulation in the world that we have
shaped for ourselves. With a view to bringing robots outside industry to support
people in everyday life, the ability to manipulate objects
autonomously and in unstructured environments is therefore one of the basic
skills they need. Autonomous manipulation is characterized by great
complexity, especially regarding the processing of sensor information to
perceive the surrounding environment. Humans rely on vision for wide-ranging
three-dimensional information, proprioception for awareness of
the relative position of their own body in space, and the sense of touch
for local information when physical interaction with objects happens. The
study of autonomous manipulation in robotics aims at transferring similar
perceptive skills to robots so that, combined with state-of-the-art control
techniques, they can achieve similar performance in manipulating
objects. The great complexity of this task makes autonomous
manipulation one of the open problems in robotics, one that has been drawing
increasing research attention in recent years.
In this Thesis, we propose possible solutions to some key components
of autonomous manipulation, focusing in particular on the perception
problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information
to be processed for inferring how to interact with objects. The object
modeling and grasping pipeline based on superquadric functions that we designed
meets this need, since it reconstructs the object's 3D model from a partial
point cloud and computes a suitable hand pose for grasping the object.
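The superquadric representation behind such a pipeline is a closed-form inside-outside function, which a fitting procedure minimizes over the partial point cloud. A toy sketch of that function and a bare least-squares residual — pose recovery and the full cost weighting of the actual method are omitted, and the names are illustrative:

```python
import numpy as np

def superquadric_F(points, dims, eps):
    """Inside-outside function of an axis-aligned superquadric.
    F < 1 inside, F = 1 on the surface, F > 1 outside.
    dims = (a1, a2, a3) semi-axes; eps = (e1, e2) shape exponents."""
    a1, a2, a3 = dims
    e1, e2 = eps
    x, y, z = np.abs(points).T
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

def fit_residual(points, dims, eps):
    """Mean squared deviation of F from the surface value 1, the quantity
    a fitting optimizer would drive to zero over the point cloud."""
    F = superquadric_F(points, dims, eps)
    return float(np.mean((F - 1.0) ** 2))
```

With `eps = (1, 1)` the function reduces to an ellipsoid, so points on the unit sphere yield `F = 1` for `dims = (1, 1, 1)`; varying the exponents sweeps the same formula between box-like and ellipsoidal shapes, which is what makes a single low-dimensional model usable for grasp-pose computation.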
Retrieving object information with touch sensors only is a relevant skill
that becomes crucial when vision is occluded, as happens for instance during
physical interaction with the object. We addressed this problem with
the design of a novel tactile localization algorithm, named Memory Unscented
Particle Filter, capable of localizing and recognizing objects relying solely
on 3D contact points collected on the object surface. Another key point of
autonomous manipulation we report on in this Thesis is bi-manual
coordination. The execution of more advanced manipulation tasks
might in fact require the use and coordination of two arms. Tool use, for instance,
often requires a proper in-hand object pose that can be obtained via
dual-arm re-grasping. In pick-and-place tasks, the initial and
target positions of the object sometimes do not belong to the workspace of the
same arm, requiring the use of one hand for lifting the object and the other for
placing it in the new position. In this regard, we implemented a pipeline for executing
the handover task, i.e., the sequence of actions for autonomously passing an
object from one robot hand to the other.
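The tactile localization idea above can be pictured as the measurement update of a contact-point particle filter: each candidate object pose is weighted by how close the measured contact point lies to the object surface under that pose. This is a minimal sketch of that single step, not the Memory Unscented Particle Filter itself; `surface_dist` is an assumed model-specific helper, and poses are reduced to translations for brevity:

```python
import numpy as np

def tactile_update(particles, weights, contact, surface_dist, sigma=0.005):
    """One measurement update of a contact-point particle filter.

    particles: (N, 3) candidate poses (translations only here);
    weights: (N,) importance weights; contact: measured 3D contact point;
    surface_dist(pose, point) -> distance from the point to the object
    surface under that pose; sigma: contact-sensor noise in meters.
    """
    # Gaussian likelihood of the contact under each candidate pose
    lik = np.array([
        np.exp(-0.5 * (surface_dist(p, contact) / sigma) ** 2)
        for p in particles
    ])
    w = weights * lik
    s = w.sum()
    # if all likelihoods underflow, fall back to a uniform distribution
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))
```

Iterating this update over a stream of contact points concentrates the weight on poses consistent with every touch collected so far, which is how localization from sparse contacts alone becomes feasible.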
The contributions described thus far address specific subproblems of
the more complex task of autonomous manipulation. This differs
from what humans do, in that humans develop their manipulation
skills by learning through experience and trial-and-error strategies. A proper
mathematical formulation for encoding this learning approach is given by
Deep Reinforcement Learning, which has recently proved successful in
many robotics applications. For this reason, in this Thesis we also report
on the six-month experience carried out at the Berkeley Artificial Intelligence
Research laboratory with the goal of studying Deep Reinforcement Learning
and its application to autonomous manipulation.