Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar, or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations. Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation
We develop an approach that benefits from large simulated datasets and takes
full advantage of the limited online data that is most relevant. We propose a
variant of Bayesian optimization that alternates between using informed and
uninformed kernels. With this Bernoulli Alternation Kernel we ensure that
discrepancies between simulation and reality do not hinder adapting robot
control policies online. The proposed approach is applied to a challenging
real-world problem of task-oriented grasping with novel objects. Our further
contribution is a neural network architecture and training pipeline that use
experience from grasping objects in simulation to learn grasp stability scores.
We learn task scores from a labeled dataset with a convolutional network, which
is used to construct an informed kernel for our variant of Bayesian
optimization. Experiments on an ABB Yumi robot with real sensor data
demonstrate success of our approach, despite the challenge of fulfilling task
requirements and high uncertainty over physical properties of objects.Comment: To appear in 2nd Conference on Robot Learning (CoRL) 201
Grasp planning under uncertainty
Advanced robots such as mobile manipulators nowadays offer great opportunities for realistic manipulation tasks. Physical interaction with the environment is an essential capability for service robots acting in unstructured environments such as homes. Thus, manipulation and grasping under uncertainty have become a critical research area within robotics.
This thesis explores techniques that allow a robot to plan grasps in the presence of uncertainty about objects, such as their pose and shape. First, we consider how much information about the graspable object the robot can perceive from a single tactile exploration attempt. Next, a tactile-based probabilistic approach to grasping that aims to maximize the probability of a successful grasp is presented. The approach is then extended with information-gathering actions based on maximal entropy reduction. The combined framework unifies planning for maximally stable grasps with the possibilities of sensor-based grasping and exploration.
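Selecting an information-gathering action by maximal entropy reduction can be illustrated with a small sketch over a discrete belief about object pose. All names and the discrete observation model here are hypothetical, chosen only to make the expected-information-gain computation concrete:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_entropy_after(belief, likelihood):
    """likelihood[o, h] = P(observation o | pose hypothesis h) for one action.
    Returns the posterior entropy averaged over possible observations."""
    p_obs = likelihood @ belief                    # marginal P(o)
    exp_H = 0.0
    for o, po in enumerate(p_obs):
        if po > 0:
            post = likelihood[o] * belief / po     # Bayes update on observing o
            exp_H += po * entropy(post)
    return exp_H

def best_action(belief, likelihoods):
    """Pick the exploration action whose expected entropy reduction
    (information gain) over the pose belief is largest."""
    gains = [entropy(belief) - expected_entropy_after(belief, L)
             for L in likelihoods]
    return int(np.argmax(gains))
```

An action whose observation model cannot discriminate between hypotheses yields zero expected gain, so the planner naturally prefers touches that are informative about the object's pose.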
Another line of research focuses on grasping familiar objects belonging to a specific category. Moreover, the task is included in the planning process, since in many applications the resulting grasp should be not only stable but also task-compatible. The vision-based framework extends the idea of maximizing grasp stability to the novel context of shape uncertainty. Finally, the RGB-D vision-based probabilistic approach is extended to include tactile sensor feedback in the control loop, incrementally improving estimates of object shape and pose and thereby generating more stable, task-compatible grasps.
The results of these studies demonstrate the benefits of applying probabilistic models and combining different sensor measurements in grasp planning, and show that this is a promising direction of research. Development of such approaches contributes, first of all, to the rapidly developing area of household applications and service robotics.