Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for approaches dealing with unknown
objects, the core task is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
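The survey's known/familiar/unknown split can be read as a dispatch on how much prior knowledge the robot has about the observed object. A minimal sketch of that taxonomy follows; all names (`known_db`, `similarity`, the threshold) are illustrative assumptions, not anything defined in the survey.

```python
def synthesize_grasp(obj, known_db, similarity, threshold=0.8):
    """Route an object to a grasp-synthesis strategy per the
    known/familiar/unknown taxonomy. Purely illustrative."""
    if obj in known_db:
        # Known object: recognize it, estimate its pose, reuse stored grasps.
        return ("known", known_db[obj])
    best = max(known_db, key=lambda k: similarity(obj, k), default=None)
    if best is not None and similarity(obj, best) >= threshold:
        # Familiar object: transfer grasps from the most similar known object.
        return ("familiar", known_db[best])
    # Unknown object: fall back to grasp features extracted from perception.
    return ("unknown", None)

# Toy database and similarity function standing in for learned components.
db = {"mug": "handle-grasp"}
sim = lambda a, b: 1.0 if a == b else (0.9 if (a, b) == ("cup", "mug") else 0.0)

print(synthesize_grasp("mug", db, sim))    # ('known', 'handle-grasp')
print(synthesize_grasp("cup", db, sim))    # ('familiar', 'handle-grasp')
print(synthesize_grasp("ball", db, sim))   # ('unknown', None)
```

The three branches mirror the survey's structure: stored grasps for known objects, similarity-based transfer for familiar ones, and feature-based synthesis for unknown ones.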
Open World Assistive Grasping Using Laser Selection
Many people with motor disabilities are unable to complete activities of
daily living (ADLs) without assistance. This paper describes a complete robotic
system developed to provide mobile grasping assistance for ADLs. The system
consists of a robot arm from a Rethink Robotics Baxter robot mounted to an
assistive mobility device, a control system for that arm, and a user interface
with a variety of access methods for selecting desired objects. The system uses
grasp detection to allow previously unseen objects to be picked up by the
system. The grasp detection algorithms also allow for objects to be grasped in
cluttered environments. We evaluate our system in a number of experiments on a
large variety of objects. Overall, we achieve an object selection success rate
of 88% and a grasp detection success rate of 90% in a non-mobile scenario, and
success rates of 89% and 72%, respectively, in a mobile scenario.
Efficient Fully Convolution Neural Network for Generating Pixel Wise Robotic Grasps With High Resolution Images
This paper presents an efficient neural network model to generate robotic
grasps from high-resolution images. The proposed model uses a fully
convolutional neural network to predict a grasp at each pixel of 400×400
high-resolution RGB-D images. It first down-samples the images to extract
features, then up-samples those features to the original input size,
combining local and global features from different feature maps. Compared to
other regression or classification methods for detecting robotic grasps, our
method is closer to segmentation methods, solving the problem pixel-wise. We
use the Cornell Grasp Dataset to train and evaluate the model, achieving high
accuracy of about 94.42% image-wise and 91.02% object-wise, with a fast
prediction time of about 8 ms. We also demonstrate that, without training on
a multi-object dataset, our model can directly output robotic grasp
candidates for different objects because of the pixel-wise implementation.
Comment: Submitted to ROBIO 201
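The pixel-wise formulation means the network's output is a set of per-pixel maps rather than a single regressed rectangle, and a grasp is read off at the highest-quality pixel. A minimal sketch of that decoding step follows, assuming the network emits quality, angle, and width maps (names and map set are hypothetical, not the paper's exact outputs).

```python
import numpy as np

def decode_pixelwise_grasp(quality, angle, width):
    """Pick the best grasp from per-pixel prediction maps.

    quality, angle, width: HxW arrays, one value per pixel
    (quality score, grasp angle in radians, gripper width in pixels).
    Returns (row, col, angle, width) at the highest-quality pixel.
    """
    r, c = np.unravel_index(np.argmax(quality), quality.shape)
    return int(r), int(c), float(angle[r, c]), float(width[r, c])

# Toy 4x4 maps standing in for a 400x400 network output.
q = np.zeros((4, 4))
q[2, 1] = 0.9                  # best grasp predicted at pixel (2, 1)
a = np.full((4, 4), 0.5)       # constant angle map, radians
w = np.full((4, 4), 30.0)      # constant width map, pixels

print(decode_pixelwise_grasp(q, a, w))  # (2, 1, 0.5, 30.0)
```

Because every pixel carries its own prediction, the same decoding applies unchanged to scenes with multiple objects: taking the top-k quality pixels instead of the single argmax yields several grasp candidates at once.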