Densely Supervised Grasp Detector (DSGD)
This paper presents the Densely Supervised Grasp Detector (DSGD), a deep
learning framework that combines CNN structures with layer-wise feature
fusion and produces grasps and their confidence scores at different levels of
the image hierarchy (i.e., global, region, and pixel levels). Specifically,
at the global level, DSGD uses the entire image to predict a grasp. At the
region level, DSGD uses a region proposal network to identify salient regions
in the image and predicts a grasp for each salient region. At the pixel
level, DSGD uses a fully convolutional network to predict a grasp and its
confidence at every pixel. During inference, DSGD selects the most confident
grasp as its output. This selection from hierarchically generated grasp
candidates overcomes the limitations of the individual models. DSGD
outperforms state-of-the-art methods on the Cornell grasp dataset in terms of
grasp accuracy. Evaluation on a multi-object dataset and real-world robotic
grasping experiments shows that DSGD produces highly stable grasps on unseen
objects in new environments, achieving 97% grasp detection accuracy and a 90%
robotic grasping success rate with real-time inference speed.
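The inference-time selection step described above can be sketched in a few
lines. The sketch below is purely illustrative: the per-level detector
outputs and the (x, y, theta, w, h) oriented-rectangle grasp format are
assumptions, not DSGD's actual interfaces.

```python
# Hypothetical sketch of DSGD-style inference: pool the grasp candidates
# produced at the global, region, and pixel levels and keep the single most
# confident one. Each candidate is an assumed (grasp, confidence) pair,
# where a grasp is an (x, y, theta, w, h) oriented rectangle.

def select_best_grasp(global_grasps, region_grasps, pixel_grasps):
    """Return the (grasp, confidence) pair with the highest confidence
    across all hierarchy levels, or None if there are no candidates."""
    candidates = global_grasps + region_grasps + pixel_grasps
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])

# Example: one candidate per level; the region-level grasp wins.
global_grasps = [((120, 80, 0.3, 40, 20), 0.82)]
region_grasps = [((118, 83, 0.4, 38, 18), 0.91)]
pixel_grasps = [((125, 79, 0.2, 42, 22), 0.76)]
print(select_best_grasp(global_grasps, region_grasps, pixel_grasps))
```

Because the three models fail in different ways, taking the argmax over the
pooled candidates is what lets the combined detector compensate for the
weaknesses of any single level.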
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with
unknown objects, the core part is the extraction of specific features that
are indicative of good grasps. Our survey provides an overview of the
different methodologies and discusses open problems in the area of robot
grasping. We also draw a parallel to the classical approaches that rely on
analytic formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics.
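The survey's three-way grouping can be read as a dispatch on how much prior
experience the system has with the object. The sketch below is a purely
illustrative stand-in, assuming a toy grasp database, a label-based
similarity score, and simple grasp records; it is not a system from the
survey.

```python
# Hypothetical dispatch mirroring the known / familiar / unknown taxonomy.
# `database` maps known object labels to stored grasp lists; `similarity`
# scores a new label against a known one in [0, 1]. All names and data
# below are illustrative assumptions.

def synthesize_grasp(obj_label, database, similarity, threshold=0.8):
    """Route grasp synthesis by prior experience with the object."""
    if obj_label in database:
        # Known object: recognition succeeds, reuse stored grasps.
        return ("known", database[obj_label])
    best = max(database, key=lambda k: similarity(obj_label, k), default=None)
    if best is not None and similarity(obj_label, best) >= threshold:
        # Familiar object: transfer grasps from the closest known object.
        return ("familiar", database[best])
    # Unknown object: fall back to grasp-indicative feature extraction.
    return ("unknown", [])

db = {"mug": [("handle", 0.9)], "bottle": [("neck", 0.8)]}
sim = lambda a, b: 0.85 if (a, b) == ("cup", "mug") else 0.1
print(synthesize_grasp("mug", db, sim))   # known: stored grasps reused
print(synthesize_grasp("cup", db, sim))   # familiar: transferred from "mug"
print(synthesize_grasp("fork", db, sim))  # unknown: feature-based fallback
```

The value of the taxonomy is exactly this branching structure: each branch
calls for a different object representation and perceptual process.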
Real-Time Grasp Detection Using Convolutional Neural Networks
We present an accurate, real-time approach to robotic grasp detection based
on convolutional neural networks. Our network performs single-stage regression
to graspable bounding boxes without using standard sliding window or region
proposal techniques. The model outperforms state-of-the-art approaches by 14
percentage points and runs at 13 frames per second on a GPU. Our network can
simultaneously perform classification so that in a single step it recognizes
the object and finds a good grasp rectangle. A modification to this model
predicts multiple grasps per object by using a locally constrained prediction
mechanism. The locally constrained model performs significantly better,
especially on objects that can be grasped in a variety of ways.
Comment: Accepted to ICRA 2015.
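The grasp rectangles regressed by such networks are commonly parameterised
in Cornell-dataset work as a five-dimensional vector (centre, orientation,
gripper opening, plate size). Assuming that convention, recovering the
oriented rectangle's corner points, e.g. for visualisation or
rectangle-overlap evaluation, is straightforward geometry:

```python
import math

# Sketch: convert a regressed 5D grasp (x, y, theta, w, h) -- the common
# Cornell-style grasp-rectangle parameterisation, assumed here -- into the
# four corner points of the oriented rectangle.

def grasp_to_corners(x, y, theta, w, h):
    """Return the 4 corners of a grasp rectangle centred at (x, y),
    rotated by theta radians, with width w (gripper opening) and
    height h (gripper plate size)."""
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate the corner offset by theta, then translate to the centre.
        corners.append((x + dx * c - dy * s, y + dx * s + dy * c))
    return corners

# Axis-aligned case (theta = 0):
print(grasp_to_corners(100, 50, 0.0, 40, 20))
# -> [(80.0, 40.0), (120.0, 40.0), (120.0, 60.0), (80.0, 60.0)]
```

The same representation makes single-stage regression tractable: the
network outputs five numbers per grasp rather than a dense rectangle mask.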
Review of deep learning methods in robotic grasp detection
For robots to attain more general-purpose utility, grasping is a necessary
skill to master. Such general-purpose robots may use their perception
abilities to visually identify grasps for a given object. A grasp describes
how a robotic end-effector can be arranged to securely grab an object and
successfully lift it without slippage. Traditionally, grasp detection
required expert human knowledge to analytically form a task-specific
algorithm, an arduous and time-consuming approach. During the last five
years, deep learning methods have enabled significant advancements in
robotic vision, natural language processing, and automated driving
applications. The success of these methods has driven robotics researchers
to explore their use in task-generalised robotic applications. This paper
reviews the current state of the art in the application of deep learning
methods to generalised robotic grasping and discusses how each element of
the deep learning approach has improved the overall performance of robotic
grasp detection. Several of the most promising approaches are evaluated, and
the one-shot detection method is identified as the most suitable for
real-time grasp detection. The availability of sufficient volumes of
appropriate training data is identified as a major obstacle to effective use
of deep learning approaches, and transfer learning techniques are proposed
as a potential mechanism to address this. Finally, current trends in the
field and potential future research directions are discussed.