On-Policy Pixel-Level Grasping Across the Gap Between Simulation and Reality
Grasp detection in cluttered scenes is a very challenging task for robots.
Generating synthetic grasping data is a popular way to train and test grasp
methods, as in Dex-Net and GraspNet; yet these methods generate training
grasps on 3D synthetic object models but evaluate on images or point clouds
with different distributions, which reduces performance on real scenes due to
sparse grasp labels and covariate shift. To address these problems, we propose
a novel on-policy grasp detection method, which can train and test on the same
distribution with dense pixel-level grasp labels generated on RGB-D images. A
Parallel-Depth Grasp Generation (PDG-Generation) method is proposed to generate
a parallel depth image through a new imaging model of projecting points in
parallel; then this method generates multiple candidate grasps for each pixel
and obtains robust grasps through flatness detection, force-closure metric and
collision detection. Then, a large comprehensive Pixel-Level Grasp Pose Dataset
(PLGP-Dataset) is constructed and released; distinguished from previous
datasets, which contain off-policy data and sparse grasp samples, this dataset
is the first pixel-level grasp dataset, with an on-policy distribution where
grasps are generated based on depth images. Lastly, we build and test a series of
pixel-level grasp detection networks with a data augmentation process for
imbalanced training, which learn grasp poses in a decoupled manner on the input
RGB-D images. Extensive experiments show that our on-policy grasp method
largely overcomes the gap between simulation and reality and achieves
state-of-the-art performance. Code and data are provided at
https://github.com/liuchunsense/PLGP-Dataset
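The "parallel depth image" in the abstract comes from projecting every point along the same viewing axis rather than through a pinhole. A minimal sketch of such an orthographic projection is shown below; the function name, the resolution parameter, and the near-point-wins rule are illustrative assumptions, not the paper's actual PDG-Generation implementation:

```python
import numpy as np

def parallel_depth_image(points, resolution=0.002, size=(224, 224)):
    """Project an (N, 3) point cloud into a depth image with a parallel
    (orthographic) imaging model: points are projected along a common
    viewing axis, so pixel coordinates depend only on x/y, not on depth.
    When several points land on one pixel, the nearest (largest z) wins.
    NOTE: illustrative sketch, not the paper's PDG-Generation method."""
    h, w = size
    depth = np.full(size, np.nan)
    # Map metric x/y coordinates to pixel indices around the image center.
    u = np.round(points[:, 0] / resolution + w / 2).astype(int)
    v = np.round(points[:, 1] / resolution + h / 2).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[valid], v[valid], points[valid, 2]):
        if np.isnan(depth[vi, ui]) or zi > depth[vi, ui]:
            depth[vi, ui] = zi
    return depth
```

Because the projection is parallel, object scale in the image is independent of distance to the camera, which is what lets grasp labels be generated densely per pixel.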
Grasping unknown objects in clutter by superquadric representation
In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial-view objects and incomplete object modelling, well suited to unknown symmetric objects in cluttered scenarios, followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit an SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface-curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used, together with the SQ parameters and the gripper specification, to select the best approach direction and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 robot and real-time results are presented. Initial results indicate that although the method is based on simplistic shape information, it outperforms other learning-based grasping algorithms that also work in clutter in terms of time efficiency and accuracy.
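A superquadric is defined by an inside-outside function of five parameters (three semi-axes, two shape exponents), which is what makes dimension and surface-curvature queries cheap. As a minimal sketch (the standard textbook form, not this paper's fitting code):

```python
import numpy as np

def superquadric_F(p, a1, a2, a3, eps1, eps2):
    """Inside-outside function of a superquadric in its canonical frame.
    F < 1: point inside; F == 1: on the surface; F > 1: outside.
    a1..a3 are the semi-axis lengths; eps1/eps2 are the shape exponents
    (eps1 = eps2 = 1 gives an ellipsoid; values near 0.1 approach a box).
    NOTE: illustrative standard form, not the paper's fitting pipeline."""
    x, y, z = p
    return (np.abs(x / a1) ** (2 / eps2)
            + np.abs(y / a2) ** (2 / eps2)) ** (eps2 / eps1) \
        + np.abs(z / a3) ** (2 / eps1)
```

Fitting then amounts to minimizing a residual of this function over the partial (mirrored) point cloud; once fitted, antipodal contact points can be read directly off the recovered axes and exponents.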
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
Camera viewpoint selection is an important aspect of visual grasp detection,
especially in clutter where many occlusions are present. Where other approaches
use a static camera position or fixed data collection routines, our Multi-View
Picking (MVP) controller uses an active perception approach to choose
informative viewpoints based directly on a distribution of grasp pose estimates
in real time, reducing uncertainty in the grasp poses caused by clutter and
occlusions. In trials of grasping 20 objects from clutter, our MVP controller
achieves 80% grasp success, outperforming a single-viewpoint grasp detector by
12%. We also show that our approach is both more accurate and more efficient
than approaches which consider multiple fixed viewpoints.
Comment: ICRA 2019. Video: https://youtu.be/Vn3vSPKlaEk Code:
https://github.com/dougsm/mvp_gras
Autonomous Sweet Pepper Harvesting for Protected Cropping Systems
In this letter, we present a new robotic harvester (Harvey) that can
autonomously harvest sweet pepper in protected cropping environments. Our
approach combines effective vision algorithms with a novel end-effector design
to enable successful harvesting of sweet peppers. Initial field trials in
protected cropping environments, with two cultivars, demonstrate the efficacy
of this approach, achieving a 46% success rate for unmodified crop and 58% for
modified crop. Furthermore, for the more favourable cultivar we were also able
to detach 90% of sweet peppers, indicating that improvements in the grasping
success rate would result in greatly improved harvesting performance.