CASSL: Curriculum Accelerated Self-Supervised Learning
Recent self-supervised learning approaches focus on using a few thousand data
points to learn policies for high-level, low-dimensional action spaces.
However, scaling this framework to high-dimensional control requires either
scaling up the data-collection effort or using a clever sampling strategy for
training. We present a novel approach - Curriculum Accelerated Self-Supervised
Learning (CASSL) - to train policies that map visual information to high-level,
higher-dimensional action spaces. CASSL orders the sampling of training data
based on control dimensions: learning and sampling are focused on a few
control parameters before the others. The right curriculum for learning
is suggested by variance-based global sensitivity analysis of the control
space. We apply our CASSL framework to learning how to grasp using an adaptive,
underactuated multi-fingered gripper, a challenging system to control. Our
experimental results on a set of novel objects indicate that CASSL provides
significant improvement and generalization compared to baseline methods such
as staged curriculum learning (8% increase) and complete end-to-end learning
with random exploration (14% improvement).
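The curriculum ordering described above rests on variance-based global sensitivity analysis: control dimensions whose variation explains more of the outcome variance are learned first. A minimal sketch of this idea, using a first-order Sobol index estimator on a purely illustrative toy grasp-score function (the function, sample sizes, and dimension count are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_score(x):
    # Toy surrogate for grasp outcome: dimension 0 dominates,
    # dimension 2 barely matters (illustrative only).
    return 3.0 * np.sin(x[:, 0]) + x[:, 1] ** 2 + 0.1 * x[:, 2]

def first_order_sobol(f, dim, n=4096):
    """Estimate first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y)
    with a pick-freeze Monte Carlo estimator."""
    A = rng.uniform(-1.0, 1.0, size=(n, dim))
    B = rng.uniform(-1.0, 1.0, size=(n, dim))
    fA, fB = f(A), f(B)
    var_y = fA.var()
    S = np.empty(dim)
    for i in range(dim):
        AB = B.copy()
        AB[:, i] = A[:, i]  # freeze dimension i to A's values
        S[i] = np.mean(fA * (f(AB) - fB)) / var_y
    return S

S = first_order_sobol(grasp_score, dim=3)
order = np.argsort(-S)  # most sensitive control dimension first
```

Here `order` gives the curriculum: sampling effort is concentrated on the highest-sensitivity dimension before the others are opened up.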
DMFC-GraspNet: Differentiable Multi-Fingered Robotic Grasp Generation in Cluttered Scenes
Robotic grasping is a fundamental skill required for object manipulation in
robotics. Multi-fingered robotic hands, which mimic the structure of the human
hand, can potentially perform complex object manipulation. Nevertheless,
current techniques for multi-fingered robotic grasping frequently predict only
a single grasp per inference, limiting computational efficiency and
versatility, i.e., they model a unimodal grasp distribution. This paper proposes a
differentiable multi-fingered grasp generation network (DMFC-GraspNet) with
three main contributions to address this challenge. Firstly, a novel neural
grasp planner is proposed, which predicts a new grasp representation to enable
versatile and dense grasp predictions. Secondly, a scene creation and label
mapping method is developed for dense labeling of multi-fingered robotic hands,
which allows a dense association of ground truth grasps. Thirdly, we propose to
train DMFC-GraspNet end-to-end using a forward-backward automatic
differentiation approach with a supervised loss, a differentiable collision
loss, and a generalized Q1 grasp metric loss. The proposed approach
is evaluated with the Shadow Dexterous Hand in MuJoCo simulation and ablated
by different choices of loss functions. The results demonstrate the
effectiveness of the proposed approach in predicting versatile and dense
grasps, and in advancing the field of multi-fingered robotic grasping.
Comment: Submitted to IROS 2023 workshop "Policy Learning in Geometric Spaces".
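The abstract describes training with a weighted combination of three objectives. A minimal sketch of how such a composite loss could be assembled; the loss forms, weights, and function names here are assumptions for illustration, not the paper's actual definitions (which operate on predicted grasps, hand geometry, and contact wrenches):

```python
import numpy as np

# Hypothetical stand-ins for the three loss terms named in the abstract.
def supervised_loss(pred, target):
    # Regression toward ground-truth grasp labels (assumed MSE form).
    return float(np.mean((pred - target) ** 2))

def collision_loss(signed_dists, margin=0.0):
    # Penalize penetration: signed distance < margin means a hand point
    # is inside (or too close to) the scene geometry.
    return float(np.mean(np.maximum(margin - signed_dists, 0.0)))

def q1_loss(q1_scores):
    # Higher Q1 (epsilon) grasp quality is better, so minimize its negation.
    return float(-np.mean(q1_scores))

def total_loss(pred, target, signed_dists, q1_scores,
               w_sup=1.0, w_col=0.5, w_q1=0.1):
    # Weighted sum; the weights here are illustrative, not from the paper.
    return (w_sup * supervised_loss(pred, target)
            + w_col * collision_loss(signed_dists)
            + w_q1 * q1_loss(q1_scores))
```

Because each term is differentiable, the same structure works unchanged under an automatic-differentiation framework for end-to-end training.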
Combining Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand
Grasping objects with limited or no prior knowledge about them is a highly
relevant skill in assistive robotics. Still, in this general setting, it has
remained an open problem, especially when it comes to only partial
observability and versatile grasping with multi-fingered hands. We present a
novel, fast, and high-fidelity deep learning pipeline consisting of a shape
completion module based on a single depth image, followed by a
grasp predictor based on the predicted object shape. The shape
completion network is based on VQDIF and predicts spatial occupancy values at
arbitrary query points. As grasp predictor, we use our two-stage architecture
that first generates hand poses using an autoregressive model and then
regresses finger joint configurations per pose. Critical factors turn out to be
sufficient data realism and augmentation, as well as special attention to
difficult cases during training. Experiments on a physical robot platform
demonstrate successful grasping of a wide range of household objects based on a
depth image from a single viewpoint. The whole pipeline is fast, taking only
about 1 s for completing the object's shape (0.7 s) and generating 1000 grasps
(0.3 s).
Comment: 8 pages, 10 figures, 3 tables, 1 algorithm, 2023 IEEE-RAS
International Conference on Humanoid Robots (Humanoids). Project page:
https://dlr-alr.github.io/2023-humanoids-completio
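The pipeline above has a clear three-step structure: occupancy-based shape completion from one depth image, autoregressive hand-pose generation, then per-pose joint regression. A minimal sketch of that orchestration with random stand-ins for the learned models; all function names, array sizes, and the joint count are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the VQDIF-based shape completion network:
# predicts occupancy in [0, 1] at arbitrary 3D query points.
def complete_shape(depth_image, query_points):
    return rng.uniform(0.0, 1.0, size=len(query_points))

# Stage 1 stand-in: autoregressive model proposing 6-DoF hand poses.
def generate_hand_poses(occupancy, query_points, n_grasps):
    return rng.normal(size=(n_grasps, 6))

# Stage 2 stand-in: per-pose regression of finger joint configurations
# (joint count chosen arbitrarily for illustration).
def regress_joint_configs(poses, n_joints=22):
    return rng.normal(size=(len(poses), n_joints))

def grasp_pipeline(depth_image, n_grasps=1000):
    # Query occupancy on points near the observed object region.
    pts = rng.uniform(-0.1, 0.1, size=(2048, 3))
    occ = complete_shape(depth_image, pts)
    poses = generate_hand_poses(occ, pts, n_grasps)
    joints = regress_joint_configs(poses)
    return poses, joints

poses, joints = grasp_pipeline(np.zeros((480, 640)))
```

The two-stage split (poses first, joints second) is what lets the system emit the ~1000 grasps per scene mentioned in the abstract in a single batched pass.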
Deep Learning Approaches to Grasp Synthesis: A Review
Grasping is the process of picking up an object by applying forces and torques at a set of contacts. Recent advances in deep learning methods have allowed rapid progress in robotic object grasping. In this systematic review, we surveyed the publications of the last decade, with a particular interest in grasping an object using all six degrees of freedom of the end-effector pose. Our review found four common methodologies for robotic grasping: sampling-based approaches, direct regression, reinforcement learning, and exemplar approaches. In addition, we found two “supporting methods” that use deep learning to support the grasping process: shape approximation and affordances. We have distilled the publications found in this systematic review (85 papers) into ten key takeaways that we consider crucial for future robotic grasping and manipulation research.