Optimization Model for Planning Precision Grasps with Multi-Fingered Hands
Precision grasps with multi-fingered hands are important for precise placement and in-hand manipulation tasks. Searching for precision grasps on an object represented as a point cloud is challenging due to complex object shapes, the high dimensionality of the search space, collisions, and sensing and positioning noise. This paper proposes an optimization model to search for precision grasps with multi-fingered hands. The model takes a noisy point cloud of the object as input and optimizes the grasp quality by iteratively searching for the palm pose and finger joint positions. Collision between the hand and the object is approximated and penalized by a series of least-squares terms. This collision approximation can handle point cloud representations of objects with complex shapes, and the proposed optimization model locates collision-free optimal precision grasps efficiently, with an average computation time of 0.50 sec/grasp. The search is robust to incompleteness and noise in the point cloud. The effectiveness of the algorithm is demonstrated by experiments.
Comment: Submitted to IROS 2019; experiments on a BarrettHand; 8 pages.
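The core loop described in the abstract, iteratively optimizing the palm pose while penalizing hand-object collision with least-squares terms, can be sketched in miniature. Everything below is a hypothetical toy (a spherical palm proxy, a standoff-based quality term, numeric gradients), not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy object point cloud centered near (0.3, 0, 0).
cloud = rng.normal(scale=0.05, size=(200, 3)) + np.array([0.3, 0.0, 0.0])

def collision_penalty(palm, cloud, radius=0.08):
    # Least-squares style penalty: squared penetration depth of cloud
    # points that fall inside a sphere approximating the palm volume.
    d = np.linalg.norm(cloud - palm, axis=1)
    pen = np.maximum(radius - d, 0.0)
    return np.sum(pen ** 2)

def grasp_cost(palm, cloud, standoff=0.10):
    # Quality proxy (an assumption, not the paper's metric): the palm
    # should sit at a fixed standoff from the object centroid.
    quality = (np.linalg.norm(palm - cloud.mean(axis=0)) - standoff) ** 2
    return quality + 10.0 * collision_penalty(palm, cloud)

# Iterative search: central-difference gradient descent on palm position.
palm = np.array([0.0, 0.0, 0.0])
eps = 1e-4
for _ in range(200):
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        g[i] = (grasp_cost(palm + e, cloud) - grasp_cost(palm - e, cloud)) / (2 * eps)
    palm -= 0.05 * g

print(grasp_cost(palm, cloud))
```

The penalty term keeps the palm out of the cloud while the quality term pulls it toward a graspable pose; the real model additionally optimizes finger joint positions and palm orientation.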
Multi-FinGAN: generative coarse-to-fine sampling of multi-finger grasps
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
While there exist many methods for manipulating rigid objects with parallel-jaw grippers, grasping with multi-finger robotic hands remains a relatively unexplored research topic. Reasoning about and planning collision-free trajectories over the additional degrees of freedom of several fingers is an important challenge that, so far, has involved computationally costly and slow processes. In this work, we present Multi-FinGAN, a fast generative multi-finger grasp sampling method that synthesizes high-quality grasps directly from RGB-D images in about a second. We achieve this by training, in an end-to-end fashion, a coarse-to-fine model composed of a classification network that distinguishes grasp types according to a specific taxonomy and a refinement network that produces refined grasp poses and joint angles. We experimentally validate and benchmark our method against a standard grasp-sampling method on 790 grasps in simulation and 20 grasps on a real Franka Emika Panda. All experimental results using our method show consistent improvements both in grasp quality metrics and grasp success rate. Remarkably, our approach is up to 20-30 times faster than the baseline, a significant improvement that opens the door to feedback-based grasp re-planning and task-informative grasping. Code is available at https://irobotics.aalto.fi/multi-fingan/.
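The coarse-to-fine structure described in this abstract, a classifier over a grasp taxonomy followed by a refinement stage that outputs a pose and joint angles, can be illustrated with placeholder functions. The taxonomy names, feature vector, and all arithmetic below are invented stand-ins, not Multi-FinGAN's trained networks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grasp taxonomy (illustrative class names only).
TAXONOMY = ["power", "precision", "lateral", "tripod"]

def classify_grasp_type(features):
    # Coarse stage: score each taxonomy class from image features.
    # A random linear layer stands in for the trained classifier.
    W = rng.normal(size=(len(TAXONOMY), features.shape[0]))
    logits = W @ features
    return TAXONOMY[int(np.argmax(logits))], logits

def refine_grasp(grasp_type, features):
    # Fine stage: regress a 6-DoF palm pose and joint angles,
    # conditioned on the coarse class (placeholder arithmetic).
    seed = TAXONOMY.index(grasp_type)
    pose = np.tanh(features[:6] + 0.1 * seed)      # x, y, z, roll, pitch, yaw
    joints = np.tanh(features[6:16] + 0.1 * seed)  # 10 joint angles
    return pose, joints

features = rng.normal(size=16)  # stand-in for RGB-D image features
gtype, _ = classify_grasp_type(features)
pose, joints = refine_grasp(gtype, features)
print(gtype, pose.shape, joints.shape)
```

The point of the two stages is that the discrete class decision constrains the continuous regression, which is what makes sampling fast enough for the reported sub-second synthesis.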
Combining Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand
Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting it has remained an open problem, especially when it comes to partial observability and versatile grasping with multi-fingered hands. We present a novel, fast, and high-fidelity deep learning pipeline consisting of a shape completion module based on a single depth image, followed by a grasp predictor based on the predicted object shape. The shape completion network is based on VQDIF and predicts spatial occupancy values at arbitrary query points. As the grasp predictor, we use our two-stage architecture that first generates hand poses using an autoregressive model and then regresses finger joint configurations per pose. Critical factors turn out to be sufficient data realism and augmentation, as well as special attention to difficult cases during training. Experiments on a physical robot platform demonstrate successful grasping of a wide range of household objects based on a depth image from a single viewpoint. The whole pipeline is fast, taking only about 1 s to complete the object's shape (0.7 s) and generate 1000 grasps (0.3 s).
Comment: 8 pages, 10 figures, 3 tables, 1 algorithm; 2023 IEEE-RAS International Conference on Humanoid Robots (Humanoids); Project page: https://dlr-alr.github.io/2023-humanoids-completio
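The pipeline interface in this abstract, an occupancy function queried at arbitrary points followed by a two-stage grasp sampler, can be sketched with toy geometry. The sphere "completion", the shell-sampling pose model, and the joint regressor below are all invented placeholders, not the VQDIF-based networks:

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_occupancy(query_points, center=np.zeros(3), radius=0.1):
    # Stand-in for the shape-completion network: the "completed"
    # shape is just a sphere, so occupancy is 1 inside, 0 outside.
    d = np.linalg.norm(query_points - center, axis=1)
    return (d < radius).astype(float)

def sample_grasps(n, center, radius, standoff=0.05):
    # Stage 1 (autoregressive pose model stand-in): sample palm
    # positions on a shell around the completed shape.
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    palms = center + dirs * (radius + standoff)
    # Stage 2 (per-pose joint regressor stand-in): placeholder angles.
    joints = np.clip(rng.normal(0.5, 0.2, size=(n, 12)), 0.0, 1.5)
    return palms, joints

queries = rng.uniform(-0.2, 0.2, size=(1000, 3))
occ = predict_occupancy(queries)
palms, joints = sample_grasps(1000, np.zeros(3), 0.1)
print(occ.mean(), palms.shape, joints.shape)
```

Decoupling completion from grasp prediction, as the abstract describes, is what lets the same grasp sampler run on any completed shape, and batching 1000 pose/joint samples at once is consistent with the reported 0.3 s generation time.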
Learning To Grasp
Providing robots with the ability to grasp objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the scene and the objects that will be manipulated. The challenge is in building systems that scale beyond specific situational instances and gracefully operate in novel conditions. In the past, heuristic and simple rule-based strategies were used to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with to the level of clutter, camera position, lighting, and a myriad of other relevant variables. With these assumptions in place, it becomes tractable for a roboticist to hardcode desired behavior and build a robotic system capable of completing repetitive tasks. These hardcoded behaviors will quickly fail if the assumptions about the environment are invalidated. In this thesis we demonstrate how a robust grasping system can be built that is capable of operating under a more variable set of conditions without requiring significant engineering of behavior by a roboticist.
This robustness is enabled by a newfound ability to empower novel machine learning techniques with massive amounts of synthetic training data. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping-related tasks. The use of simulation allows for the creation of a wide variety of environments and experiences, exposing the robotic system to a large number of scenarios before it ever operates in the real world. This thesis demonstrates that it is now possible to build systems that work in the real world trained using deep learning on synthetic data. The sheer volume of data that can be produced via simulation enables the use of powerful deep learning techniques whose performance scales with the amount of data available. This thesis explores how deep learning and other techniques can be used to encode these massive datasets for efficient runtime use. The ability to train and test on synthetic data allows for quick iterative development of new perception, planning, and grasp execution algorithms that work in a large number of environments. Creative applications of machine learning and massive synthetic datasets are allowing robotic systems to learn skills and move beyond repetitive hardcoded tasks.