    Combining Shape Completion and Grasp Prediction for Fast and Versatile Grasping with a Multi-Fingered Hand

    Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting it has remained an open problem, especially under partial observability and for versatile grasping with multi-fingered hands. We present a novel, fast, and high-fidelity deep learning pipeline consisting of a shape completion module that operates on a single depth image, followed by a grasp predictor that operates on the predicted object shape. The shape completion network is based on VQDIF and predicts spatial occupancy values at arbitrary query points. As grasp predictor, we use our two-stage architecture that first generates hand poses using an autoregressive model and then regresses finger joint configurations per pose. Critical factors turn out to be sufficient data realism and augmentation, as well as special attention to difficult cases during training. Experiments on a physical robot platform demonstrate successful grasping of a wide range of household objects based on a depth image from a single viewpoint. The whole pipeline is fast, taking only about 1 s to complete the object's shape (0.7 s) and generate 1000 grasps (0.3 s).
    Comment: 8 pages, 10 figures, 3 tables, 1 algorithm, 2023 IEEE-RAS International Conference on Humanoid Robots (Humanoids). Project page: https://dlr-alr.github.io/2023-humanoids-completio
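
    The pipeline described above decomposes into three calls: complete the shape from one depth image, sample hand poses, then regress finger joints per pose. The following is a minimal Python sketch of that dataflow, with stub functions standing in for the trained networks; all names, shapes, and interfaces are illustrative assumptions, not the authors' code.

        import numpy as np

        rng = np.random.default_rng(0)

        def complete_shape(depth_image, query_points):
            """Stand-in for the VQDIF-based completion network: returns an
            occupancy probability in [0, 1] per 3-D query point (random here)."""
            return rng.random(len(query_points))

        def sample_hand_poses(occupancy, n_poses):
            """Stand-in for stage 1 of the grasp predictor: an autoregressive
            model sampling hand poses (random position + quaternion here)."""
            return rng.random((n_poses, 7))

        def regress_finger_joints(occupancy, poses, n_joints=12):
            """Stand-in for stage 2: one finger joint configuration per pose."""
            return rng.random((len(poses), n_joints))

        # Dataflow: single depth image -> completed shape -> 1000 grasp candidates.
        depth = rng.random((480, 640))                      # single-view depth image
        queries = rng.uniform(-0.15, 0.15, size=(4096, 3))  # occupancy query points
        occ = complete_shape(depth, queries)                # ~0.7 s in the paper
        poses = sample_hand_poses(occ, n_poses=1000)        # ~0.3 s for 1000 grasps
        joints = regress_finger_joints(occ, poses)
        print(poses.shape, joints.shape)                    # (1000, 7) (1000, 12)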

    Probabilistic consolidation of grasp experience

    We present a probabilistic model for the joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in its estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints, and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained on and previously unseen objects. Grasp correction had a 75% success rate on known objects and 73% on novel objects. In comparison, a traditional regression model succeeded in correcting grasps in only 38% of cases.
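
    A simple way to see how a joint probabilistic model can both predict across modalities and express confidence is Gaussian conditioning: observe one block of variables, infer the mean and covariance of another. The sketch below uses a single joint Gaussian as a stand-in for the paper's non-linear latent variable model; the dimensions, data, and variable groupings are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Fake training data: columns = [gripper pose (3), tactile imprint (4)].
        X = rng.standard_normal((200, 7))
        X[:, 3:] += 0.8 * X[:, :3] @ rng.standard_normal((3, 4))  # correlate blocks

        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)

        def predict_tactile(pose):
            """Gaussian conditioning p(tactile | pose): the mean is the
            prediction, the covariance expresses confidence in it."""
            a, b = slice(0, 3), slice(3, 7)
            K = S[b, a] @ np.linalg.inv(S[a, a])
            mean = mu[b] + K @ (pose - mu[a])
            cov = S[b, b] - K @ S[a, b]
            return mean, cov

        mean, cov = predict_tactile(np.array([0.1, -0.3, 0.5]))
        print(mean, np.sqrt(np.diag(cov)))  # prediction and per-dim uncertainty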

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
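
    The survey's taxonomy maps naturally onto a dispatch structure: the level of prior object knowledge selects the candidate-generation strategy, followed by a common ranking step. Below is a schematic Python rendering with placeholder generators; the function names, data, and scores are illustrative, not from the survey.

        from enum import Enum, auto

        class ObjectKnowledge(Enum):
            KNOWN = auto()      # full model -> recognition + pose estimation
            FAMILIAR = auto()   # similar objects seen -> similarity matching
            UNKNOWN = auto()    # no prior -> grasp-indicative local features

        # Placeholder candidate generators, one per branch of the taxonomy.
        def grasps_from_object_database(observation):
            return [{"pose": "db_grasp", "score": 0.9}]

        def transfer_grasps_from_similar(observation):
            return [{"pose": "transferred_grasp", "score": 0.7}]

        def grasps_from_local_features(observation):
            return [{"pose": "feature_grasp", "score": 0.5}]

        def synthesize_grasps(observation, level):
            generate = {
                ObjectKnowledge.KNOWN: grasps_from_object_database,
                ObjectKnowledge.FAMILIAR: transfer_grasps_from_similar,
                ObjectKnowledge.UNKNOWN: grasps_from_local_features,
            }[level]
            # Common final step across all three groups: rank the candidates.
            return sorted(generate(observation), key=lambda g: g["score"], reverse=True)

        print(synthesize_grasps("rgbd_frame", ObjectKnowledge.FAMILIAR))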

    Grasp planning under uncertainty

    Advanced robots such as mobile manipulators nowadays offer great opportunities for realistic manipulation tasks. Physical interaction with the environment is an essential capability for service robots acting in unstructured environments such as homes. Thus, manipulation and grasping under uncertainty have become a critical research area within robotics. This thesis explores techniques for a robot to plan grasps in the presence of uncertainty in its knowledge about objects, such as their pose and shape. First, the question of how much information about the graspable object the robot can perceive from a single tactile exploration attempt is considered. Next, a tactile-based probabilistic approach to grasping, which aims to maximize the probability of a successful grasp, is presented. The approach is further extended to include information-gathering actions based on maximal entropy reduction. The combined framework unifies ideas behind planning for maximally stable grasps, sensor-based grasping, and exploration. Another line of research focuses on grasping familiar objects belonging to a specific category. Moreover, the task is also included in the planning process, as in many applications the resulting grasp should be not only stable but also task-compatible. The vision-based framework extends the idea of maximizing grasp stability to the novel context of shape uncertainty. Finally, the RGB-D vision-based probabilistic approach is extended to include tactile sensor feedback in the control loop, incrementally improving estimates of object shape and pose and thereby generating more stable, task-compatible grasps. The results of the studies demonstrate the benefits of applying probabilistic models and combining different sensor measurements in grasp planning, and show that this is a promising direction of research. Development of such approaches contributes above all to the rapidly developing areas of household applications and service robotics.
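
    Two of the decision rules described in this abstract can be illustrated compactly: choosing the grasp that maximizes expected success probability under a belief over object-pose hypotheses, and choosing the sensing action that maximally reduces the entropy of that belief. The toy sketch below assumes a discrete hypothesis set and made-up likelihood models; it is a schematic of the idea, not the thesis' actual formulation.

        import numpy as np

        rng = np.random.default_rng(2)

        belief = np.array([0.5, 0.3, 0.2])  # belief over 3 object-pose hypotheses
        success = rng.random((4, 3))        # success[g, h] = p(grasp g works | pose h)

        def best_grasp(belief):
            """Grasp maximizing expected success probability under the belief."""
            expected = success @ belief
            return int(np.argmax(expected)), float(expected.max())

        def entropy(p):
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        def expected_posterior_entropy(like, belief):
            """like[o, h] = p(observation o | hypothesis h) for one sensing
            action; returns the entropy expected after observing its outcome."""
            p_obs = like @ belief                        # predictive p(o)
            post = like * belief                         # unnormalized posteriors
            post /= post.sum(axis=1, keepdims=True)      # p(h | o) per row
            return sum(p * entropy(q) for p, q in zip(p_obs, post))

        # Three candidate sensing actions, each a 2-outcome likelihood model.
        actions = [rng.dirichlet(np.ones(2), size=3).T for _ in range(3)]

        g, p = best_grasp(belief)
        a = min(range(len(actions)),
                key=lambda i: expected_posterior_entropy(actions[i], belief))
        print(f"best grasp: {g} (p_success={p:.2f}); most informative action: {a}")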