
    Grasping bulky objects with two anthropomorphic hands

    This paper presents an algorithm to compute precision grasps for bulky objects using two anthropomorphic hands. We use objects modeled as point clouds obtained from a sensor camera or from a CAD model. We then process the point clouds, dividing them into two sets of slices in which we search for triplets of points. Each triplet must satisfy some physical conditions based on the structure of the hands. The triplets of points from each set of slices are then evaluated to find a combination that satisfies the force-closure (FC) condition. Once a valid pair of triplets has been found, the inverse kinematics of the system is computed in order to determine whether the corresponding points are reachable by the hands; if so, motion planning and a collision check are performed to assess whether the final grasp configuration of the system is suitable. The paper includes some application examples of the proposed approach.
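
    The paper's own FC test is not reproduced in the abstract; as a rough illustration of this kind of triplet filter, below is a minimal planar sketch that keeps a triplet only if each contact's friction cone contains the direction toward the triangle centroid (a cheap sufficient check, not the full wrench-space condition). The function name, the centroid heuristic, and the friction coefficient are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def planar_fc_triplet(points, normals, mu=0.5):
    """Keep a triplet of planar contacts if every friction cone
    (half-angle atan(mu)) contains the direction from its contact
    point to the triangle centroid."""
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    half_angle = np.arctan(mu)
    centroid = points.mean(axis=0)
    for p, n in zip(points, normals):
        to_centroid = centroid - p
        to_centroid = to_centroid / np.linalg.norm(to_centroid)
        n = n / np.linalg.norm(n)
        # The angle between the inward surface normal and the centroid
        # direction must stay inside the friction cone.
        angle = np.arccos(np.clip(np.dot(n, to_centroid), -1.0, 1.0))
        if angle > half_angle:
            return False
    return True

# Three contacts spread around a disc, inward normals: accepted.
pts = [(1.0, 0.0), (-0.5, 0.87), (-0.5, -0.87)]
nrm = [(-1.0, 0.0), (0.5, -0.87), (0.5, 0.87)]
print(planar_fc_triplet(pts, nrm, mu=0.4))  # True
```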

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics.

    Fast Grasp Planning Using Cord Geometry

    In this paper, we propose a novel idea to address the problem of fast computation of stable force-closure grasp configurations for a multifingered hand and a 3-D rigid object represented as a polygonal soup model. The proposed method performs a low-level shape exploration by wrapping multiple cords around the object in order to quickly isolate promising grasping regions. Around these regions, we compute grasp configurations by applying a variant of the close-until-contact procedure to find the contact points. The finger kinematics and the contact information are then used to filter out unstable grasps. Through many simulated examples with three different anthropomorphic hands, we demonstrate that, compared with previous grasp planners such as the generic grasp planner in Simox, the proposed grasp planner can synthesize grasps that are more natural-looking for humans (as measured by the grasp quality measure skewness) for objects with complex geometries in a short amount of time. Unlike many other planners, this is achieved without costly model preprocessing such as segmentation by parts and medial axis extraction.
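
    The close-until-contact step lends itself to a compact illustration. The sketch below marches a fingertip along its closing direction until a signed distance function reports contact; the paper works on polygon-soup meshes, so the sphere SDF, step size, and function names here are stand-in assumptions for illustration only.

```python
import numpy as np

def close_until_contact(tip, direction, sdf, step=1e-3, max_travel=0.2):
    """March a fingertip along its closing direction until the object's
    signed distance function reports contact (sdf <= 0), or the travel
    budget is exhausted. Returns the contact point or None."""
    tip = np.asarray(tip, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    travelled = 0.0
    while travelled < max_travel:
        if sdf(tip) <= 0.0:
            return tip                      # contact found
        tip = tip + step * d
        travelled += step
    return None                             # finger closed without touching

# Toy object: a sphere of radius 3 cm at the origin, standing in for
# the polygon-soup model used in the paper.
sphere = lambda p: np.linalg.norm(p) - 0.03

contact = close_until_contact([0.1, 0.0, 0.0], [-1.0, 0.0, 0.0], sphere)
print(contact)  # approx [0.03, 0., 0.]: the fingertip stops at the surface
```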

    Probabilistic consolidation of grasp experience

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints, and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained with and previously unseen objects. Grasp correction had a 75% success rate on known objects and 73% on novel objects. We compared our model to a traditional regression model, which succeeded in correcting grasps in only 38% of cases.
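
    The abstract does not spell out the model itself, so as a loose analogue of "estimate plus confidence", here is a minimal Gaussian-process regression sketch: the posterior mean plays the role of the grasp-stability estimate and the posterior variance the role of the expressed confidence. The kernel, feature layout, and toy data are assumptions for illustration, not the authors' latent variable model.

```python
import numpy as np

def rbf(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of feature rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Standard GP posterior: the mean is the stability estimate,
    the variance is the model's confidence in that estimate."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = rbf(X_test, X_train)
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = rbf(X_test, X_test).diagonal() - (v ** 2).sum(axis=0)
    return mean, var

# Toy 2-D grasp features (e.g. pose offset, tactile score);
# labels: 1.0 = stable grasp, 0.0 = unstable grasp.
X = np.array([[0.0, 0.9], [0.1, 0.8], [0.8, 0.2], [0.9, 0.1]])
y = np.array([1.0, 1.0, 0.0, 0.0])
mean, var = gp_predict(X, y, np.array([[0.05, 0.85], [0.5, 0.5]]))
print(mean)  # high near the stable cluster, ambiguous in between
print(var)   # low variance near training data, higher in between
```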

    Autonomous task-based grasping for mobile manipulators

    A fully integrated grasping system for a mobile manipulator to grasp an unknown object of interest (OI) in an unknown environment is presented. The system autonomously scans its environment, models the OI, and plans and executes a grasp, while taking into account base pose uncertainty and obstacles on its way to the object. Due to inherent line-of-sight limitations in sensing, a single scan of the OI often does not reveal enough information to complete grasp analysis; as a result, our system autonomously builds a model of an object via multiple scans from different locations until a grasp can be performed. A volumetric next-best-view (NBV) algorithm is used to model an arbitrary object and terminates modelling when grasp poses are discovered on a partially observed object. Two key sets of experiments are presented: i) modelling and registration error in the OI point cloud model is reduced by selecting viewpoints with more scan overlap, and ii) model construction and grasps are successfully achieved while experiencing base pose uncertainty. A generalized algorithm is presented to discover grasp pose solutions for multiple grasp types for a multi-fingered mechanical gripper using sensed point clouds. The algorithm introduces two key ideas: 1) a histogram of finger contact normals is used to represent a grasp "shape" to guide a gripper orientation search in a histogram of the object's surface normals, and 2) voxel grid representations of the gripper and object(s) are cross-correlated to match finger contact points, i.e., grasp "size", to discover a grasp pose. Constraints, such as collisions with neighbouring objects, are incorporated in the cross-correlation computation. Simulations and preliminary experiments show that 1) grasp poses for three grasp types are found in near real-time, 2) grasp pose solutions are consistent with respect to voxel resolution changes for both partial and complete point cloud scans, 3) a planned grasp pose is executed with a mechanical gripper, and 4) grasp overlap is presented as a feature to identify regions on a partial object model ideal for object transfer or securing an object.
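
    The first key idea, matching a histogram of finger contact normals against a histogram of object surface normals, can be sketched compactly. In the toy sketch below, both normal sets are binned by azimuth and elevation and scored by histogram overlap; the binning scheme, the overlap score, and the example gripper are assumptions for illustration, not the thesis' exact formulation.

```python
import numpy as np

def normal_histogram(normals, n_az=8, n_el=4):
    """Bin unit normals into an azimuth/elevation histogram: the
    grasp-'shape' descriptor sketched here summarises both the
    gripper's finger-contact normals and the object's surface
    normals before the orientation search."""
    normals = np.asarray(normals, dtype=float)
    az = np.arctan2(normals[:, 1], normals[:, 0])        # [-pi, pi]
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))    # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el,
        bins=[n_az, n_el],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist / max(hist.sum(), 1.0)

def shape_match_score(gripper_hist, object_hist):
    """Overlap between the two histograms; a gripper orientation whose
    contact normals are well represented on the object scores high."""
    return np.minimum(gripper_hist, object_hist).sum()

# Parallel-jaw-style gripper: two opposing contact normals.
grip = normal_histogram([[1, 0, 0], [-1, 0, 0]])
# Box-like object: normals spread over the +/-x and +/-y faces.
obj = normal_histogram([[1, 0, 0]] * 10 + [[-1, 0, 0]] * 10 +
                       [[0, 1, 0]] * 10 + [[0, -1, 0]] * 10)
print(shape_match_score(grip, obj))  # ~0.5: both jaw normals present
```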

    Improved Deep Neural Networks for Generative Robotic Grasping

    This thesis provides a thorough evaluation of current state-of-the-art robotic grasping methods and contributes to a subset of data-driven grasp estimation approaches, termed generative models. These models aim to directly generate grasp region proposals from a given image without the need for a separate analysis and ranking step, which can be computationally expensive. This approach allows for fully end-to-end training of a model and quick closed-loop operation of a robot arm. A number of limitations of these generative models are identified and addressed. Contributions are proposed that directly target each stage of the training pipeline, helping the models to form accurate grasp proposals and to generalise better to unseen objects. Firstly, inspired by theories of object manipulation within the mammalian visual system, the use of multi-task learning in existing generative architectures is evaluated. This aims to improve the performance of grasping algorithms when presented with impoverished colour (RGB) data by training models to perform simultaneous tasks such as object categorisation, saliency detection, and depth reconstruction. Secondly, a novel loss function is introduced which improves overall performance by rewarding the network for focusing only on learning grasps at suitable positions. This reduces overall training times and results in better performance on fewer training examples. The last contribution analyses the problems with the most common metric used for evaluating and comparing offline performance between different grasping models and algorithms. To this end, a Gaussian method of representing ground-truth labelled grasps is put forward, from which optimal grasp locations are computed and tested in a simulated grasping environment. The combination of these novel additions to generative models results in improved grasp success, accuracy, and performance on common benchmark datasets compared to previous approaches. Furthermore, the efficacy of these contributions is also tested when transferred to a physical robotic arm, demonstrating the ability to effectively grasp previously unseen 3D-printed objects of varying complexity and difficulty without the need for domain adaptation. Finally, future directions are discussed for generative convolutional models within the overall field of robotic grasping.
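
    The Gaussian ground-truth representation admits a short sketch. Below, each labelled grasp centre contributes a 2-D Gaussian peak to the training target rather than a uniformly filled rectangle, so the loss concentrates reward at the best grasp positions; the function name, sigma, and map size are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

def gaussian_grasp_map(shape, centers, sigma=2.0):
    """Render labelled grasp centres as 2-D Gaussian peaks on a
    training target map, instead of filling grasp rectangles with a
    uniform value."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.zeros(shape)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        target = np.maximum(target, g)   # overlapping grasps keep the peak
    return target

# Two labelled grasp centres on a 32x32 target map.
target = gaussian_grasp_map((32, 32), centers=[(10, 12), (22, 20)])
print(target.max(), target[10, 12])   # peaks of 1.0 at the labelled centres
```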