9 research outputs found

    Multiple-object Grasping Using a Multiple-suction-cup Vacuum Gripper in Cluttered Scenes

    Multiple-suction-cup grasping can improve the efficiency of bin picking in cluttered scenes. In this paper, we propose a grasp planner that lets a vacuum gripper use multiple suction cups to simultaneously grasp multiple objects, or a single object with a large surface. To address the challenge of determining where to grasp and which cups to activate, we use 3D convolution to convolve the affordance areas inferred by a neural network with a gripper kernel, finding graspable positions for each sampled gripper orientation. The kernel used for the 3D convolution encodes cup ID information, so the cups to activate can be determined directly by decoding the convolution results. Furthermore, a sorting algorithm is proposed to find the optimal grasp among the candidates. Our planner exhibited good generality, successfully finding multiple-cup grasps on previous affordance map datasets, and improved picking efficiency using multiple suction cups in physical robot picking experiments. Compared with single-object (single-cup) grasping, multiple-cup grasping increased picking efficiency by 1.45x, 1.65x, and 1.16x for boxes, fruits, and daily necessities, respectively.
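
    The kernel-convolution step lends itself to a compact implementation. Below is a minimal sketch of the idea, assuming a binary 3D affordance map for one sampled gripper orientation; the footprint kernels and function names are illustrative, and unlike the paper's single ID-encoded kernel, this version runs one convolution per cup rather than decoding cup IDs from a combined result.

    import numpy as np
    from scipy.ndimage import correlate  # "convolution" in the CNN sense

    def plan_cup_activation(aff, cup_footprints):
        """For each candidate gripper position, decide which suction cups
        could be activated.

        aff            : (D, H, W) binary affordance map (1 = graspable voxel).
        cup_footprints : dict cup_id -> (D, H, W) binary kernel marking the
                         voxels that cup covers relative to the gripper origin.
        Returns        : dict cup_id -> boolean map, True wherever that cup's
                         entire footprint lies on affordable surface.
        """
        active = {}
        for cup_id, kernel in cup_footprints.items():
            support = correlate(aff.astype(np.int32), kernel.astype(np.int32),
                                mode="constant", cval=0)
            # A cup is usable only where its whole footprint is supported.
            active[cup_id] = support == int(kernel.sum())
        return active

    Positions where several cups are simultaneously active correspond to the multiple-cup grasp candidates described above.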

    Data-Driven Grasp Synthesis—A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
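
    The known/familiar/unknown taxonomy maps naturally onto a dispatch structure. The following hypothetical sketch only illustrates the survey's grouping; every callable and attribute here is a placeholder, not an API the survey defines.

    def synthesize_grasp(scene, recognize, estimate_pose, lookup_grasp,
                         transfer_grasp, grasp_from_features):
        """Dispatch grasp synthesis on how much we know about the object."""
        match = recognize(scene)          # None, or a recognition result
        if match and match.exact:
            # Known object: estimate its pose and reuse a stored grasp.
            pose = estimate_pose(scene, match.model)
            return lookup_grasp(match.model, pose)
        if match:
            # Familiar object: transfer grasps from similar past objects.
            return transfer_grasp(scene, match.neighbours)
        # Unknown object: rely on local features indicative of good grasps.
        return grasp_from_features(scene)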

    Simulation-based functional evaluation of anthropomorphic artificial hands

    This thesis proposes an outline for a framework for an evaluation method that takes as input a model of an artificial hand claimed to be anthropomorphic and produces as output the set of tasks that the hand can perform. The framework is based on studying the literature on the anatomy and functionalities of the human hand and on methods of implementing these functionalities in artificial systems. The thesis also presents a partial implementation of the framework, focusing on tasks of gesturing and grasping using anthropomorphic postures. The focus is on evaluating the intrinsic hardware of robot hands from technical and functional perspectives, including the kinematics of the mechanical structure, the geometry of the contact surface, and the functional force conditions for successful grasps; topics related to control and the aesthetics of robot hand design are not considered.

    The thesis reviews the literature on the anatomy, motion and sensory capabilities, and functionalities of the human hand to define a reference for evaluating artificial hands. It distinguishes between the hand's construction and its functionalities and presents a discussion of anthropomorphism that reflects this distinction. It also reviews key theory related to artificial hands, notable solutions, and existing methods of evaluating artificial hands.

    The thesis then outlines the evaluation framework by defining the action manifold of the anthropomorphic hand, the set of all tasks that a hypothetical ideal anthropomorphic hand should be able to perform, and by analysing those tasks to determine the hand capabilities they involve and how to simulate them. A syntax is defined for describing hand tasks and anthropomorphic postures, and the action manifold serves as a functional reference against which artificial hands' performance is evaluated.

    A method to evaluate anthropomorphic postures using fuzzy logic and a method to evaluate anthropomorphic grasping abilities are proposed and applied to models of the human hand and the InMoov robot hand. The results show the methods' ability to detect successful postures and grasps. Future work towards a full implementation of the framework is suggested.
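
    As a concrete illustration of the fuzzy-logic posture evaluation, here is a minimal sketch: per-joint memberships in human-like ranges are combined with a fuzzy AND (minimum). The joint names, ranges, and trapezoidal membership function are illustrative assumptions, not the thesis' actual rule base.

    def trapezoid(x, a, b, c, d):
        """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c]."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    # Hypothetical human-like flexion ranges (degrees) for two joints.
    HUMAN_RANGES = {"index_mcp": (-10, 0, 80, 90), "thumb_cmc": (-5, 0, 50, 60)}

    def anthropomorphism_score(posture):
        """Fuzzy AND (minimum) over joints: a posture is only as
        anthropomorphic as its least human-like joint angle."""
        return min(trapezoid(posture[j], *r) for j, r in HUMAN_RANGES.items())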

    Improved Deep Neural Networks for Generative Robotic Grasping

    This thesis provides a thorough evaluation of current state-of-the-art robotic grasping methods and contributes to a subset of data-driven grasp estimation approaches termed generative models. These models aim to generate grasp region proposals directly from a given image, without a separate analysis and ranking step that can be computationally expensive. This allows fully end-to-end training of a model and quick closed-loop operation of a robot arm. A number of limitations of these generative models are identified and addressed, through contributions that target each stage of the training pipeline and help the models form accurate grasp proposals and generalise better to unseen objects. Firstly, inspired by theories of object manipulation within the mammalian visual system, the use of multi-task learning in existing generative architectures is evaluated. This aims to improve the performance of grasping algorithms when presented with impoverished colour (RGB) data by training models to perform simultaneous tasks such as object categorisation, saliency detection, and depth reconstruction. Secondly, a novel loss function is introduced which improves overall performance by rewarding the network for focusing only on learning grasps at suitable positions. This reduces overall training times and yields better performance from fewer training examples. The last contribution analyses the problems with the most common metric used to evaluate and compare the offline performance of different grasping models and algorithms. To this end, a Gaussian method of representing ground-truth labelled grasps is put forward, with the resulting optimal grasp locations tested in a simulated grasping environment. The combination of these additions to generative models results in improved grasp success, accuracy, and performance on common benchmark datasets compared with previous approaches. The efficacy of these contributions is also demonstrated on a physical robotic arm, which effectively grasps previously unseen 3D-printed objects of varying complexity and difficulty without the need for domain adaptation. Finally, future directions for generative convolutional models within the overall field of robotic grasping are discussed.
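
    The Gaussian ground-truth representation can be pictured as replacing uniformly filled grasp rectangles with smooth peaks in the target quality map. A minimal sketch, assuming image-space grasp centres; the shapes and sigma value are illustrative assumptions.

    import numpy as np

    def gaussian_quality_map(grasp_centres, shape, sigma=4.0):
        """Build an (H, W) target map with a Gaussian bump at each labelled
        grasp centre, so the training loss rewards the network most at the
        best grasp positions and decays smoothly away from them."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        target = np.zeros(shape, dtype=np.float32)
        for cy, cx in grasp_centres:
            bump = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma**2))
            target = np.maximum(target, bump)  # overlapping grasps keep peaks
        return target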

    Robotics Dexterous Grasping: The Methods Based on Point Cloud and Deep Learning

    Dexterous manipulation, and dexterous grasping in particular, is a fundamental and crucial ability of robots that enables them to perform human-like behaviors. Deploying this ability allows robots to assist or substitute for humans in more complex tasks in daily life and industrial production. This paper gives a comprehensive review of methods for robotic dexterous grasping based on point clouds and deep learning, organized from three perspectives. The core of the classification is the proposed generation-evaluation framework, a new categorization scheme for the mainstream methods; two further classifications, based on learning modes and on applications, are also briefly described. This review aims to offer a guideline for researchers and developers in robotic dexterous grasping.
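
    The generation-evaluation framework at the core of this classification can be summarised in a few lines. A hypothetical sketch; the generator and evaluator interfaces are placeholders, not an API from any surveyed method.

    def generation_evaluation_grasp(point_cloud, generator, evaluator, k=64):
        """Stage 1 (generation): propose k candidate dexterous grasps from
        the point cloud. Stage 2 (evaluation): score each candidate with a
        learned quality model and execute the best one."""
        candidates = generator.propose(point_cloud, k)
        scores = [evaluator.score(point_cloud, g) for g in candidates]
        return max(zip(scores, candidates), key=lambda sc: sc[0])[1]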

    Robotic object manipulation via hierarchical and affordance learning

    With the rise of computation power and machine learning techniques, research interest among roboticists is shifting towards learning-based methods. Against this background, this thesis seeks to develop or enhance learning-based grasping and manipulation systems.

    This thesis first proposes a method, named A2, to improve the sample efficiency of end-to-end deep reinforcement learning algorithms for long-horizon, multi-step, sparse-reward manipulation. The name A2 reflects the fact that the method uses Abstract demonstrations to guide the learning process and Adaptively adjusts exploration according to online performance. Experiments in a series of multi-step grid world tasks and manipulation tasks demonstrate significant performance gains over baselines.

    Then, this thesis develops a hierarchical reinforcement learning approach to solving long-horizon manipulation tasks. Specifically, the proposed universal option framework integrates the knowledge-sharing advantage of goal-conditioned reinforcement learning into hierarchical reinforcement learning. An analysis of the non-stationarity problem in parallel training is also conducted, and the A2 method is employed to address the issue. Experiments in a series of continuous multi-step, multi-outcome block stacking tasks demonstrate significant performance gains, as well as reductions in memory use and repeated computation, over baselines.

    Finally, this thesis studies the interplay between grasp generation and manipulation motion generation, arguing that selecting a good grasp before manipulation is essential for contact-rich manipulation tasks. A theory of general affordances based on the reinforcement learning paradigm is developed and used to represent the relationship between grasp generation and manipulation performance. This leads to the general affordance-aware manipulation framework, which selects task-agnostic grasps for downstream manipulation based on predicted manipulation performance. Experiments on a series of contact-rich hook separation tasks prove the effectiveness of the proposed framework and show significant performance gains from filtering away unsatisfactory grasps.
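
    The affordance-aware grasp selection in the final contribution reduces to scoring task-agnostic grasp candidates by predicted downstream manipulation performance and filtering out the rest. A minimal sketch; the model interface and threshold are assumptions, not the thesis' implementation.

    def select_grasp_for_manipulation(candidates, performance_model,
                                      min_score=0.5):
        """Keep only grasps whose predicted manipulation performance (the
        learned 'general affordance') clears a threshold, then pick the
        highest-scoring one for the downstream task."""
        scored = [(performance_model.predict(g), g) for g in candidates]
        feasible = [(s, g) for s, g in scored if s >= min_score]
        if not feasible:
            raise RuntimeError("no grasp predicted to support the task")
        return max(feasible, key=lambda sg: sg[0])[1]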