
    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics.
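The survey's known / familiar / unknown taxonomy can be read as a dispatch on how much prior knowledge perception attaches to the target object. A minimal sketch of that dispatch is below; it is not taken from the survey, and the pipeline callables, score fields, and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical grasp representation: a 6-DoF pose (flat list) plus a quality score.
Grasp = Tuple[List[float], float]

@dataclass
class PerceivedObject:
    """Illustrative perception output, not a structure from the survey."""
    recognition_confidence: float   # confidence that this is a specific known object
    similarity_to_database: float   # best similarity score against previously seen objects

def synthesize_grasps(obj: PerceivedObject,
                      known_pipeline: Callable[[PerceivedObject], List[Grasp]],
                      familiar_pipeline: Callable[[PerceivedObject], List[Grasp]],
                      unknown_pipeline: Callable[[PerceivedObject], List[Grasp]],
                      known_thresh: float = 0.9,
                      familiar_thresh: float = 0.6) -> List[Grasp]:
    """Dispatch to a grasp-synthesis strategy following the survey's three groups.

    Known objects: recognize the object, estimate its pose, reuse stored grasps.
    Familiar objects: transfer grasps from similar, previously encountered objects.
    Unknown objects: extract local features indicative of good grasps.
    The thresholds are placeholders, not values from the survey.
    """
    if obj.recognition_confidence >= known_thresh:
        return known_pipeline(obj)
    if obj.similarity_to_database >= familiar_thresh:
        return familiar_pipeline(obj)
    return unknown_pipeline(obj)
```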

    Grasp Planning for a Humanoid Hand


    Grasping Points Determination Using Visual Features

    This paper discusses some issues in generating points of contact using visual features. To address these issues, the paper is divided into two sections: visual feature extraction and grasp planning. In order to provide a suitable description of the object contour, a method for grouping visual features is proposed. A very important aspect of this method is the wa

    Dynamic Grasping of Unknown Objects with a Multi-Fingered Hand

    An important prerequisite for autonomous robots is their ability to reliably grasp a wide variety of objects. Most state-of-the-art systems employ specialized or simple end-effectors, such as two-jaw grippers, which severely limit the range of objects that can be manipulated. Additionally, they conventionally require a structured and fully predictable environment, while the vast majority of our world is complex, unstructured, and dynamic. This paper presents an implementation that overcomes both issues. Firstly, the integration of a five-finger hand enhances the variety of possible grasps and manipulable objects. This kinematically complex end-effector is controlled by a deep-learning-based generative grasping network. The required virtual model of the unknown target object is iteratively completed by processing visual sensor data. Secondly, this visual feedback is employed to realize closed-loop servo control that compensates for external disturbances. Our experiments on real hardware confirm the system's capability to reliably grasp unknown dynamic target objects without a priori knowledge of their trajectories. To the best of our knowledge, this is the first method to achieve dynamic multi-fingered grasping for unknown objects. A video of the experiments is available at https://youtu.be/Ut28yM1gnvI. Comment: ICRA202
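At a high level, the described pipeline alternates perception, model completion, grasp sampling, and servoing until a stable grasp pose is reached. The skeleton below is a hedged sketch of such a closed loop; the five callables stand in for the depth sensor, the iterative shape completion, the generative grasping network, and the arm/hand control, and none of their names come from the paper.

```python
def closed_loop_grasp(get_point_cloud, complete_object_model, sample_grasp,
                      move_toward, at_grasp_pose, max_iters=200):
    """Sketch of a perception-driven closed-loop grasping cycle (assumed interfaces).

    The loop re-plans on every cycle, so the commanded grasp pose tracks a
    moving, previously unknown object and compensates for external disturbances.
    """
    model = None
    for _ in range(max_iters):
        cloud = get_point_cloud()                     # fresh visual sensor data
        model = complete_object_model(model, cloud)   # iteratively refine the virtual model
        grasp_pose = sample_grasp(model)              # query the generative grasping network
        if at_grasp_pose(grasp_pose):
            return grasp_pose                         # hand has reached the pose: close the fingers
        move_toward(grasp_pose)                       # one visual-servoing step toward the grasp
    raise RuntimeError("object could not be tracked to a stable grasp")
```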

    A robotic engine assembly pick-place system based on machine learning

    The industrial revolution brought humans and machines together in building a better future. While on the one hand there is a need to replace repetitive jobs with machines to increase efficiency and production volume, on the other hand intelligent and autonomous machines still have a long way to go to achieve the dexterity of a human. The current scenario requires a system that can utilise the best of both the human and the machine. This thesis studies an industrial use case in which human and machine combine their skills to build an autonomous pick-place system. The study takes a small step towards the human-robot consortium, primarily focusing on developing a vision-based system for object detection followed by a manipulator pick-place operation. The thesis can be divided into two parts: 1. Scene analysis, where a Convolutional Neural Network (CNN) is used for object detection, followed by generation of grasping points using the object edge image and an algorithm developed during this thesis. 2. Implementation, which focuses on motion generation while taking care of external disturbances to perform a successful pick-place operation. In addition, human involvement is required for teaching trajectory points for the robot to follow; this trajectory is used to generate an image data-set for a new object type and thereafter a new object detection model. The author primarily focuses on building a system framework in which the complexities of robot programming, such as generating trajectory points and specifying grasping positions, are not required. The system automatically detects the object and performs a pick-place operation, relieving the user from robot programming. The system is composed of a depth camera and a manipulator. The camera is the only sensor available for scene analysis, and the action is performed using a Franka manipulator. The two components work in request-response mode over ROS. The thesis introduces newer approaches such as dividing a workspace image into its constituent object images before performing object detection, creating training data, and generating grasp points based on object shape along the length of an object. The thesis also presents a case study in which three different objects are chosen as test objects. The experiments demonstrate the methods applied and the efficiency attained. The case study also provides a glimpse of future research and development areas.
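One concrete way to generate grasp points from an object edge image, in the spirit of grasping along the object's length, is to fit a rotated bounding rectangle to the dominant contour and space candidate grasp centers along its long axis. The sketch below does this with OpenCV; the function name, spacing, and geometric recipe are illustrative assumptions, not the algorithm developed in the thesis.

```python
import cv2
import numpy as np

def grasp_points_from_edges(edge_image: np.ndarray, n_points: int = 3):
    """Place candidate grasp centers along the long axis of the largest contour.

    edge_image is assumed to be a single-channel binary edge/mask image.
    Uses OpenCV 4's two-value findContours return; minAreaRect's angle
    convention differs between OpenCV versions, hence the h > w check.
    """
    contours, _ = cv2.findContours(edge_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle_deg = cv2.minAreaRect(contour)

    # Unit vector along the longer side of the rotated bounding rectangle.
    angle = np.deg2rad(angle_deg)
    axis = np.array([np.cos(angle), np.sin(angle)])
    length = w
    if h > w:
        axis = np.array([-np.sin(angle), np.cos(angle)])
        length = h

    # Evenly spaced grasp centers along the object's long axis; a gripper or
    # hand would close across the object's width at each of these points.
    offsets = np.linspace(-0.3, 0.3, n_points) * length
    return [np.array([cx, cy]) + t * axis for t in offsets]
```

In the setup described above, these pixel coordinates would still have to be mapped into the manipulator frame using the depth image and the camera calibration before the Franka arm could act on them.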