19 research outputs found

    Visual Grasping of Unknown Objects

    The objective of this thesis is to compare and study recent visual grasping techniques applied to a robotic arm for grasping unknown objects in an indoor environment. The novelty of the thesis is that the study has led to questioning the general approach researchers use to solve the grasping problem. The results can help future researchers invest more effort in the problem areas of grasping techniques and can also lead us to question the approach we are using to solve the grasping problem.

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core task is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. Comment: 20 pages, 30 figures; submitted to IEEE Transactions on Robotics.
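    The survey's known/familiar/unknown taxonomy can be caricatured as a dispatch on how closely a new object matches previously seen ones. The thresholds, names, and scoring below are purely illustrative, not from the paper:

```python
# Toy sketch (invented names/thresholds): route an object to one of the
# survey's three grasp-synthesis strategies based on a similarity score.

def categorize(similarity, exact_threshold=0.95, familiar_threshold=0.6):
    """Map a similarity score in [0, 1] to the survey's object category."""
    if similarity >= exact_threshold:
        return "known"      # recognition + pose estimation, reuse stored grasps
    if similarity >= familiar_threshold:
        return "familiar"   # transfer grasps from similar seen objects
    return "unknown"        # extract local features indicative of good grasps

print(categorize(0.98))  # known
print(categorize(0.70))  # familiar
print(categorize(0.10))  # unknown
```

    In practice the "similarity" would come from object recognition or shape retrieval; the point is only that the perceptual pipeline differs per branch.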

    Simple Kinesthetic Haptics for Object Recognition

    Object recognition is an essential capability when performing various tasks. Humans naturally use visual and tactile perception, alone or together, to extract an object's class and properties. Typical approaches for robots, however, require complex visual systems or multiple high-density tactile sensors, which can be highly expensive. In addition, they usually require collecting a large dataset from real objects through direct interaction. In this paper, we propose a kinesthetic object recognition method that can be performed with any multi-fingered robotic hand whose kinematics are known. The method does not require tactile sensors and is based on observing grasps of the objects. We utilize a unique, frame-invariant parameterization of grasps to learn instances of object shapes. To train a classifier, training data is generated rapidly and solely in a computational process, without interaction with real objects. We then propose and compare two iterative algorithms that can integrate any trained classifier. The classifiers and algorithms are independent of any particular robot hand and can therefore be applied to various ones. We show in experiments that, with only a few grasps, the algorithms achieve accurate classification. Furthermore, we show that the object recognition approach scales to objects of various sizes. Similarly, a global classifier is trained to identify general geometries (e.g., an ellipsoid or a box) rather than particular ones, and is demonstrated on a large set of objects. Full-scale experiments and analysis are provided to show the performance of the method.
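    The frame-invariance idea the abstract relies on can be illustrated with a toy stand-in: pairwise distances between fingertip contacts do not change when the object is rotated or translated, so a classifier can be trained entirely on simulated grasps. Everything below (the sphere classes, the nearest-centroid classifier) is an invented sketch, not the paper's parameterization:

```python
import itertools
import math
import random

random.seed(0)

def grasp_feature(contacts):
    """Sorted pairwise contact distances: a frame-invariant grasp descriptor."""
    dists = [math.dist(a, b) for a, b in itertools.combinations(contacts, 2)]
    return sorted(dists)

def simulate_grasp(radius):
    """Three fingertips resting on a sphere of the given radius (toy model)."""
    contacts = []
    for _ in range(3):
        theta = random.uniform(0, 2 * math.pi)
        phi = random.uniform(0, math.pi)
        contacts.append((radius * math.sin(phi) * math.cos(theta),
                         radius * math.sin(phi) * math.sin(theta),
                         radius * math.cos(phi)))
    return contacts

# "Train" without touching real objects: one mean feature per class.
classes = {"small_sphere": 1.0, "large_sphere": 2.0}
centroids = {}
for label, r in classes.items():
    feats = [grasp_feature(simulate_grasp(r)) for _ in range(200)]
    centroids[label] = [sum(col) / len(col) for col in zip(*feats)]

def classify(contacts):
    """Nearest-centroid classification of a single observed grasp."""
    f = grasp_feature(contacts)
    return min(centroids, key=lambda c: math.dist(f, centroids[c]))

print(classify([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # small_sphere
print(classify([(2, 0, 0), (0, 2, 0), (0, 0, 2)]))  # large_sphere
```

    The paper's iterative algorithms would accumulate evidence over several such grasps; here a single grasp already separates the two toy classes.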

    Example-based grasp adaptation

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 60-63). Finding a way to provide intelligent humanoid robots with the ability to grasp objects has been a question of great interest. Most approaches, however, assume that objects are composed of primitive shapes such as boxes, spheres, and cylinders. In this thesis, we explore an efficient and robust method for deciding grasps for new, irregularly shaped objects (3D polygon meshes). To solve the problem, we use an example-based approach: we first find grasps for objects geometrically similar to those the system has seen before. For example, if the system has been shown a cup being grasped by the handle, it should now be able to grasp any new cup. Two problems must be solved to adapt example grasps to a new object. First, the system should be able to retrieve objects that are geometrically similar to the given object from a database of previously seen objects. Then, after collecting objects the system knows how to grasp, it needs to adapt the example grasps to the new object. Working algorithms already exist for the first problem (shape retrieval); our main contribution is therefore an algorithm that performs grasp adaptation. Before adapting a grasp, we first find the geometric correspondence between the demo object and the new object using a probabilistic graphical model. Based on this correspondence together with the demo grasp, we generate a grasp for the new object. To ensure that a robot can effectively grasp the object, we adjust the positions of the grasp contacts until the quality of the grasp is reasonably high. In test cases, the system successfully uses this method to find the correspondence between objects and adapt demo grasps. By Jiwon Kim (M.Eng.).
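    The adaptation step described above — map demo contacts through a correspondence, then adjust them until quality is reasonably high — can be sketched as a hill climb. All names, the 1D "mesh", and the quality metric below are invented for illustration; the thesis works on 3D polygon meshes with a grasp-quality measure:

```python
# Minimal sketch: transfer demo-grasp contacts through a correspondence,
# then locally adjust each contact while a toy quality score improves.

def adapt_grasp(demo_contacts, correspondence, quality, neighbors, min_quality=0.8):
    """Map demo contacts to the new object and hill-climb on grasp quality."""
    contacts = [correspondence[c] for c in demo_contacts]
    while quality(contacts) < min_quality:
        improved = False
        for i in range(len(contacts)):
            for n in neighbors(contacts[i]):
                trial = contacts[:i] + [n] + contacts[i + 1:]
                if quality(trial) > quality(contacts):
                    contacts, improved = trial, True
                    break
        if not improved:
            break  # stuck in a local optimum below the threshold
    return contacts

# Toy usage: contact "positions" are vertex indices 0..9 on a 1D "mesh";
# quality rewards contacts whose mean position is near vertex 5.
correspondence = {i: i for i in range(10)}
neighbors = lambda v: [max(v - 1, 0), min(v + 1, 9)]
quality = lambda cs: 1.0 - abs(sum(cs) / len(cs) - 5) / 5
print(adapt_grasp([0, 1], correspondence, quality, neighbors))  # → [4, 5]
```

    The thesis's version additionally conditions the correspondence on a probabilistic graphical model rather than an identity map.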

    Robust Hand Motion Capture and Physics-Based Control for Grasping in Real Time

    Hand motion capture technologies are being explored to meet high demand in fields such as video games, virtual reality, sign language recognition, human-computer interaction, and robotics. However, existing systems suffer from several limitations: they are high-cost (expensive capture devices), intrusive (requiring worn sensors or complex configurations), and restrictive (limited motion variety and restricted capture space). This dissertation focuses on algorithms and applications for a hand motion capture system that is low-cost, non-intrusive, unrestrictive, accurate, and robust. More specifically, we develop a real-time, fully automatic hand tracking system using a low-cost depth camera. We first introduce an efficient shape-indexed cascaded pose regressor that directly estimates 3D hand poses from depth images. A unique property of our hand pose regressor is that it utilizes a low-dimensional parametric hand geometric model to learn 3D shape-indexed features robust to variations in hand shape, viewpoint, and pose. We further introduce a hybrid tracking scheme that effectively complements our hand pose regressor with model-based hand tracking. In addition, we develop a rapid 3D hand shape modeling method that uses a small number of depth images to accurately construct a subject-specific skinned mesh model for hand tracking. This step not only automates the whole tracking system but also improves the robustness and accuracy of model-based tracking and hand pose regression. Additionally, we propose a physically realistic human grasp synthesis method capable of grasping a wide variety of objects. Given an object to be grasped, our method computes the controls (e.g. forces and torques) that advance the simulation to achieve realistic grasping. Our method combines the power of data-driven synthesis and physics-based grasping control.
    We first introduce a data-driven method to synthesize a realistic grasping motion from large sets of prerecorded grasping motion data. We then transform the synthesized kinematic motion into a physically realistic one using our online physics-based motion control method. In addition, we provide a performance interface that allows the user to perform in front of a depth camera to control a virtual object.
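    The cascaded-regression idea in this abstract can be reduced to a one-dimensional caricature: each stage predicts a correction from the current residual, so the error shrinks geometrically across stages. The scalar "pose" and numbers below are invented; the real system regresses full 3D hand poses from shape-indexed depth features:

```python
# Toy cascaded regressor: each stage moves the estimate a fixed fraction
# of the residual (a stand-in for a learned, shape-indexed correction).

def cascade_refine(observed, initial, stages=8, rate=0.5):
    estimate = initial
    for _ in range(stages):
        residual = observed - estimate  # stand-in for stage-input features
        estimate += rate * residual     # this stage's correction
    return estimate

print(cascade_refine(observed=30.0, initial=0.0))  # ≈ 29.88 after 8 stages
```

    With rate 0.5, eight stages leave only (0.5)^8 of the initial error, which is why a short cascade suffices in practice.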

    A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots

    Models and methods are presented that enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles of human grasp behavior are studied, and the findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed that enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.

    Automatic Extraction of Geometric Cues for Tool Grasping in Virtual Ergonomics

    DELMIA is a Dassault Systèmes brand specializing in the simulation of industrial processes. This module notably enables the modeling of work tasks in simulated 3D manufacturing environments in order to analyze their ergonomics. However, the virtual mannequin must be manipulated manually by expert users. To democratize access to virtual ergonomics, Dassault Systèmes launched a program aimed at automatically positioning the mannequin within the virtual mock-up using a new positioning engine called the "Smart Posturing Engine (SPE)". Automatically placing the hands on tools is one of the challenges of this project. The general objective of this thesis is to propose a method for automatically extracting grasping cues, which guide how tools are grasped, from their three-dimensional geometric models. The method is based on the natural affordance of tools commonly found in a manufacturing environment. The empirical method presented in this study therefore focuses on common one-handed tools. The method assumes that the family (mallets, pliers, etc.) of the tool to be analyzed is known in advance, which makes it possible to presume the affordance of the geometry under analysis. The proposed method comprises several steps. First, the 3D geometry of the tool is swept to extract a series of cross-sections. Properties are then extracted from each cross-section so as to reconstruct a simplified study model. Based on the variations of these properties, the tool is successively segmented into sections, segments, and regions. Grasping cues are finally extracted from the identified regions, including the tool head, which provides a task-related working direction, as well as the handle or the trigger, if any.
    These grasping cues are then passed to the SPE to generate task-oriented grasps. The proposed solution was tested on some fifty one-handed tools from the mallet, screwdriver, pliers, straight power screwdriver, and pistol-grip power screwdriver families. The 3D models of the tools were retrieved from Dassault Systèmes' online "Part Supply" catalog. The proposed method should be easily transferable to other tool families.
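    The sweep-and-segment pipeline above can be caricatured in a few lines: sample a cross-section "radius" along the tool's main axis, split the profile wherever the radius changes sharply, and take the long low-radius run as the handle candidate. The threshold, profile, and selection rule below are invented for illustration, not the thesis's actual criteria:

```python
# Toy version of the sweep -> segment -> grasp-cue pipeline for a 1D
# radius profile sampled along a tool's axis (all values hypothetical).

def segment_profile(radii, jump=0.5):
    """Split the profile into contiguous segments at sharp radius changes."""
    segments, start = [], 0
    for i in range(1, len(radii)):
        if abs(radii[i] - radii[i - 1]) > jump:
            segments.append((start, i))
            start = i
    segments.append((start, len(radii)))
    return segments

def handle_candidate(radii, jump=0.5):
    """Pick the segment with the smallest mean radius (longest on ties)."""
    segs = segment_profile(radii, jump)
    mean = lambda s: sum(radii[s[0]:s[1]]) / (s[1] - s[0])
    return min(segs, key=lambda s: (mean(s), -(s[1] - s[0])))

# A mallet-like profile: a thin shaft followed by a wide head.
profile = [1.0, 1.0, 1.1, 1.0, 1.0, 1.0, 3.0, 3.1, 3.0]
print(handle_candidate(profile))  # → (0, 6): the shaft
```

    The thesis's method works on full 3D cross-sections and several properties per section, with the tool family constraining which region is interpreted as handle, head, or trigger.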