
    Robotic Ironing with 3D Perception and Force/Torque Feedback in Household Environments

    As robotic systems become more popular in household environments, the complexity of the required tasks also increases. In this work we focus on a domestic chore deemed dull by a majority of the population: ironing. The presented algorithm improves on the limited number of previous works by joining 3D perception with force/torque sensing, with emphasis on finding a practical solution that is feasible to implement in a domestic setting. Our algorithm obtains a point cloud representation of the working environment. From this point cloud, the garment is segmented and a custom Wrinkleness Local Descriptor (WiLD) is computed to locate the wrinkles present. Using this descriptor, the most suitable ironing path is computed and, based on it, the manipulation algorithm performs the force-controlled ironing operation. Experiments have been performed with a humanoid robot platform, showing that our algorithm successfully detects wrinkles present in garments and iteratively reduces wrinkleness using an unmodified iron.
    Comment: Accepted and to be published at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September 24-28, 2017
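
    A minimal sketch of the wrinkle-detection step, in the spirit of the described pipeline. The paper's exact WiLD formulation is not reproduced here; wrinkleness is approximated as the residual of a local plane fit around each point (flat cloth scores near zero, wrinkled cloth scores higher), and the neighborhood radius is an assumed parameter.

    import numpy as np
    from scipy.spatial import cKDTree

    def wrinkleness(points: np.ndarray, radius: float = 0.02) -> np.ndarray:
        """Per-point wrinkleness scores for an (N, 3) garment point cloud."""
        tree = cKDTree(points)
        scores = np.zeros(len(points))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, radius)
            if len(idx) < 5:  # too few neighbors for a stable plane fit
                continue
            nbrs = points[idx] - points[idx].mean(axis=0)
            # The smallest singular value measures how far the neighborhood
            # deviates from its best-fit plane, i.e. how non-flat it is.
            scores[i] = np.linalg.svd(nbrs, compute_uv=False)[-1]
        return scores

    An ironing path could then be planned through the highest-scoring regions and tracked under force control, iterating until the scores fall below a threshold.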

    Cumulative object categorization in clutter

    In this paper we present an approach based on scene- or part-graphs for geometrically categorizing touching and occluded objects. We use additive RGBD feature descriptors and hashing of graph configuration parameters to describe the spatial arrangement of constituent parts. The presented experiments show quantitatively that this method outperforms our earlier part-voting and sliding-window classification. We evaluated our approach on cluttered scenes and on a 3D dataset containing over 15,000 Kinect scans of over 100 objects grouped into general geometric categories. Additionally, color, geometric, and combined features were compared for categorization tasks.
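
    A minimal sketch of the hashing idea, assuming a part-graph is summarized by quantized pairwise configuration parameters (here only centroid distances and size ratios; the paper's actual parameters and additive RGBD descriptors are not reproduced). Colliding hashes vote for the categories of the training graphs that produced them.

    from collections import defaultdict
    from itertools import combinations
    import numpy as np

    def graph_hashes(parts, dist_step=0.05, ratio_step=0.25):
        """parts: list of (centroid (3,), size scalar). Yields one hashable
        key per part pair, built from quantized distance and size ratio."""
        for (c1, s1), (c2, s2) in combinations(parts, 2):
            d = np.linalg.norm(np.asarray(c1) - np.asarray(c2))
            r = max(s1, s2) / max(min(s1, s2), 1e-6)
            yield (round(d / dist_step), round(r / ratio_step))

    table = defaultdict(list)  # hash key -> category labels seen in training

    def train(parts, category):
        for key in graph_hashes(parts):
            table[key].append(category)

    def categorize(parts):
        """Categories accumulate votes from colliding hash keys."""
        votes = defaultdict(int)
        for key in graph_hashes(parts):
            for cat in table.get(key, ()):
                votes[cat] += 1
        return max(votes, key=votes.get) if votes else None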

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching against a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
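
    A minimal sketch of the "familiar objects" branch of the taxonomy: candidate grasps are ranked by transferring experience from the most similar previously encountered object. The descriptor and grasp memory below are placeholders, not any specific method from the survey.

    import numpy as np

    class GraspMemory:
        def __init__(self):
            self.descriptors = []  # one feature vector per known object
            self.grasps = []       # per object: list of (grasp_pose, success_rate)

        def add(self, descriptor, grasps):
            self.descriptors.append(np.asarray(descriptor))
            self.grasps.append(grasps)

        def rank_grasps(self, descriptor):
            """Return the stored grasps of the nearest familiar object,
            best-scoring first."""
            dists = [np.linalg.norm(np.asarray(descriptor) - d)
                     for d in self.descriptors]
            nearest = int(np.argmin(dists))
            return sorted(self.grasps[nearest], key=lambda g: -g[1])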

    Robust Scene Estimation for Goal-directed Robotic Manipulation in Unstructured Environments

    To make autonomous robots "taskable" so that they function properly and interact fluently with human partners, they must be able to perceive and understand the semantic aspects of their environments. More specifically, they must know what objects exist and where they are in the unstructured human world. Progress in robot perception, especially in deep learning, has greatly improved object detection and localization. However, it remains a challenge for robots to perform highly reliable scene estimation in unstructured environments, as measured by robustness, adaptability, and scale. In this dissertation, we address the scene estimation problem under uncertainty, especially in unstructured environments. We enable robots to build a reliable object-oriented representation that describes the objects present in the environment as well as inter-object spatial relations. Specifically, we focus on the following challenges for reliable scene estimation: 1) robust perception under uncertainty resulting from noisy sensors, objects in clutter, and perceptual aliasing; 2) adaptable perception in adverse conditions through combined deep learning and probabilistic generative methods; 3) scalable perception as the number of objects grows and the structure of objects becomes more complex (e.g., objects in dense clutter).
    Towards robust perception, our objective is to ground raw sensor observations into scene states while dealing with uncertainty from sensor measurements and actuator control. Scene states are represented as scene graphs, which denote parameterized axiomatic statements asserting relationships between objects and their poses. To deal with the uncertainty, we present a purely generative approach, Axiomatic Scene Estimation (AxScEs). AxScEs estimates a probability distribution across plausible scene graph hypotheses describing the configuration of objects. By maintaining a diverse set of possible states, the proposed approach is robust to local minima in the scene graph state space and effective for manipulation-quality perception, evaluated by edit distance on scene graphs.
    To scale up to more unstructured scenarios and adapt to adversarial ones, we present Sequential Scene Understanding and Manipulation (SUM), which estimates the scene as a collection of objects in cluttered environments. SUM is a two-stage method that combines the accuracy and efficiency of convolutional neural networks (CNNs) with probabilistic inference methods. Despite their strengths, CNNs are opaque in how their decisions are made and fragile when generalizing beyond overfit training samples in adverse conditions (e.g., changes in illumination). The probabilistic generative method complements these weaknesses and provides an avenue for adaptable perception.
    To scale up to densely cluttered environments where objects physically touch under severe occlusion, we present GeoFusion, which fuses noisy observations from multiple frames by exploiting geometric consistency at the object level. Geometric consistency characterizes geometric compatibility between objects and geometric similarity between observations and objects. Reasoning about geometry at the object level offers a fast and reliable way to be robust to semantic perceptual aliasing. The proposed approach demonstrates greater robustness and accuracy than the state-of-the-art pose estimation approach.
    Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163060/1/zsui_1.pd
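
    A minimal sketch of the hypothesis-set idea behind AxScEs: many candidate scene graphs are kept alive, each reweighted by how well it explains the current observation, then resampled. Here render_likelihood is a stand-in for whatever measurement model the full system uses to compare a hypothesized scene against sensor data.

    import random

    def update(hypotheses, weights, observation, render_likelihood):
        """One filtering step over a list of scene-graph hypotheses."""
        weights = [w * render_likelihood(h, observation)
                   for h, w in zip(hypotheses, weights)]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # Resample to concentrate on plausible scenes while keeping a
        # diverse set of states, which is what makes the estimate robust
        # to local minima in the scene graph state space.
        hypotheses = random.choices(hypotheses, weights=weights,
                                    k=len(hypotheses))
        return hypotheses, [1.0 / len(hypotheses)] * len(hypotheses)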

    Automatic extraction of geometric cues for tool grasping in virtual ergonomics

    DELMIA is a Dassault Systèmes brand specializing in the simulation of industrial processes. This module notably enables the modeling of work tasks in simulated 3D manufacturing environments in order to analyze their ergonomics. However, the virtual manikin must be positioned manually by expert users. To broaden access to virtual ergonomics, Dassault Systèmes launched a program to position the manikin automatically within the virtual mock-up using a new posturing engine called the "Smart Posturing Engine (SPE)". Automatically placing the hands on tools is one of the challenges of this project. The general objective of this thesis is to propose a method for automatically extracting grasping cues from the three-dimensional geometric models of tools, to serve as a guide for grasping them. The method relies on the natural affordance of tools commonly available in a manufacturing environment, and the empirical method presented in this study therefore targets common one-handed tools. The method assumes that the family (mallets, pliers, etc.) of the tool under analysis is known beforehand, which makes it possible to presume the affordance of the geometry being analyzed. The proposed method comprises several steps. First, the 3D geometry of the tool is swept to extract a series of cross-sections. Properties are then extracted from each section so as to reconstruct a simplified study model. Based on the variations of these properties, the tool is successively segmented into slices, segments, and regions. Grasping cues are finally extracted from the identified regions, including the head of the tool, which provides a task-related working direction, as well as the handle or the trigger, if any. These grasping cues are then passed to the SPE to generate task-oriented grasps. The proposed solution was tested on some fifty one-handed tools from the mallet, screwdriver, pliers, and straight and pistol-grip power screwdriver families. The 3D models of the tools were retrieved from Dassault Systèmes' online "Part Supply" catalog. The proposed method should be readily transferable to other tool families.
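
    A minimal sketch of the sweep-and-segment idea, assuming the tool's CAD model has been sampled into surface points. Each "section" is approximated as a thin slab along the principal axis and summarized by its mean radius, and the split threshold is an assumed parameter; the thesis's actual sectioning of the CAD geometry is not reproduced.

    import numpy as np

    def section_profile(points: np.ndarray, n_sections: int = 50) -> np.ndarray:
        """points: (N, 3) samples of the tool surface. Returns one mean
        radius per section along the principal (longest) axis."""
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        axis = vt[0]                         # principal direction of the tool
        t = centered @ axis                  # position along the axis
        radial = np.linalg.norm(centered - np.outer(t, axis), axis=1)
        edges = np.linspace(t.min(), t.max(), n_sections + 1)
        bins = np.clip(np.digitize(t, edges) - 1, 0, n_sections - 1)
        return np.array([radial[bins == i].mean() if (bins == i).any() else 0.0
                         for i in range(n_sections)])

    def split_regions(profile: np.ndarray, jump: float = 0.3):
        """Split where the radius changes sharply, e.g. at the
        head/handle transition of a mallet."""
        cuts = [i + 1 for i in range(len(profile) - 1)
                if abs(profile[i + 1] - profile[i]) > jump * (profile[i] + 1e-6)]
        return np.split(profile, cuts)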