
    Bimanual Interaction with Clothes. Topology, Geometry, and Policy Representations in Robots

    Twardon L. Bimanual Interaction with Clothes. Topology, Geometry, and Policy Representations in Robots. Bielefeld: Universität Bielefeld; 2019.
    If anthropomorphic robots are to assist people with activities of daily living, they must be able to handle all kinds of everyday objects, including highly deformable ones such as garments. The present thesis begins with a detailed problem analysis of robotic interaction with and perception of clothes. We show that handling items of clothing is very challenging due to their complex dynamics and their vast number of degrees of freedom. As a result of our analysis, we obtain a topological, geometric, and functional description of garments that supports the development of reduced object and task representations. One of the key findings is that the boundary components, which typically correspond to the openings, characterize garments well, both in terms of their topology and their inherent purpose, namely dressing. We present a polygon-based and an interactive method for identifying boundary components using RGB-D vision, with application to grasping. Moreover, we propose Active Boundary Component Models (ABCMs), a constraint-based framework for tracking garment openings with point clouds. It is often difficult to maintain an accurate representation of the objects involved in contact-rich interaction tasks such as dressing assistance. Therefore, our policy optimization approach to putting a knit cap on a styrofoam head avoids modeling the details of the garment and its deformations. The experimental results suggest that a heuristic performance measure that takes into account the amount of contact established between the two objects is suitable for the task.

    Integrated visual perception architecture for robotic clothes perception and manipulation

    This thesis proposes a generic visual perception architecture for robotic clothes perception and manipulation. The proposed architecture is fully integrated with a stereo vision system and a dual-arm robot and is able to perform a number of autonomous laundering tasks. Clothes perception and manipulation is a novel research topic in robotics and has experienced rapid development in recent years. Compared to the task of perceiving and manipulating rigid objects, clothes perception and manipulation poses a greater challenge. This can be attributed to two reasons: firstly, deformable clothing requires precise (high-acuity) visual perception and dexterous manipulation; secondly, as clothing approximates a non-rigid 2-manifold in 3-space that can adopt a quasi-infinite configuration space, the potential variability in the appearance of clothing items makes them difficult for a machine to understand, identify uniquely, and interact with. From an applications perspective, and as part of the EU CloPeMa project, the integrated visual perception architecture refines a pre-existing clothing manipulation pipeline by completing pre-wash clothes (category) sorting (using single-shot or interactive perception for garment categorisation and manipulation) and post-wash dual-arm flattening. To the best of the author’s knowledge, the autonomous clothing perception and manipulation solutions investigated in this thesis were first proposed and reported by the author. All of the robot demonstrations reported in this work follow a perception-manipulation methodology in which visual and tactile feedback (in the form of surface wrinkledness captured by the high-accuracy depth sensor, i.e. the CloPeMa stereo head, or the predictive confidence modelled by Gaussian Processes) serves as the halting criterion in the flattening and sorting tasks, respectively.
    From a scientific perspective, the proposed visual perception architecture addresses the above challenges by parsing and grouping 3D clothing configurations hierarchically from low-level curvatures, through mid-level surface shape representations (providing topological descriptions and 3D texture representations), to high-level semantic structures and statistical descriptions. A range of visual features such as Shape Index, Surface Topologies Analysis, and Local Binary Patterns have been adapted within this work to parse clothing surfaces and textures, and several novel features have been devised, including B-Spline Patches with Locality-Constrained Linear coding and Topology Spatial Distance, to describe and quantify generic landmarks (wrinkles and folds). The essence of the proposed architecture is 3D generic surface parsing and interpretation, which is critical to underpinning a number of laundering tasks and has the potential to be extended to other rigid and non-rigid object perception and manipulation tasks. The experimental results presented in this thesis demonstrate that: firstly, the proposed grasping approach achieves 84.7% accuracy on average; secondly, the proposed flattening approach is able to flatten towels, t-shirts and pants (shorts) within 9 iterations on average; thirdly, the proposed clothes recognition pipeline can recognise clothes categories from highly wrinkled configurations and advances the state of the art by 36% in terms of classification accuracy, achieving an 83.2% true-positive classification rate when discriminating between five categories of clothes; finally, the Gaussian Process based interactive perception approach exhibits a substantial improvement over single-shot perception. Accordingly, this thesis has advanced the state of the art of robot clothes perception and manipulation.
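As a concrete illustration of one of the low-level descriptors named above (a generic sketch based on the standard Koenderink definition, not the thesis's implementation), the shape index condenses the two principal curvatures of a surface patch into a single value in [-1, 1], distinguishing cup-, saddle-, and cap-like local geometry such as wrinkle ridges:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from the two principal curvatures.

    Returns a value in [-1, 1]: spherical cups map to -1, saddles to 0,
    spherical caps to +1; ridge-like geometry (e.g. cloth wrinkles)
    falls near +0.5. Planar points (k1 == k2 == 0) conventionally map
    to 0 here via arctan2(0, 0).
    """
    # Sort so that k1 >= k2, as the definition requires.
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```

Applied per-pixel to a curvature map of a depth image, this yields the kind of dense shape labeling that the mid-level surface parsing described above can group into wrinkles and folds.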

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance. It further discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Development of a learning from demonstration environment using ZED 2i and HTC Vive Pro

    Being able to teach complex capabilities, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled using learning-from-demonstration datasets. The few garment folding datasets available today to the robotics research community are either gathered from human demonstrations or generated through simulation. The former pose the difficult problem of perceiving human action and transferring it to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, resulting in far-from-realistic movements. In this thesis, a novel virtual reality (VR) framework is proposed, based on Unity's 3D platform and the use of the HTC Vive Pro system, ZED mini and ZED 2i cameras, and Leap Motion's hand-tracking module. The framework is capable of detecting and tracking objects, animals, and human bodies in a 3D environment. Moreover, it can simulate very realistic garments while allowing users to interact with them in real time, either through handheld controllers or with their real hands. By doing so, and thanks to the immersive experience, the framework closes the gap between the human and robot perception-action loops, while simplifying data capture and yielding more realistic samples. Finally, using the developed framework, a novel garment manipulation dataset will be recorded, containing samples with data and videos of nineteen different types of manipulation, aimed at supporting tasks related to robot learning from demonstration.

    Motion and emotion estimation for robotic autism intervention.

    Robots have recently emerged as a novel approach to treating autism spectrum disorder (ASD). A robot can be programmed to interact with children with ASD in order to reinforce positive social skills in a non-threatening environment. In prior work, robots were employed in interaction sessions with children with ASD, but their sensory and learning abilities were limited, while a human therapist was heavily involved in “puppeteering” the robot. The objective of this work is to create a next-generation autism robot with several interactive and decision-making capabilities that are not found in prior technology. Two of the main features such a robot would need are the ability to quantitatively estimate the patient’s motion performance and to correctly classify their emotions. These would allow for the potential diagnosis of autism and help autistic patients practice their skills. Therefore, in this thesis, we engineered components for a human-robot interaction system and confirmed them in experiments with the robots Baxter and Zeno, the sensors Empatica E4 and Kinect, and the open-source pose estimation software OpenPose. The Empatica E4 wristband is a wearable device that collects physiological measurements from a test subject in real time. Measurements were collected from ASD patients during human-robot interaction activities. Using this data and attentiveness labels from a trained coder, a classifier was developed that predicts the patient’s level of engagement. The classifier outputs this prediction to a robot or supervising adult, enabling decisions during intervention activities that keep the attention of the patient with autism. The CMU Perceptual Computing Lab’s OpenPose software package enables body, face, and hand tracking using an RGB camera (e.g., a web camera) or an RGB-D camera (e.g., a Microsoft Kinect).
Integrating OpenPose with a robot allows the robot to collect information on user motion intent and perform motion imitation. In this work, we developed such a teleoperation interface with the Baxter robot. Finally, a novel algorithm, called Segment-based Online Dynamic Time Warping (SoDTW), and an accompanying metric are proposed to help in the diagnosis of ASD. The social robot Zeno, a childlike robot developed by Hanson Robotics, was used to test this algorithm and metric. Using the proposed algorithm, it is possible to classify a subject’s motion into different speeds or to use the resulting SoDTW score to evaluate the subject’s abilities.
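The abstract does not detail SoDTW itself, but it builds on classic dynamic time warping, which aligns two trajectories recorded at different speeds by warping the time axis. A minimal sketch of the underlying DTW recurrence (function name and the 1-D absolute-difference cost are illustrative assumptions; SoDTW applies this idea online to motion segments):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals, e.g.
    joint-angle trajectories of a demonstrated vs. imitated motion.

    D[i, j] holds the cheapest cumulative cost of aligning the first
    i samples of `a` with the first j samples of `b`.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of the three admissible predecessor alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A slowed-down repetition of the same motion yields a small distance despite the differing lengths, which is what makes a DTW-style score usable for classifying a subject's motion into speed categories.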

    DEVELOPMENT OF A COMPUTER SYSTEM FOR IDENTITY AUTHENTICATION USING ARTIFICIAL NEURAL NETWORKS


    Research and Technology

    Johnson Space Center (JSC) accomplishments in new and advanced concepts during 1989 are highlighted. This year, reports are grouped into four sections: Medical Science, Solar System Sciences, Space Transportation Technology, and Space Systems Technology. Summary sections describing the role of JSC in each program are followed by descriptions of significant tasks. The descriptions are suitable for external consumption, free of technical jargon, and illustrated to increase ease of comprehension.

    Sensors for Robotic Hands: A Survey of State of the Art

    Recent decades have seen significant progress in the field of artificial hands. Most of the surveys that try to capture the latest developments in this field have focused on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. In order to present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot-hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and also hints at future directions in sensor development for artificial hands.

    Learning to Put On a Knit Cap in a Head-Centric Policy Space

    Twardon L, Ritter H. Learning to Put On a Knit Cap in a Head-Centric Policy Space. IEEE Robotics and Automation Letters. 2018;3(2):764-771.
    Robotic manipulation of highly deformable objects such as clothes is a challenging problem. Robot-assisted dressing adds even more complexity, as the garment motions must be aligned with a human body under conditions of strong and variable occlusion. As a step toward solutions for the general task, we consider the example of a dual-arm robot with attached anthropomorphic hands that learns to put a knit cap on a styrofoam head. Our approach avoids modeling the details of the garment and its deformations. Instead, we demonstrate that a head-centric policy parameterization, combined with a suitable objective function for determining the right amount of contact between the cap and the head, enables a direct policy search algorithm to find successful trajectories for this task. We also show how a toy problem that mirrors some of the task constraints can be used to efficiently structure hyperparameter search. Additionally, we suggest a point-cloud-based algorithm for modeling the head as an ellipsoid, which is required for defining the policy space.
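The abstract does not specify the ellipsoid-fitting algorithm, but a common way to fit an ellipsoid to a point cloud is a linear least-squares fit of the algebraic ellipsoid equation. The sketch below is a simplified, axis-aligned stand-in (function name and parameterization are assumptions, and real head models would also need an orientation term):

```python
import numpy as np

def fit_axis_aligned_ellipsoid(pts):
    """Least-squares fit of an axis-aligned ellipsoid to an N x 3 point
    cloud by solving A x^2 + B y^2 + C z^2 + D x + E y + F z = 1.

    Returns (center, radii). Assumes the ellipsoid does not pass
    through the origin (which would make the right-hand side 0).
    """
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    M = np.column_stack([x * x, y * y, z * z, x, y, z])
    coef, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    A, B, C, D, E, F = coef
    # The linear terms encode the center: D = -2*cx*A, etc.
    center = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
    # Complete the square to recover the squared radii.
    g = 1 + A * center[0] ** 2 + B * center[1] ** 2 + C * center[2] ** 2
    radii = np.sqrt(g / np.array([A, B, C]))
    return center, radii
```

Given a segmented head point cloud from an RGB-D sensor, a fit of this kind supplies the center and extents needed to define trajectories in a head-centric coordinate frame.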