    Bayesian Nonparametric Learning of Cloth Models for Real-time State Estimation

    Robotic solutions to clothing assistance can significantly improve quality of life for the elderly and disabled. Real-time estimation of the human-cloth relationship is crucial for efficient learning of motor skills for robotic clothing assistance. The major challenge is cloth-state estimation, due to the cloth's inherent non-rigidity and to occlusion. In this study, we present a novel framework for real-time estimation of the cloth state using a low-cost depth sensor, making it feasible for real-world deployment. The framework relies on the hypothesis that clothing articles are constrained to a low-dimensional latent manifold during clothing tasks. We propose the use of manifold relevance determination (MRD) to learn an offline cloth model that can be used to perform informed cloth-state estimation in real time. The cloth model is trained using observations from a motion capture system and a depth sensor. MRD provides a principled probabilistic framework for inferring the accurate motion-capture state when only the noisy depth-sensor feature state is available in real time. The experimental results demonstrate that our framework is capable of learning consistent task-specific latent features from few data samples and can generalize to unseen environmental settings. We further present several factors that affect the predictive performance of the learned cloth-state model.
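
    The real-time step above amounts to inverting one view of a shared latent model. Below is a minimal sketch of that idea, not the paper's implementation: `decode_depth` and `decode_mocap` are hypothetical stand-ins for the two MRD output mappings (linear maps here purely for illustration), and the latent point is optimized against the noisy depth features before being decoded to the motion-capture state.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for the two decoders learned offline with MRD;
# linear maps are used here purely for illustration.
rng = np.random.default_rng(0)
W_depth = rng.standard_normal((30, 3))   # latent (3D) -> depth features (30D)
W_mocap = rng.standard_normal((60, 3))   # latent (3D) -> mocap state (60D)

def decode_depth(x):
    return W_depth @ x

def decode_mocap(x):
    return W_mocap @ x

def estimate_cloth_state(y_depth, x0=None):
    """Optimize the latent point against noisy depth features, then decode
    the motion-capture state from the same latent point."""
    x0 = np.zeros(3) if x0 is None else x0
    obj = lambda x: np.sum((decode_depth(x) - y_depth) ** 2)
    res = minimize(obj, x0)                  # smooth objective, default BFGS
    return decode_mocap(res.x), res.x

# One noisy depth observation; recover the latent point and mocap estimate.
x_true = rng.standard_normal(3)
y_obs = decode_depth(x_true) + 0.05 * rng.standard_normal(30)
mocap_est, x_est = estimate_cloth_state(y_obs)
print(np.linalg.norm(x_est - x_true))        # small residual latent error
```

    In a tracking loop, the previous frame's latent estimate would warm-start `x0`, keeping the optimization cheap enough for real-time use.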

    Visual-tactile learning of garment unfolding for robot-assisted dressing

    Assistive robots have the potential to support disabled and elderly people in daily dressing activities. An intermediate stage of dressing is to manipulate the garment from a crumpled initial state to an unfolded configuration that facilitates robust dressing. Applying quasi-static grasping actions with vision feedback for garment unfolding usually suffers from occluded grasping points. In this work, we propose a dynamic manipulation strategy: tracing the garment edge until the hidden corner is revealed. We introduce a model-based approach in which a deep visual-tactile predictive model iteratively learns to perform servoing from raw sensor data. The predictive model is formalized as a Conditional Variational Autoencoder with contrastive optimization, which jointly learns underlying visual-tactile latent representations, a latent garment dynamics model, and future predictions of garment states. Two cost functions are explored: the visual cost, defined by garment corner positions, drives the gripper towards the corner, while the tactile cost, defined by garment edge poses, prevents the garment from slipping out of the gripper. The experimental results demonstrate the improvement of our contrastive visual-tactile model predictive control over single-modality sensing and baseline model-learning techniques. The proposed method enables a robot to unfold back-opening hospital gowns and perform upper-body dressing.
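
    As a rough illustration of how such a learned model can be used for control, here is a minimal random-shooting MPC loop over a latent dynamics model with the two costs combined. All components (`latent_dynamics`, the decoders, dimensions, and action bounds) are hypothetical placeholders, not the paper's CVAE; in the paper, the learned model would supply these pieces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the learned components: a linear latent
# dynamics model and two decoders used only to evaluate the costs.
A = 0.9 * np.eye(8)
B = 0.1 * rng.standard_normal((8, 2))

def latent_dynamics(z, a):
    return A @ z + B @ a                      # z_{t+1} = f(z_t, a_t)

def corner_from_latent(z):
    return z[:2]                              # decoded corner position (visual)

def edge_pose_from_latent(z):
    return z[2:5]                             # decoded edge pose (tactile)

def cost(z, corner_goal, edge_ref):
    visual = np.sum((corner_from_latent(z) - corner_goal) ** 2)
    tactile = np.sum((edge_pose_from_latent(z) - edge_ref) ** 2)
    return visual + tactile

def plan_action(z0, corner_goal, edge_ref, horizon=5, n_samples=256):
    """Random-shooting MPC: sample action sequences, roll out the latent
    dynamics, and return the first action of the cheapest sequence."""
    best_a, best_c = None, np.inf
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        z, c = z0, 0.0
        for a in actions:
            z = latent_dynamics(z, a)
            c += cost(z, corner_goal, edge_ref)
        if c < best_c:
            best_a, best_c = actions[0], c
    return best_a

print(plan_action(rng.standard_normal(8), np.zeros(2), np.zeros(3)))
```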

    Personalized robot assistant for support in dressing

    Robot-assisted dressing is performed in close physical interaction with users who may have a wide range of physical characteristics and abilities. The design of user-adaptive and personalized robots in this context still shows limited or no consideration of specific user-related issues. This paper describes the development of a multi-modal robotic system for a specific dressing scenario, putting on a shoe, in which users' personalized inputs contribute to a much improved task success rate. We have developed: 1) user tracking, gesture recognition and posture recognition algorithms relying on images provided by a depth camera; 2) a shoe recognition algorithm using RGB and depth images; 3) speech recognition and text-to-speech algorithms that allow verbal interaction between the robot and user. The interaction is further enhanced by calibrated recognition of the users' pointing gestures and an adjusted shoe delivery position for the robot. A series of shoe-fitting experiments was performed on two groups of users, with and without previous robot personalization, to assess how personalization affects interaction performance. Our results show that the shoe-fitting task with the personalized robot is completed in a shorter time, with fewer user commands and reduced workload.
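
    The paper does not detail how the pointing gesture maps to a delivery position; one plausible geometric reading, sketched below under assumed joint names and an assumed horizontal delivery plane, is to intersect the elbow-to-wrist pointing ray with the plane on which the shoe is presented.

```python
import numpy as np

def delivery_position(elbow, wrist, plane_height=0.75):
    """Project the user's pointing ray (elbow -> wrist) onto the horizontal
    plane z = plane_height to get an adjusted shoe delivery point."""
    direction = wrist - elbow
    if abs(direction[2]) < 1e-6:
        raise ValueError("pointing ray is parallel to the delivery plane")
    t = (plane_height - elbow[2]) / direction[2]
    if t <= 0:
        raise ValueError("user is pointing away from the delivery plane")
    return elbow + t * direction

# Elbow at roughly chest height, pointing down and forward.
print(delivery_position(np.array([0.0, 0.0, 1.1]), np.array([0.2, 0.1, 1.0])))
```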

    Personalized Robot-assisted Dressing using User Modeling in Latent Spaces

    Robots have the potential to provide tremendous support to disabled and elderly people in their everyday tasks, such as dressing. Many recent studies on robotic dressing assistance view dressing as a trajectory planning problem. However, the user's movements during the dressing process are rarely taken into account, which often leads to failure of the planned trajectory and may put the user at risk. The main difficulty in taking user movements into account is the severe occlusion created by the robot, the user, and the clothes during dressing, which prevents vision sensors from accurately detecting the user's posture in real time. In this paper, we address this problem by introducing an approach that allows the robot to automatically adapt its motion according to the force applied on the robot's gripper by user movements. The paper makes two main contributions: 1) a hierarchical multi-task control strategy that automatically adapts the robot motion and minimizes the force between the user and the robot caused by user movements; 2) an online update of the dressing trajectory based on the user's movement limitations, modeled with the Gaussian Process Latent Variable Model (GPLVM) in a latent space, and on the density information extracted from that latent space. Together, these contributions yield personalized dressing assistance that can cope with unpredicted user movements during dressing while constantly minimizing the force the robot may apply on the user. The experimental results demonstrate that the proposed method allows the Baxter humanoid robot to provide personalized dressing assistance for users with simulated upper-body impairments.
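
    Contribution 1 builds on classical task-priority control. The sketch below shows the standard two-level null-space projection scheme such a controller is based on, not the paper's exact controller: the secondary task (e.g., following the dressing trajectory) acts only in the null space of the primary task (e.g., regulating the force at the gripper).

```python
import numpy as np

def hierarchical_velocities(J1, dx1, J2, dx2):
    """Two-level task-priority resolution: satisfy the primary task velocity
    dx1 exactly, and the secondary task dx2 as well as the remaining
    redundancy allows."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector
    dq = J1_pinv @ dx1 + N1 @ np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ (J1_pinv @ dx1))
    return dq

rng = np.random.default_rng(2)
J1, J2 = rng.standard_normal((3, 7)), rng.standard_normal((3, 7))
dx1, dx2 = rng.standard_normal(3), rng.standard_normal(3)
dq = hierarchical_velocities(J1, dx1, J2, dx2)
print(np.allclose(J1 @ dq, dx1))   # primary task is met exactly
```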

    Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects

    Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that show no detectable compression strength when two points on the object are pushed towards each other; they include ropes (1D), fabrics (2D) and bags (3D). In general, CDOs' many degrees of freedom (DoF) introduce severe self-occlusion and complex state–action dynamics, which are significant obstacles for perception and manipulation systems. These challenges exacerbate existing issues of modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods on four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.

    Robotic perception and manipulation of garments

    This thesis introduces an effective robotic garment flattening pipeline and robotic perception paradigms for predicting garments' geometric (shape) and physics properties. Robotic garment manipulation is a popular but challenging task in robotics research: garments are high-dimensional, so the space of possible garment states is effectively infinite, and garments deform irregularly during manipulation, which makes their deformations difficult to predict. It is nevertheless an essential topic. Robotic laundry and household sorting play a vital role in an ageing society, automated manufacturing requires robots that can grasp different mechanical components, some of which are deformable, and robot-aided garment dressing is essential for people with disabilities. Designing and implementing effective robotic garment manipulation pipelines is therefore necessary but challenging. This thesis focuses on designing an effective robotic garment flattening pipeline and is divided into two main parts, robotic perception and robotic manipulation:
    • Robotic perception provides prior knowledge of garment attributes (geometric (shape) and physics properties) that facilitates robotic garment flattening. Continuous perception paradigms are introduced for predicting the shapes and visually perceived weights of garments.
    • A reality-simulation knowledge-transfer paradigm is proposed for predicting the physics properties of real garments and fabrics.
    • The robotic manipulation part learns the known configurations of garments using the geometric (shape) properties obtained from the perception part and selects pre-designed manipulation strategies to flatten them, demonstrating the importance of robotic perception in robotic manipulation.
    The experimental results of this thesis show that: 1) a robot gains confidence in its predictions (shapes and visually perceived weights of unseen garments) by continuously perceiving video frames of the garments being grasped, reaching high prediction accuracies (93% for shapes and 98.5% for visually perceived weights); 2) the physics properties of real garments and fabrics can be predicted by learning physics similarities with simulated fabrics, where the proposed approach outperforms the state of the art (SOTA) by 34% on real fabrics and 68.1% on real garments; 3) compared with state-of-the-art robotic garment flattening, this thesis enables fast and effective flattening of garments of five different shapes. The thesis therefore advances the SOTA in robotic perception and manipulation (flattening) of garments.
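
    The "gaining confidence from continuous perception" idea can be illustrated with a simple fusion rule. The thesis does not specify the rule here; the sketch below assumes per-frame class probabilities from some frame-level classifier and fuses them with a geometric mean, so consistent frames sharpen the fused prediction. The shape names are invented for illustration.

```python
import numpy as np

def fuse_frames(frame_probs):
    """Geometric-mean fusion of per-frame class probabilities: consistent
    frames reinforce each other, so confidence accumulates over the video."""
    log_p = np.log(np.asarray(frame_probs) + 1e-12)
    fused = np.exp(log_p.mean(axis=0))
    fused /= fused.sum()
    return fused.argmax(), fused.max()

# Three frames of per-shape probabilities from a hypothetical frame-level
# classifier over five garment shapes (invented labels: shirt, towel,
# pants, sweater, jeans).
frames = [[0.50, 0.20, 0.10, 0.10, 0.10],
          [0.60, 0.20, 0.10, 0.05, 0.05],
          [0.70, 0.10, 0.10, 0.05, 0.05]]
print(fuse_frames(frames))   # -> (0, ~0.61): class 0, sharpened over frames
```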

    Personalized Assistance for Dressing Users

    In this paper, we present an approach for a robot to provide personalized assistance with dressing a user. In particular, given a dressing task, our approach finds a solution involving both manipulator motions and user repositioning requests. Specifically, the solution allows the robot and user to take turns moving in the same space and is cognizant of the user's limitations. To accomplish this, a vision module monitors the user's motion, determines whether they are following the repositioning requests, and infers mobility limitations when they cannot. The learned constraints are used during future dressing episodes to personalize the repositioning requests. Our contributions include a turn-taking approach to human-robot coordination for the dressing problem and a vision module capable of learning user limitations. After presenting the technical details of our approach, we provide an evaluation with a Baxter manipulator.
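
    A toy sketch of how the learned limitations might be represented and reused is given below; the joint names, the scalar displacement representation, and the compliance threshold are all assumptions for illustration, not the paper's model.

```python
class UserLimitModel:
    """Record the best pose a user actually achieved when a repositioning
    request failed, and clamp future requests to that learned limit."""

    def __init__(self, tol=0.05):
        self.limits = {}   # joint -> largest displacement observed feasible
        self.tol = tol

    def observe(self, joint, requested, achieved):
        if abs(requested - achieved) > self.tol:   # user could not comply
            self.limits[joint] = min(self.limits.get(joint, float("inf")),
                                     abs(achieved))

    def clamp_request(self, joint, requested):
        limit = self.limits.get(joint)
        if limit is None:
            return requested
        return max(-limit, min(limit, requested))

model = UserLimitModel()
model.observe("shoulder_flexion", requested=1.2, achieved=0.8)
print(model.clamp_request("shoulder_flexion", 1.2))   # -> 0.8
```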

    Perception and manipulation for robot-assisted dressing

    Assistive robots have the potential to provide tremendous support to disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during dressing for (simulated) impaired users with limited upper-body movement, and a pipeline for dressing (simulated) paralyzed users who have lost the ability to move their limbs.
    First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points. To reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data.
    Second, unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such movements with vision sensors is challenging due to the severe visual occlusions created by the robot and the clothes. A probabilistic real-time tracking method is proposed using Bayesian networks in latent spaces, fusing multi-modal sensor information. The latent spaces are created before dressing by modeling the user's movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables the Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments.
    Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented. The robot grasps a hospital gown naturally hung on a rail and moves around the bed to complete the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, the thesis proposes assigning more realistic physical property values to the simulated garment. This is achieved by measuring physical similarity in a latent space using a contrastive loss, which maps physically similar examples to nearby points.
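
    The physical-similarity idea in the last paragraph corresponds to the standard pairwise contrastive loss; a minimal sketch, with made-up embeddings, follows.

```python
import numpy as np

def contrastive_loss(z_a, z_b, same_physics, margin=1.0):
    """Pairwise contrastive loss: embeddings of fabrics with similar
    physical properties are pulled together; dissimilar ones are pushed
    apart until they exceed the margin."""
    d = np.linalg.norm(z_a - z_b)
    if same_physics:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# A simulated fabric embedded near a real fabric with matching physics.
z_sim, z_real = np.array([0.20, 0.10]), np.array([0.25, 0.12])
print(contrastive_loss(z_sim, z_real, same_physics=True))    # small pull term
print(contrastive_loss(z_sim, np.array([0.8, 0.5]), same_physics=False))
```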

    User posture recognition for robot-assisted shoe dressing task

    Degree in Industrial Technology Engineering. Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC. Universitat Politècnica de Catalunya; Escola Tècnica Superior d'Enginyeria Industrial de Barcelona (ETSEIB). Assistive robotics is a fast-developing field in which a great deal of research effort is invested in healthcare applications. So far, the number of commercially available robots is low, and one of the reasons is robots' limited ability to interact with users in a safe, natural, human-like manner. This work focuses on the development of a robot dressing assistant, more specifically its ability to track the user and recognize his/her intention to be dressed. The work is performed within the framework of the I-DRESS project, which aims to develop a robot able to provide proactive assistance with dressing to users with reduced mobility. The proposed system consists of a Barrett WAM robot manipulator and a Microsoft Xbox One Kinect V2.0 camera sensor (popularly known as Kinect 2, and referred to as such in the rest of this document), which provides user tracking from depth images. The integration of hardware and algorithms was performed in the Robot Operating System (ROS). All developments and experiments were carried out in the laboratory of the Perception and Manipulation Group at the Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC.
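
    As a flavor of how such a tracker plugs into ROS, here is a minimal rospy sketch that reads skeleton frames published as tf transforms (as Kinect skeleton trackers commonly do) and classifies a shoe-dressing posture. The tf frame names and the height threshold are assumptions for illustration, not the thesis's implementation.

```python
#!/usr/bin/env python
# Minimal ROS (rospy) posture-recognition loop: look up the user's foot
# pose relative to the torso via tf and decide whether the foot is raised
# for shoe dressing. Frame names and threshold are illustrative assumptions.
import rospy
import tf

def classify_posture(listener):
    (foot, _) = listener.lookupTransform('/torso', '/left_foot', rospy.Time(0))
    # foot[2] is the foot height relative to the torso frame (assumed z-up).
    return 'foot_raised' if foot[2] > -0.5 else 'foot_down'

if __name__ == '__main__':
    rospy.init_node('posture_recognizer')
    listener = tf.TransformListener()
    rate = rospy.Rate(10)                      # 10 Hz classification loop
    while not rospy.is_shutdown():
        try:
            rospy.loginfo(classify_posture(listener))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass                               # skeleton not visible yet
        rate.sleep()
```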