331 research outputs found

    Bayesian Nonparametric Learning of Cloth Models for Real-time State Estimation

    Get PDF
    Robotic solutions for clothing assistance can significantly improve quality of life for the elderly and disabled. Real-time estimation of the human-cloth relationship is crucial for efficient learning of motor skills for robotic clothing assistance. The major challenge is cloth-state estimation, owing to the cloth's inherent nonrigidity and the occlusions it creates. In this study, we present a novel framework for real-time estimation of the cloth state using a low-cost depth sensor, making the system feasible for real-world deployment. The framework relies on the hypothesis that clothing articles are constrained to a low-dimensional latent manifold during clothing tasks. We propose the use of manifold relevance determination (MRD) to learn an offline cloth model that can be used to perform informed cloth-state estimation in real time. The cloth model is trained using observations from a motion capture system and a depth sensor. MRD provides a principled probabilistic framework for inferring the accurate motion-capture state when only the noisy depth-sensor feature state is available at run time. The experimental results demonstrate that our framework is capable of learning consistent task-specific latent features from few data samples and generalizes to unseen environmental settings. We further present several factors that affect the predictive performance of the learned cloth-state model.
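
    As a rough illustration of the two-view idea behind MRD, the sketch below approximates the shared latent space with PCA on the clean motion-capture view and stands in Gaussian-process regressors for the full Bayesian model; the array names, dimensions, and random stand-in data are assumptions for illustration only, not the paper's implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        N, D_mocap, D_depth, Q = 200, 30, 64, 3    # samples, view dims, latent dim

        Y_mocap = rng.normal(size=(N, D_mocap))    # stand-in for mocap states
        Y_depth = rng.normal(size=(N, D_depth))    # stand-in for depth features

        # Offline: shared latent coordinates from the clean mocap view, then
        # maps depth -> latent and latent -> mocap.
        pca = PCA(n_components=Q).fit(Y_mocap)
        X = pca.transform(Y_mocap)

        kern = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
        depth_to_latent = GaussianProcessRegressor(kernel=kern).fit(Y_depth, X)
        latent_to_mocap = GaussianProcessRegressor(kernel=kern).fit(X, Y_mocap)

        # Online: only the noisy depth features are observed.
        y_depth_new = rng.normal(size=(1, D_depth))
        x_hat = depth_to_latent.predict(y_depth_new)
        y_mocap_hat = latent_to_mocap.predict(x_hat)   # inferred cloth state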

    Data-efficient Learning of Robotic Clothing Assistance using Bayesian Gaussian Process Latent Variable Models

    Get PDF
    Motor-skill learning for complex robotic tasks is a challenging problem due to high task variability. Robotic clothing assistance is one such challenging problem, and solving it can greatly improve the quality of life of the elderly and disabled. In this study, we propose a data-efficient representation that encodes task-specific motor skills of the robot using Bayesian nonparametric latent variable models. The effectiveness of the proposed motor-skill representation is demonstrated in two ways: (1) through a real-time controller that can be used as a tool for learning from demonstration to impart novel skills to the robot, and (2) by demonstrating that policy-search reinforcement learning in such a task-specific latent space outperforms learning in the high-dimensional joint configuration space of the robot. We implement our proposed framework in a practical setting with a dual-arm robot performing clothing assistance tasks.
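
    A minimal sketch of encoding demonstrated joint trajectories with a Bayesian GP-LVM, here using the GPy library as one common implementation; the data shapes, the 2-D latent dimensionality, and the random stand-in trajectories are illustrative assumptions, not the paper's setup.

        import numpy as np
        import GPy

        rng = np.random.default_rng(0)
        Y = rng.normal(size=(100, 14))    # stand-in: 100 poses of a 14-DoF dual-arm robot

        m = GPy.models.BayesianGPLVM(Y, input_dim=2, num_inducing=20)
        m.optimize(messages=False, max_iters=500)

        # A policy can now act in the 2-D latent space instead of the 14-D
        # joint space: pick a latent point and decode it to a robot posture.
        x_star = np.zeros((1, 2))          # e.g. proposed by policy search
        y_mean, y_var = m.predict(x_star)  # predicted joint configuration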

    Design and Development of a Dual-Arm Robot Clothing Assistance System Using Imitation Learning

    Get PDF
    The recent demographic trend across developed nations shows a dramatic increase in the aging population and falling fertility rates. With the aging population, the number of elderly who need support for their Activities of Daily Living (ADL), such as dressing, is growing. Caregivers perform the dressing task almost universally, since no effective assistive technology is available, yet many nations across the globe suffer from a severe shortage of caregivers. Hence, the demand for service robots to assist with the dressing task is increasing rapidly. Robotic Clothing Assistance is a challenging task: the robot has to deal with two complex problems simultaneously, (a) manipulation of non-rigid and highly flexible cloth, and (b) safe human-robot interaction while assisting a human whose posture may vary during the task. Humans, on the other hand, deal with these tasks rather easily. In this thesis, a framework for Robotic Clothing Assistance by imitation learning from a human demonstration to a compliant dual-arm robot is proposed. In this framework, the dressing task is divided into three phases: (a) the reaching phase, (b) the arm dressing phase, and (c) the body dressing phase. The arm dressing phase is treated as a global trajectory modification and implemented with Dynamic Movement Primitives (DMP), while the body dressing phase is treated as a local trajectory modification and executed with the Bayesian Gaussian Process Latent Variable Model (BGPLVM). It is demonstrated that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless T-shirt dressing task. Furthermore, the thesis discusses limitations of the framework and three improvements: (a) evaluation of Robotic Clothing Assistance, (b) automated wheelchair movement, and (c) incremental learning. Evaluation is necessary: to make the framework accessible in care facilities, systematic assessment of its performance, and of the devices' effects on care receivers and caregivers, is required; therefore, a robotic simulator that mimics human postures is used as a subject to evaluate the dressing task. The original framework involves manually coordinated movement of a wheeled chair, which is difficult for elderly users who must push the chair themselves; to this end, an approach for wheelchair-robot collaboration using an electric wheelchair is presented. Finally, to accommodate different human body dimensions, Robotic Clothing Assistance is formulated as an incremental imitation learning problem, which enables the behavior to be learned and adjusted incrementally whenever a new demonstration is performed; when the planned trajectory is found inappropriate, it is modified through physical Human-Robot Interaction (HRI) during execution. This research has been exhibited to the public at various events, including the International Robot Exhibition (iREX) 2017 in Tokyo (Japan), the West Japan General Exhibition Center Annex 2018 in Kokura (Japan), and iREX 2019 in Tokyo (Japan). Doctoral dissertation, Kyushu Institute of Technology; degree number 生工博甲第384号; conferred September 25, 2020.
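
    A minimal one-dimensional discrete DMP sketch in NumPy, illustrating the global trajectory modification used for the arm dressing phase: a demonstrated trajectory is reproduced through a forcing term and globally reshaped by changing the goal. The gains, the canonical-system decay, and the direct phase-interpolation of the forcing term (in place of the usual radial-basis-function fit) are textbook simplifications, not the thesis' exact parameters.

        import numpy as np

        def fit_and_rollout_dmp(y_demo, dt, g_new, alpha=25.0, beta=6.25, tau=1.0):
            """Fit a 1-D discrete DMP to y_demo and roll it out to a new goal."""
            T = len(y_demo)
            yd = np.gradient(y_demo, dt)
            ydd = np.gradient(yd, dt)
            g = y_demo[-1]
            # Forcing term that makes the transformation system reproduce the
            # demo: tau^2 * ydd = alpha * (beta * (g - y) - tau * yd) + f
            f_target = tau**2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)
            s = np.exp(-alpha / 3.0 * np.linspace(0.0, 1.0, T))  # canonical phase
            y, v = y_demo[0], 0.0
            out = []
            for t in range(T):
                f = np.interp(s[t], s[::-1], f_target[::-1])  # look up f by phase
                v += ((alpha * (beta * (g_new - y) - v) + f) / tau) * dt
                y += (v / tau) * dt
                out.append(y)
            return np.array(out)

        demo = np.sin(np.linspace(0.0, np.pi, 100))   # stand-in demonstration
        modified = fit_and_rollout_dmp(demo, dt=0.01, g_new=0.5)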

    A framework for robotic clothing assistance by imitation learning

    Get PDF
    The recent demographic trend across developed nations shows a dramatic increase in the aging population, falling fertility rates, and a shortage of caregivers. Hence, the demand for service robots to assist with dressing, an essential Activity of Daily Living (ADL), is increasing rapidly. Robotic Clothing Assistance is a challenging task, since the robot has to deal with two demanding problems simultaneously: (a) manipulation of non-rigid and highly flexible cloth, and (b) safe human-robot interaction while assisting humans whose posture may vary during the task. Humans, on the other hand, deal with these tasks rather easily. In this paper, we propose a framework for Robotic Clothing Assistance by imitation learning from a human demonstration to a compliant dual-arm robot. In this framework, we divide the dressing task into three phases, i.e. the reaching phase, the arm dressing phase, and the body dressing phase. We model the arm dressing phase as a global trajectory modification using Dynamic Movement Primitives (DMP), and the body dressing phase as a local trajectory modification using the Bayesian Gaussian Process Latent Variable Model (BGPLVM). We show that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless shirt dressing task. We also present participants' feedback from a public demonstration at the International Robot Exhibition (iREX) 2017. To our knowledge, this is the first work to perform a full dressing of a sleeveless shirt on a human subject with a humanoid robot.

    Controlled Gaussian Process Dynamical Models with Application to Robotic Cloth Manipulation

    Full text link
    In recent years, robotic cloth manipulation has gained relevance within the research community. While significant advances have been made in robotic manipulation of rigid objects, the manipulation of non-rigid objects such as cloth garments remains a challenging problem. Uncertainty about how cloth behaves often requires the use of model-based approaches, yet cloth models have very high dimensionality; it is therefore difficult to find a middle ground between providing a manipulator with a dynamics model of cloth and working with a state space of tractable dimensionality. For this reason, most cloth manipulation approaches in the literature perform static or quasi-static manipulation. In this paper, we propose a variation of Gaussian Process Dynamical Models (GPDMs) to model cloth dynamics in a low-dimensional manifold. GPDMs project a high-dimensional state space into a smaller latent space that preserves the dynamic properties. We extend the original formulation with control variables, so that the robot commands exerted on the cloth can be taken into account, and we call this new version the Controlled Gaussian Process Dynamical Model (C-GPDM). Moreover, we propose an alternative kernel representation for the model, characterized by a richer parameterization than the one employed in the majority of previous GPDM realizations. The modeling capacity of our proposal has been tested in a simulated scenario, where the C-GPDM proved capable of generalizing over a considerably wide range of movements and correctly predicting the cloth oscillations generated by previously unseen sequences of control actions.
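
    As a simplified stand-in for the controlled latent-dynamics idea (the paper's C-GPDM learns the latent space and the dynamics jointly, which this sketch does not), one can project the cloth state with PCA and fit a GP to the one-step transition conditioned on the control input; all names, dimensions, and random stand-in data below are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        T, D, Q, U = 300, 90, 4, 6       # steps, mesh dim, latent dim, control dim
        Y = rng.normal(size=(T, D))      # stand-in cloth mesh sequence
        Uc = rng.normal(size=(T, U))     # robot commands at each step

        # Project the cloth state to a low-dimensional latent space.
        X = PCA(n_components=Q).fit_transform(Y)

        # One-step latent dynamics x_{t+1} = f(x_t, u_t), learned by a GP.
        inputs = np.hstack([X[:-1], Uc[:-1]])
        targets = X[1:]
        dyn = GaussianProcessRegressor(
            kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
        ).fit(inputs, targets)

        # Roll the latent dynamics forward under a new control sequence.
        x = X[-1]
        for u in rng.normal(size=(20, U)):
            x = dyn.predict(np.hstack([x, u])[None, :])[0]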

    Perception and manipulation for robot-assisted dressing

    Get PDF
    Assistive robots have the potential to provide tremendous support for disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during robot-assisted dressing for (simulated) impaired users with limited upper-body movement capability, and finally a pipeline for robot-assisted dressing for (simulated) paralyzed users who have lost the ability to move their limbs. First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points; to reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data. Second, unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such movements with vision sensors is challenging due to the severe visual occlusions created by the robot and the clothes. A probabilistic real-time tracking method is proposed that uses Bayesian networks in latent spaces and fuses multi-modal sensor information. The latent spaces are created before dressing by modeling the user's movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables the Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments. Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented: the robot grasps a hospital gown naturally hung on a rail and moves around the bed to finish the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, the thesis proposes updating the simulated garment with more realistic physical-property values, achieved by measuring physical similarity in a latent space using a contrastive loss that maps physically similar examples to nearby points.
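
    A minimal sketch of the margin-based contrastive loss mentioned for measuring physical similarity in the latent space: physically similar garment pairs are pulled together and dissimilar pairs pushed apart up to a margin. The embeddings, batch size, and margin value here are illustrative assumptions, not the thesis' architecture.

        import torch
        import torch.nn.functional as F

        def contrastive_loss(z1, z2, similar, margin=1.0):
            """z1, z2: (B, d) latent embeddings of a pair of garment examples;
            similar: (B,) float tensor, 1 if the pair shares physical
            properties and 0 otherwise."""
            d = F.pairwise_distance(z1, z2)
            pos = similar * d.pow(2)                           # pull similar pairs together
            neg = (1.0 - similar) * F.relu(margin - d).pow(2)  # push dissimilar apart
            return (pos + neg).mean()

        z1, z2 = torch.randn(8, 16), torch.randn(8, 16)   # stand-in embeddings
        labels = torch.randint(0, 2, (8,)).float()
        loss = contrastive_loss(z1, z2, labels)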

    Data-driven robotic manipulation of cloth-like deformable objects : the present, challenges and future prospects

    Get PDF
    Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that show no detectable compression strength when two points on the article are pushed towards each other; they include ropes (1D), fabrics (2D), and bags (3D). In general, the many degrees of freedom (DoF) of CDOs introduce severe self-occlusion and complex state–action dynamics, which are significant obstacles for perception and manipulation systems. These challenges exacerbate existing issues with modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods for four major task families in this domain: cloth shaping, knot tying/untying, dressing, and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.

    A topological extension of movement primitives for curvature modulation and sampling of robot motion

    Get PDF
    The version of record is available online at: https://doi.org/10.1007/s10514-021-09976-7. This paper proposes to enrich robot motion data with trajectory curvature information. To do so, we use an approximate implementation of a topological feature named writhe, which measures the curling of a closed curve around itself, and its analog for two closed curves, the linking number. Although these features are defined for closed curves, they admit a discrete calculation that is well defined for non-closed curves and can thus quantify how much a robot trajectory curls around a line in space. Such lines can be predefined by a user, observed by vision or, in our case, inferred as virtual lines in space around which the robot motion is curling. We use these topological features to augment the data of a trajectory encapsulated as a Movement Primitive (MP). We propose a method to determine how many virtual segments best characterize a trajectory and then to find those segments. The result is a generative model that permits modulating curvature to generate new samples while staying within the dataset distribution and adapting to contextual variables. This work has been carried out within the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations") funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Advanced Grant agreement No 741930). Research at IRI is also supported by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656).
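
    A sketch of one way to compute an approximate writhe for an open polyline, by directly discretizing the Gauss integral over segment pairs; the paper's implementation may treat segments differently, and this midpoint-tangent sum is only a first-order approximation.

        import numpy as np

        def discrete_writhe(points):
            """points: (N, 3) polyline vertices of a (possibly open) curve."""
            mids = 0.5 * (points[1:] + points[:-1])   # segment midpoints
            tans = points[1:] - points[:-1]           # segment tangent vectors
            n = len(mids)
            wr = 0.0
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    r = mids[i] - mids[j]
                    # Discretized Gauss integrand: (t_i x t_j) . r / |r|^3
                    wr += np.dot(np.cross(tans[i], tans[j]), r) / np.linalg.norm(r)**3
            return wr / (4.0 * np.pi)

        # Example: a helical trajectory curls around its axis, so its writhe
        # grows with the number of turns.
        t = np.linspace(0.0, 6.0 * np.pi, 200)
        helix = np.stack([np.cos(t), np.sin(t), 0.05 * t], axis=1)
        print(discrete_writhe(helix))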

    Embedded Object Detection and Mapping in Soft Materials Using Optical Tactile Sensing

    Full text link
    In this paper, we present a methodology that uses an optical tactile sensor for efficient tactile exploration of objects embedded within soft materials. The methodology consists of an exploration phase, in which a probabilistic estimate of the location of the embedded objects is built using a Bayesian approach, followed by a mapping phase, which exploits the probabilistic map to reconstruct the underlying topography of the workspace by sampling in more detail the regions where embedded objects are expected. To demonstrate the effectiveness of the method, we tested our approach on an experimental setup consisting of a series of quartz beads located underneath a polyethylene foam, which prevents direct observation of the configuration and requires tactile exploration to recover the locations of the beads. We evaluate the methodology on ten different bead configurations, in each of which the proposed approach approximates the underlying configuration, and we benchmark our results against a random sampling policy.
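
    A minimal sketch of the exploration phase as a Bayesian update over an occupancy grid: each probe returns a noisy binary stiff/soft reading, the most uncertain cell is probed next, and the mapping phase then samples in detail only where the posterior is high. The sensor noise rates, grid size, and greedy probe policy are illustrative assumptions, not the paper's calibrated model.

        import numpy as np

        rng = np.random.default_rng(0)
        grid = np.full((20, 20), 0.5)        # prior P(bead under cell)
        truth = rng.random((20, 20)) < 0.1   # hidden bead layout (unknown to robot)
        p_hit, p_false = 0.9, 0.1            # P(reading=1 | bead), P(reading=1 | none)

        for _ in range(200):
            # Exploration policy: probe the most uncertain cell.
            i, j = np.unravel_index(np.argmin(np.abs(grid - 0.5)), grid.shape)
            reading = rng.random() < (p_hit if truth[i, j] else p_false)
            # Bernoulli-likelihood Bayes update of the cell's posterior.
            like1 = p_hit if reading else (1.0 - p_hit)
            like0 = p_false if reading else (1.0 - p_false)
            grid[i, j] = (like1 * grid[i, j]
                          / (like1 * grid[i, j] + like0 * (1.0 - grid[i, j])))

        # Mapping phase: sample in detail only where beads are probable.
        detailed = np.argwhere(grid > 0.8)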