
    Robotic cloth manipulation for clothing assistance task using Dynamic Movement Primitives

    The need for robotic clothing assistance in the field of assistive robotics is growing, as dressing is one of the most basic and essential activities of daily living for elderly and disabled people. In this study we investigate the applicability of Dynamic Movement Primitives (DMP) as a task-parameterization model for performing a clothing assistance task. The robotic cloth manipulation task consists of putting a clothing article onto both arms. The robot trajectory varies significantly across postures, and cooperative manipulation of a non-rigid, highly deformable clothing article admits various failure scenarios. We performed experiments on a soft mannequin instead of a human. Results show that DMPs are able to generalize the movement trajectory to a modified posture. 3rd International Conference of the Robotics Society of India (AIR 2017: Advances in Robotics), June 28 - July 2, 2017, New Delhi, India
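
The abstract's core idea, learning a demonstrated trajectory with a DMP and regenerating it for a changed goal (e.g. a modified posture), can be sketched for one degree of freedom as follows. This is a generic textbook-style DMP, not the authors' code; the gains, basis count, and the min-jerk demonstration used below are illustrative assumptions.

```python
import numpy as np

class DMP1D:
    """Minimal discrete Dynamic Movement Primitive for one degree of freedom."""
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=4.6):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis centres spaced along the exponentially decaying phase x in (0, 1].
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
        self.h = n_basis**1.5 / self.c            # common width heuristic
        self.w = np.zeros(n_basis)

    def _forcing(self, x, scale):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * scale

    def fit(self, y, dt):
        """Learn basis weights from one demonstration y sampled at step dt."""
        self.y0, self.g = y[0], y[-1]
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y)) * dt)
        # Invert the transformation system to obtain the target forcing term.
        f_target = ydd - self.alpha_z * (self.beta_z * (self.g - y) - yd)
        xi = x * (self.g - self.y0)
        for i in range(len(self.w)):              # locally weighted regression
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (psi * xi * f_target).sum() / ((psi * xi**2).sum() + 1e-10)

    def rollout(self, y0, g, steps, dt):
        """Regenerate the motion for a (possibly new) start y0 and goal g."""
        x, y, z, traj = 1.0, y0, 0.0, []
        for _ in range(steps):
            f = self._forcing(x, g - y0)
            z += (self.alpha_z * (self.beta_z * (g - y) - z) + f) * dt
            y += z * dt
            x += -self.alpha_x * x * dt
            traj.append(y)
        return np.array(traj)
```

Fitting on a 0-to-1 demonstration and rolling out with goal g = 2 roughly doubles the motion while preserving its shape, which is the property that lets a single demonstration generalize to a modified posture.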

    Assistive robotics: research challenges and ethics education initiatives

    Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously carry out their activities of daily living. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need friendly interfaces, and must be highly adaptable and customizable, very compliant and intrinsically safe for people, as well as able to handle deformable materials. Beyond technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations. In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented.

    Data-driven robotic manipulation of cloth-like deformable objects: the present, challenges and future prospects

    Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that do not show a detectable level of compression strength when two points on the article are pushed towards each other; they include objects such as ropes (1D), fabrics (2D) and bags (3D). In general, CDOs' many degrees of freedom (DoF) introduce severe self-occlusion and complex state-action dynamics as significant obstacles to perception and manipulation systems. These challenges exacerbate existing issues of modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods for four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.

    Design and development of a dual-arm robot clothing-assistance system using imitation learning

    The recent demographic trend across developed nations shows a dramatic increase in the aging population and falling fertility rates. With the aging population, the number of elderly people who need support for their Activities of Daily Living (ADL), such as dressing, is growing. Caregivers are universally relied upon for the dressing task due to the unavailability of any effective assistive technology. Unfortunately, many nations across the globe are suffering from a severe shortage of caregivers. Hence, the demand for service robots to assist with the dressing task is increasing rapidly. Robotic Clothing Assistance is a challenging task: the robot has to deal with two complex tasks simultaneously, (a) non-rigid and highly flexible cloth manipulation, and (b) safe human-robot interaction while assisting a human whose posture may vary during the task. Humans, on the other hand, can deal with these tasks rather easily. In this thesis, a framework for Robotic Clothing Assistance by imitation learning from a human demonstration to a compliant dual-arm robot is proposed. In this framework, the dressing task is divided into three phases: (a) a reaching phase, (b) an arm dressing phase, and (c) a body dressing phase. The arm dressing phase is treated as a global trajectory modification and implemented using Dynamic Movement Primitives (DMP), whereas the body dressing phase is treated as a local trajectory modification and executed using the Bayesian Gaussian Process Latent Variable Model (BGPLVM). It is demonstrated that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless T-shirt dressing task. Furthermore, various limitations of the framework and improvements to it are discussed.
    These improvements include the following: (a) evaluation of Robotic Clothing Assistance, (b) automated wheelchair movement, and (c) incremental learning to perform Robotic Clothing Assistance. Evaluation is necessary for our framework: to make it usable in care facilities, systematic assessment of its performance and of the devices' effects on care receivers and caregivers is required. Therefore, a robotic simulator that mimics human postures is used as a subject to evaluate the dressing task. The proposed framework involves manually coordinated movement of a wheeled chair, which is difficult for elderly users as it requires them to push the chair themselves. To this end, an approach for wheelchair-robot collaboration using an electric wheelchair is presented. Finally, to accommodate different human body dimensions, Robotic Clothing Assistance is formulated as an incremental imitation learning problem. The proposed formulation enables the behavior to be learned and adjusted incrementally whenever a new demonstration is performed. When the planned trajectory is found inappropriate, it is modified through physical Human-Robot Interaction (HRI) during execution. This research work has been exhibited to the public at various events, such as the International Robot Exhibition (iREX) 2017 in Tokyo (Japan), the West Japan General Exhibition Center Annex 2018 in Kokura (Japan), and iREX 2019 in Tokyo (Japan). Doctoral dissertation, Kyushu Institute of Technology (degree no. 生工博甲第384号, awarded September 25, 2020). Contents: 1 Introduction; 2 Related Work; 3 Imitation Learning; 4 Experimental System; 5 Proposed Framework; 6 Whole-Body Robotic Simulator; 7 Electric Wheelchair-Robot Collaboration; 8 Incremental Imitation Learning; 9 Conclusion.

    A framework for robotic clothing assistance by imitation learning

    The recent demographic trend across developed nations shows a dramatic increase in the aging population, falling fertility rates and a shortage of caregivers. Hence, the demand for service robots to assist with dressing, an essential Activity of Daily Living (ADL), is increasing rapidly. Robotic Clothing Assistance is a challenging task since the robot has to deal with two demanding tasks simultaneously, (a) non-rigid and highly flexible cloth manipulation and (b) safe human-robot interaction while assisting humans whose posture may vary during the task. Humans, on the other hand, can deal with these tasks rather easily. In this paper, we propose a framework for robotic clothing assistance by imitation learning from a human demonstration to a compliant dual-arm robot. In this framework, we divide the dressing task into three phases: a reaching phase, an arm dressing phase, and a body dressing phase. We model the arm dressing phase as a global trajectory modification using Dynamic Movement Primitives (DMP), while we model the body dressing phase as a local trajectory modification using the Bayesian Gaussian Process Latent Variable Model (BGPLVM). We show that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless shirt dressing task. We also present participants' feedback from a public demonstration at the International Robot Exhibition (iREX) 2017. To our knowledge, this is the first work performing a full dressing of a sleeveless shirt on a human subject with a humanoid robot.

    Neural Dynamic Movement Primitives -- a survey

    One of the most important challenges in robotics is producing accurate trajectories and controlling their dynamic parameters so that robots can perform different tasks. The ability to provide such motion control is closely related to how the movements are encoded. Advances in deep learning have strongly influenced the development of novel approaches for Dynamic Movement Primitives. In this work, we survey the scientific literature on Neural Dynamic Movement Primitives, to complement existing surveys on Dynamic Movement Primitives.

    Learning garment manipulation policies toward robot-assisted dressing.

    Assistive robots have the potential to support people with disabilities in a variety of activities of daily living, such as dressing. People who have completely lost their upper limb movement functionality may benefit from robot-assisted dressing, which involves complex deformable garment manipulation. Here, we report a dressing pipeline intended for these people and experimentally validate it on a medical training manikin. The pipeline is composed of the robot grasping a hospital gown hung on a rail, fully unfolding the gown, navigating around a bed, and lifting up the user's arms in sequence to finally dress the user. To automate this pipeline, we address two fundamental challenges: first, learning manipulation policies to bring the garment from an uncertain state into a configuration that facilitates robust dressing; second, transferring the deformable object manipulation policies learned in simulation to the real world, to leverage cost-effective data generation. We tackle the first challenge by proposing an active pre-grasp manipulation approach that learns to isolate the garment grasping area before grasping. The approach combines prehensile and nonprehensile actions and thus alleviates grasping-only behavioral uncertainties. For the second challenge, we bridge the sim-to-real gap of deformable object policy transfer by approximating real-world garment physics in the simulator. A contrastive neural network is introduced to compare pairs of real and simulated garment observations, measure their physical similarity, and account for inaccuracies in simulator parameters. The proposed method enables a dual-arm robot to put back-opening hospital gowns onto a medical manikin with a success rate of more than 90%.
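
The sim-to-real comparison step can be illustrated with a generic pairwise contrastive loss. This is a standard siamese-style formulation, not the paper's actual network or loss; the embeddings and margin below are placeholder assumptions.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, similar, margin=1.0):
    """Pairwise contrastive loss over two batches of embeddings.

    similar[i] = 1 pulls pair i together (e.g. simulated and real cloth
    observations with matching physics); similar[i] = 0 pushes them at
    least `margin` apart.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)             # Euclidean distances
    pos = similar * d**2                                  # attract similar pairs
    neg = (1 - similar) * np.maximum(0.0, margin - d)**2  # repel dissimilar ones
    return float(np.mean(pos + neg))
```

Under this formulation the learned embedding distance serves as the physical-similarity score that can then guide the tuning of simulator parameters.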

    Robotic Fabric Flattening with Wrinkle Direction Detection

    Deformable Object Manipulation (DOM) is an important field of research as it contributes to practical tasks such as automatic cloth handling, cable routing, and surgical operation. Perception is considered one of the major challenges in DOM due to the complex dynamics and many degrees of freedom of deformable objects. In this paper, we develop a novel image-processing algorithm based on Gabor filters to extract useful features from cloth, and based on this, devise a strategy for cloth flattening tasks. We evaluate the overall framework experimentally and compare it with three human operators. The results show that our algorithm can accurately determine the direction of wrinkles on the cloth in simulation as well as in real robot experiments. Moreover, a robot executing the flattening task with the dewrinkling strategy given by our algorithm achieves satisfactory performance compared to other baseline methods. The experiment video is available at https://sites.google.com/view/robotic-fabric-flattening/hom
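
The wrinkle-direction idea can be sketched with a small Gabor filter bank: correlate the image with a quadrature pair of Gabor filters at each candidate orientation and keep the orientation with maximal response energy. This is a generic illustration on a synthetic stripe pattern, not the authors' pipeline; the filter frequency, sigma, and angular step are assumed values.

```python
import numpy as np

def gabor_energy(img, freq, sigma, phi):
    """Quadrature Gabor response energy of `img` at orientation `phi` (radians)."""
    h, w = img.shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    u = x * np.cos(phi) + y * np.sin(phi)        # coordinate along the wave direction
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    even = (img * g * np.cos(2 * np.pi * freq * u)).sum()
    odd = (img * g * np.sin(2 * np.pi * freq * u)).sum()
    return even**2 + odd**2                      # phase-invariant energy

def dominant_orientation(img, freq=0.1, sigma=8.0):
    """Orientation (degrees) of the maximal-response filter, in 10-degree steps."""
    angles = np.arange(0, 180, 10)
    energies = [gabor_energy(img, freq, sigma, np.deg2rad(a)) for a in angles]
    return angles[int(np.argmax(energies))]
```

For a synthetic wrinkle pattern (a plane-wave cosine), the estimate recovers the wave direction; the wrinkle ridges themselves run perpendicular to it, which is what a dewrinkling stroke would follow.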

    Controlled Gaussian Process Dynamical Models with Application to Robotic Cloth Manipulation

    Over recent years, robotic cloth manipulation has gained relevance within the research community. While significant advances have been made in the robotic manipulation of rigid objects, manipulating non-rigid objects such as cloth garments remains a challenging problem. The uncertainty in how cloth behaves often requires the use of model-based approaches. However, cloth models have very high dimensionality, so it is difficult to find a middle ground between providing a manipulator with a dynamics model of cloth and working with a state space of tractable dimensionality. For this reason, most cloth manipulation approaches in the literature perform static or quasi-static manipulation. In this paper, we propose a variation of Gaussian Process Dynamical Models (GPDMs) to model cloth dynamics in a low-dimensional manifold. GPDMs project a high-dimensional state space into a lower-dimensional latent space that preserves the dynamic properties. Using this approach, we add control variables to the original formulation, making it possible to account for the robot commands exerted on the cloth dynamics. We call this new version the Controlled Gaussian Process Dynamical Model (C-GPDM). Moreover, we propose an alternative kernel representation for the model, characterized by a richer parameterization than that employed in most previous GPDM realizations. The modeling capacity of our proposal has been tested in a simulated scenario, where the C-GPDM proved capable of generalizing over a considerably wide range of movements and correctly predicting the cloth oscillations generated by previously unseen sequences of control actions.
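
The model structure described above can be written compactly in standard GPDM notation (the symbols below are generic and not necessarily the paper's): a low-dimensional latent state x_t evolves under the robot command u_t, and the high-dimensional cloth state y_t is generated from it, with Gaussian process priors on both maps.

```latex
x_{t+1} = f(x_t, u_t) + \epsilon_x, \qquad
  f \sim \mathcal{GP}\big(0,\; k_f\big((x,u),(x',u')\big)\big),
\qquad
y_t = g(x_t) + \epsilon_y, \qquad
  g \sim \mathcal{GP}\big(0,\; k_g(x, x')\big),
```

where \epsilon_x and \epsilon_y are Gaussian noise terms. The "controlled" part is the inclusion of u_t in the latent transition kernel k_f, which is what lets the model predict cloth oscillations caused by unseen sequences of control actions.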