
    Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing

    Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted to a robot's end effector. A neural network model estimates the position of the closest point on a person's limb and the orientation of the limb's central axis relative to the sensor's frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants' right arms while tracking human motion. When assisting with bathing, the robot moved a soft, wet washcloth to follow the contours of able-bodied participants' limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing presents a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction. (8 pages, 16 figures; International Conference on Rehabilitation Robotics 2019)
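
    The sketch below is a minimal illustration, not the authors' released code, of the two pieces this abstract describes: a small neural network that maps the six electrode measurements to a local limb pose estimate, and a proportional feedback step that turns the pose error into an end-effector velocity command. The network size, the four-dimensional pose output, and the gain are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CapacitivePoseNet(nn.Module):
        """Maps six raw capacitance measurements to a local limb pose
        estimate: (lateral offset, vertical offset, pitch, yaw) in the
        sensor's frame of reference."""
        def __init__(self, n_electrodes=6, n_pose=4, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_electrodes, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_pose),
            )
        def forward(self, c):
            return self.net(c)

    def capacitive_servoing_step(model, capacitance, target_pose, gain=0.5):
        """One feedback step: estimate the limb pose from capacitance and
        return a velocity command that drives the pose error toward zero."""
        with torch.no_grad():
            pose = model(capacitance)
        return gain * (target_pose - pose)  # end-effector command, sensor frame

    model = CapacitivePoseNet()
    readings = torch.randn(6)                     # placeholder electrode data
    target = torch.tensor([0.0, 0.05, 0.0, 0.0])  # e.g. hover 5 cm above the limb
    print(capacitive_servoing_step(model, readings, target))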

    Assistive VR Gym: Interactions with Real People to Improve Virtual Assistive Robots

    Versatile robotic caregivers could benefit millions of people worldwide, including older adults and people with disabilities. Recent work has explored how robotic caregivers can learn to interact with people through physics simulations, yet transferring what has been learned to real robots remains challenging. Virtual reality (VR) has the potential to help bridge the gap between simulations and the real world. We present Assistive VR Gym (AVR Gym), which enables real people to interact with virtual assistive robots. We also provide evidence that AVR Gym can help researchers improve the performance of simulation-trained assistive robots with real people. Prior to AVR Gym, we trained robot control policies (Original Policies) solely in simulation for four robotic caregiving tasks (robot-assisted feeding, drinking, itch scratching, and bed bathing) with two simulated robots (PR2 from Willow Garage and Jaco from Kinova). With AVR Gym, we developed Revised Policies based on insights gained from testing the Original Policies with real people. Through a formal study with eight participants in AVR Gym, we found that the Original Policies performed poorly, the Revised Policies performed significantly better, and that improvements to the biomechanical models used to train the Revised Policies resulted in simulated people that better match real participants. Notably, participants significantly disagreed that the Original Policies were successful at assistance, but significantly agreed that the Revised Policies were successful at assistance. Overall, our results suggest that VR can be used to improve the performance of simulation-trained control policies with real people without putting people at risk, thereby serving as a valuable stepping stone to real robotic assistance. (8 pages, 8 figures, 2 tables; IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020)
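
    The simulation side of this pipeline builds on the open-source Assistive Gym framework. As a rough sketch of how one of the caregiving tasks can be rolled out under the classic Gym API, something like the following should work; the environment id is an assumption (registered ids vary by installed version), and the random action stands in for a trained policy.

    import gym
    import assistive_gym  # importing registers the assistive environments

    env = gym.make('ScratchItchJaco-v1')  # id is an assumption; check the
                                          # registry of your installed version
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()  # stand-in for a trained policy
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print('episode return:', total_reward)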

    Learning garment manipulation policies toward robot-assisted dressing.

    Assistive robots have the potential to support people with disabilities in a variety of activities of daily living, such as dressing. People who have completely lost their upper limb movement functionality may benefit from robot-assisted dressing, which involves complex deformable garment manipulation. Here, we report a dressing pipeline intended for these people and experimentally validate it on a medical training manikin. The pipeline is composed of the robot grasping a hospital gown hung on a rail, fully unfolding the gown, navigating around a bed, and lifting up the user's arms in sequence to finally dress the user. To automate this pipeline, we address two fundamental challenges: first, learning manipulation policies to bring the garment from an uncertain state into a configuration that facilitates robust dressing; second, transferring the deformable object manipulation policies learned in simulation to the real world to leverage cost-effective data generation. We tackle the first challenge by proposing an active pre-grasp manipulation approach that learns to isolate the garment grasping area before grasping. The approach combines prehensile and nonprehensile actions and thus alleviates grasping-only behavioral uncertainties. For the second challenge, we bridge the sim-to-real gap of deformable object policy transfer by approximating the simulator to real-world garment physics. A contrastive neural network is introduced to compare pairs of real and simulated garment observations, measure their physical similarity, and account for simulator parameter inaccuracies. The proposed method enables a dual-arm robot to put back-opening hospital gowns onto a medical manikin with a success rate of more than 90%.
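
    As a minimal sketch of the sim-to-real similarity idea (not the paper's exact design): a shared encoder embeds real and simulated garment observations, and a margin-based contrastive loss pulls physically similar pairs together while pushing dissimilar pairs apart. The observation dimensionality, architecture, and loss form are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GarmentEncoder(nn.Module):
        """Shared encoder for real and simulated garment observations."""
        def __init__(self, obs_dim=128, embed_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 256), nn.ReLU(),
                nn.Linear(256, embed_dim),
            )
        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

    def contrastive_loss(z_real, z_sim, match, margin=0.5):
        """match=1: similar physics (pull together); match=0: dissimilar
        physics (push apart beyond the margin)."""
        d = 1.0 - (z_real * z_sim).sum(-1)  # cosine distance
        return (match * d + (1 - match) * F.relu(margin - d)).mean()

    enc = GarmentEncoder()
    real, sim = torch.randn(8, 128), torch.randn(8, 128)
    labels = torch.randint(0, 2, (8,)).float()
    loss = contrastive_loss(enc(real), enc(sim), labels)
    loss.backward()

    The learned similarity score can then drive a search over simulator parameters, accepting settings whose simulated observations the network judges close to the real garment's behavior.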

    Tracking Human Pose During Robot-Assisted Dressing Using Single-Axis Capacitive Proximity Sensing


    Design and Development of a Dual-Arm Robot Clothing Assistance System Using Imitation Learning

    The recent demographic trend across developed nations shows a dramatic increase in the aging population and falling fertility rates. With the aging population, the number of elderly people who need support for their Activities of Daily Living (ADL), such as dressing, is growing. Caregivers currently perform virtually all dressing assistance, as no effective assistive technology is available, yet many nations across the globe are suffering from a severe shortage of caregivers. Hence, the demand for service robots to assist with the dressing task is increasing rapidly. Robotic Clothing Assistance is a challenging task: the robot must simultaneously handle (a) manipulation of non-rigid and highly flexible cloth and (b) safe human-robot interaction while assisting a human whose posture may vary during the task, whereas humans deal with these tasks rather easily. In this thesis, a framework for Robotic Clothing Assistance by imitation learning from a human demonstration to a compliant dual-arm robot is proposed. In this framework, the dressing task is divided into three phases: (a) the reaching phase, (b) the arm dressing phase, and (c) the body dressing phase. The arm dressing phase is treated as a global trajectory modification and implemented with Dynamic Movement Primitives (DMP), while the body dressing phase is represented as a local trajectory modification and executed with the Bayesian Gaussian Process Latent Variable Model (BGPLVM). It is demonstrated that the proposed framework, developed toward assisting the elderly, generalizes to various people and successfully performs a sleeveless T-shirt dressing task. Furthermore, this thesis discusses limitations of and improvements to the framework, including (a) evaluation of Robotic Clothing Assistance, (b) automated wheelchair movement, and (c) incremental learning for Robotic Clothing Assistance. Evaluation is necessary to make the framework accessible in care facilities: systematic assessment of its performance, and of the devices' effects on care receivers and caregivers, is required. Therefore, a robotic simulator that mimics human postures is used as a subject to evaluate the dressing task. The proposed framework also involves manually coordinated movement of a wheeled chair, which is difficult for elderly users because it requires them to push the chair themselves; to this end, an approach for wheelchair-robot collaboration using an electric wheelchair is presented. Finally, to accommodate different human body dimensions, Robotic Clothing Assistance is formulated as an incremental imitation learning problem. The proposed formulation enables the behavior to be learned and adjusted incrementally whenever a new demonstration is performed, and the planned trajectory can be modified through physical Human-Robot Interaction (HRI) during execution when found inappropriate. This research work has been exhibited to the public at various events, such as the International Robot Exhibition (iREX) 2017 in Tokyo (Japan), the West Japan General Exhibition Center Annex 2018 in Kokura (Japan), and iREX 2019 in Tokyo (Japan).
    Doctoral dissertation, Kyushu Institute of Technology (degree no. 生工博甲第384号, conferred September 25, 2020). Contents: 1 Introduction | 2 Related Work | 3 Imitation Learning | 4 Experimental System | 5 Proposed Framework | 6 Whole-Body Robotic Simulator | 7 Electric Wheelchair-Robot Collaboration | 8 Incremental Imitation Learning | 9 Conclusion.
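
    The arm dressing phase above relies on Dynamic Movement Primitives. The sketch below is a minimal one-dimensional DMP showing how changing the goal globally modifies the reproduced trajectory; all gains and basis-function parameters are illustrative, not the thesis's settings.

    import numpy as np

    def rollout_dmp(w, y0, goal, dt=0.01, tau=1.0,
                    alpha_y=25.0, beta_y=6.25, alpha_x=3.0):
        """Integrate a 1-D DMP with learned forcing weights w."""
        n_basis = len(w)
        centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # in phase x
        widths = n_basis ** 1.5 / centers
        y, yd, x = y0, 0.0, 1.0
        path = [y]
        for _ in range(int(1.0 / dt)):
            psi = np.exp(-widths * (x - centers) ** 2)            # basis activations
            f = x * (goal - y0) * (psi @ w) / (psi.sum() + 1e-10) # forcing term
            ydd = (alpha_y * (beta_y * (goal - y) - tau * yd) + f) / tau ** 2
            yd += ydd * dt
            y += yd * dt
            x += (-alpha_x * x / tau) * dt  # canonical (phase) system
            path.append(y)
        return np.array(path)

    # Zero forcing weights give a smooth point-to-point reach; weights fit to
    # a human demonstration would reshape the path to imitate it.
    print(rollout_dmp(np.zeros(10), y0=0.0, goal=0.3)[-1])  # converges near 0.3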

    Robotic Caregivers: Simulation and Capacitive Servoing for Physical Human-Robot Interaction

    Physical human-robot interaction and robotic assistance present an opportunity to benefit the lives of many people, including the millions of older adults and people with physical disabilities who have difficulty performing activities of daily living (ADLs) on their own. Robotic caregiving for activities of daily living could increase the independence of people with disabilities, improve quality of life, and help address global societal issues such as aging populations, high healthcare costs, and shortages of healthcare workers. Yet robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and the complexity of modeling deformable materials (e.g., clothes). We address these challenges through techniques that span the intersection of machine learning, physics simulation, sensing, and physical human-robot interaction.
    Haptic Perspective-taking: We first demonstrate that by enabling a robot to predict how its future actions will physically affect a person (haptic perspective-taking), robots can provide safer assistance, especially within the context of robot-assisted dressing and manipulating deformable clothes. We train a recurrent model, consisting of both a temporal estimator and a predictor, that allows a robot to predict the forces a garment is applying onto a person from haptic measurements at the robot's end effector. By combining this predictor with model predictive control (MPC), we observe emergent behaviors in which the robot navigates a garment up a person's entire arm.
    Capacitive Sensing for Tracking Human Pose: Toward the goal of robots performing robust and intelligent physical interactions with people, it is crucial that robots are able to accurately sense the human body, follow trajectories around the body, and track human motion. We have introduced a capacitive servoing control scheme that allows a robot to sense and navigate around human limbs during close physical interactions. Capacitive servoing leverages temporal measurements from a capacitive sensor mounted on a robot's end effector to estimate the relative pose of a nearby human limb, and uses these pose estimates within a feedback control loop to maneuver the robot's end effector around the surface of the limb. Through studies with human participants, we have demonstrated that these sensors can enable a robot to track human motion in real time while providing assistance with dressing and bathing. We have also shown how these sensors can benefit a robot providing dressing assistance to real people with physical disabilities.
    Physics Simulation for Assistive Robotics: While robotic caregivers may present an opportunity to improve the quality of life of people who require daily assistance, conducting this type of research presents several challenges, including high costs, slow data collection, and the risks of physical interaction between people and robots. We have introduced Assistive Gym, the first open-source physics-based simulation framework for modeling physical human-robot interaction and robotic assistance. We demonstrate how physics simulation can open up entirely new research directions and opportunities within physical human-robot interaction, including training versatile assistive robots, developing control algorithms toward common-sense reasoning, constructing baselines and benchmarks for robotic caregiving, and investigating generalization of physical human-robot interaction from a number of angles, including human motion, preferences, and variation in human body shape and impairments. Finally, we show how virtual reality (VR) can help bridge the reality gap by bringing real people into physics simulation to interact with and receive assistance from virtual robotic caregivers.
    Ph.D. dissertation.
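
    As a minimal sketch of the haptic perspective-taking idea (dimensions, the cost form, and the weights are assumptions for illustration): a recurrent model summarizes past haptic measurements and predicts the force a candidate action would apply to the person, and an MPC-style selection picks the candidate with the best progress-versus-force trade-off.

    import torch
    import torch.nn as nn

    class ForcePredictor(nn.Module):
        """LSTM over past haptic measurements (e.g., 3-D force + 3-D torque);
        predicts the force on the person resulting from a candidate action."""
        def __init__(self, haptic_dim=6, action_dim=3, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(haptic_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden + action_dim, 1)
        def forward(self, haptics, action):
            _, (h, _) = self.lstm(haptics)  # summarize the haptic history
            return self.head(torch.cat([h[-1], action], dim=-1))

    def mpc_select(model, haptics, candidates, progress, force_weight=2.0):
        """Pick the action minimizing predicted force minus task progress."""
        costs = [force_weight * model(haptics, a).item() - p
                 for a, p in zip(candidates, progress)]
        return candidates[int(torch.tensor(costs).argmin())]

    model = ForcePredictor()
    history = torch.randn(1, 20, 6)                 # 20 past haptic readings
    candidates = [torch.randn(1, 3) for _ in range(5)]
    progress = [0.1, 0.3, 0.2, 0.05, 0.25]          # task-progress estimates
    best = mpc_select(model, history, candidates, progress)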