
    Robotic Caregivers -- Simulation and Capacitive Servoing for Physical Human-Robot Interaction

    Physical human-robot interaction and robotic assistance present an opportunity to benefit the lives of many people, including the millions of older adults and people with physical disabilities who have difficulty performing activities of daily living (ADLs) on their own. Robotic caregiving for ADLs could increase the independence of people with disabilities, improve quality of life, and help address global societal issues such as aging populations, high healthcare costs, and shortages of healthcare workers. Yet robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and the complexity of modeling deformable materials (e.g., clothing). We address these challenges through techniques that span the intersection of machine learning, physics simulation, sensing, and physical human-robot interaction.

    Haptic Perspective-taking: We first demonstrate that by enabling a robot to predict how its future actions will physically affect a person (haptic perspective-taking), robots can provide safer assistance, especially within the context of robot-assisted dressing and manipulating deformable clothes. We train a recurrent model, consisting of both a temporal estimator and a predictor, that allows a robot to predict the forces a garment is applying onto a person using haptic measurements from the robot's end effector. By combining this predictor with model predictive control (MPC), we observe emergent behaviors that result in the robot navigating a garment up a person's entire arm.

    Capacitive Sensing for Tracking Human Pose: Towards the goal of robots performing robust and intelligent physical interactions with people, it is crucial that robots are able to accurately sense the human body, follow trajectories around the body, and track human motion. We have introduced a capacitive servoing control scheme that allows a robot to sense and navigate around human limbs during close physical interactions. Capacitive servoing leverages temporal measurements from a capacitive sensor mounted on a robot's end effector to estimate the relative pose of a nearby human limb. It then uses these pose estimates within a feedback control loop to maneuver the robot's end effector around the surface of the limb. Through studies with human participants, we have demonstrated that these sensors can enable a robot to track human motion in real time while providing assistance with dressing and bathing. We have also shown how these sensors can benefit a robot providing dressing assistance to real people with physical disabilities.

    Physics Simulation for Assistive Robotics: While robotic caregivers may present an opportunity to improve the quality of life for people who require daily assistance, conducting this type of research presents several challenges, including high costs, slow data collection, and risks of physical interaction between people and robots. We have recently introduced Assistive Gym, the first open source physics-based simulation framework for modeling physical human-robot interaction and robotic assistance. We demonstrate how physics simulation can open up entirely new research directions and opportunities within physical human-robot interaction, including training versatile assistive robots, developing control algorithms towards common sense reasoning, constructing baselines and benchmarks for robotic caregiving, and investigating generalization of physical human-robot interaction across human motion, preferences, and variation in human body shape and impairments. Finally, we show how virtual reality (VR) can help bridge the reality gap by bringing real people into physics simulation to interact with and receive assistance from virtual robotic caregivers. Ph.D. thesis.
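
    As a rough, minimal sketch of the capacitive servoing loop described above (not the dissertation's implementation): the toy capacitance-to-pose mapping in estimate_limb_pose, the gains, and the velocity commands below are all illustrative assumptions.

    import numpy as np

    # Hypothetical stand-in for the learned estimator: the actual work maps
    # temporal multi-electrode capacitance measurements to a relative limb pose.
    def estimate_limb_pose(capacitance):
        """Return an (estimated distance to the limb, limb yaw angle) pair."""
        distance = 0.05 / (float(np.mean(capacitance)) + 1e-6)  # toy inverse model
        yaw = 0.1 * float(capacitance[0] - capacitance[-1])     # toy differential model
        return distance, yaw

    def capacitive_servoing_step(capacitance, target_distance=0.03, kp=2.0):
        """One feedback-control iteration: estimate the relative limb pose, then
        command an end-effector velocity that holds a setpoint distance above
        the limb, re-aligns with the limb's central axis, and travels along the
        limb (e.g., for dressing or bathing strokes)."""
        distance, yaw = estimate_limb_pose(capacitance)
        v_z = -kp * (distance - target_distance)  # correct the height error
        w_yaw = -kp * yaw                         # re-align with the limb axis
        v_along = 0.05                            # constant progress along the limb
        return np.array([v_along, 0.0, v_z]), w_yaw

    # Example: simulated readings from a six-electrode sensor.
    velocity, yaw_rate = capacitive_servoing_step(np.array([1.2, 1.1, 1.0, 1.0, 0.9, 0.8]))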

    Towards Assistive Feeding with a General-Purpose Mobile Manipulator

    General-purpose mobile manipulators have the potential to serve as a versatile form of assistive technology. However, their complexity creates challenges, including the risk of being too difficult to use. We present a proof-of-concept robotic system for assistive feeding that consists of a Willow Garage PR2, a high-level web-based interface, and specialized autonomous behaviors for scooping and feeding yogurt. As a step towards use by people with disabilities, we evaluated our system with 5 able-bodied participants. All 5 successfully ate yogurt using the system and reported high rates of success for the system's autonomous behaviors. Also, Henry Evans, a person with severe quadriplegia, operated the system remotely to feed an able-bodied person. In general, people who operated the system, including Henry, reported that it was easy to use. The feeding system also incorporates corrective actions designed to be triggered either autonomously or by the user. In an offline evaluation using data collected with the feeding system, a new version of our multimodal anomaly detection system outperformed prior versions. Comment: This short 4-page paper was accepted and presented as a poster on May 16, 2016 at the ICRA 2016 workshop on 'Human-Robot Interfaces for Enhanced Physical Interactions', organized by Arash Ajoudani, Barkan Ugurlu, Panagiotis Artemiadis, and Jun Morimoto. It was peer reviewed by one reviewer.
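
    The abstract does not detail the multimodal anomaly detector; the sketch below illustrates one common likelihood-threshold formulation under that assumption. The feature layout, Gaussian model, and threshold rule are illustrative choices, not the system's actual design.

    import numpy as np

    class LikelihoodAnomalyDetector:
        """Fit a Gaussian to multimodal feature vectors (e.g., force, sound,
        and kinematic features) recorded during successful feeding executions,
        then flag observations whose log-likelihood falls below a threshold."""

        def fit(self, X, n_std=3.0):
            self.mean = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.cov_inv = np.linalg.inv(cov)
            scores = np.array([self._log_likelihood(x) for x in X])
            self.threshold = scores.mean() - n_std * scores.std()
            return self

        def _log_likelihood(self, x):
            d = x - self.mean
            return -0.5 * d @ self.cov_inv @ d  # up to an additive constant

        def is_anomalous(self, x):
            return self._log_likelihood(x) < self.threshold

    # Example: train on nominal executions, then monitor a new observation.
    detector = LikelihoodAnomalyDetector().fit(np.random.randn(200, 4))
    print(detector.is_anomalous(np.array([5.0, 5.0, 5.0, 5.0])))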

    Assistive VR Gym: Interactions with Real People to Improve Virtual Assistive Robots

    Versatile robotic caregivers could benefit millions of people worldwide, including older adults and people with disabilities. Recent work has explored how robotic caregivers can learn to interact with people through physics simulations, yet transferring what has been learned to real robots remains challenging. Virtual reality (VR) has the potential to help bridge the gap between simulations and the real world. We present Assistive VR Gym (AVR Gym), which enables real people to interact with virtual assistive robots. We also provide evidence that AVR Gym can help researchers improve the performance of simulation-trained assistive robots with real people. Prior to AVR Gym, we trained robot control policies (Original Policies) solely in simulation for four robotic caregiving tasks (robot-assisted feeding, drinking, itch scratching, and bed bathing) with two simulated robots (a PR2 from Willow Garage and a Jaco from Kinova). With AVR Gym, we developed Revised Policies based on insights gained from testing the Original Policies with real people. Through a formal study with eight participants in AVR Gym, we found that the Original Policies performed poorly, that the Revised Policies performed significantly better, and that improvements to the biomechanical models used to train the Revised Policies resulted in simulated people that better match real participants. Notably, participants significantly disagreed that the Original Policies were successful at assistance, but significantly agreed that the Revised Policies were. Overall, our results suggest that VR can be used to improve the performance of simulation-trained control policies with real people without putting people at risk, thereby serving as a valuable stepping stone to real robotic assistance. Comment: IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), 8 pages, 8 figures, 2 tables.

    Deep Haptic Model Predictive Control for Robot-Assisted Dressing

    Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model use only haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real-world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2 s prediction horizon, our controller succeeded at high rates and lowered applied forces while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance, with the sleeve catching on participants' fists and elbows, demonstrating the value of our model's predictions. These catch-mitigating behaviors emerged from our deep predictive model and the controller's objective function, which primarily penalizes high forces. Comment: 8 pages, 12 figures, 1 table, 2018 IEEE International Conference on Robotics and Automation (ICRA).
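
    A minimal sketch of how a learned force predictor can drive MPC in this style, assuming a simple sampling-based optimizer: the predictor stub predict_forces, the candidate-action set, and the cost weights are illustrative, while the objective mirrors the abstract's emphasis on primarily penalizing high predicted forces.

    import numpy as np

    def mpc_select_action(predict_forces, candidate_actions, horizon=5,
                          w_force=1.0, w_progress=0.1):
        """Evaluate each candidate end-effector action by rolling the learned
        force predictor forward over a short horizon, then pick the action
        whose predicted garment forces are lowest while still making forward
        progress along the dressing path."""
        best_action, best_cost = None, np.inf
        for action in candidate_actions:
            forces = predict_forces(action, horizon)  # predicted force magnitudes (N)
            cost = w_force * float(np.max(forces)) - w_progress * action[0]
            if cost < best_cost:
                best_action, best_cost = action, cost
        return best_action

    # Toy stand-in for the learned recurrent predictor (illustrative only).
    def predict_forces(action, horizon):
        return np.full(horizon, 10.0 * np.linalg.norm(action - np.array([0.02, 0.0])))

    candidates = [np.array([dx, dz]) for dx in (0.0, 0.02, 0.04)
                  for dz in (-0.01, 0.0, 0.01)]
    print(mpc_select_action(predict_forces, candidates))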

    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres along the manipulator for guidance. Based on the haptic forces, each sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics, and another that was trained on large forces between the sphere and cloth but without early termination. Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
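
    A minimal sketch of the two ideas named above, with assumed constants and update rules (the paper's exact formulation may differ): each haptic-sensing sphere deflects its target away from sensed contact forces, and training runs terminate early when contact forces grow large.

    import numpy as np

    FORCE_LIMIT = 20.0  # illustrative threshold on sensed contact force (N)

    def update_sphere_target(position, goal_direction, haptic_force,
                             step=0.01, k_deflect=0.002):
        """Per-sphere rule: step toward the goal, but deflect the desired
        position away from the direction of sensed cloth contact so the sphere
        slides along the garment instead of stretching or tearing it. The
        resulting set of per-sphere targets would then be reconciled by
        constrained inverse kinematics."""
        return position + step * goal_direction + k_deflect * haptic_force

    def episode_terminates(haptic_forces):
        """Early termination (and penalty) during training when contact forces
        grow large, i.e., when the manipulator starts snagging or punching
        through the cloth."""
        return max(float(np.linalg.norm(f)) for f in haptic_forces) > FORCE_LIMIT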

    Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing

    Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted to a robot's end effector. A neural network model estimates the position of the closest point on a person's limb and the orientation of the limb's central axis relative to the sensor's frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants' right arms while tracking human motion. When assisting with bathing, the robot moved a soft, wet washcloth to follow the contours of able-bodied participants' limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing presents a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction. Comment: 8 pages, 16 figures, International Conference on Rehabilitation Robotics 2019.
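
    As an illustration of the model described above, the sketch below shows a plausible pose regressor; the layer sizes and the five-dimensional output layout are assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class CapacitivePoseNet(nn.Module):
        """Map readings from a six-electrode capacitive sensor to the position
        of the closest point on a person's limb and the orientation (pitch,
        yaw) of the limb's central axis, both in the sensor's frame."""

        def __init__(self, n_electrodes=6, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_electrodes, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 5),  # (x, y, z) closest point + (pitch, yaw)
            )

        def forward(self, capacitance):
            out = self.net(capacitance)
            return out[..., :3], out[..., 3:]

    # Example: one pose estimate from a batch of simulated sensor readings.
    position, orientation = CapacitivePoseNet()(torch.randn(1, 6))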

    Robust Body Exposure (RoBE): A Graph-based Dynamics Modeling Approach to Manipulating Blankets over People

    Robotic caregivers could potentially improve the quality of life of many who require physical assistance. However, in order to assist individuals who are lying in bed, robots must be capable of dealing with a significant obstacle: the blanket or sheet that will almost always cover the person's body. We propose a method for targeted bedding manipulation over people lying supine in bed, in which we first learn a model of the cloth's dynamics and then optimize over this model to uncover a given target limb, using information about human body shape and pose that only needs to be provided at run time. We show how this approach enables greater robustness to variation relative to geometric and reinforcement learning baselines through a number of generalization evaluations in simulation and in the real world. We further evaluate our approach in a human study with 12 participants, in which we demonstrate that a mobile manipulator can adapt to real variation in human body shape, size, pose, and blanket configuration to uncover target body parts without exposing the rest of the body. Source code and supplementary materials are available online. Comment: 8 pages, 9 figures, 2 tables.
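
    A minimal sketch of the overall recipe, learn a cloth dynamics model and optimize an uncovering action over it: the stand-in dynamics function predict_cloth, the cost function, and the cross-entropy-method search below are illustrative substitutes for the paper's graph-based model and optimizer.

    import numpy as np

    def uncover_cost(cloth_points, target_points, nontarget_points, w=10.0):
        """Illustrative objective: reward uncovering the target limb while
        penalizing exposure of the rest of the body (which should stay covered)."""
        def covered_fraction(body_points):
            d = np.linalg.norm(body_points[:, None] - cloth_points[None], axis=-1)
            return float(np.mean(d.min(axis=1) < 0.05))
        return covered_fraction(target_points) + w * (1.0 - covered_fraction(nontarget_points))

    # Toy stand-in for the learned graph-based dynamics model (illustrative only):
    # cloth points near the grasp location follow the grasp-to-release displacement.
    def predict_cloth(cloth_points, action):
        grasp, release = action[:2], action[2:]
        moved = cloth_points.copy()
        near = np.linalg.norm(cloth_points[:, :2] - grasp, axis=1) < 0.2
        moved[near, :2] += release - grasp
        return moved

    def plan_uncovering(cloth_points, target_points, nontarget_points,
                        n_samples=64, n_elite=8, n_iters=5):
        """Optimize a (grasp, release) action over the dynamics model with a
        simple cross-entropy-method search."""
        mean, std = np.zeros(4), 0.5 * np.ones(4)
        for _ in range(n_iters):
            actions = mean + std * np.random.randn(n_samples, 4)
            costs = [uncover_cost(predict_cloth(cloth_points, a),
                                  target_points, nontarget_points) for a in actions]
            elite = actions[np.argsort(costs)[:n_elite]]
            mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
        return mean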