
    Deep Haptic Model Predictive Control for Robot-Assisted Dressing

    Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model use only haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real-world physical human-robot interaction can be time consuming, costly, and can put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2 s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance, with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These catch-mitigating behaviors emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces. Comment: 8 pages, 12 figures, 1 table; 2018 IEEE International Conference on Robotics and Automation (ICRA)
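The control loop described above can be sketched as sampling-based model predictive control: candidate action sequences are scored by a learned force predictor, and the first action of the lowest-cost sequence is executed. This is a minimal, self-contained sketch; the `predict_forces` stand-in, the cost weights, and the action bounds are all illustrative assumptions, not the paper's learned recurrent model or its exact objective.

```python
import numpy as np

def predict_forces(action_seq, haptic_obs):
    """Stand-in for the learned recurrent force predictor: a toy
    first-order model where predicted force grows with step size."""
    forces, f = [], haptic_obs
    for a in action_seq:
        f = 0.9 * f + 2.0 * np.linalg.norm(a)  # hypothetical dynamics
        forces.append(f)
    return forces

def mpc_select_action(haptic_obs, horizon=4, n_samples=64, seed=0):
    """Sample candidate end-effector action sequences, score each by
    the predicted-force cost, and return the first action of the best
    sequence (receding-horizon control)."""
    rng = np.random.default_rng(seed)
    best_cost, best_action = float("inf"), None
    for _ in range(n_samples):
        # Small Cartesian displacement steps (illustrative bounds)
        seq = rng.uniform(-0.01, 0.01, size=(horizon, 3))
        cost = sum(f ** 2 for f in predict_forces(seq, haptic_obs))
        cost -= 1e-3 * sum(a[0] for a in seq)  # mild progress reward
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action
```

A controller like this re-plans at every timestep, so only the first action of the chosen sequence is ever applied before the horizon is re-evaluated.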

    Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing

    Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted to a robot's end effector. A neural network model estimates the position of the closest point on a person's limb and the orientation of the limb's central axis relative to the sensor's frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants' right arms while tracking human motion. When assisting with bathing, the robot moved a soft wet washcloth to follow the contours of able-bodied participants' limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing presents a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction. Comment: 8 pages, 16 figures; International Conference on Rehabilitation Robotics 2019
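The pipeline above (capacitance readings → pose estimate → feedback control) can be sketched as follows. Here the neural network is replaced by a single linear layer with made-up weights, and the pose parameterization (lateral/vertical offset plus pitch and yaw) and controller gain are illustrative assumptions rather than the paper's actual model.

```python
import numpy as np

def estimate_limb_pose(capacitances, W, b):
    """Stand-in for the learned estimator: a linear layer mapping six
    electrode readings to [offset_y, offset_z, pitch, yaw] in the
    sensor frame (hypothetical parameterization)."""
    return W @ np.asarray(capacitances, dtype=float) + b

def feedback_step(pose_est, target=np.array([0.0, 0.05, 0.0, 0.0]),
                  gain=0.5):
    """Proportional feedback: command an end-effector correction that
    drives the estimated limb pose toward a target offset (e.g. hover
    5 cm above the limb, aligned with its central axis)."""
    return gain * (target - pose_est)
```

Because the estimate is relative to the sensor's own frame, the same loop keeps working as the person moves: each new reading re-anchors the controller to the limb's current pose.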

    Assistive VR Gym: Interactions with Real People to Improve Virtual Assistive Robots

    Versatile robotic caregivers could benefit millions of people worldwide, including older adults and people with disabilities. Recent work has explored how robotic caregivers can learn to interact with people through physics simulations, yet transferring what has been learned to real robots remains challenging. Virtual reality (VR) has the potential to help bridge the gap between simulations and the real world. We present Assistive VR Gym (AVR Gym), which enables real people to interact with virtual assistive robots. We also provide evidence that AVR Gym can help researchers improve the performance of simulation-trained assistive robots with real people. Prior to AVR Gym, we trained robot control policies (Original Policies) solely in simulation for four robotic caregiving tasks (robot-assisted feeding, drinking, itch scratching, and bed bathing) with two simulated robots (PR2 from Willow Garage and Jaco from Kinova). With AVR Gym, we developed Revised Policies based on insights gained from testing the Original Policies with real people. Through a formal study with eight participants in AVR Gym, we found that the Original Policies performed poorly, the Revised Policies performed significantly better, and that improvements to the biomechanical models used to train the Revised Policies resulted in simulated people that better match real participants. Notably, participants significantly disagreed that the Original Policies were successful at assistance, but significantly agreed that the Revised Policies were successful at assistance. Overall, our results suggest that VR can be used to improve the performance of simulation-trained control policies with real people without putting people at risk, thereby serving as a valuable stepping stone to real robotic assistance. Comment: IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), 8 pages, 8 figures, 2 tables

    Real-Time Numerical Simulation for Accurate Soft Tissues Modeling during Haptic Interaction

    The simulation of fabric physics and its interaction with the human body has been widely studied in recent years to produce realistic-looking garments, particularly in the entertainment industry. When the purpose of the simulation is to obtain scientific measurements and detailed mechanical properties of the interaction, the underlying physical models should be enhanced for better accuracy, increasing the modeling complexity and relaxing the simulation timing constraints so that the governing equations can be solved properly. In haptic interaction, however, the goal is both physical consistency and a high frame rate, so that stable and coherent stimuli can be displayed as feedback to the user; this requires a trade-off between accuracy and real-time interaction. This work introduces a haptic system for evaluating the fabric hand of specific garments, either existing or yet to be produced, in a virtual reality simulation. The modeling is based on the co-rotational finite element approach, which allows for large displacements but small deformations of the elements. The proposed system can benefit the fabric industry both in the design phase and in the presentation phase, where a virtual fabric portfolio can be shown to customers around the world. Results demonstrate the feasibility of high-frequency real-time simulation for haptic interaction with virtual garments using realistic mechanical properties of the fabric materials.
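The co-rotational idea mentioned above can be sketched concretely: extract each element's rotation from its deformation gradient (polar decomposition), un-rotate the nodal positions back to the rest frame, apply the precomputed linear stiffness there, and rotate the resulting forces forward. This is a generic sketch of the co-rotational force computation, not the paper's implementation; the matrix shapes assume a single element with its nodal coordinates stacked into one vector.

```python
import numpy as np

def polar_rotation(F):
    """Rotation part R of the deformation gradient F via SVD
    (polar decomposition F = R S), guarding against reflections."""
    U, _, Vt = np.linalg.svd(F)
    if np.linalg.det(U @ Vt) < 0:
        U[:, -1] *= -1
    return U @ Vt

def corotational_force(K, x, x0, F):
    """Co-rotational elastic force for one element:
    f = -R K (R^T x - x0), with R applied per-node."""
    R = polar_rotation(F)
    n_nodes = len(x) // 3
    Rb = np.kron(np.eye(n_nodes), R)  # block-diagonal nodal rotation
    return -Rb @ K @ (Rb.T @ x - x0)
```

The key property, which makes large rigid motions cheap, is that a pure rotation of the rest configuration produces zero elastic force, while the per-element stiffness K never has to be reassembled.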

    Contact force regulation in physical human-machine interaction based on model predictive control

    With increasing attention to physical human-machine interaction (pHMI), new control methods involving contact force regulation in collaborative and coexistence scenarios have spread in recent years. Thanks to its internal robustness, high dynamic performance, and ability to avoid constraint violations, Model Predictive Control (MPC) can offer a viable solution for managing the uncertainties involved in these applications. This paper presents an MPC-driven control method that aims to apply a well-defined and tunable force impulse to a human subject. After describing a general control design suitable for achieving this goal, a practical implementation of this logic, based on an MPC controller, is shown. In particular, the physical interaction considered is the one occurring between the body of a patient and an external perturbation device in a dynamic posturography trial. The device prototype is presented in both its hardware architecture and software design. The main MPC control parameters are then tuned in hardware-in-the-loop and human-in-the-loop environments to obtain optimal behavior. Finally, the device performance is analyzed to assess the MPC algorithm's accuracy, repeatability, flexibility, and robustness with respect to the uncertainties of the specific pHMI environment considered.
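A receding-horizon force regulator of the kind described above can be sketched with a toy scalar contact-force model and a quantized input search. The model coefficients, horizon, and input bounds here are illustrative assumptions (a real implementation would solve a constrained quadratic program over an identified model, not enumerate inputs).

```python
import numpy as np
from itertools import product

def mpc_force_impulse(f0, f_ref, a=0.8, b=0.5, horizon=3,
                      u_levels=np.linspace(-1.0, 1.0, 9), rho=0.01):
    """Toy MPC for contact-force regulation: exhaustively search
    quantized input sequences over a short horizon against a scalar
    force model f[k+1] = a*f[k] + b*u[k], penalizing force-tracking
    error plus input effort, and return the first input of the best
    sequence (receding horizon)."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(u_levels, repeat=horizon):
        f, cost = f0, 0.0
        for u in seq:
            f = a * f + b * u                      # predicted contact force
            cost += (f - f_ref) ** 2 + rho * u**2  # tracking + effort
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0
```

Because the input set is bounded by construction, input constraints are satisfied automatically here; the same structure carries over when the enumeration is replaced by a proper QP solver with explicit constraints.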