455 research outputs found

    Assistive VR Gym: Interactions with Real People to Improve Virtual Assistive Robots

    Versatile robotic caregivers could benefit millions of people worldwide, including older adults and people with disabilities. Recent work has explored how robotic caregivers can learn to interact with people through physics simulations, yet transferring what has been learned to real robots remains challenging. Virtual reality (VR) has the potential to help bridge the gap between simulations and the real world. We present Assistive VR Gym (AVR Gym), which enables real people to interact with virtual assistive robots. We also provide evidence that AVR Gym can help researchers improve the performance of simulation-trained assistive robots with real people. Prior to AVR Gym, we trained robot control policies (Original Policies) solely in simulation for four robotic caregiving tasks (robot-assisted feeding, drinking, itch scratching, and bed bathing) with two simulated robots (PR2 from Willow Garage and Jaco from Kinova). With AVR Gym, we developed Revised Policies based on insights gained from testing the Original Policies with real people. Through a formal study with eight participants in AVR Gym, we found that the Original Policies performed poorly, the Revised Policies performed significantly better, and that improvements to the biomechanical models used to train the Revised Policies resulted in simulated people that better match real participants. Notably, participants significantly disagreed that the Original Policies were successful at assistance, but significantly agreed that the Revised Policies were successful at assistance. Overall, our results suggest that VR can be used to improve the performance of simulation-trained control policies with real people without putting people at risk, thereby serving as a valuable stepping stone to real robotic assistance. (Comment: IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020); 8 pages, 8 figures, 2 tables.)
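
    As a rough illustration of the evaluation loop implied above (not code from the paper), the sketch below rolls out a simulation-trained policy in a Gym-style assistive environment; the environment id "FeedingPR2-v1", the use of an open-source assistive-gym package, and the policy's act() method are assumptions for illustration only.

        # Minimal sketch, assuming a Gym-style assistive environment and a
        # pre-trained policy object exposing act(observation).
        import gym

        def evaluate(policy, env_id="FeedingPR2-v1", episodes=10):
            env = gym.make(env_id)              # hypothetical assistive-task environment
            returns = []
            for _ in range(episodes):
                obs = env.reset()
                done, total = False, 0.0
                while not done:
                    action = policy.act(obs)    # policy trained purely in simulation
                    obs, reward, done, info = env.step(action)
                    total += reward
                returns.append(total)
            env.close()
            return sum(returns) / len(returns)  # mean return across evaluation episodes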

    Perception and manipulation for robot-assisted dressing

    Assistive robots have the potential to provide tremendous support for disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during robot-assisted dressing for (simulated) impaired users with limited upper-body movement capability, and finally a pipeline for robot-assisted dressing for (simulated) paralyzed users who have lost the ability to move their limbs. First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points. To reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data. Unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and clothes. A probabilistic real-time tracking method using Bayesian networks in latent spaces is proposed, which fuses multi-modal sensor information. The latent spaces are created before dressing by modeling the user movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables a Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments. Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented. The robot grasps a hospital gown naturally hung on a rail, and moves around the bed to finish the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, this thesis proposes updating the simulated garment with more realistic physical property values. This is achieved by measuring physical similarity in the latent space using contrastive loss, which maps physically similar examples to nearby points.
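
    As a hypothetical illustration of the first component above, a supervised network that locates grasping points on a garment from depth images and is trained on simulation-generated labels, the sketch below uses a small fully convolutional model; the architecture, tensor shapes, and names are assumptions, not the thesis's actual implementation.

        # Hypothetical sketch: map a garment depth image to a per-pixel
        # grasp-point heatmap, trained with simulation-rendered labels.
        import torch
        import torch.nn as nn

        class GraspPointNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 1),              # one grasp-point logit per pixel
                )

            def forward(self, depth):                 # depth: (B, 1, H, W)
                return self.net(depth)                # logits: (B, 1, H, W)

        model = GraspPointNet()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()

        depth = torch.rand(8, 1, 64, 64)              # stand-in for simulated depth images
        labels = (torch.rand(8, 1, 64, 64) > 0.95).float()  # stand-in grasp-point masks

        optimizer.zero_grad()
        loss = loss_fn(model(depth), labels)          # supervised training step
        loss.backward()
        optimizer.step()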

    Predictive and Robust Robot Assistance for Sequential Manipulation

    This paper presents a novel concept to support physically impaired humans in daily object manipulation tasks with a robot. Given a user's manipulation sequence, we propose a predictive model that uniquely casts the user's sequential behavior as well as a robot support intervention into a hierarchical multi-objective optimization problem. A major contribution is the prediction formulation, which allows several different future paths to be considered concurrently. The second contribution is the encoding of a general notion of constancy constraints, which allows dependencies between consecutive or far-apart keyframes (in time or space) of a sequential task to be considered. We perform numerical studies, simulations, and robot experiments to analyse and evaluate the proposed method in several tabletop tasks where a robot supports impaired users by predicting their posture and proactively re-arranging objects.
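
    A hedged sketch, in generic notation not drawn from the paper, of what a hierarchical multi-objective problem with constancy constraints over keyframes x_1, ..., x_T can look like:

        \begin{aligned}
        \min_{x_{1:T}} \quad & \big(f_1(x_{1:T}),\, f_2(x_{1:T}),\, \dots,\, f_m(x_{1:T})\big) && \text{objectives solved in priority (lexicographic) order} \\
        \text{s.t.} \quad & g(x_k, x_{k+1}) \le 0, \quad k = 1, \dots, T-1, && \text{step-wise feasibility} \\
        & c(x_i) = c(x_j) \ \text{for selected pairs } (i, j) && \text{constancy between possibly distant keyframes}
        \end{aligned}

    Evaluating such a problem over several candidate continuations of the user's sequence is one way to realize the concurrent consideration of different future paths mentioned in the abstract; the symbols f, g, and c here are illustrative placeholders.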

    Are preferences useful for better assistance? A physically assistive robotics user study

    Assistive robots have an inherent need to adapt to the user they are assisting. This is crucial for the correct completion of the task, user safety, and comfort. However, adaptation can be performed in several manners. We believe user preferences are key to this adaptation. In this paper, we evaluate the use of preferences for Physically Assistive Robotics tasks in a Human-Robot Interaction user evaluation. Three assistive tasks have been implemented, consisting of assisted feeding, shoe-fitting, and jacket dressing, where the robot performs each task in a different manner based on user preferences. We assess the ability of the users to determine which execution of the task used their chosen preferences (if any). The obtained results show that most of the users were able to successfully guess the cases where their preferences were used, even when they had not seen the task before. We also observe that their satisfaction with the task increases when the chosen preferences are employed. Finally, we analyze the users' opinions regarding assistive tasks and preferences, showing promising expectations as to the benefits of adapting the robot behavior to the user through preferences. This work has been supported by the ERC project Clothilde (ERC-2016-ADG-741930), the HuMoUR project (Spanish Ministry of Science and Innovation TIN2017-90086-R), and by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656). Gerard Canal has also been supported by the Spanish Ministry of Education, Culture and Sport through the FPU15/00504 doctoral grant and the CHIST-ERA project COHERENT (EPSRC EP/V062506/1).

    Clothes Grasping and Unfolding Based on RGB-D Semantic Segmentation


    Elastic Context: Encoding Elasticity for Data-driven Models of Textiles

    Physical interaction with textiles, such as assistive dressing, relies on advanced dexterous capabilities. The underlying complexity of textile behavior when pulled and stretched is due to both the yarn material properties and the textile construction technique. Today, there are no commonly adopted and annotated datasets on which the various interaction or property identification methods are assessed. One important property that affects the interaction is material elasticity, which results from both the yarn material and the construction technique: these two are intertwined and, if not known a priori, almost impossible to identify through sensing commonly available on robotic platforms. We introduce Elastic Context (EC), a concept that integrates the various properties that affect elastic behavior, to enable more effective physical interaction with textiles. The definition of EC relies on the stress/strain curves commonly used in textile engineering, which we reformulate for robotic applications. We employ EC with a graph neural network (GNN) to learn generalized elastic behaviors of textiles. Furthermore, we explore the effect the dimension of the EC has on accurate force modeling of non-linear real-world elastic behaviors, highlighting the challenges current robotic setups face in sensing textile properties.
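
    As a minimal, hypothetical sketch of what a stress/strain-based elastic context could hold (the class name, fields, and linear fit below are assumptions for illustration, not the paper's definition), a set of stress/strain samples can at least support a linear-elastic estimate via Hooke's law, stress = E * strain:

        # Hypothetical container for stress/strain samples of a textile,
        # with a least-squares fit of an effective elastic modulus E.
        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class ElasticContext:                 # illustrative name, not the paper's API
            strain: np.ndarray                # dimensionless elongation samples
            stress: np.ndarray                # corresponding stress samples (Pa)

            def linear_modulus(self) -> float:
                """Least-squares estimate of E in stress ~ E * strain (fit through the origin)."""
                return float(np.dot(self.strain, self.stress) / np.dot(self.strain, self.strain))

        ec = ElasticContext(strain=np.array([0.01, 0.02, 0.03]),
                            stress=np.array([120.0, 250.0, 365.0]))
        print(ec.linear_modulus())            # effective stiffness of the sampled (linear) regime

    Real textile behavior is non-linear, which is precisely why the paper moves beyond a single scalar toward a richer, learned representation.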

    Learning garment manipulation policies toward robot-assisted dressing.

    Assistive robots have the potential to support people with disabilities in a variety of activities of daily living, such as dressing. People who have completely lost their upper limb movement functionality may benefit from robot-assisted dressing, which involves complex deformable garment manipulation. Here, we report a dressing pipeline intended for these people and experimentally validate it on a medical training manikin. The pipeline is composed of the robot grasping a hospital gown hung on a rail, fully unfolding the gown, navigating around a bed, and lifting up the user's arms in sequence to finally dress the user. To automate this pipeline, we address two fundamental challenges: first, learning manipulation policies to bring the garment from an uncertain state into a configuration that facilitates robust dressing; second, transferring the deformable object manipulation policies learned in simulation to the real world to leverage cost-effective data generation. We tackle the first challenge by proposing an active pre-grasp manipulation approach that learns to isolate the garment grasping area before grasping. The approach combines prehensile and nonprehensile actions and thus alleviates grasping-only behavioral uncertainties. For the second challenge, we bridge the sim-to-real gap of deformable object policy transfer by approximating the simulator to real-world garment physics. A contrastive neural network is introduced to compare pairs of real and simulated garment observations, measure their physical similarity, and account for simulator parameter inaccuracies. The proposed method enables a dual-arm robot to put back-opening hospital gowns onto a medical manikin with a success rate of more than 90%.
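
    As an illustrative sketch of the pairwise contrastive comparison described above (not the paper's implementation; the encoder, feature dimensions, and margin are assumptions), physically similar real/simulated observation pairs are pulled together in the latent space while dissimilar pairs are pushed apart:

        # Hypothetical sketch: embed real and simulated garment observations and
        # apply a standard pairwise contrastive loss over similarity labels.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Encoder(nn.Module):             # illustrative encoder for garment observations
            def __init__(self, in_dim=128, latent_dim=16):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                         nn.Linear(64, latent_dim))

            def forward(self, x):
                return self.net(x)

        def contrastive_loss(z_real, z_sim, similar, margin=1.0):
            # similar = 1 pulls a pair together; similar = 0 pushes it at least `margin` apart
            d = F.pairwise_distance(z_real, z_sim)
            return (similar * d.pow(2) + (1 - similar) * F.relu(margin - d).pow(2)).mean()

        encoder = Encoder()
        real_obs = torch.rand(32, 128)        # stand-in features of real garment observations
        sim_obs = torch.rand(32, 128)         # stand-in features of simulated observations
        similar = torch.randint(0, 2, (32,)).float()   # 1 = physically similar pair
        loss = contrastive_loss(encoder(real_obs), encoder(sim_obs), similar)
        loss.backward()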

    Adapting robot behavior to user preferences in assistive scenarios

    Robotic assistants have inspired numerous books and science fiction movies. In the real world, these kinds of devices are a growing need for a society that is aging at a fast pace and will therefore require more and more assistance. While life expectancy is increasing, quality of life is not necessarily doing so. Thus, we may find ourselves and our loved ones becoming dependent and needing another person to perform the most basic tasks, which has a strong psychological impact. Accordingly, assistive robots may be the definitive tool to provide a better quality of life by empowering dependent people and extending their independent living. Assisting users to perform daily activities requires adapting to them and their needs, as they might not be able to adapt to the robot. This thesis tackles adaptation and personalization issues through user preferences. We focus on physical tasks that involve close contact, as these present interesting challenges and are of great importance for the user. Therefore, three tasks are mainly used throughout the thesis: assistive feeding, shoe fitting, and jacket dressing. We first describe a framework for robot behavior adaptation that illustrates how robots should be personalized for and by end-users or their assistants. Using this framework, non-technical users determine how the robot should behave. Then, we define the concept of preference for assistive robotics scenarios and establish a taxonomy, which includes hierarchies and groups of preferences, grounding definitions and concepts. We then show how the preferences in the taxonomy are used with AI planning systems to adapt the robot behavior to the preferences of the user, which are obtained from simple questions. Our algorithms allow for long-term adaptation as well as coping with misinformed user models. We further integrate the methods with low-level motion primitives that provide more robust adaptation and behavior while lowering the number of needed actions and demonstrations. Moreover, we perform a deeper analysis of planning and preferences with the introduction of new algorithms that provide preference suggestions in planning domains. The thesis then concludes with a user study that evaluates the use of the preferences in the three real assistive robotics scenarios. The experiments show a clear understanding of the preferences by the users, who were able to assess the impact of their preferences on the behavior of the robot. In summary, we provide tools and algorithms to design the robotic assistants of the future: assistants that should be able to adapt to the assisted user's needs and preferences, just as human assistants do nowadays.
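
    As a loose, hypothetical illustration of how elicited preferences can steer the choice among candidate task executions (the thesis integrates preferences into AI planning; the preference names, weights, and candidates below are invented for illustration only):

        # Hypothetical sketch: score candidate task executions against elicited
        # user preferences and pick the best-matching behavior.
        def score(execution_traits, user_preferences):
            """Sum of weights for preferences the execution satisfies."""
            return sum(weight for pref, weight in user_preferences.items()
                       if execution_traits.get(pref, False))

        user_preferences = {"slow_motion": 2.0, "approach_from_side": 1.0, "verbal_cues": 0.5}

        candidates = {
            "feeding_v1": {"slow_motion": True, "verbal_cues": True},
            "feeding_v2": {"approach_from_side": True},
        }

        best = max(candidates, key=lambda name: score(candidates[name], user_preferences))
        print(best, score(candidates[best], user_preferences))   # -> feeding_v1 2.5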