5,222 research outputs found

    "Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot

    "Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, achieve physical sensitivity-improving reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation including a physical system, robot control, and a motion learning algorithm based on a generalizable intelligent system capable itself of generalizing observed trajectories' translation, orientation, scale and velocity to new data, operating with scalable speed and storage efficiency bounds, and coping with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in a force domain, using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm with recorded motion profiles. We discuss our results in the context of realistic motion generation and present a theoretical discussion of stylistic and affective motion generation based on, and motivating cross-disciplinary research in computer graphics, human motion production and motion perception

    From virtual demonstration to real-world manipulation using LSTM and MDN

    Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising approach that would allow a non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time consuming, disruptive to the comfort of the user, and presents safety challenges. It would therefore be desirable to perform the demonstrations in a virtual environment. In this paper we describe a solution to the challenging problem of behavior transfer from virtual demonstration to a physical robot. The virtual demonstrations are used to train a deep neural network-based controller that uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to calculate an error signal suited to the multimodal nature of demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architecture outperforms alternatives such as feedforward networks and mean-squared-error training signals, and (3) including imperfect demonstrations in the training set allows the controller to learn how to correct its manipulation mistakes.
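    As a rough illustration of the LSTM+MDN idea, the sketch below pairs an LSTM with a mixture-density head and the corresponding negative log-likelihood loss. It is written in PyTorch, and the dimensions, layer sizes, and mixture count are placeholder assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LSTMMDNController(nn.Module):
    """Minimal LSTM + Mixture Density Network head: the LSTM consumes the
    observation sequence and the MDN outputs a Gaussian mixture over the
    next end-effector waypoint, which handles multimodal demonstrations."""

    def __init__(self, obs_dim=10, out_dim=3, hidden=64, n_mix=5):
        super().__init__()
        self.n_mix, self.out_dim = n_mix, out_dim
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.pi = nn.Linear(hidden, n_mix)                    # mixture weights
        self.mu = nn.Linear(hidden, n_mix * out_dim)          # component means
        self.log_sigma = nn.Linear(hidden, n_mix * out_dim)   # component scales

    def forward(self, obs_seq):
        h, _ = self.lstm(obs_seq)                             # (B, T, hidden)
        B, T, _ = h.shape
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(B, T, self.n_mix, self.out_dim)
        sigma = self.log_sigma(h).exp().view(B, T, self.n_mix, self.out_dim)
        return log_pi, mu, sigma

def mdn_loss(log_pi, mu, sigma, target):
    """Negative log-likelihood of the target waypoints under the mixture."""
    t = target.unsqueeze(2)                                   # (B, T, 1, D)
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(t).sum(dim=-1) + log_pi          # (B, T, K)
    return -torch.logsumexp(log_prob, dim=-1).mean()
```

    At inference time one would typically pick the most probable mixture component (or sample from it) at each step to produce a waypoint, which is what allows the controller to commit to one of several demonstrated strategies instead of averaging them as a mean-squared-error loss would.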

    Inter-Joint Coordination Deficits Revealed in the Decomposition of Endpoint Jerk During Goal-Directed Arm Movement After Stroke

    It is well documented that neurological deficits after stroke can disrupt motor control processes that affect the smoothness of reaching movements. The smoothness of hand trajectories during multi-joint reaching depends on shoulder and elbow joint angular velocities and their successive derivatives, as well as on the instantaneous arm configuration and its rate of change. Right-handed survivors of unilateral hemiparetic stroke and neurologically intact control participants held the handle of a two-joint robot and made horizontal planar reaching movements. We decomposed endpoint jerk into components related to shoulder and elbow joint angular velocity, acceleration, and jerk. We observed an abnormal decomposition pattern in the most severely impaired stroke survivors, consistent with deficits of inter-joint coordination. We then used numerical simulations of reaching movements to test whether the specific pattern of inter-joint coordination deficits observed experimentally could be explained by either a general increase in motor noise related to weakness or an impaired ability to compensate for multi-joint interaction torque. Simulation results suggest that the observed deficits in movement smoothness after stroke more likely reflect an impaired ability to compensate for multi-joint interaction torques than the mere presence of elevated motor noise.
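    The decomposition the authors describe follows from differentiating the forward kinematics; a sketch in generic notation, with x the endpoint position, q the joint angles, and J(q) the arm Jacobian (the exact grouping of terms used in the paper may differ):

```latex
% Forward kinematics and its first derivative:
%   x = f(q), \qquad \dot{x} = J(q)\,\dot{q}
% Differentiating twice more gives endpoint jerk as a sum of terms driven by
% joint jerk, joint acceleration, and joint velocity (through \dot{J}, \ddot{J}):
\dddot{x} \;=\; J(q)\,\dddot{q} \;+\; 2\,\dot{J}(q,\dot{q})\,\ddot{q} \;+\; \ddot{J}(q,\dot{q},\ddot{q})\,\dot{q}
```

    The first term is driven by joint jerk, while the second and third depend on joint acceleration and velocity through the changing arm configuration, which is where deficits of inter-joint coordination can surface in the endpoint jerk profile.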

    Learning Multimodal Latent Dynamics for Human-Robot Interaction

    This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI). We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent-space priors of a Variational Autoencoder to model a joint distribution over the interacting agents. We leverage the interaction dynamics learned from HHI to learn HRI and incorporate the conditional generation of robot motions from human observations into the training, thereby predicting more accurate robot trajectories. The generated robot motions are further adapted with inverse kinematics to ensure the desired physical proximity to the human, combining the ease of joint-space learning with accurate task-space reachability. For contact-rich interactions, we modulate the robot's stiffness using the HMM segmentation for compliant interaction. We verify the effectiveness of our approach, deployed on a humanoid robot, via a user study. Our method generalizes well to various humans despite being trained on data from just two humans. We find that users perceive our method as more human-like, timely, and accurate, and rank it with a higher degree of preference over other baselines.
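    A minimal sketch of how an HMM-state-conditioned prior can replace the standard Gaussian prior of a VAE while conditionally generating robot motion from human observations is given below. The network shapes, the per-state Gaussian prior parameters, and the human/robot dimensionality split are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HMMPriorVAE(nn.Module):
    """Sketch of a VAE whose latent prior is the Gaussian associated with an
    HMM state (interaction segment): the encoder reads the human observation,
    the decoder emits a robot motion, and the KL term pulls the posterior
    toward the prior of the current segment rather than toward N(0, I)."""

    def __init__(self, human_dim=21, robot_dim=7, z_dim=8, n_states=6):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(human_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, robot_dim))
        # Per-HMM-state diagonal Gaussian prior over the latent space.
        self.prior_mu = nn.Parameter(torch.zeros(n_states, z_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_states, z_dim))

    def forward(self, human_obs, state):
        mu, logvar = self.enc(human_obs).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        robot_motion = self.dec(z)
        # Closed-form KL between two diagonal Gaussians: q(z|human) vs. the
        # prior of the given HMM state.
        p_mu, p_logvar = self.prior_mu[state], self.prior_logvar[state]
        kl = 0.5 * (p_logvar - logvar
                    + (logvar.exp() + (mu - p_mu) ** 2) / p_logvar.exp()
                    - 1.0).sum(-1)
        return robot_motion, kl
```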

    Utilizing the intelligence edge framework for robotic upper limb rehabilitation in home

    Robotic devices are gaining popularity for the physical rehabilitation of stroke survivors. The transition of these robotic systems from research labs to the clinical setting has been successful; however, providing robot-assisted rehabilitation in home settings remains to be achieved. In addition to ensuring safety for the users, other important issues that need to be addressed are real-time monitoring of the installed instruments, remote supervision by a therapist, and optimal data transmission and processing. The goal of this paper is to advance the current state of robot-assisted in-home rehabilitation. A state-of-the-art approach to implement a novel paradigm for home-based training of stroke survivors in the context of an upper limb rehabilitation robot system is presented. First, a cost-effective and easy-to-wear upper limb robotic orthosis for home settings is introduced. Then, a framework based on the Internet of Robotic Things (IoRT) is discussed together with its implementation. Experimental results are included from a proof-of-concept study demonstrating that the mean absolute errors in predicting wrist, elbow, and shoulder angles are 0.8918°, 2.6753°, and 8.0258°, respectively. These results demonstrate the feasibility of a safe home-based training paradigm for stroke survivors. The proposed framework will help overcome technological barriers, is relevant for IT experts in health-related domains, and paves the way to setting up a telerehabilitation system, increasing the adoption of home-based robotic rehabilitation. The proposed framework includes:
    • A low-cost and easy-to-wear upper limb robotic orthosis suitable for use at home.
    • An IoRT paradigm used in conjunction with the robotic orthosis for home-based rehabilitation.
    • A machine learning-based protocol that combines and analyses the data from the robot's sensors for efficient and quick decision making (see the sketch below).
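    The paper does not specify the learning model in this abstract; as a hedged sketch of how per-joint mean absolute errors like those reported above might be computed, the snippet below fits a generic regressor (scikit-learn is assumed) on placeholder sensor data and reports wrist, elbow, and shoulder MAEs. The feature layout and model choice are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical placeholder data: rows of orthosis sensor readings (e.g. IMU,
# encoder, and force channels) and the wrist, elbow, and shoulder angles in
# degrees that the model should predict.
X = np.random.rand(2000, 12)          # sensor feature vectors
y = np.random.rand(2000, 3) * 90.0    # [wrist, elbow, shoulder] angles (deg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Any multi-output regressor works here; a random forest is just one option.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
for i, joint in enumerate(["wrist", "elbow", "shoulder"]):
    mae = mean_absolute_error(y_te[:, i], pred[:, i])
    print(f"{joint} MAE: {mae:.4f} deg")
```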

    Master of Science

    Advances in the field of robotics have laid a solid foundation for human-robot interaction research; this research values demonstrations of emotional competence from robotic systems, and herein lie opportunities for progress within the therapeutic industry, the creation of companion robots, and the integration of robots into everyday households. The development of emotive expression in robotics is progressing at a fair pace; however, there is next to no research on this form of expression as it pertains to a robot's manner of walking. The work presented here demonstrates that it is possible for robots to walk in a way that expresses emotions identifiable by their human counterparts. This hypothesis is explored using a four-legged robot in simulation and in reality, and the details necessary for this application are presented in this work. The quadruped is composed of four manipulators, each with seven degrees of freedom. The inverse kinematics and dynamics are solved for each leg with closed-form solutions that incorporate the inverse of Euler's finite rotation formula. With the kinematics solved, the robot uses a central pattern generator to create a neutral gait and balances with an augmented center of pressure that closely resembles the zero-moment-point algorithm. Independently of the kinematics, a method of generating poses that represent the emotions happy, sad, angry, and fearful is presented. This work also details how to overlay poses on a gait to transform the neutral gait into an emotive walking style. In addition to laying the framework for developing the emotive walking styles, an evaluation of the presented gaits is detailed. Two IRB-approved studies were performed independently of each other. The first study took feedback from subjects regarding ways to make the emotive gaits more compelling and applied it to the initial poses. The second study evaluated the effectiveness of the final gaits, with the improved poses, and demonstrates that emotive walking patterns were created; walking patterns that will be suitable for emotional acuity.
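    As a loose illustration of the two ingredients described above, a central pattern generator producing the neutral gait and a static emotive pose overlaid on top of it, the sketch below phase-shifts a sinusoidal hip trajectory per leg and adds a per-leg offset. The phases, amplitudes, and the "sad" offset values are made up for illustration and are not the thesis' gait parameters.

```python
import numpy as np

def cpg_gait(t, emotion_offset, freq=1.0, amp=0.3):
    """Sketch of a central-pattern-generator gait for a quadruped: each leg's
    hip angle follows a sinusoid, the legs are phase-shifted into a trot-like
    pattern, and a static 'emotive' pose offset is overlaid on the neutral
    gait to change its character without changing the underlying rhythm."""
    phases = np.array([0.0, np.pi, np.pi, 0.0])    # FL, FR, RL, RR (trot pairs)
    neutral = amp * np.sin(2.0 * np.pi * freq * t + phases)
    return neutral + emotion_offset                # per-leg hip angle targets

# Example: a "sad" posture might lower the front of the body by biasing the
# front hip angles downward (hypothetical values, radians).
sad_offset = np.array([-0.2, -0.2, 0.0, 0.0])
for t in np.linspace(0.0, 1.0, 5):
    print(np.round(cpg_gait(t, sad_offset), 3))
```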

    Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing

    Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted on a robot's end effector. A neural network model estimates the position of the closest point on a person's limb and the orientation of the limb's central axis relative to the sensor's frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants' right arms while tracking human motion. When assisting with bathing, the robot moved a soft wet washcloth to follow the contours of able-bodied participants' limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing is a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction.
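    A minimal sketch of the sensing-plus-control loop described above: a small network maps six capacitance readings to the closest-point position and limb-axis direction, and a proportional controller turns the estimate into an end-effector velocity command. Layer sizes, reference frames, and gains are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CapacitivePoseNet(nn.Module):
    """Map six capacitance measurements to the position of the closest point
    on the limb (3D, in the sensor frame) and a unit vector along the limb's
    central axis. The architecture here is a guess at a plausible regressor."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 6))   # [x, y, z, ax, ay, az]

    def forward(self, capacitance):
        out = self.net(capacitance)
        position, axis = out[..., :3], out[..., 3:]
        axis = axis / axis.norm(dim=-1, keepdim=True)   # unit central axis
        return position, axis

def velocity_command(position, desired_offset, gain=1.0):
    """Proportional feedback: drive the end effector so that the estimated
    closest point on the limb sits at a desired offset from the sensor."""
    return gain * (position - desired_offset)
```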