48 research outputs found

    On the notion of motor primitives in humans and robots

    This article reviews two reflexive motor patterns in humans: primitive reflexes and motor primitives. Both terms coexist in the motor-development and motor-control literature, yet they are not synonyms. While primitive reflexes are part of the temporary motor repertoire in early ontogeny, motor primitives refer to sets of motor patterns that are considered basic units of voluntary motor control, thought to be present throughout the life span. The article provides an overview of the anatomy and neurophysiology of human reflexive motor patterns to elucidate that both concepts are rooted in the architecture of the spinal cord. I will advocate that an understanding of the human motor system that encompasses both primitive reflexes and motor primitives, as well as their interaction with supraspinal motor centers, will lead to an appreciation of the richness of the human motor repertoire, which in turn seems imperative for designing epigenetic robots and highly adaptable human-machine interfaces.

    Functional Musical Sonification for Chronic Pain Support

    Chronic pain causes substantial disability, and people living with chronic pain often adopt protective behaviours and movements to minimize pain, worrying about exacerbating it during everyday activities such as loading the washing machine. We present work in progress on ubiquitous interactive sonification of body movement that helps people with chronic pain understand and positively modify their movements in clinical and functional situations. The sonification blends informational and aesthetic aspects and is intended for daily use.
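    As a purely illustrative sketch of movement sonification (not the authors' system; the movement parameter, pitch range, and mapping are assumptions), one common design maps a body-movement parameter such as a trunk-flexion angle to pitch, exponentially so equal angle steps sound perceptually even:

    ```python
    import numpy as np

    # Hypothetical mapping: trunk-flexion angle in degrees -> pitch in Hz.
    # All parameter choices here are illustrative, not from the paper.
    def angle_to_pitch(angle_deg, min_hz=220.0, max_hz=440.0, max_angle=90.0):
        x = np.clip(angle_deg / max_angle, 0.0, 1.0)
        # exponential interpolation: equal angle steps = equal pitch ratios
        return min_hz * (max_hz / min_hz) ** x

    print(angle_to_pitch(0))    # 220.0 (no flexion -> lowest pitch)
    print(angle_to_pitch(90))   # 440.0 (full flexion -> highest pitch)
    ```

    A real system would additionally smooth the sensor stream and shape timbre for the aesthetic aspect the abstract mentions; this only shows the core parameter-to-pitch mapping.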

    Learning Latent Space Dynamics for Tactile Servoing

    To achieve dexterous robotic manipulation, we need to endow our robot with tactile feedback capability, i.e. the ability to drive action based on tactile sensing. In this paper, we specifically address the challenge of tactile servoing: given the current tactile sensing and a target/goal tactile sensing --memorized from a successful task execution in the past-- what is the action that will bring the current tactile sensing closer to the target tactile sensing at the next time step? We develop a data-driven approach that acquires a dynamics model for tactile servoing by learning from demonstration. Moreover, our method represents the tactile sensing information as lying on a surface --or a 2D manifold-- and performs manifold learning, making it applicable to any tactile skin geometry. We evaluate our method on a contact-point tracking task using a robot equipped with a tactile finger. A video demonstrating our approach can be seen at https://youtu.be/0QK0-Vx7WkI. Accepted for publication at the International Conference on Robotics and Automation (ICRA) 2019.
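    To make the servoing idea concrete, here is a heavily simplified sketch (my own toy construction, not the paper's learned manifold model): a linear latent dynamics model z_{t+1} ≈ A z_t + B u_t is fitted from demonstration data by least squares, and the greedy action is the one that moves the latent tactile state toward the goal:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "demonstration" data; the true dynamics are hidden from the learner.
    A_true = np.array([[0.9, 0.05], [-0.05, 0.9]])
    B_true = np.array([[0.1, 0.0], [0.0, 0.1]])

    Z = [rng.standard_normal(2)]
    U = []
    for _ in range(200):
        u = rng.standard_normal(2)
        U.append(u)
        Z.append(A_true @ Z[-1] + B_true @ u)
    Z, U = np.array(Z), np.array(U)

    # Least-squares fit of [A | B] from (z_t, u_t) -> z_{t+1}.
    X = np.hstack([Z[:-1], U])           # (200, 4)
    Y = Z[1:]                            # (200, 2)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    A_hat, B_hat = W[:2].T, W[2:].T

    def servo_action(z, z_goal):
        # Greedy one-step servoing: minimize ||A z + B u - z_goal||,
        # closed form u = B^+ (z_goal - A z).
        return np.linalg.pinv(B_hat) @ (z_goal - A_hat @ z)

    z, z_goal = np.array([1.0, -1.0]), np.zeros(2)
    for _ in range(50):
        z = A_true @ z + B_true @ servo_action(z, z_goal)
    print(np.linalg.norm(z - z_goal) < 1e-3)  # True: state servoed to the goal
    ```

    The paper instead learns the latent representation itself (a 2D manifold of the tactile skin) and a nonlinear dynamics model from demonstration; the sketch only conveys the servoing loop.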

    Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

    In order to robustly execute a task under environmental uncertainty, a robot needs to reactively adapt to changes arising in its environment. Environment changes are usually reflected as deviations from expected sensory traces, and these deviations can be used to drive motion adaptation; for this purpose, a feedback model is required that maps deviations in sensory traces to adaptations of the motion plan. In this paper, we develop a general data-driven framework for learning a feedback model from demonstrations. We utilize a variant of a radial basis function network structure --with movement phases as kernel centers-- which can generally be applied to represent any feedback model for movement primitives. To demonstrate the effectiveness of our framework, we test it on the task of scraping on a tilt board, learning a reactive policy in the form of orientation adaptation based on deviations of tactile sensor traces. As a proof of concept, we provide evaluations on an anthropomorphic robot. A video demonstrating our approach and its results can be seen at https://youtu.be/7Dx5imy1Kcw. Accepted for publication at the International Conference on Robotics and Automation (ICRA).
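    A minimal sketch of the phase-centered RBF idea (my own scalar toy, not the authors' network; the synthetic "expert" signal and kernel parameters are assumptions): Gaussian kernels are centered along the movement phase, and a linear readout maps a sensory deviation to an adaptation, gated by where in the movement the robot currently is:

    ```python
    import numpy as np

    N = 10
    centers = np.linspace(0.0, 1.0, N)   # kernel centers along the phase [0, 1]
    var = 0.01                           # kernel variance (illustrative choice)

    def features(phase):
        psi = np.exp(-((phase - centers) ** 2) / (2 * var))
        return psi / psi.sum()           # normalized RBF activations

    # Synthetic demonstration: phase, scalar sensor deviation, "expert" adaptation.
    rng = np.random.default_rng(1)
    phases = rng.uniform(0.0, 1.0, 500)
    devs = rng.standard_normal(500)
    targets = np.sin(2 * np.pi * phases) * devs   # assumed expert feedback law

    # The model is linear in (phase features) x (deviation), so fit by lstsq.
    X = np.stack([features(p) * d for p, d in zip(phases, devs)])
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)

    def feedback(phase, deviation):
        # adaptation = phase-gated linear readout of the sensory deviation
        return (features(phase) @ w) * deviation
    ```

    In the paper the deviation and adaptation are vector-valued (tactile traces in, orientation corrections out) and the readout is a neural network; the sketch keeps only the phase-kernel structure that makes the feedback depend on where in the movement it fires.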

    Hierarchical Spatio-Temporal Morphable Models for Representation of Complex Movements for Imitation Learning

    Imitation learning is a promising technique for teaching robots complex movement sequences. One key problem in this area is the transfer of perceived movement characteristics from perception to action. To solve this problem, representations are required that are suitable for both the analysis and the synthesis of complex action sequences. We describe the method of Hierarchical Spatio-Temporal Morphable Models (HSTMMs), which allows an automatic segmentation of movement sequences into movement primitives and models these primitives by morphing between a set of prototypical trajectories. We use HSTMMs in an imitation learning task for human writing movements. The models are learned from recorded trajectories and transferred to a human-like robot arm. Due to the generalization properties of our movement representation, the arm is capable of synthesizing new writing movements from only a few learning examples.
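    The morphing step can be sketched very simply (my own reduction, with made-up prototype strokes): a new trajectory is a convex combination of time-aligned prototype trajectories. The full HSTMM additionally morphs the temporal alignment between prototypes, which is omitted here:

    ```python
    import numpy as np

    T = 100
    t = np.linspace(0.0, 1.0, T)
    # Two illustrative 2D writing-stroke prototypes, already time-aligned.
    proto_arc = np.stack([t, np.sin(np.pi * t)], axis=1)   # curved stroke
    proto_line = np.stack([t, np.zeros(T)], axis=1)        # straight stroke

    def morph(weights, prototypes):
        # Convex combination of prototype trajectories.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return sum(wi * p for wi, p in zip(w, prototypes))

    # Halfway between the prototypes: a stroke with half the arc height.
    half_arc = morph([0.5, 0.5], [proto_arc, proto_line])
    ```

    Varying the weights continuously interpolates (and, with weights outside [0, 1], extrapolates) between demonstrated movement styles, which is what lets the representation generalize from few examples.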

    Automatic primitive finding for action modeling
