19 research outputs found

    Tactile Guidance for Policy Adaptation


    Tactile Correction and Multiple Training Data Sources for Robot Motion Control

    This work considers our approach to robot motion control learning from the standpoint of multiple data sources. Our paradigm derives data from human teachers providing task demonstrations and tactile corrections for policy refinement and reuse. We contribute a novel formalization for this data, and identify future directions for enabling the algorithm to reason explicitly about differences between data sources.

    A learning by imitation model handling multiple constraints and motion alternatives

    We present a probabilistic approach to learning robust models of human motion through imitation. The combination of Hidden Markov Models (HMM), Gaussian Mixture Regression (GMR) and dynamical systems allows us to extract redundancies across multiple demonstrations and build time-independent models that reproduce the dynamics of the demonstrated movements. The approach is first systematically evaluated and compared with other approaches using generated trajectories that share similarities with human gestures. Three applications on different types of robots are then presented. An experiment with the iCub humanoid robot acquiring a bimanual dancing motion shows that the system can also handle cyclic motion. An experiment with a 7-DOF WAM robot arm learning to hit a ball with a table tennis racket highlights the possibility of encoding several variations of a movement in a single model. Finally, an experiment with a HOAP-3 humanoid robot learning to manipulate a spoon to feed the Robota humanoid robot demonstrates the capability of the system to handle several constraints simultaneously.
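    The regression step of the HMM/GMR pipeline described above can be illustrated with a minimal sketch: given a Gaussian mixture fit jointly over time and position, GMR conditions on a query time to recover the expected position. The component parameters below are made-up illustrative values, not the paper's learned model.

```python
import numpy as np

def gauss(t, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * (t - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmr(priors, means, covs, t):
    """Gaussian Mixture Regression for a joint GMM over (t, x):
    condition on the input t and return the expected output x.
    priors: (K,), means: (K, 2), covs: (K, 2, 2)."""
    # Responsibility of each component for the query input t
    h = priors * np.array([gauss(t, m[0], c[0, 0]) for m, c in zip(means, covs)])
    h /= h.sum()
    # Per-component conditional mean of x given t
    x_k = means[:, 1] + covs[:, 1, 0] / covs[:, 0, 0] * (t - means[:, 0])
    return float(h @ x_k)

# Two illustrative components along a demonstrated 1-D trajectory
priors = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
covs = np.array([[[0.1, 0.05], [0.05, 0.1]]] * 2)
x = gmr(priors, means, covs, 0.5)
```

    Querying the model at successive time steps yields a smooth expected trajectory that blends the demonstrations encoded by the mixture components.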

    Three-dimensional frames of references transformations using recurrent population

    Ideomotor Compatibility: Investigating Imitative Cortical Pathways

    Humans’ capacity to imitate has been extensively investigated through a wide range of behavioral and developmental studies. Yet despite the huge amount of phenomenological evidence gathered, we are still…

    Abstract

    This work follows from a research project in which we investigate the underlying mechanisms of human imitation and develop a neural model of its core neural circuits. The present paper presents a model of a neural mechanism by which an imitator agent can map movements of the end effector performed by other agents onto its own frame of reference. The mechanism is validated in simulation and on a humanoid robot performing a simple task in which the robot imitates movements performed by a human demonstrator.
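    As a toy illustration of the mapping the abstract describes (the paper's mechanism is a neural population model; this sketch only shows the geometric operation it approximates), an observed end-effector point can be carried from the demonstrator's frame into the imitator's frame by a rotation plus a translation. The pose values below are hypothetical.

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the vertical axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_imitator_frame(p_demo, R, t):
    """Map an end-effector point observed in the demonstrator's
    frame into the imitator's frame (rotation R, translation t)."""
    return R @ p_demo + t

# Demonstrator faces the imitator: rotated pi about z, 1 m away along x
R = rotation_z(np.pi)
t = np.array([1.0, 0.0, 0.0])
p = to_imitator_frame(np.array([0.2, 0.0, 0.0]), R, t)
```

    Learning such a transform from observation, rather than hard-coding it, is what the neural mechanism in the paper addresses.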

    Dynamic Selectivity in a Continuous Attractor Model of Movement Generation

    The ability to discriminate between one's own movements and those of others is an important capacity at the basis of our ability to relate to others socially and to learn by imitation (Billard 2002). The current body of evidence suggests a common neural substrate for the recognition and production of movements in both humans and monkeys (Iacoboni 1999; Rizzolatti 2001). A behavioral correlate of this discovery, reported in several psychophysics experiments (e.g. Kilner 2003), is that observing the movements of others influences the quality of one's own performance. The observation of such an interference effect, while supporting the view of a common pathway for the transfer of visuo-motor information, calls for an explanation of how the same substrate can both integrate multi-sensory information and determine, i.e. select, the origin of the observed movement. Here, we show that such selective and integrative functions can be carried out by a biologically plausible network composed of continuous attractor models. Indeed, this type of model, also known as a neural field, has already been studied at length by researchers addressing computational issues related to various brain regions concerned with, for instance, visual motion processing (Giese 2000), spatial navigation (Zhang 1996; Xie 2002) and decision making (Erlhagen 2002). We attempt to extend and merge the technical and conceptual contributions of these architectures in order to produce a model of…
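    The continuous-attractor (neural field) dynamics the abstract builds on can be sketched with a one-dimensional Amari-type field: lateral excitation and broad inhibition let a localized input drive a self-organized bump of activity. All parameters below are illustrative choices, not the model's actual values.

```python
import numpy as np

def simulate_field(steps=200, n=100, dt=0.1, tau=1.0, h=-0.5):
    """One-dimensional Amari-type neural field on a ring:
    tau * du/dt = -u + h + w @ f(u) + stimulus.
    A localized input drives a bump of activity at its location."""
    x = np.linspace(-np.pi, np.pi, n)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 2 * np.pi - d)              # ring distance
    w = 2.0 * np.exp(-d ** 2 / 0.3) - 0.5         # local excitation, broad inhibition
    w *= 2 * np.pi / n                            # integration weight
    u = np.full(n, h)                             # start at resting level
    stim = 2.0 * np.exp(-x ** 2 / 0.1)            # input centred at x = 0
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-5.0 * u))        # sigmoidal firing rate
        u += dt * (-u + h + w @ f + stim) / tau
    return u

u = simulate_field()
```

    The winner-take-all character of such a field — one bump suppressing competing inputs — is what allows the same substrate to both integrate sensory evidence and select among candidate interpretations.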