5 research outputs found

    A study of two complementary encoding strategies based on learning by demonstration for autonomous navigation task

    Learning by demonstration is a natural and interactive way of learning that can be used by non-experts to teach behaviors to robots. In this paper we study two learning by demonstration strategies which give different answers about how to encode information and when to learn. The first strategy is based on artificial neural networks and focuses on reactive on-line learning. The second uses Gaussian Mixture Models built on statistical features extracted off-line from several training datasets. A simple navigation experiment is used to compare the developmental possibilities of each strategy. The two strategies turn out to be complementary, and we highlight that each can be related to a specific memory structure in the brain.
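    The reactive on-line strategy can be illustrated with a minimal sketch (not the authors' implementation): a linear sensorimotor map updated sample-by-sample with the delta rule, so the robot refines its behavior during the demonstration itself. The two-sensor obstacle scenario and the steering rule below are hypothetical.

```python
import numpy as np

# Hypothetical reactive sensorimotor map: one linear layer trained
# on-line with the delta rule, one update per demonstrated sample.
class ReactiveMap:
    def __init__(self, n_sensors, n_motors, lr=0.1):
        self.W = np.zeros((n_motors, n_sensors))
        self.lr = lr

    def update(self, sensors, motors):
        # On-line correction toward the demonstrated motor command.
        error = motors - self.W @ sensors
        self.W += self.lr * np.outer(error, sensors)

    def act(self, sensors):
        return self.W @ sensors

# Teach a toy rule: steer away from the side with the closer obstacle.
rng = np.random.default_rng(0)
net = ReactiveMap(n_sensors=2, n_motors=1)
for _ in range(500):
    s = rng.random(2)            # left/right proximity readings
    m = np.array([s[1] - s[0]])  # demonstrated steering command
    net.update(s, m)

print(net.act(np.array([0.9, 0.1])))  # close to -0.8: steer away from left
```

The off-line GMM strategy would instead collect all such (sensor, motor) pairs first and fit mixture components to them in batch, which is what makes the two approaches complementary.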

    A learning by imitation model handling multiple constraints and motion alternatives

    We present a probabilistic approach to learn robust models of human motion through imitation. The combination of Hidden Markov Models (HMM), Gaussian Mixture Regression (GMR), and dynamical systems allows us to extract redundancies across multiple demonstrations and build time-independent models to reproduce the dynamics of the demonstrated movements. The approach is first systematically evaluated and compared with other approaches by using generated trajectories sharing similarities with human gestures. Three applications on different types of robots are then presented. An experiment with the iCub humanoid robot acquiring a bimanual dancing motion shows that the system can also handle cyclic motion. An experiment with a 7-DOF WAM robotic arm learning the motion of hitting a ball with a table tennis racket highlights the possibility of encoding several variations of a movement in a single model. Finally, an experiment with a HOAP-3 humanoid robot learning to manipulate a spoon to feed the Robota humanoid robot demonstrates the capability of the system to handle several constraints simultaneously.

    Handling of multiple constraints and motion alternatives in a robot programming by demonstration framework

    We consider the problem of learning robust models of robot motion through demonstration. An approach based on Hidden Markov Models (HMM) and Gaussian Mixture Regression (GMR) is proposed to extract redundancies across multiple demonstrations and build a time-independent model of a set of movements demonstrated by a human user. Two experiments are presented to validate the method: learning to hit a ball with a robotic arm, and teaching a humanoid robot to manipulate a spoon to feed another humanoid. The experiments demonstrate that the proposed model can efficiently handle several aspects of learning by imitation. We first show that it can be used in an unsupervised learning manner, where the robot autonomously organizes and encodes variants of motion from the multiple demonstrations. We then show that the approach allows the observed skill to be robustly generalized by taking into account multiple constraints in task space during reproduction.
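    The reproduction step in GMR can be sketched in a few lines (an illustrative toy, not the authors' implementation): each mixture component is a joint Gaussian over (time t, position x), and conditioning on t yields a responsibility-weighted blend of per-component linear regressions. The two hand-set components below stand in for parameters that would normally be fitted to the demonstrations.

```python
import numpy as np

# Minimal Gaussian Mixture Regression: condition joint Gaussians
# over (t, x) on the query time t to recover E[x | t].
def gmr(t_query, priors, means, covs):
    # means[k] = [mu_t, mu_x]; covs[k] is the 2x2 joint covariance.
    h = np.array([p * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0])
                  / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()  # responsibilities h_k(t)
    # Per-component conditional means, blended by responsibility.
    x = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
         for m, c in zip(means, covs)]
    return float(h @ np.array(x))

# Toy model: two components standing in for a fitted GMM.
priors = [0.5, 0.5]
means = [np.array([0.25, 0.0]), np.array([0.75, 1.0])]
covs = [np.array([[0.02, 0.01], [0.01, 0.02]]),
        np.array([[0.02, 0.01], [0.01, 0.02]])]

traj = [gmr(t, priors, means, covs) for t in np.linspace(0.0, 1.0, 5)]
```

Because the blend is smooth in t, the reproduced trajectory averages across demonstrations rather than replaying any single one, which is the source of the robustness described above.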

    Learning and Reproduction of Gestures by Imitation


    A state-action neural network supervising navigation and manipulation behaviors for complex task reproduction

    In this abstract, we combine work from [Lagarde et al., 2010] and [Calinon et al., 2009] for the learning and reproduction of, respectively, navigation tasks on a mobile robot and gestures with a robot arm. Both approaches build a sensorimotor map under human guidance to learn the desired behavior. With several actions possible at the same time, action selection becomes a real issue. Several solutions to this problem exist: hierarchical architectures, parallel modules including subsumption architectures, or a mix of both [Bryson, 2000]. In navigation, a temporal sequence learner or a state-action association learner [Lagarde et al., 2010] makes it possible to learn a sequence of directions in order to follow a trajectory. These solutions can be extended to action sequence learning. In this paper we propose a simple architecture based on perception-action that is able to produce complex behaviors from the incremental learning of simple tasks. We then discuss the advantages and limitations of this architecture, which raises many questions.
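    The idea of a state-action association learner can be sketched as follows (a hypothetical toy, with made-up state and action names): during guidance the robot records which action the teacher demonstrated in each perceived state, then replays the associations to reproduce the sequence.

```python
# Hypothetical state-action association learner: store the action
# demonstrated in each state, then replay the associations.
def learn_associations(demonstration):
    table = {}
    for state, action in demonstration:
        table[state] = action  # the latest demonstration wins
    return table

def reproduce(table, states):
    # Fall back to a default action in states never demonstrated.
    return [table.get(s, "idle") for s in states]

demo = [("corridor", "go_forward"), ("junction", "turn_left"),
        ("corridor", "go_forward"), ("goal", "grasp")]
policy = learn_associations(demo)
print(reproduce(policy, ["corridor", "junction", "goal"]))
# → ['go_forward', 'turn_left', 'grasp']
```

A temporal sequence learner would additionally condition each association on the previous action, which is what allows the same perceived state to trigger different actions at different points of the trajectory.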