
    Properties for Efficient Demonstrations to a Socially Guided Intrinsically Motivated Learner

    The combination of intrinsically motivated learning and social learning has been shown to improve the learner's performance and precision over a wider range of motor skills, for instance with the SGIM-D learning algorithm. Nevertheless, this bootstrapping depends a priori on the demonstrations made by the teacher. In this paper we examine this dependence: to what extent can the quality of the demonstrations influence learning performance, and what are the characteristics of a good demonstrator? Results on a fishing experiment highlight the importance of the difficulty of the demonstrated tasks, as well as the structure of the demonstrated actions.

    Interactive learning gives the tempo to an intrinsically motivated robot learner

    This paper studies an interactive learning system that couples internally guided learning and social interaction for robot learning of motor skills. We present Socially Guided Intrinsic Motivation with Interactive learning at the Meta level (SGIM-IM), an algorithm for learning forward and inverse models in high-dimensional, continuous and non-preset environments. The robot actively self-determines: at the meta level, a strategy, choosing between active autonomous learning and social learning; and at the task level, a goal task for autonomous exploration. We illustrate through two experimental setups that SGIM-IM efficiently combines the advantages of social learning and intrinsic motivation: it can produce a wide range of effects in the environment and develop precise control policies in large spaces, while minimising its reliance on the teacher and offering a flexible interaction framework with humans.

    Bootstrapping Intrinsically Motivated Learning with Human Demonstrations

    This paper studies the coupling of internally guided learning and social interaction, and more specifically the improvement that demonstrations bring to learning by intrinsic motivation. We present Socially Guided Intrinsic Motivation by Demonstration (SGIM-D), an algorithm for learning in continuous, unbounded and non-preset environments. After introducing social learning and intrinsic motivation, we describe the design of our algorithm, before showing through a fishing experiment that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation to gain a wide repertoire while being specialised in specific subspaces.
    Comment: IEEE International Conference on Development and Learning, Frankfurt, Germany (2011).

    Socially Guided Intrinsic Motivation for Robot Learning of Motor Skills

    This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots; in particular, it learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line, we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation, and benefits from the properties of human demonstrations to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.
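    The interplay described above, demonstrations seeding a sensorimotor memory that intrinsically motivated goal babbling then refines, can be sketched in a few lines. This is a toy sketch under assumed simplifications (a one-dimensional action/outcome space; all function names and parameters are invented for illustration), not the SGIM-D implementation:

```python
import random

def outcome(action):
    # Toy environment standing in for the fishing setup (illustrative).
    return action ** 2

def sgim_d_sketch(demos, n_episodes=50, p_demo=0.2, seed=0):
    """Minimal sketch of a SGIM-D-style loop: interleave goal babbling
    with teacher demonstrations that seed the sensorimotor memory."""
    rng = random.Random(seed)
    memory = []          # (action, outcome) pairs
    demos = list(demos)  # queue of teacher demonstrations
    for _ in range(n_episodes):
        if demos and (not memory or rng.random() < p_demo):
            # Social learning: replay the next available demonstration.
            a, y = demos.pop(0)
        else:
            # Intrinsic motivation: pick a self-chosen goal, then locally
            # perturb the action whose known outcome is closest to it.
            goal = rng.random()
            a_near, _ = min(memory, key=lambda m: abs(m[1] - goal))
            a = min(1.0, max(0.0, a_near + rng.gauss(0, 0.1)))
            y = outcome(a)
        memory.append((a, y))
    return memory
```

    A single demonstration is enough to bootstrap the loop; every later episode builds on the growing memory.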

    Socially Guided Intrinsically Motivated Learner

    This paper studies the coupling of two learning strategies: internally guided learning and social interaction. We present Socially Guided Intrinsic Motivation by Demonstration (SGIM-D) and its interactive-learner version, Socially Guided Intrinsic Motivation with Interactive learning at the Meta level (SGIM-IM), which are algorithms for learning inverse models in high-dimensional continuous sensorimotor spaces. After describing the general framework of our algorithms, we illustrate them with a fishing experiment.

    Active Choice of Teachers, Learning Strategies and Goals for a Socially Guided Intrinsic Motivation Learner

    We present an active learning architecture that allows a robot to actively learn which data collection strategy is most efficient for acquiring motor skills to achieve multiple outcomes, and to generalise over its experience to achieve new outcomes. The robot explores its environment both via interactive learning and via goal babbling. It learns at the same time when, whom and what to actively imitate from several available teachers, and when not to use social guidance but active goal-oriented self-exploration instead. This is formalised in the framework of life-long strategic learning. The proposed architecture, called Socially Guided Intrinsic Motivation with Active Choice of Teacher and Strategy (SGIM-ACTS), relies on hierarchical active decisions of what and how to learn, driven by the empirical evaluation of learning progress for each learning strategy. We illustrate with an experiment where a simulated robot learns to control its arm to realise two different kinds of outcomes. At each learning episode it has to choose, actively and hierarchically: 1) what to learn: which outcome is the most interesting to select as a goal for goal-directed exploration; 2) how to learn: which data collection strategy to use among self-exploration, mimicry and emulation; 3) once it has decided when and what to imitate, by choosing mimicry or emulation, whom to imitate from a set of different teachers. We show that SGIM-ACTS learns significantly more efficiently than with any single learning strategy, and coherently selects the best strategy with respect to the chosen outcome, taking advantage of the available teachers (with different levels of skill).
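    The meta-level choice of how to learn can be illustrated with a small bandit-style selector driven by empirical learning progress. This is a minimal sketch, assuming illustrative strategy names, a fixed sliding window and epsilon-greedy exploration; it is not the SGIM-ACTS implementation:

```python
import random
from collections import deque

class StrategySelector:
    """Pick a data-collection strategy by empirical learning progress:
    the drop in error between the older and the more recent half of a
    sliding window of outcomes for each strategy."""

    def __init__(self, strategies, window=10, epsilon=0.2):
        self.strategies = strategies
        self.errors = {s: deque(maxlen=window) for s in strategies}
        self.epsilon = epsilon  # keep occasionally trying every strategy

    def progress(self, s):
        e = list(self.errors[s])
        if len(e) < 2:
            return float("inf")  # untried strategies look maximally promising
        half = len(e) // 2
        older = sum(e[:half]) / half
        recent = sum(e[half:]) / (len(e) - half)
        return abs(older - recent)  # absolute learning progress

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=self.progress)

    def update(self, s, error):
        self.errors[s].append(error)
```

    A strategy whose error is still dropping (high progress) keeps being selected; one that has plateaued is abandoned until something changes.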

    CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning

    In open-ended environments, autonomous learning agents must set their own goals and build their own curriculum through intrinsically motivated exploration. They may consider a large diversity of goals, aiming to discover what is controllable in their environment and what is not. Because some goals might prove easy and some impossible, agents must actively select which goal to practice at any moment, to maximize their overall mastery of the set of learnable goals. This paper proposes CURIOUS, an algorithm that leverages 1) a modular Universal Value Function Approximator with hindsight learning to achieve a diversity of goals of different kinds within a unique policy and 2) an automated curriculum learning mechanism that biases the attention of the agent towards goals maximizing the absolute learning progress. Agents focus sequentially on goals of increasing complexity, and focus back on goals that are being forgotten. Experiments conducted in a new modular-goal robotic environment show the resulting developmental self-organization of a learning curriculum, and demonstrate robustness to distracting goals, forgetting and changes in body properties.
    Comment: Accepted at ICML 2019.
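    The hindsight-learning ingredient mentioned above can be illustrated with a minimal goal-relabelling sketch in the style of hindsight experience replay: failed transitions are reused by pretending the outcome actually achieved was the goal all along. The transition format and the future-sampling strategy here are assumptions for illustration, not the CURIOUS implementation:

```python
import random

def her_relabel(episode, k=4):
    """Relabel each transition with up to k goals sampled from the
    outcomes achieved later in the same episode.

    episode: list of (obs, action, achieved_outcome, original_goal).
    Returns (obs, action, new_goal, reward) tuples, where the reward is
    1.0 when the step's achieved outcome matches the substituted goal.
    """
    relabeled = []
    for t, (obs, action, achieved, _goal) in enumerate(episode):
        future = [step[2] for step in episode[t:]]  # outcomes from t onwards
        for new_goal in random.sample(future, min(k, len(future))):
            reward = 1.0 if new_goal == achieved else 0.0
            relabeled.append((obs, action, new_goal, reward))
    return relabeled
```

    Every episode thus yields useful learning signal for some goal, even when the originally targeted goal was never reached.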