4 research outputs found

    Developmental Human-Robot Imitation Learning of Drawing with a Neuro Dynamical System

    Full text link
    Abstract—This paper deals with the influence of teaching style and of developmental processes in the learning model on the acquired representations (primitives). We investigate these influences by introducing a hierarchical recurrent neural network as the robot's model, together with a form of motionese (a caregiver's use of simpler and more exaggerated motions when showing a task to an infant). We modified a Multiple Timescales Recurrent Neural Network (MTRNN) to serve as the robot's self-model; the number of layers in the MTRNN increases as the events to be learned become more complex. We evaluate our approach with the humanoid robot "Actroid" in an imitation experiment in which a human caregiver gives the robot the task of pushing two buttons. Experimental results and analysis confirm that learning with phased teaching and structuring enables the robot to acquire clear motion primitives as activities in the fast context layer of the MTRNN and to handle unknown motions.
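
    The MTRNN mentioned in the abstract is, at its core, a leaky-integrator recurrent network in which different groups of units update with different time constants (small constants for fast context units, large constants for slow context units). The following is a minimal sketch of that update rule only, assuming a single fully connected pool of units for simplicity; the sizes, time constants, and function names (mtrnn_step, n_fast, n_slow) are illustrative and not taken from the paper.

    import numpy as np

    def mtrnn_step(u, y, W, b, tau):
        """One leaky-integrator update of an MTRNN-style network.

        u, y : internal states and activations of all units (1-D arrays)
        W, b : recurrent weight matrix and bias vector
        tau  : per-unit time constants (small = fast context, large = slow context)
        """
        u_next = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + b)
        y_next = np.tanh(u_next)
        return u_next, y_next

    # Hypothetical two-level network: 30 fast-context units (tau = 2)
    # and 10 slow-context units (tau = 50), fully connected.
    n_fast, n_slow = 30, 10
    n = n_fast + n_slow
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n, n))
    b = np.zeros(n)
    tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 50.0)])

    u = np.zeros(n)
    y = np.tanh(u)
    for t in range(100):
        u, y = mtrnn_step(u, y, W, b, tau)

    Because the fast-context units integrate new input quickly while the slow-context units change gradually, sequence structure tends to separate across the two groups, which is the property the abstract's analysis of "activities in the fast context layer" relies on.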

    Multimodal Imitation using Self-learned Sensorimotor Representations

    No full text
    Although many tasks intrinsically involve multiple modalities, often only data from a single modality are used to improve a complex robot's acquisition of new skills. We present a method that equips robots with multimodal learning skills to achieve on-the-fly multimodal imitation across multiple concurrent task spaces, including vision, touch and proprioception, using only self-learned multimodal sensorimotor relations, without the need to solve inverse kinematics problems or formulate explicit analytical models. We evaluate the proposed method on a humanoid iCub robot learning to interact with a piano keyboard and imitating a human demonstration. Since no assumptions are made about the kinematic structure of the robot, the method can also be applied to other robotic platforms.
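
    The abstract does not spell out the mechanism, but one generic way to imitate without inverse kinematics is to search motor commands through a self-learned forward model that predicts their multimodal outcome, and pick the command whose prediction best matches the demonstration. The sketch below illustrates only that generic idea, not the paper's actual method: forward_model, imitate_step, the placeholder features, and all parameters are hypothetical stand-ins.

    import numpy as np

    def forward_model(q):
        """Stand-in for a learned mapping from a joint command q to predicted
        multimodal features (e.g. concatenated vision/touch/proprioception)."""
        return np.concatenate([np.sin(q), np.cos(q)])  # placeholder, not a learned model

    def imitate_step(q, target, step=0.05, n_samples=50,
                     rng=np.random.default_rng(0)):
        """Sample commands near q and keep the one whose predicted multimodal
        outcome is closest to the demonstrated target (no inverse kinematics)."""
        candidates = q + step * rng.normal(size=(n_samples, q.size))
        errors = [np.linalg.norm(forward_model(c) - target) for c in candidates]
        return candidates[int(np.argmin(errors))]

    q = np.zeros(7)                     # e.g. a 7-DoF arm configuration
    target = forward_model(np.ones(7))  # demonstrated multimodal features
    for _ in range(200):
        q = imitate_step(q, target)

    The point of the sketch is the design choice shared with the abstract: because the mapping from commands to sensory outcomes is learned from the robot's own experience, nothing in the loop depends on an analytical kinematic model of the platform.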