    A study of two complementary encoding strategies based on learning by demonstration for autonomous navigation task

    Learning by demonstration is a natural and interactive way of learning that non-experts can use to teach behaviors to robots. In this paper we study two learning by demonstration strategies which give different answers about how to encode information and when to learn. The first strategy is based on artificial neural networks and focuses on reactive on-line learning. The second uses Gaussian Mixture Models built on statistical features extracted off-line from several training datasets. A simple navigation experiment is used to compare the developmental possibilities of each strategy. The two strategies turn out to be complementary, and we highlight that each can be related to a specific memory structure in the brain.
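
    A minimal sketch of how the second, off-line strategy could look in practice, assuming scikit-learn's GaussianMixture and a Gaussian Mixture Regression read-out; the toy data, component count, and all names are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Demonstrations: rows of [sensor_reading, motor_command] collected from
# several noisy teleoperated navigation runs (toy data for illustration).
sensor = np.tile(np.linspace(0.0, 1.0, 50), 3)
motor = 0.2 + 0.6 * sensor + rng.normal(0.0, 0.02, sensor.size)
demos = np.column_stack([sensor, motor])

# Fit a GMM over the joint sensor-motor space; each component captures
# one statistical regularity of the demonstrated behavior.
gmm = GaussianMixture(n_components=4, random_state=0).fit(demos)

def reproduce(s):
    """Gaussian Mixture Regression: condition the joint model on the
    sensor value to read out a motor command."""
    num = den = 0.0
    for k in range(gmm.n_components):
        mu_s, mu_m = gmm.means_[k]
        var_s, cov_sm = gmm.covariances_[k][0, 0], gmm.covariances_[k][0, 1]
        h = gmm.weights_[k] * np.exp(-0.5 * (s - mu_s) ** 2 / var_s) / np.sqrt(var_s)
        num += h * (mu_m + cov_sm / var_s * (s - mu_s))
        den += h
    return num / den

print(reproduce(0.5))  # expected near 0.2 + 0.6 * 0.5 = 0.5
```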

    On-line Learning and Control of Attraction Basins for the Development of Sensorimotor Control Strategies

    Imitation and learning from humans require an adequate sensorimotor controller to learn and encode behaviors. We present the Dynamic Muscle PerAc (DM-PerAc) model to control a multi-DOF robot arm. In the original Perception-Action (PerAc) model, path-following or place-reaching behaviors correspond to the sensorimotor attractors resulting from the dynamics of learned sensorimotor associations. The DM-PerAc model, inspired by human muscles, combines impedance-like control with the capability of learning sensorimotor attraction basins. We detail a solution to incrementally learn the DM-PerAc visuomotor controller on-line. Postural attractors are learned by adapting the muscle activations in the model depending on movement errors. Visuomotor categories merging visual and proprioceptive signals are associated with these muscle activations. Thus, the visual and proprioceptive signals activate the motor action generating an attractor which satisfies both visual and proprioceptive constraints. This visuomotor controller can serve as a basis for imitative behaviors. Moreover, the muscle activation patterns can define directions of movement instead of postural attractors; such patterns can be used in state-action couples to generate trajectories, as in the PerAc model. We discuss a possible extension of the DM-PerAc controller by adapting Fukuyori's controller, based on the Langevin equation, which can serve not only to reach attractors that were not explicitly learned but also to learn the state-action couples defining trajectories.
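
    The following toy sketch illustrates the idea of a postural attractor shaped by antagonist muscle activations, with an impedance-like restoring torque and activations adapted on-line from the movement error. It is an interpretation for illustration only, not the published DM-PerAc model; the single joint and all constants are assumptions:

```python
import numpy as np

theta, theta_dot = 0.0, 0.0         # joint angle and velocity (1 DOF, unit inertia)
act = np.array([0.6, 0.4])          # agonist / antagonist muscle activations
K, B, dt, lr = 5.0, 1.0, 0.01, 0.1  # stiffness gain, damping, time step, learning rate

def muscle_torque(theta, theta_dot, act):
    # The activation difference sets the equilibrium posture; the
    # activation sum sets the impedance-like stiffness around it.
    equilibrium = act[0] - act[1]
    stiffness = K * (act[0] + act[1])
    return stiffness * (equilibrium - theta) - B * theta_dot

target = 0.3                        # desired posture (e.g. a visual category)
for _ in range(5000):
    tau = muscle_torque(theta, theta_dot, act)
    theta_dot += tau * dt           # Euler integration of the arm dynamics
    theta += theta_dot * dt
    err = target - theta            # movement error drives on-line adaptation
    act = np.clip(act + lr * err * np.array([1.0, -1.0]) * dt, 0.0, 1.0)

print(f"final posture {theta:.3f} (target {target})")
```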

    A Neural Network Generating Force Command for Motor Control of a Robotic Arm

    Short paper (3 pages). In this paper, we propose a bio-inspired torque controller based on a neural network architecture. This controller was used to reproduce demonstrated movements.
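
    As a rough illustration of the idea (not the paper's architecture or data), a small feed-forward network can be trained on demonstrated (state, torque) pairs and then queried for torque commands:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))       # [position error, velocity] (toy data)
y = 3.0 * X[:, :1] - 0.5 * X[:, 1:]    # demonstrated torque (toy target)

# Small 2-16-1 tanh network, trained by plain gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    g = 2 * (out - y) / len(X)         # dLoss/dout for the MSE loss
    W2 -= 0.1 * h.T @ g; b2 -= 0.1 * g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)     # backprop through tanh
    W1 -= 0.1 * X.T @ gh; b1 -= 0.1 * gh.sum(0)

query = np.array([[0.2, 0.0]])
print("torque command:", np.tanh(query @ W1 + b1) @ W2 + b2)  # near 0.6
```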

    A developmental approach of imitation to study the emergence of mirror neurons in a sensory-motor controller

    Mirror neurons have often been considered the explanation of how primates can imitate. In this paper, we show that a simple neural network architecture that learns visuo-motor associations can be enough to let low-level imitation emerge without a priori mirror neurons. Adding sequence-learning mechanisms and action inhibition makes it possible to perform deferred imitation of gestures demonstrated visually or by body manipulation. By building a cognitive map that provides the capability of learning plans, we can study in our model the emergence of both the low-level and high-level resonances highlighted by Rizzolatti et al.
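
    A hedged sketch of the emergence mechanism: during motor babbling the robot observes its own gestures, and a simple Hebbian layer associates the visual pattern with the motor command that produced it, so a demonstrator's similar gesture later reactivates that command. The sizes, the one-hot coding, and the learning rate are illustrative assumptions:

```python
import numpy as np

n_visual, n_motor = 8, 4
W = np.zeros((n_motor, n_visual))    # visuo-motor association weights
rng = np.random.default_rng(1)

def see(motor_idx):
    """Toy 'vision' of a gesture: a noisy visual pattern systematically
    caused by the executed motor command."""
    v = np.zeros(n_visual)
    v[motor_idx * 2:motor_idx * 2 + 2] = 1.0
    return v + rng.normal(0, 0.05, n_visual)

# Babbling phase: execute random commands, observe the own gesture,
# and associate vision with action (Hebbian rule).
for _ in range(500):
    m = rng.integers(n_motor)
    motor = np.eye(n_motor)[m]
    W += 0.01 * np.outer(motor, see(m))

# Observation phase: a demonstrator's gesture resembling command 2
# evokes motor unit 2 through the learned associations, with no
# dedicated mirror-neuron units built in.
demo = see(2)
print("evoked motor unit:", np.argmax(W @ demo))
```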

    Behavior adaptation from negative social feedback based on goal awareness

    Robots are expected to perform actions in a human environment where they will have to learn both how and when to act. Social human-robot interaction can provide robots with external feedback to guide them. In this paper, the focus is on correctly managing negative signals, which stresses the importance of the robot being aware of its own goal. In previous work, we developed bio-inspired models for action planning which enabled a robot to adapt its space representations, and thus its behavior, in the context of latent learning with rewards. However, as action selection is based on a local readout of a propagated gradient, the current goal is not explicitly available. To determine it, the implemented mechanisms are: first, to select and inhibit one of the potential goals, and then to monitor whether this inhibition changes the current behavior of the agent. If so, the inhibited goal is the one being pursued. As a result, negative signals can be used to directly modulate the strength of the current goal and change the agent's behavior.
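
    The goal-identification mechanism can be sketched on a toy 1-D grid: the agent follows a locally read gradient built from per-goal value fields, so its current goal is implicit; inhibiting one candidate goal and checking whether the greedy action changes reveals the pursued goal, and a negative signal then weakens it. The grid, fields, and update rule below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

n, goals = 10, {"A": 2, "B": 8}        # grid size and goal positions
strength = {"A": 1.0, "B": 1.0}        # modifiable goal strengths

def gradient_field(inhibited=None):
    field = np.zeros(n)
    for g, pos in goals.items():
        if g == inhibited:
            continue
        # Value decays with distance to the goal, scaled by its strength.
        field = np.maximum(field, strength[g] * (1 - np.abs(np.arange(n) - pos) / n))
    return field

def greedy_action(state, inhibited=None):
    # Local readout: step toward the neighbor with the higher value.
    f = gradient_field(inhibited)
    return 1 if f[min(state + 1, n - 1)] >= f[max(state - 1, 0)] else -1

state = 5
base = greedy_action(state)
# Probe each goal: the one whose inhibition changes behavior is current.
current = next(g for g in goals if greedy_action(state, inhibited=g) != base)
print("current goal:", current)

strength[current] *= 0.5               # negative feedback weakens that goal
print("action after feedback:", greedy_action(state))
```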