    Reward-Driven Learning of Sensorimotor Laws and Visual Features

    Abstract—A frequently recurring task for humanoid robots is autonomous navigation towards a goal position. Here we present a simulation of a purely vision-based docking behavior in a 3-D physical world. The robot learns sensorimotor laws and visual features simultaneously and exploits both for navigation towards its virtual target region. The control laws are trained using a two-layer network consisting of a feature (sensory) layer that feeds into an action (Q-value) layer. A reinforcement feedback signal (delta) modulates not only the action but at the same time the feature weights. Under this influence, the network learns interpretable visual features and successfully assigns goal-directed actions. This is a step towards investigating how reinforcement learning can be linked to visual perception.
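    The abstract's core mechanism can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, learning rates, tanh nonlinearity, and the exact form of the delta-modulated feature update are all assumptions; only the overall structure (a feature layer feeding a Q-value layer, with one TD error modulating both sets of weights) comes from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Dimensions and hyperparameters are illustrative assumptions.
    N_PIXELS, N_FEATURES, N_ACTIONS = 64, 8, 4
    ALPHA, GAMMA = 0.01, 0.9

    W_feat = rng.normal(scale=0.1, size=(N_FEATURES, N_PIXELS))   # feature (sensory) layer
    W_q = rng.normal(scale=0.1, size=(N_ACTIONS, N_FEATURES))     # action (Q-value) layer

    def forward(image):
        """Image -> feature activations -> Q-values."""
        features = np.tanh(W_feat @ image)   # assumed nonlinearity
        q_values = W_q @ features
        return features, q_values

    def update(image, action, reward, next_image):
        """One learning step: the same reinforcement signal (delta)
        adjusts both the Q-weights and the feature weights."""
        global W_feat, W_q
        features, q = forward(image)
        _, q_next = forward(next_image)
        delta = reward + GAMMA * q_next.max() - q[action]   # TD error

        # Q-layer update for the action that was taken.
        W_q[action] += ALPHA * delta * features

        # Feature-layer update, modulated by the same delta
        # (here: gradient through the tanh of the chosen Q-row).
        grad_feat = W_q[action] * (1.0 - features**2)
        W_feat += ALPHA * delta * np.outer(grad_feat, image)
        return delta
    ```

    In this sketch a single scalar delta drives both updates, which is what lets reward shape the visual features themselves rather than only the action values.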