
    Connectionist simulation of attitude learning: Asymmetries in the acquisition of positive and negative evaluations

    Connectionist computer simulation was employed to explore the notion that, if attitudes guide approach and avoidance behaviors, false negative beliefs are likely to remain uncorrected for longer than false positive beliefs. In Study 1, the authors trained a three-layer neural network to discriminate "good" and "bad" inputs distributed across a two-dimensional space. "Full feedback" training, whereby connection weights were modified to reduce error after every trial, resulted in perfect discrimination. "Contingent feedback," whereby connection weights were only updated following outputs representing approach behavior, led to several false negative errors (good inputs misclassified as bad). In Study 2, the network was redesigned to distinguish a system for learning evaluations from a mechanism for selecting actions. Biasing action selection toward approach eliminated the asymmetry between learning of good and bad inputs under contingent feedback. Implications for various attitudinal phenomena and biases in social cognition are discussed.
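
    The contingent-feedback mechanism can be sketched in a few lines. In the toy example below, a tabular value estimate over a discretized 2-D space stands in for the authors' three-layer network; the grid size, initial values, learning rate, and trial count are illustrative assumptions, not details from the study.

```python
# Minimal sketch, not the authors' simulation: a tabular evaluation of a
# discretized 2-D input space stands in for the paper's three-layer network.
import numpy as np

rng = np.random.default_rng(0)

grid = 10                                                 # discretize the 2-D input space
truth = np.fromfunction(lambda i, j: (i + j) >= grid, (grid, grid))  # True = "good" input

def train(contingent, trials=3000, lr=0.5):
    value = rng.uniform(-1, 1, (grid, grid))              # initial evaluations
    for _ in range(trials):
        i, j = rng.integers(grid), rng.integers(grid)     # sample an input
        approach = value[i, j] > 0                        # positive evaluation -> approach
        if contingent and not approach:
            continue                                      # avoidance: outcome never observed
        outcome = 1.0 if truth[i, j] else -1.0
        value[i, j] += lr * (outcome - value[i, j])       # error-correcting update
    return value

for contingent in (False, True):
    value = train(contingent)
    false_neg = int(np.sum((value <= 0) & truth))         # good inputs still judged bad
    label = "contingent feedback" if contingent else "full feedback"
    print(f"{label}: {false_neg} false negatives out of {int(truth.sum())} good cells")
```

    Under full feedback every sampled input is corrected, so false negatives vanish; under contingent feedback, inputs that start with a negative evaluation are never approached and therefore never corrected, reproducing the asymmetry the abstract describes.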

    Rule learning enhances structural plasticity of long-range axons in frontal cortex.

    Rules encompass cue-action-outcome associations used to guide decisions and strategies in a specific context. Subregions of the frontal cortex including the orbitofrontal cortex (OFC) and dorsomedial prefrontal cortex (dmPFC) are implicated in rule learning, although changes in structural connectivity underlying rule learning are poorly understood. We imaged OFC axonal projections to dmPFC during training in a multiple choice foraging task and used a reinforcement learning model to quantify explore-exploit strategy use and prediction error magnitude. Here we show that rule training, but not experience of reward alone, enhances OFC bouton plasticity. Baseline bouton density and gains during training correlate with rule exploitation, while bouton loss correlates with exploration and scales with the magnitude of experienced prediction errors. We conclude that rule learning sculpts frontal cortex interconnectivity and adjusts a thermostat for the explore-exploit balance.
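
    The abstract does not specify the model, so the sketch below is only a generic example of the kind of reinforcement-learning analysis described: a softmax bandit learner whose unsigned prediction errors and greedy-choice rate stand in for prediction-error magnitude and rule exploitation. The reward probabilities, learning rate, and softmax temperature are assumptions.

```python
# Generic sketch of a softmax bandit learner, not the authors' fitted model.
import numpy as np

rng = np.random.default_rng(1)

def run_session(p_reward=(0.8, 0.2, 0.2), alpha=0.2, beta=3.0, trials=300):
    q = np.zeros(len(p_reward))                 # learned action values
    abs_errors, exploit = [], []
    for _ in range(trials):
        logits = beta * q
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                    # softmax action selection
        a = rng.choice(len(q), p=probs)
        exploit.append(a == int(np.argmax(q)))  # did the choice exploit current values?
        reward = float(rng.random() < p_reward[a])
        delta = reward - q[a]                   # reward prediction error
        q[a] += alpha * delta                   # value update
        abs_errors.append(abs(delta))
    return float(np.mean(abs_errors)), float(np.mean(exploit))

mean_pe, exploit_rate = run_session()
print(f"mean |prediction error| = {mean_pe:.2f}, exploit rate = {exploit_rate:.2f}")
```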

    Robot pain: a speculative review of its functions

    Given the scarce bibliography dealing explicitly with robot pain, this chapter enriches its review with related research on robot behaviours and capacities in which pain could play a role. It is shown that all such roles, ranging from punishment to intrinsic motivation and planning knowledge, can be formulated within the unified framework of reinforcement learning.
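
    As a purely illustrative aside, one of the roles mentioned, pain as punishment, reduces to a negative term in a reinforcement-learning reward signal. The two-action task, costs, and pain weight below are assumptions for illustration, not the chapter's model.

```python
# Illustrative sketch only: pain as punishment, i.e. a negative reward term.
import numpy as np

rng = np.random.default_rng(3)

def learn_choice(pain_weight, trials=500, alpha=0.1, eps=0.1):
    # Action 0: shortcut through a damaging zone; action 1: slower safe detour.
    q = np.zeros(2)
    for _ in range(trials):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
        reward = 1.0 - (pain_weight if a == 0 else 0.2)   # pain enters as negative reward
        q[a] += alpha * (reward - q[a])
    return int(np.argmax(q))

for w in (0.0, 2.0):
    choice = "shortcut" if learn_choice(w) == 0 else "safe detour"
    print(f"pain weight {w}: agent prefers the {choice}")
```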

    Intrinsic Motivation Systems for Autonomous Mental Development

    Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
    Key words: Active learning, autonomy, behavior, complexity, curiosity, development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values
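
    A minimal sketch can make the learning-progress idea concrete. The activity names, error-decay rates, window size, and trial count below are assumptions for illustration, not the Intelligent Adaptive Curiosity implementation.

```python
# Minimal sketch of selecting activities by learning progress, i.e. by how
# fast prediction error has been decreasing, not the authors' implementation.
import numpy as np

# Hypothetical activities: prediction error shrinks at different rates;
# one is pure noise and cannot be learned at all.
decay = {"easy": 0.90, "hard": 0.98, "unlearnable": 1.00}
error = {name: 1.0 for name in decay}
history = {name: [] for name in decay}
window = 10

def learning_progress(errs):
    if len(errs) < 2 * window:
        return float("inf")                    # sample under-explored activities first
    older = np.mean(errs[-2 * window:-window])
    recent = np.mean(errs[-window:])
    return older - recent                      # positive while error keeps dropping

visits = {name: 0 for name in decay}
for _ in range(600):
    chosen = max(decay, key=lambda name: learning_progress(history[name]))
    visits[chosen] += 1
    error[chosen] *= decay[chosen]             # simulated practice reduces error
    history[chosen].append(error[chosen])

# Most visits go to activities with sustained learning progress; the
# unlearnable activity is abandoned once its progress is measured as zero.
print(visits)
```

    In this toy run the agent first exhausts the quickly learned activity, shifts to the harder one as its own progress flattens, and stops sampling the unlearnable one, mirroring the stage-like trajectory the abstract reports.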
