Connectionist simulation of attitude learning: Asymmetries in the acquisition of positive and negative evaluations
Connectionist computer simulation was employed to explore the notion that, if attitudes guide approach and avoidance behaviors, false negative beliefs are likely to remain uncorrected for longer than false positive beliefs. In Study 1, the authors trained a three-layer neural network to discriminate "good" and "bad" inputs distributed across a two-dimensional space. "Full feedback" training, whereby connection weights were modified to reduce error after every trial, resulted in perfect discrimination. "Contingent feedback," whereby connection weights were only updated following outputs representing approach behavior, led to several false negative errors (good inputs misclassified as bad). In Study 2, the network was redesigned to distinguish a system for learning evaluations from a mechanism for selecting actions. Biasing action selection toward approach eliminated the asymmetry between learning of good and bad inputs under contingent feedback. Implications for various attitudinal phenomena and biases in social cognition are discussed.
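The contingent-feedback asymmetry the abstract describes can be caricatured in a few lines. This is a toy tabular version, not the authors' three-layer network; the stimulus names, values, and initial beliefs are illustrative only. The agent approaches only stimuli it currently believes are good, and receives corrective feedback only when it approaches, so a false positive is corrected while a false negative stays frozen:

```python
def contingent_learning(true_values, beliefs, steps=10):
    """Tabular sketch of 'contingent feedback': the agent approaches a
    stimulus only when its current belief about it is positive, and the
    environment reveals the true value only on approach."""
    beliefs = dict(beliefs)
    for _ in range(steps):
        for stim, true_v in true_values.items():
            if beliefs[stim] > 0:        # approach -> outcome observed
                beliefs[stim] = true_v   # belief corrected by feedback
            # avoidance -> no feedback, so the belief stays frozen
    return beliefs

true_values = {"A": +1, "B": -1}      # A is actually good, B actually bad
beliefs     = {"A": -0.5, "B": +0.5}  # false negative on A, false positive on B
final = contingent_learning(true_values, beliefs)
# B's false positive is approached once and corrected to -1;
# A's false negative persists because A is never approached.
```

Under full feedback (feedback on every trial regardless of the action), both errors would be corrected, which is the asymmetry Study 1 reports.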
Rule learning enhances structural plasticity of long-range axons in frontal cortex.
Rules encompass cue-action-outcome associations used to guide decisions and strategies in a specific context. Subregions of the frontal cortex including the orbitofrontal cortex (OFC) and dorsomedial prefrontal cortex (dmPFC) are implicated in rule learning, although changes in structural connectivity underlying rule learning are poorly understood. We imaged OFC axonal projections to dmPFC during training in a multiple choice foraging task and used a reinforcement learning model to quantify explore-exploit strategy use and prediction error magnitude. Here we show that rule training, but not experience of reward alone, enhances OFC bouton plasticity. Baseline bouton density and gains during training correlate with rule exploitation, while bouton loss correlates with exploration and scales with the magnitude of experienced prediction errors. We conclude that rule learning sculpts frontal cortex interconnectivity and adjusts a thermostat for the explore-exploit balance.
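The kind of reinforcement learning model mentioned here, softmax action selection with a delta-rule value update whose |delta| gives the prediction-error magnitude, might look like the following minimal sketch. The two-arm setup and all parameter values are assumptions for illustration, not the paper's fitted model:

```python
import math
import random

def softmax_rl(probs, beta, trials=200, alpha=0.2, seed=1):
    """Sketch of a softmax RL model: beta (inverse temperature) controls
    the exploit-explore balance, alpha is the learning rate, and the
    trial-wise |delta| is the prediction-error magnitude."""
    rng = random.Random(seed)
    q = [0.0] * len(probs)               # learned action values
    abs_pe = []
    for _ in range(trials):
        # softmax choice: p(arm) proportional to exp(beta * q[arm])
        w = [math.exp(beta * v) for v in q]
        pick, acc, arm = rng.random() * sum(w), 0.0, 0
        for i, wi in enumerate(w):
            acc += wi
            if pick <= acc:
                arm = i
                break
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        delta = reward - q[arm]          # reward prediction error
        q[arm] += alpha * delta          # delta-rule update
        abs_pe.append(abs(delta))
    return q, sum(abs_pe) / len(abs_pe)

q, mean_pe = softmax_rl([0.9, 0.1], beta=5.0)
```

A large beta concentrates choices on the highest-valued option (exploitation); a small beta flattens the choice distribution (exploration), which is how such models quantify strategy use from behaviour.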
Robot pain: a speculative review of its functions
Given the scarce bibliography dealing explicitly with robot pain, this chapter enriches its review with related research on robot behaviours and capacities in which pain could play a role. It is shown that all such roles, ranging from punishment to intrinsic motivation and planning knowledge, can be formulated within the unified framework of reinforcement learning.
The value of novelty in schizophrenia
Influential models of schizophrenia suggest that patients experience incoming stimuli as excessively novel and motivating, with important consequences for hallucinatory experience and delusional belief. However, whether schizophrenia patients exhibit excessive novelty value and whether this interferes with adaptive behaviour has not yet been formally tested. Here, we employed a three-armed bandit task to investigate this hypothesis. Schizophrenia patients and healthy controls were first familiarised with a group of images and then asked to repeatedly choose between familiar and unfamiliar images associated with different monetary reward probabilities. By fitting a reinforcement-learning model we were able to estimate the values attributed to familiar and unfamiliar images when first presented in the context of the decision-making task. In line with our hypothesis, we found increased preference for newly introduced images (irrespective of whether these were familiar or unfamiliar) in patients compared to healthy controls, and found this preference to correlate with severity of hallucinatory experience. In addition, we found a correlation between value assigned to novel images and task performance, suggesting that excessive novelty value may interfere with optimal learning in patients, putatively through the disruption of the mechanisms regulating exploration versus exploitation. Our results suggest excessive novelty value in patients, whereby even previously seen stimuli acquire higher value as the result of their exposure in a novel context, a form of "hyper novelty" which may explain why patients are often attracted by familiar stimuli experienced as new.
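One way a fitted novelty-value parameter can enter a bandit model of this kind is as an inflated initial value for each newly introduced option, so a larger bonus drives more early sampling of novel arms. The sketch below is a generic illustration under that assumption; the greedy choice rule, function name, and all parameter values are hypothetical, not the authors' fitted model:

```python
import random

def run_bandit(novelty_bonus, probs, trials=1000, alpha=0.1, seed=0):
    """Greedy bandit sketch: every arm starts at `novelty_bonus`
    (the assumed value of novelty), then values are updated by a
    delta rule toward the observed rewards."""
    rng = random.Random(seed)
    q = [float(novelty_bonus)] * len(probs)  # novelty inflates initial value
    total = 0.0
    for _ in range(trials):
        best = max(q)
        arm = rng.choice([i for i, v in enumerate(q) if v == best])
        r = 1.0 if rng.random() < probs[arm] else 0.0
        q[arm] += alpha * (r - q[arm])       # delta-rule value update
        total += r
    return total

total = run_bandit(novelty_bonus=1.0, probs=[0.8, 0.2, 0.2])
```

With a high bonus, every arm looks attractive until sampled, producing the over-exploration of novel options that the abstract associates with patients; with a bonus near the arms' true values, the agent settles on the best arm sooner.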
Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
Key words: Active learning, autonomy, behavior, complexity, curiosity, development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values
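The learning-progress drive described in this abstract can be caricatured in a few lines. Here three toy "regions" with assumed error-decay rates stand in for sensorimotor contexts: one is easy to learn, one is hard but learnable, and one is pure noise that never improves. The agent always practises wherever its prediction error dropped the most on its last visit, so it abandons the unlearnable region after a single try; the region names and decay rates are illustrative, not the paper's setup:

```python
def iac_sketch(steps=60):
    """Toy sketch of a learning-progress drive: pick the region whose
    prediction error decreased the most when last practised."""
    decay = {"easy": 0.5, "hard": 0.95, "noise": 1.0}  # assumed decay rates
    error = {k: 1.0 for k in decay}                    # current prediction error
    progress = {k: float("inf") for k in decay}        # optimistic: try each once
    visits = {k: 0 for k in decay}
    for _ in range(steps):
        region = max(progress, key=progress.get)       # maximize learning progress
        new_error = error[region] * decay[region]      # practising reduces error
        progress[region] = error[region] - new_error   # measured progress
        error[region] = new_error
        visits[region] += 1
    return visits

visits = iac_sketch()
```

The resulting visit counts show the stage-like pattern the abstract describes: the easy region is exploited first, attention then shifts to the harder region as the easy one stops yielding progress, and the noise region is visited only once, since it offers no learning progress at all.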
- …