
    First Steps Toward a Computational Theory of Autism

    A computational model with three interacting components for context-sensitive reinforcement learning, context processing, and automation can autonomously learn a focus-attention task and a shift-attention task. The performance of the model is similar to that of normal children, and when a single parameter is changed, the performance on the two tasks approaches that of autistic children.

    The emotional gatekeeper: a computational model of attentional selection and suppression through the pathway from the amygdala to the inhibitory thalamic reticular nucleus

    In a complex environment that contains both opportunities and threats, it is important for an organism to flexibly direct attention based on current events and prior plans. The amygdala, the hub of the brain's emotional system, is involved in forming and signaling affective associations between stimuli and their consequences. The inhibitory thalamic reticular nucleus (TRN) is a hub of the attentional system that gates thalamo-cortical signaling. In the primate brain, a recently discovered pathway from the amygdala sends robust projections to TRN. Here we used computational modeling to demonstrate how the amygdala-TRN pathway, embedded in a wider neural circuit, can mediate selective attention guided by emotions. Our Emotional Gatekeeper model demonstrates how this circuit enables focused top-down, and flexible bottom-up, allocation of attention. The model suggests that the amygdala-TRN projection can serve as a unique mechanism for emotion-guided selection of signals sent to cortex for further processing. This inhibitory selection mechanism can mediate a powerful affective 'framing' effect that may lead to biased decision-making in highly charged emotional situations. The model also supports the idea that the amygdala can serve as a relevance detection system. Further, the model demonstrates how abnormal top-down drive and dysregulated local inhibition in the amygdala and in the cortex can contribute to the attentional symptoms that accompany several neuropsychiatric disorders.
    R01MH057414 - NIMH NIH HHS; R01 MH057414 - NIMH NIH HHS; R01 MH101209 - NIMH NIH HHS; R01NS024760 - NINDS NIH HHS; R01MH101209 - NIMH NIH HHS; R01 NS024760 - NINDS NIH HHS
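The gating mechanism described here can be caricatured in a few lines: TRN inhibition on a thalamic relay is reduced for channels the amygdala flags as emotionally salient, so those signals reach cortex preferentially. This is an illustrative sketch, not the published Emotional Gatekeeper model; the function and parameter names (`gated_relay`, `baseline_inhibition`, `gain`) are invented for the example.

```python
import numpy as np

def gated_relay(sensory, amygdala_salience, baseline_inhibition=1.0, gain=2.0):
    """Toy thalamo-cortical relay gated by TRN inhibition.

    TRN inhibition is reduced for channels flagged as emotionally salient
    by the amygdala, so those signals pass to cortex preferentially.
    All parameters are illustrative, not taken from the model.
    """
    trn_inhibition = baseline_inhibition - gain * amygdala_salience
    trn_inhibition = np.clip(trn_inhibition, 0.0, None)   # inhibition cannot go negative
    return np.clip(sensory - trn_inhibition, 0.0, None)   # rectified relay output

# Two equally strong sensory channels; only channel 0 is emotionally salient.
sensory = np.array([1.0, 1.0])
salience = np.array([0.5, 0.0])
relayed = gated_relay(sensory, salience)   # salient channel passes, the other is suppressed
```

The same three dials, in the spirit of the abstract's closing point, also caricature dysfunction: raising `baseline_inhibition` everywhere or driving `gain` too high biases which inputs ever reach cortex.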

    A Biologically Plausible Learning Rule for Deep Learning in the Brain

    Researchers have proposed that deep learning, which is providing important progress in a wide range of high complexity tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning by artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, like AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers and can they handle tasks of higher complexity? We demonstrate the learning scheme on classical and hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast as direct reward tasks, for fully connected, convolutional, and locally connected architectures. We show that our learning rule - Q-AGREL - performs comparably to supervised learning via error-backpropagation, with this type of trial-and-error reinforcement learning requiring only 1.5-2.5 times more epochs, even when classifying 100 different classes as in CIFAR100. Our results provide new insights into how deep learning may be implemented in the brain.
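To make the class of rule concrete: the abstract describes trial-and-error learning in which the network receives only a scalar reward for its chosen action, and hidden-layer plasticity is gated by feedback from the selected output unit. The sketch below shows one trial of an AGREL-style update for a one-hidden-layer network; it is an illustrative approximation, not the authors' exact Q-AGREL rule, and all names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def agrel_style_trial(x, W_hid, W_out, target, lr=0.1, eps=0.1):
    """One trial of an AGREL-style reward-gated update (illustrative sketch,
    not the authors' exact Q-AGREL rule). Weights are updated in place.
    """
    h = np.maximum(0.0, W_hid @ x)                 # ReLU hidden layer
    q = W_out @ h                                  # one action value per class
    # Trial-and-error selection: epsilon-greedy instead of a full error vector.
    a = int(rng.integers(len(q))) if rng.random() < eps else int(np.argmax(q))
    r = 1.0 if a == target else 0.0                # scalar reward only
    delta = r - q[a]                               # reward prediction error
    # Hidden plasticity is gated by feedback from the single selected output.
    gate = W_out[a] * (h > 0)
    W_out[a] += lr * delta * h                     # only the chosen unit's weights change
    W_hid += lr * delta * np.outer(gate, x)
    return a, r
```

Iterating such trials lets a single scalar reward shape both layers; the paper's point is that rules of this family reach backpropagation-level accuracy at only a modest cost in training epochs.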

    A neural network model of adaptively timed reinforcement learning and hippocampal dynamics

    A neural model is described of how adaptively timed reinforcement learning occurs. The adaptive timing circuit is suggested to exist in the hippocampus, and to involve convergence of dentate granule cells on CA3 pyramidal cells, and NMDA receptors. This circuit forms part of a model neural system for the coordinated control of recognition learning, reinforcement learning, and motor learning, whose properties clarify how an animal can learn to acquire a delayed reward. Behavioral and neural data are summarized in support of each processing stage of the system. The relevant anatomical sites are in thalamus, neocortex, hippocampus, hypothalamus, amygdala, and cerebellum. Cerebellar influences on motor learning are distinguished from hippocampal influences on adaptive timing of reinforcement learning. The model simulates how damage to the hippocampal formation disrupts adaptive timing, eliminates attentional blocking, and causes symptoms of medial temporal amnesia. It suggests how normal acquisition of subcortical emotional conditioning can occur after cortical ablation, even though extinction of emotional conditioning is retarded by cortical ablation. The model simulates how increasing the duration of an unconditioned stimulus increases the amplitude of emotional conditioning, but does not change adaptive timing; and how an increase in the intensity of a conditioned stimulus "speeds up the clock", but an increase in the intensity of an unconditioned stimulus does not. Computer simulations of the model fit parametric conditioning data, including a Weber law property and an inverted U property. Both primary and secondary adaptively timed conditioning are simulated, as are data concerning conditioning using multiple interstimulus intervals (ISIs), gradually or abruptly changing ISIs, partial reinforcement, and multiple stimuli that lead to time-averaging of responses. Neurobiologically testable predictions are made to facilitate further tests of the model.
    Air Force Office of Scientific Research (90-0175, 90-0128); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-87-16960); Office of Naval Research (N00014-91-J-4100)
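The Weber law timing property mentioned above can be illustrated with a toy "spectral timing" population: units tuned to different delays, with tuning widths proportional to their preferred delay, so that the learned response both peaks near the reinforced interstimulus interval and broadens as that interval grows. This is a didactic sketch under assumed Gaussian tuning, not the paper's hippocampal circuit, and all names are invented.

```python
import numpy as np

def spectral_timing_response(isi, t_grid, n_units=40, weber=0.3, t_max=2.0):
    """Toy spectral-timing population (illustrative, not the paper's circuit).

    Units prefer different delays mu_i; tuning widths grow in proportion to
    mu_i (the Weber-law assumption). A one-shot reward-gated association sets
    each unit's weight to its activation at the reinforced ISI, so the summed
    response peaks near that ISI and broadens for longer ISIs.
    """
    mu = np.linspace(0.05, t_max, n_units)   # preferred delays (s)
    sigma = weber * mu                       # widths scale with preferred delay
    def activation(t):
        return np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
    w = activation(isi)                      # reward-gated weights, learned in one shot
    return np.array([w @ activation(t) for t in t_grid])

t = np.linspace(0.0, 2.0, 201)
resp = spectral_timing_response(0.5, t)
peak = t[np.argmax(resp)]                    # lands near the trained ISI
```

Doubling the trained ISI roughly doubles the width of the response curve, a Weber-like property; the published model derives its delay spectrum from slow cellular dynamics rather than assumed Gaussian tuning.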

    Distributed Hypothesis Testing, Attention Shifts and Transmitter Dynamics During the Self-Organization of Brain Recognition Codes

    BP (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-90-00530); Air Force Office of Scientific Research (90-0175, 90-0128); Army Research Office (DAAL-03-88-K0088)

    Task-specific effects of reward on task switching

    Although cognitive control and reinforcement learning have been researched extensively over the last few decades, only recently have studies investigated their interrelationship. An important unanswered question concerns how the control system decides what task to execute and how vigorously to carry out the task once selected. Based on a recent theory of control formulated according to principles of hierarchical reinforcement learning, we asked whether rewards can affect top-down control over task performance at the level of task representation. Participants were rewarded for correctly performing only one of two tasks in a standard task-switching experiment. Reaction times and error rates were lower for the reinforced task compared to the non-reinforced task. Moreover, the switch cost in error rates for the non-reinforced task was significantly larger compared to the reinforced task, especially for trials in which the imperative stimulus afforded different responses for the two tasks, resulting in a "non-paradoxical" asymmetric switch cost. These findings suggest that reinforcement at the task level resulted in greater application of top-down control rather than in stronger stimulus-response pathways for the rewarded task.
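The switch-cost measure underlying these results is simple to compute from trial-level data: for each task, mean performance on task-switch trials minus task-repeat trials. A minimal sketch with made-up reaction times (the tuple layout and numbers are invented, not from the study):

```python
from statistics import mean

def switch_costs(trials):
    """Per-task switch cost: mean RT on task-switch trials minus task-repeat
    trials. `trials` holds (task, previous_task, rt) tuples; illustrative only.
    """
    by = {}
    for task, prev, rt in trials:
        by.setdefault((task, task != prev), []).append(rt)   # key: (task, is_switch)
    return {task: mean(by[(task, True)]) - mean(by[(task, False)])
            for task, is_switch in by if is_switch}

# Hypothetical data: task "B" (the non-reinforced one) shows the larger cost.
trials = [
    ("A", "A", 500), ("A", "B", 540), ("B", "B", 520), ("B", "A", 620),
    ("A", "A", 505), ("B", "A", 630), ("A", "B", 545), ("B", "B", 515),
]
print(switch_costs(trials))  # → {'A': 40.0, 'B': 107.5}
```

The asymmetry reported in the abstract corresponds to the cost for the non-reinforced task exceeding that for the reinforced task, as in this toy data set.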