Activity in Inferior Parietal and Medial Prefrontal Cortex Signals the Accumulation of Evidence in a Probability Learning Task
In an uncertain environment, probabilities are key to predicting future events and making adaptive choices. However, little is known about how humans learn such probabilities and where and how they are encoded in the brain, especially when they concern more than two outcomes. During functional magnetic resonance imaging (fMRI), young adults learned the probabilities of uncertain stimuli through repetitive sampling. Stimuli represented payoffs, and participants had to predict their occurrence to maximize their earnings. Choices indicated loss and risk aversion but unbiased estimation of probabilities. The BOLD response in medial prefrontal cortex and the angular gyri increased linearly with the probability of the currently observed stimulus, independently of its value. Connectivity analyses during rest and task revealed that these regions belong to the default mode network. The activation of past outcomes in memory is proposed as a possible mechanism to explain the engagement of the default mode network in probability learning. A BOLD response related to value was detected only at decision time, mainly in the striatum. It is concluded that activity in inferior parietal and medial prefrontal cortex reflects the amount of evidence accumulated in favor of competing and uncertain outcomes.
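The abstract does not specify the estimation model against which behavior was compared. Purely as an illustration of learning the probabilities of more than two outcomes through repetitive sampling, the minimal Python sketch below applies a simple delta-rule update; the running estimate for the currently observed stimulus plays the role of the quantity reported to scale linearly with the BOLD response. All names, the learning rate, and the three-outcome example are hypothetical, not taken from the study.

```python
# Minimal sketch (not the authors' model): delta-rule estimation of the
# probabilities of K > 2 outcomes from repeated sampling.
import numpy as np

def update_probabilities(p, observed, learning_rate=0.1):
    """One-trial update of a probability vector after observing outcome `observed`."""
    target = np.zeros_like(p)
    target[observed] = 1.0
    return p + learning_rate * (target - p)   # delta rule; preserves sum(p) == 1

rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.3])            # hypothetical 3-outcome stimulus set
p_hat = np.full(3, 1 / 3)                     # uniform prior estimate
for _ in range(200):
    outcome = rng.choice(3, p=true_p)
    p_hat = update_probabilities(p_hat, outcome)
print(p_hat)                                  # drifts toward true_p
```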
Carbamazepine inhibits angiotensin I-converting enzyme, linking it to the pathogenesis of temporal lobe epilepsy
We find that a common mutation that increases angiotensin I-converting enzyme activity occurs at higher frequency in male patients suffering from refractory temporal lobe epilepsy. In their brains, however, the activity of the enzyme is downregulated. Surprisingly, we find that carbamazepine, a drug commonly used to treat epilepsy, is an inhibitor of the enzyme, which explains this downregulation and provides a direct link between epilepsy and the renin–angiotensin and kallikrein–kinin systems.
Attention-dependent modulation of cortical taste circuits revealed by Granger causality with signal-dependent noise
We show, for the first time, that in cortical areas such as the insular, orbitofrontal, and lateral prefrontal cortex there is signal-dependent noise in the fMRI blood-oxygen-level-dependent (BOLD) time series, with the variance of the noise increasing approximately linearly with the square of the signal. Classical Granger causal models are based on autoregressive models with a time-invariant covariance structure and therefore do not take this signal-dependent noise into account. To address this limitation, here we describe a Granger causal model with signal-dependent noise and a novel likelihood-ratio test for causal inferences. We apply this approach to data from an fMRI study to investigate the source of the top-down attentional control of taste intensity and taste pleasantness processing. The Granger causality with signal-dependent noise analysis reveals effects not identified by classical Granger causal analysis. In particular, there is a top-down effect from the posterior lateral prefrontal cortex to the insular taste cortex during attention to intensity but not to pleasantness, and there is a top-down effect from the anterior and posterior lateral prefrontal cortex to the orbitofrontal cortex during attention to pleasantness but not to intensity. In addition, there is stronger forward effective connectivity from the insular taste cortex to the orbitofrontal cortex during attention to pleasantness than during attention to intensity. These findings indicate the importance of explicitly modeling signal-dependent noise in functional neuroimaging and reveal some of the processes involved in a biased activation theory of selective attention.
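The abstract names the key ingredients of the method (an autoregressive model whose noise variance grows roughly linearly with the squared signal, and a likelihood-ratio test on the causal term) but not its implementation. The toy Python sketch below, written only under those stated assumptions, simulates one driving and one driven series and compares full and restricted heteroscedastic AR(1) models with a likelihood-ratio test; the lag order, variance parameterization, and simulated coefficients are illustrative, not the authors'.

```python
# Hedged sketch, not the paper's implementation: Granger-style test where the
# driven series y has noise variance that grows with the square of its signal,
# and "x Granger-causes y" is assessed by a likelihood-ratio test.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    mean_y = 0.5 * y[t - 1] + 0.4 * x[t - 1]
    y[t] = mean_y + rng.normal(scale=np.sqrt(0.1 + 0.2 * mean_y ** 2))  # signal-dependent noise

def neg_log_lik(params, use_x):
    a, b, s0, s1 = params
    mean = a * y[:-1] + (b * x[:-1] if use_x else 0.0)
    var = np.exp(s0) + np.exp(s1) * mean ** 2        # variance grows with signal^2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y[1:] - mean) ** 2 / var)

opts = {"maxiter": 5000}
full = minimize(neg_log_lik, [0.0, 0.0, 0.0, 0.0], args=(True,),
                method="Nelder-Mead", options=opts)
restricted = minimize(neg_log_lik, [0.0, 0.0, 0.0, 0.0], args=(False,),
                      method="Nelder-Mead", options=opts)
lr_stat = 2 * (restricted.fun - full.fun)            # both .fun are negative log-likelihoods
p_value = chi2.sf(lr_stat, df=1)                     # one restricted parameter (the x -> y term)
print(f"LR statistic = {lr_stat:.1f}, p = {p_value:.2g}")
```

The single degree of freedom in the test reflects the one coefficient (the x-to-y term) that the restricted model drops; a richer model would add lags and test a block of cross terms instead.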
An Imperfect Dopaminergic Error Signal Can Drive Temporal-Difference Learning
An open problem in computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence supporting the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavioral adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal, as a third factor, to modulate synaptic plasticity. The predictions of the proposed plasticity dynamics are in good agreement with experimental results on the dependence on dopamine and on pre- and postsynaptic activity. An analytical mapping from the parameters of the proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network learns a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, its performance is impaired relative to the traditional algorithm on a task with both positive and negative rewards, and it breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when it is driven by negative rewards.
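As a rough, non-authoritative illustration of the asymmetry discussed above (not the paper's spiking actor-critic network), the sketch below runs tabular TD(0) on a small reward chain while clipping negative prediction errors at a floor, mimicking the limited range of phasic dopamine below baseline. The task, floor value, learning rate, and discount factor are all assumptions chosen for brevity.

```python
# Tabular TD(0) on a deterministic chain with reward only at the end, using an
# "imperfect" error signal whose negative part is squashed at a floor.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states + 1)                 # state values; terminal state appended, fixed at 0
floor = -0.2                               # asymmetry: negative TD errors are clipped here

for _ in range(500):                       # episodes
    s = 0
    while s < n_states:
        s_next = s + 1                     # deterministic transition along the chain
        r = 1.0 if s_next == n_states else 0.0
        td_error = r + gamma * V[s_next] - V[s]
        td_error = max(td_error, floor)    # dopamine-like, clipped error signal
        V[s] += alpha * td_error
        s = s_next

print(np.round(V[:n_states], 2))           # approaches gamma ** (n_states - 1 - s)
```

With only positive rewards the clipping rarely binds and the values converge as in standard TD(0); driving learning with negative rewards instead would make the floor dominate, which is the regime in which the abstract reports failure.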
- …