Dynamic Integration of Reward and Stimulus Information in Perceptual Decision-Making
In perceptual decision-making, ideal decision-makers should bias their choices toward alternatives associated with larger rewards, and the extent of the bias should decrease as stimulus sensitivity increases. When responses must be made at different times after stimulus onset, stimulus sensitivity grows with time from zero to a final asymptotic level. Are decision-makers able to produce responses that are more biased if they are made soon after stimulus onset, but less biased if they are made after more evidence has been accumulated? If so, how close to optimal can they come in doing this, and how might their performance be achieved mechanistically? We report an experiment in which the payoff for each alternative is indicated before stimulus onset. Processing time is controlled by a “go” cue occurring at different times post stimulus onset, requiring a response within a fixed number of msec. Reward bias does start high when processing time is short and decreases as sensitivity increases, leveling off at a non-zero value. However, the degree of bias is sub-optimal for shorter processing times. We present a mechanistic account of participants' performance within the framework of the leaky competing accumulator model [1], in which accumulators for each alternative accumulate noisy information subject to leakage and mutual inhibition. The leveling off of accuracy is attributed to mutual inhibition between the accumulators, allowing the accumulator that gathers the most evidence early in a trial to suppress the alternative. Three ways reward might affect decision-making in this framework are considered. One of the three, in which reward affects the starting point of the evidence accumulation process, is consistent with the qualitative pattern of the observed reward bias effect, while the other two are not. Incorporating this assumption into the leaky competing accumulator model, we are able to provide close quantitative fits to individual participant data.
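As an illustration of the starting-point account sketched above, the following is a minimal simulation of a two-alternative leaky competing accumulator with the reward bias implemented as an elevated starting point for the favored accumulator. This is a sketch, not the authors' fitted model; all parameter values (leak, inhibition, noise, bias) are arbitrary choices for demonstration.

```python
import numpy as np

def simulate_lca(rho=(1.0, 1.0), leak=0.2, inhibition=0.4, start_bias=0.3,
                 noise_sd=0.3, dt=0.01, go_time=0.5, rng=None):
    """One trial of a two-alternative leaky competing accumulator.

    Each accumulator integrates its noisy input (rho), decays with
    `leak`, and is suppressed by the other unit via `inhibition`.
    The reward bias is a raised starting point for the favored
    (first) accumulator; at the go cue, the more active unit wins."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.array([start_bias, 0.0])            # biased starting point
    for _ in range(int(go_time / dt)):
        drift = np.array(rho) - leak * x - inhibition * x[::-1]
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x, 0.0)                 # activations stay non-negative
    return int(np.argmax(x))                   # 0 = reward-favored alternative

# With equal stimulus inputs, the bias dominates at short go-cue delays;
# mutual inhibition lets early leads persist, so the bias levels off
# rather than vanishing at long delays.
rng = np.random.default_rng(1)
for go in (0.1, 0.5, 2.0):
    p = np.mean([simulate_lca(go_time=go, rng=rng) == 0 for _ in range(2000)])
    print(f"go cue at {go:.1f} s: P(choose favored) = {p:.2f}")
```

Run over a range of go-cue delays, this reproduces the qualitative pattern described in the abstract: a strong early bias that decreases with processing time but does not disappear.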
Dynamics of HIV-1 Quasispecies during Antiviral Treatment Dissected Using Ultra-Deep Pyrosequencing
Background: Ultra-deep pyrosequencing (UDPS) allows identification of rare HIV-1 variants and minority drug resistance mutations, which are not detectable by standard sequencing. Principal Findings: Here, UDPS was used to analyze the dynamics of HIV-1 genetic variation in reverse transcriptase (RT) (amino acids 180–220) in six individuals consecutively sampled before, during and after failing 3TC- and AZT-containing antiretroviral treatment. Optimized UDPS protocols and bioinformatic software were developed to generate, clean and analyze the data. The data cleaning strategy reduced the error rate of UDPS to an average of 0.05%, which is lower than previously reported. Consequently, the cut-off for detection of resistance mutations was very low. A median of 16,016 (range 2,406–35,401) sequence reads was obtained per sample, which allowed detection and quantification of minority variants.
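To make the relationship between error rate and detection cut-off concrete, here is a sketch of how a per-site cut-off could be derived from the post-cleaning error rate under a simple binomial error model. This is not the paper's actual pipeline; the significance level and the assumption of position-independent errors are illustrative.

```python
def detection_cutoff(depth, error_rate=0.0005, alpha=0.001):
    """Smallest variant-read count k at a site covered by `depth` reads
    such that observing >= k reads by sequencing error alone, under
    Binomial(depth, error_rate), has probability below `alpha`."""
    pmf = (1.0 - error_rate) ** depth          # P(0 error reads)
    cdf = pmf
    k = 0
    while cdf < 1.0 - alpha:                   # accumulate P(X <= k)
        pmf *= (depth - k) / (k + 1) * error_rate / (1.0 - error_rate)
        k += 1
        cdf += pmf
    return k + 1, (k + 1) / depth

# Depths taken from the median and range reported above; a 0.05% error
# rate puts the detection limit well below 1% variant frequency.
for depth in (2406, 16016, 35401):
    count, freq = detection_cutoff(depth)
    print(f"depth {depth:>6}: call a variant at >= {count} reads (~{freq:.3%})")
```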
Cross-clade simultaneous HIV drug resistance genotyping for reverse transcriptase, protease, and integrase inhibitor mutations by Illumina MiSeq
Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making
Behavioural and neurophysiological studies in primates have increasingly shown the involvement of urgency signals during the temporal integration of sensory evidence in perceptual decision-making. Neuronal correlates of such signals have been found in the parietal cortex, and in separate studies, demonstrated attention-induced gain modulation of both excitatory and inhibitory neurons. Although previous computational models of decision-making have incorporated gain modulation, their abstract forms do not permit an understanding of the contribution of inhibitory gain modulation. Thus, the effects of co-modulating both excitatory and inhibitory neuronal gains on decision-making dynamics and behavioural performance remain unclear. In this work, we incorporate time-dependent co-modulation of the gains of both excitatory and inhibitory neurons into our previous biologically based decision circuit model. We base our computational study in the context of two classic motion-discrimination tasks performed in animals. Our model shows that by simultaneously increasing the gains of both excitatory and inhibitory neurons, a variety of the observed dynamic neuronal firing activities can be replicated. In particular, the model can exhibit winner-take-all decision-making behaviour with higher firing rates and within a significantly more robust model parameter range. It also exhibits short-tailed reaction time distributions even when operating near a dynamical bifurcation point. The model further shows that neuronal gain modulation can compensate for weaker recurrent excitation in a decision neural circuit, and support decision formation and storage. Higher neuronal gain is also suggested in the more cognitively demanding reaction time than in the fixed delay version of the task. Using the exact temporal delays from the animal experiments, fast recruitment of gain co-modulation is shown to maximize reward rate, with a timescale that is surprisingly near the experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.
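A reduced illustration of the gain co-modulation idea (not the authors' biologically based circuit model) can be written with two mutually inhibiting rate units whose drive is scaled by a gain that rises over the trial, standing in for an urgency-like signal. All parameters below are assumed values, chosen so that at baseline gain the winning unit's fixed point sits below the decision threshold and the rising gain, which scales excitatory and inhibitory contributions together, carries it across, echoing the point that gain modulation can compensate for weaker recurrent excitation.

```python
import numpy as np

def gain(t, g0=1.0, dg=0.3, t_on=0.2, tau=0.1):
    """Urgency-like gain: rises from g0 toward g0 + dg after t_on.
    The same factor scales excitatory and inhibitory drive below."""
    return g0 + dg * (1.0 - np.exp(-(t - t_on) / tau)) * (t > t_on)

def simulate(coherence=0.1, w_exc=0.7, w_inh=1.0, tau_r=0.05,
             dt=0.001, t_max=2.0, threshold=50.0, noise_sd=1.0, seed=None):
    """Two mutually inhibiting rate units with co-modulated gain."""
    rng = np.random.default_rng(seed)
    r = np.zeros(2)                            # firing rates of the two pools
    stim = 10.0 * np.array([1 + coherence, 1 - coherence])
    for step in range(int(t_max / dt)):
        g = gain(step * dt)
        # gain multiplies both recurrent excitation and cross inhibition
        drive = g * np.maximum(stim + w_exc * r - w_inh * r[::-1], 0.0)
        r += dt / tau_r * (-r + drive) + noise_sd * np.sqrt(dt) * rng.standard_normal(2)
        r = np.maximum(r, 0.0)
        if r.max() >= threshold:               # decision threshold reached
            return step * dt, int(np.argmax(r))
    return t_max, int(np.argmax(r))

rt, choice = simulate(seed=3)
print(f"unit {choice} crossed threshold at t = {rt:.3f} s")
```

In this sketch, winner-take-all competition resolves early at baseline gain, but the winner only reaches threshold once the gain ramps up, so the timing of gain recruitment directly controls when a decision is committed.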
Of monkeys and men: Impatience in perceptual decision-making
For decades, sequential sampling models have successfully accounted for human and monkey decision-making, relying on the standard assumption that decision-makers maintain a pre-set decision standard throughout the decision process. Based on the theoretical argument of reward rate maximization, some authors have recently suggested that decision-makers become increasingly impatient as time passes and therefore lower their decision standard. Indeed, a number of studies show that computational models with an impatience component provide a good fit to human and monkey decision behavior. However, many of these studies lack quantitative model comparisons and systematic manipulations of rewards. Moreover, the often-cited evidence from single-cell recordings is not unequivocal, and complementary data from human subjects is largely missing. We conclude that, despite some enthusiastic calls for the abandonment of the standard model, the idea of an impatience component has yet to be fully established; we suggest a number of recently developed tools that will help bring the debate to a conclusive settlement.
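The impatience hypothesis discussed above corresponds, in sequential sampling terms, to a decision bound that collapses over time. The following is a minimal drift-diffusion sketch contrasting a fixed bound with a linearly collapsing one; the drift, noise, and collapse-rate values are arbitrary illustrations, not estimates from any of the cited studies.

```python
import numpy as np

def ddm_trial(drift=0.8, noise=1.0, b0=1.0, collapse=0.0,
              dt=0.001, t_max=5.0, rng=None):
    """One drift-diffusion trial. Bounds start at +/-b0 and shrink
    linearly at rate `collapse`; collapse=0 recovers the standard
    fixed-bound model with a constant decision standard."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        bound = max(b0 - collapse * t, 0.05)   # bound never closes fully
        if abs(x) >= bound:
            return t, x > 0                    # (RT, hit upper/correct bound)
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t_max, x > 0                        # no decision: read out the sign

rng = np.random.default_rng(0)
for collapse, label in ((0.0, "fixed bound"), (0.4, "collapsing bound")):
    trials = [ddm_trial(collapse=collapse, rng=rng) for _ in range(1000)]
    rts = np.array([t for t, _ in trials])
    acc = np.mean([c for _, c in trials])
    print(f"{label:16s}: accuracy {acc:.2f}, mean RT {rts.mean():.2f} s")
```

The collapsing bound trades accuracy for speed and shortens the slow tail of the RT distribution, which is exactly the behavioral signature the model comparisons in question try to detect.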
The effect of metabolic stress on genome stability of a synthetic biology chassis Escherichia coli K12 strain
Accuracy and response-time distributions for decision-making: linear perfect integrators versus nonlinear attractor-based neural circuits
Meditation and cognitive ageing: The role of mindfulness meditation in building cognitive reserve
Mindfulness-related meditation practices engage various cognitive skills, including the ability to focus and sustain attention, which in itself requires several interacting attentional sub-functions. There is increasing behavioural and neuroscientific evidence that mindfulness meditation (MM) improves these functions and associated neural processes. More so than other cognitive training programmes, the effects of meditation appear to generalise to other cognitive tasks, thus demonstrating far-transfer effects. As these attentional functions have been linked to age-related cognitive decline, there is growing interest in the question of whether meditation can slow down or even prevent such decline. The cognitive reserve hypothesis builds on evidence that various lifestyle factors can lead to better cognitive performance in older age than would be predicted by the existing degree of brain pathology. We argue that MM, as a combination of brain network and brain state training, may increase cognitive reserve capacity and may mitigate age-related declines in cognitive functions. We consider available direct and indirect evidence from the perspective of cognitive reserve theory. The limited available evidence suggests that MM may enhance cognitive reserve capacity directly, through the repeated activation of attentional functions and of the multiple demand system, and indirectly, through the improvement of physiological mechanisms associated with stress and immune function. The article concludes by outlining research strategies for addressing the underlying empirical questions in more substantial ways.
An early and enduring advanced technology originating 71,000 years ago in South Africa
There is consensus that the modern human lineage appeared in Africa before 100,000 years ago, but there is debate as to when cultural and cognitive characteristics typical of modern humans first appeared, and what role these had in the expansion of modern humans out of Africa. Scientists rely on symbolically specific proxies, such as artistic expression, to document the origins of complex cognition. Advanced technologies with elaborate chains of production are also proxies, as these often demand high-fidelity transmission and thus language. Some argue that advanced technologies in Africa appear and disappear, and thus do not indicate complex cognition exclusive to early modern humans in Africa. The origins of composite tools and advanced projectile weapons figure prominently in modern human evolution research, and the latter have been argued to have been in the exclusive possession of modern humans. Here we describe a previously unrecognized advanced stone tool technology from Pinnacle Point Site 5-6 on the south coast of South Africa, originating approximately 71,000 years ago. This technology is dominated by the production of small bladelets (microliths), primarily from heat-treated stone. There is agreement that microlithic technology was used to create composite tool components as part of advanced projectile weapons. Microliths were common worldwide by the mid-Holocene epoch, but have a patchy pattern of first appearance that is rarely earlier than 40,000 years ago; they were thought to appear briefly between 65,000 and 60,000 years ago in South Africa and then disappear. Our research extends this record to ∼71,000 years ago, shows that microlithic technology originated early in South Africa and evolved over a vast time span (∼11,000 years), and was typically coupled to complex heat treatment that persisted for nearly 100,000 years. Advanced technologies in Africa were early and enduring; a small sample of excavated sites in Africa is the best explanation for any perceived 'flickering' pattern.
