
    Motor Preparatory Activity in Posterior Parietal Cortex is Modulated by Subjective Absolute Value

    For optimal response selection, the consequences associated with behavioral success or failure must be appraised. To determine how monetary consequences influence the neural representations of motor preparation, human brain activity was scanned with fMRI while subjects performed a complex spatial visuomotor task. At the beginning of each trial, reward context cues indicated the potential gain and loss imposed for correct or incorrect trial completion. fMRI activity in canonical reward structures reflected the expected value related to the context. In contrast, motor preparatory activity in posterior parietal and premotor cortex peaked in high “absolute value” (high gain or loss) conditions: it was highest for large gains in subjects who believed they performed well, and highest for large losses in those who believed they performed poorly. These results suggest that the neural activity preceding goal-directed actions incorporates the absolute value of those actions, predicated upon subjective, rather than objective, estimates of one's performance.
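
    The contrast between expected value and "absolute value" that this abstract draws can be made concrete with a small numerical sketch. A minimal illustration in Python, using hypothetical gain/loss contexts and subjective success probabilities rather than the study's actual parameters:

```python
# Hypothetical illustration of expected value vs. "absolute value" of a reward context.
# p_success stands for the subject's *subjective* belief of completing the trial correctly;
# all numbers are made up for illustration.

def expected_value(gain, loss, p_success):
    """Subjective expected value: gain weighted by the believed probability of success,
    loss weighted by the believed probability of failure (loss given as a negative number)."""
    return p_success * gain + (1.0 - p_success) * loss

def absolute_value(gain, loss):
    """'Absolute value' of the context: the size of the largest stake, ignoring its sign."""
    return max(abs(gain), abs(loss))

# (potential gain, potential loss) in arbitrary monetary units
contexts = [(5.0, -0.5), (0.5, -5.0), (0.5, -0.5)]
for gain, loss in contexts:
    print(f"gain={gain:+.1f}, loss={loss:+.1f} | "
          f"EV (believes doing well, p=0.8): {expected_value(gain, loss, 0.8):+.2f} | "
          f"EV (believes doing poorly, p=0.2): {expected_value(gain, loss, 0.2):+.2f} | "
          f"absolute value: {absolute_value(gain, loss):.1f}")
```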

    Would the field of cognitive neuroscience be advanced by sharing functional MRI data?

    During the past two decades, the advent of functional magnetic resonance imaging (fMRI) has fundamentally changed our understanding of brain-behavior relationships. However, the data from any one study add only incrementally to the big picture. This fact raises important questions about the dominant practice of performing studies in isolation. To what extent are the findings from any single study reproducible? Are researchers who lack the resources to conduct an fMRI study being needlessly excluded? Are pre-existing fMRI data being used effectively to train new students in the field? Here, we argue that greater sharing and synthesis of raw fMRI data among researchers would make the answers to all of these questions more favorable to scientific discovery than they are today, and that such sharing is an important next step for advancing the field of cognitive neuroscience.

    The control of attentional target selection in a colour/colour conjunction task

    To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components elicited by the two different types of partially matching distractors, and became superadditive from about 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension, and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field.
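
    The N2pc comparison described above can be sketched as follows: the N2pc is the contralateral-minus-ipsilateral difference wave at posterior electrodes, and superadditivity is assessed by comparing the target N2pc with the sum of the two partial-match N2pcs in a late time window. The array shapes, time window, and variable names below are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

# Hypothetical single-trial ERP amplitudes (microvolts) at posterior electrodes
# (e.g. PO7/PO8), shaped trials x timepoints. Values are simulated noise; only the
# analysis logic is illustrated.
rng = np.random.default_rng(0)
n_trials, n_times = 200, 350
times = np.linspace(-100, 600, n_times)      # ms relative to search display onset

def simulate():
    return rng.normal(0.0, 1.0, (n_trials, n_times))

contra_target, ipsi_target = simulate(), simulate()
contra_d1, ipsi_d1 = simulate(), simulate()  # distractor matching target colour 1
contra_d2, ipsi_d2 = simulate(), simulate()  # distractor matching target colour 2

def n2pc(contra, ipsi):
    """N2pc waveform: trial-averaged contralateral minus ipsilateral activity."""
    return (contra - ipsi).mean(axis=0)

target_n2pc = n2pc(contra_target, ipsi_target)
summed_partial_n2pc = n2pc(contra_d1, ipsi_d1) + n2pc(contra_d2, ipsi_d2)

# Superadditivity check: does the target N2pc exceed the summed partial-match N2pcs
# in a late window (here 250-300 ms, an assumed window)?
late = (times >= 250) & (times <= 300)
print("late-window difference (uV):",
      round(float(target_n2pc[late].mean() - summed_partial_n2pc[late].mean()), 3))
```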

    Distribution of Attention Modulates Salience Signals in Early Visual Cortex

    Previous research has shown that the extent to which people spread attention across the visual field plays a crucial role in visual selection and the occurrence of bottom-up driven attentional capture. Consistent with previous findings, we show that when attention was diffusely distributed across the visual field while searching for a shape singleton, an irrelevant salient color singleton captured attention. However, with the very same displays and task, no capture was observed when observers initially focused their attention at the center of the display. Using event-related fMRI, we examined the modulation of retinotopic activity related to attentional capture in early visual areas. Because the sensory display characteristics were identical in both conditions, we were able to isolate the brain activity associated with exogenous attentional capture. The results show that spreading of attention leads to increased bottom-up exogenous capture and increased activity in visual area V3, but not in V2 and V1.
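
    The condition contrast described here (identical displays, different attentional distribution) lends itself to a simple region-of-interest comparison. A minimal sketch, assuming per-subject response estimates for the colour singleton's retinotopic location in V1, V2 and V3 are already available; all values and names below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject response estimates (e.g. beta weights) at the colour
# singleton's retinotopic location, per visual area and attention condition.
# Values are simulated; only the analysis logic is illustrated.
rng = np.random.default_rng(1)
n_subjects = 16
areas = ["V1", "V2", "V3"]
diffuse = {a: rng.normal(0.3 if a == "V3" else 0.1, 0.2, n_subjects) for a in areas}
focused = {a: rng.normal(0.1, 0.2, n_subjects) for a in areas}

# Because the displays are identical in both conditions, the diffuse-minus-focused
# contrast isolates activity tied to exogenous capture rather than to the stimulus.
for a in areas:
    t, p = stats.ttest_rel(diffuse[a], focused[a])
    print(f"{a}: mean contrast = {np.mean(diffuse[a] - focused[a]):+.3f}, t = {t:.2f}, p = {p:.3f}")
```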

    Electrophysiological Evidence for Spatiotemporal Flexibility in the Ventrolateral Attention Network

    Successful completion of many everyday tasks depends on interactions between voluntary attention, which acts to maintain current goals, and reflexive attention, which enables responding to unexpected events by interrupting the current focus of attention. Past studies, which have mostly examined each attentional mechanism in isolation, indicate that volitional and reflexive orienting depend on two functionally specialized cortical networks in the human brain. Here we investigated how the interplay between these two cortical networks affects sensory processing and the resulting overt behavior. By combining measurements of human performance and electrocortical recordings with a novel analytical technique for estimating spatiotemporal activity in the human cortex, we found that the subregions that comprise the reflexive ventrolateral attention network dissociate both spatially and temporally as a function of the nature of the sensory information and current task demands. Moreover, we found that together with the magnitude of the early sensory gain, the spatiotemporal neural dynamics accounted for a large proportion of the variance in the behavioral data. Collectively, these data support the conclusion that the ventrolateral attention network is recruited flexibly to support complex behaviors.
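
    The claim that early sensory gain plus spatiotemporal neural dynamics accounted for much of the behavioral variance can be illustrated with a variance-explained (R^2) calculation. A minimal sketch with simulated per-subject values; the predictors and their names are assumptions, not the study's measures:

```python
import numpy as np

# Hypothetical per-subject predictors and behavioural scores; names and numbers
# are assumptions used only to illustrate a variance-explained analysis.
rng = np.random.default_rng(2)
n = 24
sensory_gain = rng.normal(size=n)        # magnitude of early sensory gain
subregion_timing = rng.normal(size=n)    # timing of a ventrolateral subregion's response
behaviour = 0.6 * sensory_gain + 0.4 * subregion_timing + rng.normal(0.0, 0.5, n)

# Ordinary least squares with an intercept; R^2 is the share of behavioural variance
# jointly accounted for by the sensory gain and the spatiotemporal dynamics.
X = np.column_stack([np.ones(n), sensory_gain, subregion_timing])
coefs, *_ = np.linalg.lstsq(X, behaviour, rcond=None)
residuals = behaviour - X @ coefs
r_squared = 1.0 - residuals.var() / behaviour.var()
print("R^2 =", round(float(r_squared), 3))
```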

    Distinct Visual Working Memory Systems for View-Dependent and View-Invariant Representation

    Background: How do people sustain a visual representation of the environment? Currently, many researchers argue that a single visual working memory system sustains non-spatial object information such as colors and shapes. However, previous studies tested visual working memory for two-dimensional objects only. In consequence, the nature of visual working memory for three-dimensional (3D) object representation remains unknown. Methodology/Principal Findings: Here, I show that when sustaining information about 3D objects, visual working memory clearly divides into two separate, specialized memory systems, rather than one system, as was previously thought. One memory system gradually accumulates sensory information, forming an increasingly precise view-dependent representation of the scene over the course of several seconds. A second memory system sustains view-invariant representations of 3D objects. The view-dependent memory system has a storage capacity of 3–4 representations, and the view-invariant memory system has a storage capacity of 1–2 representations. These systems can operate independently from one another and do not compete for working memory storage resources. Conclusions/Significance: These results provide evidence that visual working memory sustains object information in two separate, specialized memory systems. One memory system sustains view-dependent representations of the scene, akin to the view-specific representations that guide place recognition during navigation in humans, rodents and insects.
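
    Storage-capacity estimates of the kind quoted here (3–4 vs. 1–2 representations) are commonly derived from change-detection performance using Cowan's K; whether this exact estimator was used in the paper is an assumption. A minimal sketch with hypothetical hit and false-alarm rates:

```python
# A standard way to turn change-detection performance into a capacity estimate is
# Cowan's K = N * (hit rate - false-alarm rate); whether this is the exact estimator
# used in the paper is an assumption, and all performance numbers are hypothetical.

def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Estimated number of items held in memory."""
    return set_size * (hit_rate - false_alarm_rate)

print("view-dependent system, K =", cowan_k(hit_rate=0.80, false_alarm_rate=0.20, set_size=6))  # ~3.6 items
print("view-invariant system, K =", cowan_k(hit_rate=0.55, false_alarm_rate=0.30, set_size=6))  # ~1.5 items
```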

    Activation in a Frontoparietal Cortical Network Underlies Individual Differences in the Performance of an Embedded Figures Task

    The Embedded Figures Test (EFT) requires observers to search for a simple geometric shape hidden inside a more complex figure. Surprisingly, performance in the EFT is negatively correlated with susceptibility to illusions of spatial orientation, such as the Roelofs effect. Using fMRI, we previously demonstrated that regions in parietal cortex are involved in the contextual processing associated with the Roelofs task. In the present study, we found that similar parietal regions (superior parietal cortex and precuneus) were more active during the EFT than during a simple matching task. Importantly, these parietal activations overlapped with regions found to be involved in contextual processing in the Roelofs illusion. Additional parietal and frontal areas, in the right hemisphere, showed strong correlations between brain activity and behavioral performance during the search task. We propose that the posterior parietal regions are necessary for processing contextual information across many different, but related, visuospatial tasks, with additional parietal and frontal regions serving to coordinate this processing in participants proficient in the task.
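
    The brain-behavior relationship reported here is, in essence, an across-participant correlation between ROI activation and task performance. A minimal sketch with simulated data; the variable names and values are assumptions for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values; only the brain-behaviour correlation logic
# is illustrated, not the study's actual data.
rng = np.random.default_rng(3)
n = 20
eft_accuracy = rng.uniform(0.5, 1.0, n)                      # proportion correct on the EFT
roi_activation = 0.8 * eft_accuracy + rng.normal(0, 0.1, n)  # EFT-minus-matching contrast in a right-hemisphere ROI

r, p = stats.pearsonr(roi_activation, eft_accuracy)
print(f"brain-behaviour correlation: r = {r:.2f}, p = {p:.3f}")
```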

    Testing a dynamic field account of interactions between spatial attention and spatial working memory

    Studies examining the relationship between spatial attention and spatial working memory (SWM) have shown that discrimination responses are faster for targets appearing at locations that are being maintained in SWM, and that location memory is impaired when attention is withdrawn during the delay. These observations support the proposal that sustained attention is required for successful retention in SWM: if attention is withdrawn, memory representations are likely to fail, increasing errors. In the present study, this proposal is reexamined in light of a neural process model of SWM. On the basis of the model’s functioning, we propose an alternative explanation for the observed decline in SWM performance when a secondary task is performed during retention: SWM representations drift systematically toward the location of targets appearing during the delay. To test this explanation, participants completed a color-discrimination task during the delay interval of a spatial recall task. In the critical shifting attention condition, the color stimulus could appear either toward or away from the memorized location relative to a midline reference axis. We hypothesized that if shifting attention during the delay leads to the failure of SWM representations, there should be an increase in the variance of recall errors but no change in directional error, regardless of the direction of the shift. Conversely, if shifting attention induces drift of SWM representations, as predicted by the model, there should be systematic changes in the pattern of spatial recall errors depending on the direction of the shift. Results were consistent with the latter possibility: recall errors were biased toward the location of discrimination targets appearing during the delay.
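
    The key prediction contrast, failure (variance grows, no directional bias) versus drift (systematic bias toward the discrimination target), can be sketched as a one-sample test on signed recall errors coded relative to the direction of the attention shift. Data, sign convention, and names below are assumptions for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical recall errors (degrees), sign-coded so that positive values point
# toward the location of the colour-discrimination target shown during the delay.
# Data, sample size, and effect size are assumptions for illustration.
rng = np.random.default_rng(4)
signed_errors = rng.normal(loc=0.8, scale=2.0, size=60)

# Failure account: variance should grow, but the mean signed error should stay near zero.
# Drift account: the mean signed error should be pulled toward the discrimination target.
t, p = stats.ttest_1samp(signed_errors, popmean=0.0)
print(f"mean signed error = {signed_errors.mean():+.2f} deg, t = {t:.2f}, p = {p:.3f}")
```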

    Neuro-cognitive mechanisms of conscious and unconscious visual perception: From a plethora of phenomena to general principles

    Over the last decades, psychological and neuroscience approaches have driven much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that integrating principles from both conscious and unconscious vision is particularly advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli that remain unconscious. Conscious and unconscious processing modes are highly interdependent, with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling, jointly with focused experimental research, could lead to a better understanding of the plethora of empirical phenomena in consciousness research.

    Generative Embedding for Model-Based Classification of fMRI Data

    Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in ‘hidden’ physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach with a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into physiologically better-defined subgroups.
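
    The generative-embedding pipeline described here can be sketched with scikit-learn, assuming each subject's DCM posterior parameter estimates are already available as a feature vector; the DCM inversion itself is not shown, and the group sizes and feature count below are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical generative embedding: each row holds one subject's DCM posterior
# parameter estimates (e.g. connection strengths); labels mark patients vs. controls.
# The DCM inversion is not shown, and group sizes / feature count are made up.
rng = np.random.default_rng(5)
n_patients, n_controls, n_params = 12, 16, 20
X = np.vstack([rng.normal(0.4, 1.0, (n_patients, n_params)),
               rng.normal(-0.4, 1.0, (n_controls, n_params))])
y = np.array([1] * n_patients + [0] * n_controls)

# Linear SVM on the embedded parameters, cross-validated; balanced accuracy keeps
# the unequal group sizes from inflating the score.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print("cross-validated balanced accuracy:", round(float(scores.mean()), 3))
```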