137 research outputs found

    Dynamic Construction of Stimulus Values in the Ventromedial Prefrontal Cortex

    Signals representing the value assigned to stimuli at the time of choice have been repeatedly observed in ventromedial prefrontal cortex (vmPFC). Yet it remains unknown how these value representations are computed from sensory and memory representations in more posterior brain regions. We used electroencephalography (EEG) while subjects evaluated appetitive and aversive food items to study how event-related responses modulated by stimulus value evolve over time. We found that value-related activity shifted from posterior to anterior, and from parietal to central to frontal sensors, across three major time windows after stimulus onset: 150–250 ms, 400–550 ms, and 700–800 ms. Exploratory localization of the EEG signal revealed a shifting network of activity moving from sensory and memory structures to areas associated with value coding, with stimulus value activity localized to vmPFC only from 400 ms onwards. Consistent with these results, functional connectivity analyses also showed a causal flow of information from temporal cortex to vmPFC. Thus, although value signals are present as early as 150 ms after stimulus onset, the value signals in vmPFC appear relatively late in the choice process, and seem to reflect the integration of incoming information from sensory and memory-related regions.

    Eight weddings and six funerals: An fMRI study on autobiographical memories

    “Autobiographical memory” (AM) refers to remote memories from one's own life. Previous neuroimaging studies have highlighted that voluntary retrieval processes from AM involve different forms of memory and cognitive functions. Thus, a complex and widespread functional brain network has been found to support AM. The present functional magnetic resonance imaging (fMRI) study used a multivariate approach to determine whether neural activity within the AM circuit would recognize memories of real autobiographical events, and to evaluate individual differences in the recruitment of this network. Fourteen right-handed females took part in the study. During scanning, subjects were presented with sentences representing a detail of a highly emotional real event (positive or negative) and were asked to indicate whether the sentence described something that had or had not really happened to them. Group analysis showed a set of cortical areas able to discriminate the truthfulness of the recalled events: medial prefrontal cortex, posterior cingulate/retrosplenial cortex, precuneus, bilateral angular and superior frontal gyri, and early visual cortical areas. Single-subject results showed that the decoding occurred at different time points. No differences were found between recalling a positive or a negative event. Our results show that the entire AM network is engaged in monitoring the veracity of AMs. This process is not affected by the emotional valence of the experience but rather by individual differences in the cognitive strategies used to retrieve AMs.

    Hemodynamic Traveling Waves in Human Visual Cortex

    Functional MRI (fMRI) experiments rely on precise characterization of the blood oxygen level dependent (BOLD) signal. As the spatial resolution of fMRI reaches the sub-millimeter range, the need for quantitative modelling of the spatiotemporal properties of this hemodynamic signal has become pressing. Here, we find that a detailed physiologically based model of spatiotemporal BOLD responses predicts traveling waves with velocities and spatial ranges in empirically observable ranges. Two measurable parameters, related to physiology, characterize these waves: wave velocity and damping rate. To test these predictions, high-resolution fMRI data are acquired from subjects viewing discrete visual stimuli. Predictions and experiment show strong agreement, in particular confirming BOLD waves propagating for at least 5–10 mm across the cortical surface at speeds of 2–12 mm s⁻¹. These observations enable fundamentally new approaches to fMRI analysis, crucial for fMRI data acquired at high spatial resolution.
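    The two parameters named in this abstract, wave velocity and damping rate, can be pictured with a toy damped travelling wave. The sketch below is only a schematic illustration under assumed example values (5 mm/s speed, 0.3/mm damping, 1 mm wavefront width) chosen to fall in the reported ranges; it is not the authors' physiologically based model.

```python
# Toy illustration (not the authors' physiological model): a BOLD response
# propagating outward from a stimulated cortical location as a damped wave.
# Velocity and damping rate are assumed example values, chosen to lie in the
# ranges the abstract reports (roughly 2-12 mm/s over at least 5-10 mm).
import numpy as np

velocity_mm_s = 5.0    # assumed wave speed (mm/s)
damping_per_mm = 0.3   # assumed spatial damping rate (1/mm)

def bold_wave(distance_mm, time_s):
    """Gaussian wavefront travelling at velocity_mm_s, attenuated with distance."""
    front = distance_mm - velocity_mm_s * time_s               # position relative to the wavefront
    envelope = np.exp(-damping_per_mm * np.abs(distance_mm))   # exponential spatial damping
    return envelope * np.exp(-front**2 / (2 * 1.0**2))         # ~1 mm wide wavefront

# Response 8 mm from the stimulus site, sampled over 4 s: the peak arrives
# around distance / velocity = 1.6 s and is weaker than at the source.
t = np.linspace(0, 4, 81)
print(t[np.argmax(bold_wave(8.0, t))])   # ~1.6 s
```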

    Searchlight-based multi-voxel pattern analysis of fMRI by cross-validated MANOVA

    Multi-voxel pattern analysis (MVPA) is a fruitful and increasingly popular complement to traditional univariate methods of analyzing neuroimaging data. We propose to replace the standard ‘decoding’ approach to searchlight-based MVPA, which measures the performance of a classifier by its accuracy, with a method based on the multivariate form of the general linear model. Following the well-established methodology of multivariate analysis of variance (MANOVA), we define a measure that directly characterizes the structure of multi-voxel data, the pattern distinctness D. Our measure is related to standard multivariate statistics, but we apply cross-validation to obtain an unbiased estimate of its population value, independent of the amount of data or its partitioning into ‘training’ and ‘test’ sets. The estimate can therefore serve not only as a test statistic, but also as an interpretable measure of multivariate effect size. The pattern distinctness generalizes the Mahalanobis distance not only to an arbitrary number of classes, but also to the case where there are no classes of trials because the design is described by parametric regressors. It is defined for arbitrary estimable contrasts, including main effects (pattern differences) and interactions (pattern changes). In this way, our approach makes the full analytical power of complex factorial designs known from univariate fMRI analyses available to MVPA studies. Moreover, we show how the results of a factorial analysis can be used to obtain a measure of pattern stability, the equivalent of ‘cross-decoding’.
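    As a rough illustration of the cross-validation idea behind such a measure, the sketch below computes a cross-validated, Mahalanobis-style discriminability between two conditions by pairing contrasts from independent runs, so that noise does not bias the estimate upward. It is a simplified two-class stand-in under assumed data shapes, not the MANOVA-based estimator D defined in the paper.

```python
# Simplified two-class sketch of a cross-validated (Mahalanobis-style)
# pattern discriminability. Pairing contrasts from independent runs keeps
# the estimate centered on zero when there is no true pattern difference.
# This is an illustrative stand-in, not the paper's full MANOVA-based D.
import numpy as np
from itertools import combinations

def crossvalidated_distance(runs_a, runs_b, noise_cov):
    """runs_a, runs_b: lists of (trials x voxels) arrays, one array per run."""
    prec = np.linalg.inv(noise_cov)                                   # voxel noise precision
    deltas = [a.mean(0) - b.mean(0) for a, b in zip(runs_a, runs_b)]  # per-run condition contrasts
    pairs = combinations(range(len(deltas)), 2)
    # Average the bilinear form over pairs of independent runs.
    return np.mean([deltas[i] @ prec @ deltas[j] for i, j in pairs])

# Example with no true condition difference: the estimate fluctuates around
# zero instead of being positively biased, which is the point of cross-validation.
rng = np.random.default_rng(0)
runs_a = [rng.normal(size=(20, 50)) for _ in range(6)]
runs_b = [rng.normal(size=(20, 50)) for _ in range(6)]
print(crossvalidated_distance(runs_a, runs_b, np.eye(50)))
```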

    Developmental refinement of cortical systems for speech and voice processing

    Development typically leads to optimized and adaptive neural mechanisms for the processing of voice and speech. In this fMRI study we investigated how this adaptive processing reaches its mature efficiency by examining the effects of task, age and phonological skills on cortical responses to voice and speech in children (8–9 years), adolescents (14–15 years) and adults. Participants listened to vowels (/a/, /i/, /u/) spoken by different speakers (boy, girl, man) and performed delayed-match-to-sample tasks on vowel and speaker identity. Across age groups, similar behavioral accuracy and comparable sound-evoked auditory cortical fMRI responses were observed. Analysis of task-related modulations indicated a developmental enhancement of responses in the (right) superior temporal cortex during the processing of speaker information. This effect was most evident in an analysis based on individually determined voice-sensitive regions. Analysis of age effects indicated that the recruitment of regions in the temporo-parietal cortex and posterior cingulate/cingulate gyrus decreased with development. Beyond age-related changes, the strength of speech-evoked activity in left posterior and right middle superior temporal regions scaled significantly with individual differences in phonological skills. Together, these findings suggest a prolonged development of the cortical functional network for speech and voice processing. This development includes a progressive refinement of the neural mechanisms for the selection and analysis of auditory information relevant to the ongoing behavioral task.

    Decoding the Real-Time Neurobiological Properties of Incremental Semantic Interpretation.

    Communication through spoken language is a central human capacity, involving a wide range of complex computations that incrementally interpret each word into meaningful sentences. However, surprisingly little is known about the spatiotemporal properties of the complex neurobiological systems that support these dynamic predictive and integrative computations. Here, we focus on prediction, a core incremental processing operation guiding the interpretation of each upcoming word with respect to its preceding context. To investigate the neurobiological basis of how semantic constraints change and evolve as each word in a sentence accumulates over time, we analyzed, in a spoken sentence comprehension study, the multivariate patterns of neural activity recorded by source-localized electro/magnetoencephalography (EMEG), using computational models capturing the semantic constraints that the prior context places on each upcoming word. Our results provide insights into predictive operations subserved by different regions within a bi-hemispheric system, which over time generate, refine, and evaluate constraints on each word as it is heard.
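    One common way to quantify how strongly a prior context constrains the upcoming word is through the entropy of a next-word probability distribution and the surprisal of the word actually heard. The toy sketch below uses hypothetical, hand-set distributions purely to show the arithmetic; it is not the semantic-constraint model used in this study.

```python
# Toy illustration of quantifying contextual constraint on an upcoming word.
# The probability distributions are hypothetical, hand-set values; the study's
# own semantic-constraint models are derived differently.
import math

def entropy(dist):
    """Shannon entropy (bits) of a next-word distribution; low = tight constraint."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, word):
    """Surprisal (bits) of the word that actually occurs."""
    return -math.log2(dist[word])

# Strongly constraining context: "She spread the warm bread with ..."
constraining = {"butter": 0.8, "jam": 0.15, "socks": 0.05}
# Weakly constraining context: "The next thing she mentioned was ..."
weak = {"butter": 0.34, "jam": 0.33, "socks": 0.33}

print(entropy(constraining), surprisal(constraining, "butter"))  # low entropy, low surprisal
print(entropy(weak), surprisal(weak, "butter"))                  # higher entropy, higher surprisal
```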