
    Decoding visual object categories in early somatosensory cortex

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within-modality and cross-modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in two fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., the postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.
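
    To make the analysis concrete, here is a minimal sketch of the kind of ROI-based multivariate decoding this abstract describes, assuming trial-wise response patterns extracted from an S1 mask; the data shapes and names below are illustrative placeholders, not the study's actual pipeline.

```python
# Minimal ROI-based MVPA decoding sketch (illustrative, not the authors' code).
# Assumes X: (n_trials, n_voxels) response patterns from an S1 mask and
# y: (n_trials,) object-category labels.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))  # placeholder: 120 trials x 500 voxels
y = np.repeat([0, 1, 2, 3], 30)      # placeholder: 4 object categories

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv)
print(f"mean decoding accuracy: {acc.mean():.3f} (chance = 0.25)")
```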

    Shared neural representations of tactile roughness intensities by somatosensation and touch observation using an associative learning method

    Previous human fMRI studies have reported activation of somatosensory areas not only during actual touch but also during touch observation. However, it has remained unclear how the brain encodes visually evoked tactile intensities. Using an associative learning method, we investigated neural representations of roughness intensities evoked by (a) tactile explorations and (b) visual observation of tactile explorations. Moreover, we explored (c) modality-independent neural representations of roughness intensities using a cross-modal classification method. Case (a) showed significant decoding performance in the anterior cingulate cortex (ACC) and the supramarginal gyrus (SMG), while in case (b) the bilateral posterior parietal cortices, the inferior occipital gyrus, and the primary motor cortex were identified. Case (c) revealed shared neural activity patterns in the bilateral insula, the SMG, and the ACC. Interestingly, the insular cortices were identified only by the cross-modal classification, suggesting their potential role in modality-independent tactile processing. We further examined correlations of confusion patterns between behavioral and neural similarity matrices for each region. Significant correlations were found solely in the SMG, reflecting a close relationship between neural activity in the SMG and roughness intensity perception. The present findings may deepen our understanding of the brain mechanisms underlying intensity perception of tactile roughness.
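
    The cross-modal classification in case (c) amounts to training a classifier on one modality and testing it on the other. A hedged sketch of that logic, with placeholder data and illustrative names:

```python
# Cross-modal classification sketch (illustrative placeholders throughout):
# train on tactile-exploration patterns, test on touch-observation patterns,
# and vice versa; above-chance accuracy suggests a shared representation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_tactile = rng.standard_normal((90, 300))   # 90 trials x 300 voxels
X_visual = rng.standard_normal((90, 300))    # touch-observation trials
y = np.repeat([0, 1, 2], 30)                 # 3 roughness intensities

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
acc_tv = clf.fit(X_tactile, y).score(X_visual, y)  # train tactile, test visual
acc_vt = clf.fit(X_visual, y).score(X_tactile, y)  # train visual, test tactile
print(f"cross-modal accuracy: {(acc_tv + acc_vt) / 2:.3f} (chance = 0.333)")
```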

    On the geometric structure of fMRI searchlight-based information maps

    Information mapping is a popular application of Multivoxel Pattern Analysis (MVPA) to fMRI. Information maps are constructed using the so-called searchlight method, where the spherical multivoxel neighborhood of every voxel (i.e., a searchlight) in the brain is evaluated for the presence of task-relevant response patterns. Despite their widespread use, information maps present several challenges for interpretation. One such challenge has to do with inferring the size and shape of a multivoxel pattern from its signature on the information map. To address this issue, we formally examined the geometric basis of this mapping relationship. Based on geometric considerations, we show how and why small patterns (i.e., those with smaller spatial extents) can produce a larger signature on the information map than large patterns, independent of the searchlight radius. Furthermore, we show that the number of informative searchlights over the brain increases as a function of searchlight radius, even in the complete absence of any multivariate response patterns. These properties are unrelated to the statistical capabilities of the pattern-analysis algorithms used but are obligatory geometric properties arising from the searchlight procedure itself.
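
    The geometric point can be seen directly: a searchlight centered at voxel c intersects an informative voxel set P exactly when c lies in the dilation of P by a ball of the searchlight radius, so the map signature grows with radius even for a fixed pattern. A small sketch of that counting argument (grid and pattern sizes are illustrative):

```python
# Count searchlight centers whose sphere touches a fixed informative pattern:
# this set is the dilation of the pattern by a ball of the searchlight radius.
import numpy as np
from scipy.ndimage import binary_dilation

def ball(radius):
    """Boolean spherical structuring element with the given voxel radius."""
    r = int(radius)
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x**2 + y**2 + z**2 <= radius**2

grid = np.zeros((40, 40, 40), dtype=bool)
grid[19:21, 19:21, 19:21] = True  # a small 2x2x2 informative pattern (8 voxels)

for radius in (2, 3, 4):
    signature = binary_dilation(grid, structure=ball(radius)).sum()
    print(f"radius {radius}: {signature} informative searchlight centers, "
          f"pattern size {grid.sum()} voxels")
```

    Because the dilated footprint extends roughly one searchlight radius beyond the pattern in every direction, the signature on the information map can greatly exceed the size of the underlying pattern.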

    Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses

    Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), both cortical and sub-cortical, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge regarding the neural mechanisms underlying neuromotor speech control and holds promise for the non-invasive study of neural dysfunctions involved in motor-speech disorders. This work was supported by the Spanish Ministry of Economy and Competitiveness through the Juan de la Cierva Fellowship (FJCI-2015-26814) and the Ramon y Cajal Fellowship (RYC-2017-21845), the Spanish State Research Agency through the BCBL “Severo Ochoa” excellence accreditation (SEV-2015-490), the Basque Government (BERC 2018-2021), and the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant (No 799554).
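
    As a rough illustration of the seed-based connectivity check mentioned above, the sketch below correlates a dorsal M1 time course with tongue and lip M1 seed time courses; the time series are random placeholders and the region names are illustrative.

```python
# Seed-based functional-connectivity sketch (simulated data, illustrative names).
import numpy as np

rng = np.random.default_rng(2)
n_trs = 300                                   # number of fMRI volumes
dorsal_m1 = rng.standard_normal(n_trs)        # respiration-linked dorsal M1
seeds = {"tongue_M1": rng.standard_normal(n_trs),
         "lip_M1": rng.standard_normal(n_trs)}

for name, ts in seeds.items():
    r = np.corrcoef(dorsal_m1, ts)[0, 1]      # Pearson correlation
    print(f"{name}: r = {r:+.3f}")
```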

    Frontoparietal representations of task context support the flexible control of goal-directed cognition.

    Cognitive control allows stimulus-response processing to be aligned with internal goals and is thus central to intelligent, purposeful behavior. Control is thought to depend in part on the active representation of task information in prefrontal cortex (PFC), which provides a source of contextual bias on perception, decision making, and action. In the present study, we investigated the organization, influences, and consequences of context representation as human subjects performed a cued sorting task that required them to flexibly judge the relationship between pairs of multivalent stimuli. Using a connectivity-based parcellation of PFC and multivariate decoding analyses, we determined that context is specifically and transiently represented in a region spanning the inferior frontal sulcus during context-dependent decision making. We also found strong evidence that decision context is represented within the intraparietal sulcus, an area previously shown to be functionally networked with the inferior frontal sulcus at rest and during task performance. Rule-guided allocation of attention to different stimulus dimensions produced discriminable patterns of activation in visual cortex, providing a signature of top-down bias over perception. Furthermore, demands on cognitive control arising from the task structure modulated context representation, which was found to be strongest after a shift in task rules. When context representation in frontoparietal areas increased in strength, as measured by the discriminability of high-dimensional activation patterns, the bias on attended stimulus features was enhanced. These results provide novel evidence that illuminates the mechanisms by which humans flexibly guide behavior in complex environments.
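
    One way to operationalize the "strength of context representation" described here is cross-validated decoding accuracy computed separately for post-switch and post-repeat trials. A hedged sketch with placeholder data (the split and names are illustrative, not the study's pipeline):

```python
# Context-decoding strength after rule switches vs. repeats (illustrative).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((160, 400))            # trials x voxels (e.g., an IFS ROI)
context = np.tile([0, 1], 80)                  # task-context label per trial
switch = rng.integers(0, 2, 160).astype(bool)  # did the rule switch on this trial?

for name, mask in (("post-switch", switch), ("post-repeat", ~switch)):
    acc = cross_val_score(LinearSVC(max_iter=5000), X[mask], context[mask], cv=5)
    print(f"{name}: context decoding accuracy = {acc.mean():.3f}")
```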

    Decoding information in the human hippocampus: a user's guide

    Multi-voxel pattern analysis (MVPA), or 'decoding', of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded, solely from the pattern of fMRI activity, then information about that stimulus must be represented in the brain region where the pattern across voxels was identified. This ability to examine the representation of information relating to specific stimuli (e.g., memories) in particular brain areas makes MVPA an especially suitable method for investigating memory representations in brain structures such as the hippocampus. This approach could open up new opportunities to examine hippocampal representations in terms of their content, and how they might change over time, with aging, and with pathology. Here we consider published MVPA studies that specifically focused on the hippocampus, and use them to illustrate the kinds of novel questions that can be addressed using MVPA. We then discuss some of the conceptual and methodological challenges that can arise when implementing MVPA in this context. Overall, we hope to highlight the potential utility of MVPA, when appropriately deployed, and to provide some initial guidance to those considering MVPA as a means to investigate the hippocampus.

    Decoding natural sounds in early “visual” cortex of congenitally blind individuals

    Complex natural sounds, such as bird singing, people talking, or traffic noise, induce decodable fMRI activation patterns in early visual cortex of sighted blindfolded participants [1]. That is, early visual cortex receives non-visual and potentially predictive information from audition. However, it is unclear whether the transfer of auditory information to early visual areas is an epiphenomenon of visual imagery or, alternatively, whether it is driven by mechanisms independent from visual experience. Here, we show that we can decode natural sounds from activity patterns in early “visual” areas of congenitally blind individuals who lack visual imagery. Thus, visual imagery is not a prerequisite of auditory feedback to early visual cortex. Furthermore, the spatial pattern of sound decoding accuracy in early visual cortex was remarkably similar in blind and sighted individuals, with an increasing decoding accuracy gradient from foveal to peripheral regions. This suggests that the typical organization by eccentricity of early visual cortex develops for auditory feedback, even in the lifelong absence of vision. The same feedback to early visual cortex might support visual perception in the sighted [1] and drive the recruitment of this area for non-visual functions in blind individuals [2, 3].
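
    Comparing the spatial profile of decoding accuracy across groups can be as simple as correlating per-eccentricity-bin accuracies; the sketch below uses hypothetical numbers purely to show the shape of the comparison, not the study's data.

```python
# Compare eccentricity profiles of decoding accuracy across groups
# (all values below are hypothetical placeholders, not the study's data).
import numpy as np
from scipy.stats import spearmanr

ecc_deg = np.array([1, 2, 4, 8, 16])                    # eccentricity bins (deg)
acc_blind = np.array([0.26, 0.29, 0.33, 0.36, 0.40])    # hypothetical accuracies
acc_sighted = np.array([0.27, 0.30, 0.32, 0.37, 0.41])  # hypothetical accuracies

rho, p = spearmanr(acc_blind, acc_sighted)
print(f"blind vs. sighted profile correlation: rho = {rho:.2f} (p = {p:.3f})")
```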

    Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain

    Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than in lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than in lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than in lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral temporal cortex but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding performance to the left. Taken together, our results challenge the dominant model of independent representation of invariant and changeable face aspects: information about both attributes was better decoded from a single region in the middle fusiform gyrus.
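
    Decoding from a power spectrogram typically means reducing each trial to band-limited power features before classification. A minimal sketch of that step for a 50-150 Hz band, with simulated signals and illustrative parameters:

```python
# Decode trial labels from high-gamma (50-150 Hz) ECoG band power
# (simulated data; parameters are illustrative, not the study's settings).
import numpy as np
from scipy.signal import welch
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

fs = 1000                                    # sampling rate in Hz
rng = np.random.default_rng(4)
trials = rng.standard_normal((80, 16, fs))   # 80 trials x 16 contacts x 1 s
labels = np.repeat([0, 1], 40)               # e.g., fearful vs. happy

freqs, psd = welch(trials, fs=fs, nperseg=256, axis=-1)
band = (freqs >= 50) & (freqs <= 150)
features = psd[..., band].mean(axis=-1)      # mean band power per contact

acc = cross_val_score(LinearSVC(max_iter=5000), features, labels, cv=5)
print(f"high-gamma decoding accuracy: {acc.mean():.3f} (chance = 0.5)")
```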