66 research outputs found

    How input modality and visual experience affect the representation of categories in the brain

    The general aim of the present dissertation was to contribute to our understanding of how sensory input and sensory experience shape the way the human brain implements categorical knowledge. The goal was twofold: (1) to determine whether there are brain regions that encode information about different categories regardless of input modality and sensory experience (study 1); and (2) to deepen the investigation of the mechanisms that drive cross-modal and intra-modal plasticity following early blindness, and how they manifest during the processing of different categories presented as real-world sounds (study 2). To address these fundamental questions, we used fMRI to characterize the brain responses to different conceptual categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. In study 1, we observed that the right posterior middle temporal gyrus (rpMTG) is the region that most reliably decoded categories and selectively correlated with conceptual models of our stimulus space, independently of input modality and visual experience. However, this region keeps the representational formats of the different modalities separate, revealing a multimodal rather than an amodal nature. In addition, we observed that the ventral occipito-temporal cortex (VOTC) showed distinct functional profiles depending on the hemisphere. The left VOTC was involved in acoustical categorization processing to the same degree in sighted and blind individuals. We propose that this involvement might reflect an engagement of the left VOTC in more semantic/linguistic processing of the stimuli, potentially supported by its enhanced connection with the language system. However, paralleling our observation in rpMTG, the representations from the different modalities remain segregated in VOTC, showing little evidence for sensory abstraction.
    In contrast, the right VOTC emerged as a sensory-related visual region in sighted individuals, with the ability to rewire itself toward acoustical stimulation in case of early visual deprivation. In study 2, we observed opposite effects of early visual deprivation on auditory decoding in occipital and temporal regions. While occipital regions contained more information about sound categories in the blind, the temporal cortex showed higher decoding in the sighted. This imbalance was stronger in the right hemisphere, where we also observed a negative correlation between occipital and temporal decoding of sound categories in early blind (EB) individuals. These results suggest that the intramodal and crossmodal reorganizations might be interconnected. We therefore propose that the extension of non-visual functions into the occipital cortex of EB individuals may trigger a network-level reorganization that reduces the computational load on the regions typically coding for the remaining senses, as part of that computation is taken over by occipital regions.
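    The abstract above describes two analysis ideas: decoding categories from response patterns and correlating a region's representational geometry with conceptual models of the stimulus space (representational similarity analysis). The following is a minimal illustrative sketch of the RSA logic only, with simulated placeholder data and illustrative category groupings; it is not the authors' actual pipeline.

    ```python
    # Hedged sketch of representational similarity analysis (RSA):
    # correlate a neural representational dissimilarity matrix (RDM)
    # with a categorical model RDM. All data below are simulated.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Simulated response patterns: 8 stimulus categories x 50 voxels.
    patterns = rng.normal(size=(8, 50))

    # Neural RDM in condensed form (one entry per category pair).
    neural_rdm = pdist(patterns, metric="correlation")

    # Model RDM: 0 if two categories share a (hypothetical) conceptual
    # cluster, 1 otherwise.
    labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
    model_rdm = pdist(labels[:, None], metric="hamming")

    # Spearman correlation quantifies how well the region's geometry
    # matches the conceptual model.
    rho, p = spearmanr(neural_rdm, model_rdm)
    print(round(rho, 3))
    ```

    Rank correlation (Spearman) is commonly preferred here because it assumes only a monotonic, not linear, relation between neural and model dissimilarities.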

    Structural and Functional Network-Level Reorganization in the Coding of Auditory Motion Directions and Sound Source Locations in the Absence of Vision

    Epub 2022 May 2. hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remain unresolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that the amount of motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed an axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of the occipitotemporal networks supporting spatial hearing in the sighted.
    SIGNIFICANCE STATEMENT: Spatial hearing helps living organisms navigate their environment. This is even more true for people born blind. How does blindness affect the brain network supporting auditory motion and sound source location processing? Our results show that the amount of motion direction and sound position information was higher in hMT+/V5 and lower in the human planum temporale in blind relative to sighted people, and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
    The project was funded in part by a European Research Council starting grant MADVIS (Project 337573) awarded to O.C., the Belgian Excellence of Science (EOS) program (Project 30991544) awarded to O.C., a Flagship ERA-NET grant SoundSight (FRS-FNRS PINT-MULTI R.8008.19) awarded to O.C., and by the European Union Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 701250 awarded to V.O. Computational resources were provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI), funded by the Fond de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region. A.G.-A. is supported by the Wallonie Bruxelles International Excellence Fellowship and the FSR Incoming PostDoc Fellowship by Université Catholique de Louvain. O.C. is a research associate, C.B. is a postdoctoral researcher, and M.R. is a research fellow at the Fond National de la Recherche Scientifique de Belgique (FRS-FNRS).
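    The multivariate decoding analysis mentioned above rests on a standard logic: train a classifier on response patterns from most of the data and test whether it can identify the motion direction of held-out patterns, with above-chance accuracy indicating that the region carries direction information. The sketch below illustrates that logic with a simple nearest-centroid classifier and leave-one-run-out cross-validation on simulated data; the data, dimensions, and classifier choice are placeholder assumptions, not the authors' pipeline.

    ```python
    # Hedged sketch of cross-validated decoding of motion direction
    # from simulated voxel patterns (nearest-centroid classifier).
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated data: 5 runs x 4 motion directions x 20 voxels.
    n_runs, n_dirs, n_vox = 5, 4, 20
    X = rng.normal(size=(n_runs, n_dirs, n_vox))
    # Inject a direction-dependent signal into the first 10 voxels.
    X[..., :10] += np.arange(n_dirs)[None, :, None] * 2.0

    # Leave-one-run-out cross-validation.
    correct = 0
    for test_run in range(n_runs):
        train = np.delete(X, test_run, axis=0)   # patterns from other runs
        centroids = train.mean(axis=0)           # one centroid per direction
        for d in range(n_dirs):
            # Assign the held-out pattern to the nearest class centroid.
            dists = np.linalg.norm(centroids - X[test_run, d], axis=1)
            correct += int(np.argmin(dists) == d)

    accuracy = correct / (n_runs * n_dirs)       # chance level = 0.25
    print(accuracy)
    ```

    Cross-validating across runs rather than across arbitrary trials avoids leakage from run-specific noise correlations, which is why leave-one-run-out schemes are the norm in fMRI decoding.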

    The effects of verbal cueing on implicit hand maps

    The use of position sense to perceive the external spatial location of the body requires that immediate proprioceptive afferent signals be combined with stored representations of body size and shape. Longo and Haggard (2010) developed a method to isolate and measure this representation in which participants judge the location of several landmarks on their occluded hand. The relative location of the judgments is used to construct a perceptual map of hand shape. Studies using this paradigm have revealed large, and highly stereotyped, distortions of the hand, which is represented as wider than it actually is and with shortened fingers. Previous studies using this paradigm have cued participants to respond by giving verbal labels of the knuckles and fingertips. A recent study has shown differential effects of verbal and tactile cueing of localisation judgements about bodily landmarks (Cardinali et al., 2011). The present study therefore investigated implicit hand maps measured through localisation judgements made in response to verbal labels and to tactile stimuli applied to the same landmarks. The characteristic set of distortions of hand size and shape was clearly apparent in both conditions, indicating that the distortions reported previously are not an artefact of the use of verbal cues. However, there were also differences in the magnitude of the distortions between conditions, suggesting that the use of verbal cues may alter the representation of the body underlying position sense.
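    In this paradigm, the distortions are quantified by comparing judged landmark positions with the landmarks' actual positions, e.g. as percent misestimation of finger length and hand width. The sketch below shows that arithmetic on hypothetical coordinates (all values are illustrative, not data from the study).

    ```python
    # Hedged sketch: quantifying implicit hand-map distortion as percent
    # error between judged and actual inter-landmark distances.
    import numpy as np

    # Hypothetical (x, y) positions in millimetres: index-finger knuckle
    # and tip (finger length), index and little knuckles (hand width).
    actual = {"index_knuckle": (0.0, 0.0), "index_tip": (0.0, 80.0),
              "little_knuckle": (60.0, 0.0)}
    judged = {"index_knuckle": (0.0, 0.0), "index_tip": (0.0, 60.0),
              "little_knuckle": (75.0, 0.0)}

    def dist(a, b):
        """Euclidean distance between two (x, y) points."""
        return float(np.hypot(a[0] - b[0], a[1] - b[1]))

    def pct_error(landmark_pair):
        """Percent over-/underestimation of a judged vs actual distance."""
        a, b = landmark_pair
        real = dist(actual[a], actual[b])
        perceived = dist(judged[a], judged[b])
        return 100.0 * (perceived - real) / real

    # Negative = underestimation (shortened fingers); positive = widening.
    finger_len_err = pct_error(("index_knuckle", "index_tip"))       # -25.0
    hand_width_err = pct_error(("index_knuckle", "little_knuckle"))  # 25.0
    print(finger_len_err, hand_width_err)
    ```

    With these illustrative coordinates the map shows the stereotyped pattern the abstract describes: a finger judged 25% shorter and a hand judged 25% wider than it actually is.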

    Accuracy MVP classification - 8way decoding

    No full text

    Correlation between OCC and TEMP withinSUB

    No full text

    -Plot Figure 5A - Models

    No full text

    VOTC

    No full text

    Winner Takes All maps (Appendix 1)

    No full text

    STIMULI

    No full text

    RSA - external models

    No full text