4 research outputs found

    Intrahemispheric cortico-cortical connections of the human auditory cortex.

    The human auditory cortex comprises the supratemporal plane and large parts of the temporal and parietal convexities. We have investigated the relevant intrahemispheric cortico-cortical connections using in vivo diffusion spectrum imaging (DSI) tractography combined with landmark-based registration, automatic cortical parcellation and whole-brain structural connection matrices in 20 right-handed male subjects. On the supratemporal plane, the pattern of connectivity was related to the architectonically defined early-stage auditory areas. It revealed a three-tier architecture characterized by a cascade of connections from the primary auditory cortex to six adjacent non-primary areas and from there to the superior temporal gyrus. Graph theory-driven analysis confirmed the cascade-like connectivity pattern and demonstrated a strong degree of segregation and hierarchy within early-stage auditory areas. Putative higher-order areas on the temporal and parietal convexities had more widely spread local connectivity and long-range connections with the prefrontal cortex; analysis of optimal community structure revealed five distinct modules in each hemisphere. The pattern of temporo-parieto-frontal connectivity was partially asymmetrical. In conclusion, the human early-stage auditory cortical connectivity, as revealed by in vivo DSI tractography, has strong similarities with that of non-human primates. The modular architecture and hemispheric asymmetry in higher-order regions are compatible with segregated processing streams and lateralization of cognitive functions.
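The community-structure analysis lends itself to a short illustration. The sketch below builds a small synthetic connection matrix and partitions it by greedy modularity maximization with networkx; the node count, weights, and choice of algorithm are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: module detection on a structural connection matrix.
# All values are synthetic; the study's matrices cover whole hemispheres.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

n_nodes = 12                               # hypothetical cortical nodes
weights = rng.random((n_nodes, n_nodes))
conn = (weights + weights.T) / 2           # symmetrize: undirected connections
np.fill_diagonal(conn, 0.0)                # no self-connections

G = nx.from_numpy_array(conn)              # edge weights taken from the matrix

# Greedy modularity maximization is one of several community-detection
# heuristics; the paper's exact algorithm may differ.
communities = greedy_modularity_communities(G, weight="weight")
for i, module in enumerate(communities):
    print(f"module {i}: nodes {sorted(module)}")
```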

    Local landmark-based registration for fMRI group studies of nonprimary auditory cortex.

    Interindividual functional and structural brain variability is a major problem in group studies in which very focal activations are expected. Architectonic studies have shown that the human primary auditory area, which is located with great constancy on Heschl's gyrus, is surrounded by several nonprimary auditory areas with surface areas of 40–310 mm². The small size of the latter makes them only partially accessible to fMRI group studies because of the imprecision of realignment with currently available registration procedures. We describe here a new method for sulcal realignment using a non-rigid local landmark-based registration and show its application to the registration of fMRI acquisitions on the supratemporal plane. After an affine global voxel-based registration, which transforms all brains into the same standard space, we apply a non-rigid local landmark-based registration based on thin-plate splines to match the two sulci delimiting Heschl's gyrus of a given brain to the corresponding sulci of a reference brain. We show that, in comparison with global affine and non-rigid approaches, our method leads in group studies to i) a much more precise alignment of Heschl's gyrus; and ii) a putatively optimal superposition of functionally corresponding areas on and around Heschl's gyrus.
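As a rough illustration of the thin-plate-spline step, the sketch below warps subject-space coordinates so that a handful of sulcal landmarks land exactly on their counterparts in a reference brain. The landmark coordinates are invented, and scipy's RBFInterpolator with a thin-plate-spline kernel stands in for the authors' own implementation.

```python
# Minimal sketch of non-rigid landmark-based registration with thin-plate
# splines; landmark coordinates are invented for illustration.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical sulcal landmarks (x, y, z in mm) on the subject brain...
subject_landmarks = np.array([
    [42.0, -22.0,  8.0],
    [48.0, -18.0,  6.0],
    [52.0, -26.0, 10.0],
    [45.0, -30.0, 12.0],
    [50.0, -15.0,  5.0],
])
# ...and the corresponding landmarks on the reference brain.
reference_landmarks = subject_landmarks + np.array([1.5, -2.0, 0.5])

# One vector-valued thin-plate-spline interpolant: exact at the landmarks,
# smooth everywhere else.
warp = RBFInterpolator(subject_landmarks, reference_landmarks,
                       kernel="thin_plate_spline")

# Apply the warp to arbitrary points on the supratemporal plane.
points = np.array([[46.0, -24.0, 9.0], [44.0, -20.0, 7.0]])
print(warp(points))
```

The thin-plate spline is a natural choice here because it is the interpolant that minimizes bending energy while passing exactly through the landmarks, which is what a sulcus-to-sulcus matching requires.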

    Imaging emotional sounds processing at 7T.

    Emotional sounds and their localization are salient stimuli that we must process throughout our lives. The affective information contained in sounds is essential for human social communication and interaction, and accurate localization is important for identifying and reacting to environmental events. This thesis investigates the encoding of emotional sounds within auditory areas and the amygdala (AMY) using 7 Tesla fMRI. In a first experiment, we studied the encoding of emotion and vocalization, and their integration, in early-stage auditory areas, the voice area (VA) and the AMY. We found that the response of early-stage auditory areas was modulated by vocalization and by the affective content of the sounds, and that this affective modulation was independent of sound category. In contrast, AMY processed only the emotional content, while VA processed emotional valence specifically for the human vocalization (HV) categories. Finally, we described a functional correlation between VA and AMY in the right hemisphere for positive vocalizations only. In a second experiment, we investigated how the spatial origin of an emotional sound (HV or non-vocalization) modulated its processing within early-stage auditory areas and VA. We found a left-hemispace preference for positive vocalizations, encoded bilaterally in the primary auditory cortex (PAC). Moreover, comparison with the first study indicated that the salience of emotional valence can be increased by spatial cues, but that the encoding of vocalizations is not affected by spatial context. Finally, we examined the functional correlations between early-stage auditory areas and VA and how they are modulated by sound category, valence and lateralization. We documented a strong coupling between VA and early-stage auditory areas during the presentation of emotional HV, but not for other environmental sounds. Sound category strongly modulated the functional correlations between VA, PAC and the auditory belt areas, whereas spatial position induced only a weak modulation and the affective content induced none. Overall, these studies demonstrate that affective load modulates the processing of sounds within VA only for HV, and that this preference for vocalizations shapes the functional correlations of VA with other auditory regions. This underlines the importance of VA as a computational hub for the processing of emotional vocalizations.
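In the simplest reading, the functional correlations reported above are Pearson correlations between regional fMRI time courses. The sketch below computes one such correlation on synthetic data; the region names (VA, AMY) and the signal model are illustrative assumptions only.

```python
# Minimal sketch: "functional correlation" as the Pearson correlation of
# two regional mean time courses. The time series are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_volumes = 200                                  # one value per fMRI volume

shared = rng.standard_normal(n_volumes)          # common underlying signal
va_ts  = shared + 0.8 * rng.standard_normal(n_volumes)   # hypothetical VA
amy_ts = shared + 0.8 * rng.standard_normal(n_volumes)   # hypothetical AMY

r = np.corrcoef(va_ts, amy_ts)[0, 1]
print(f"VA-AMY functional correlation: r = {r:.2f}")
```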