    Hemodynamic responses to emotional speech in two-month-old infants imaged using diffuse optical tomography

    Emotional speech is one of the principal forms of social communication in humans. In this study, we investigated neural processing of emotional speech (happy, angry, sad and neutral) in the left hemisphere of 21 two-month-old infants using diffuse optical tomography. Reconstructed total hemoglobin (HbT) images were analysed using adaptive voxel-based clustering and region-of-interest (ROI) analysis. We found a distributed happy > neutral response within the temporo-parietal cortex, peaking in the anterior temporal cortex; a negative HbT response to emotional speech (the average of the emotional speech conditions) relative to baseline; happy > angry in the anterior superior temporal sulcus (STS), the superior temporal gyrus and the posterior superior temporal sulcus; angry < baseline in the insula, superior temporal sulcus and superior temporal gyrus; and happy < baseline in the anterior insula. These results suggest that the left STS is more sensitive to happy speech than to angry speech, indicating that it might play an important role in processing positive emotions in two-month-old infants. Furthermore, happy speech (relative to neutral) seems to elicit more activation in the temporo-parietal cortex, suggesting enhanced sensitivity of the temporo-parietal cortex to positive emotional stimuli at this stage of infant development.
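
    A minimal sketch of one ROI contrast of the kind reported above, assuming mean HbT responses per infant and condition have already been extracted from the reconstructed images; the arrays below are simulated placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_infants = 21

    # Hypothetical mean HbT response per infant within an anterior-temporal ROI.
    hbt_happy = rng.normal(0.15, 0.10, n_infants)
    hbt_neutral = rng.normal(0.05, 0.10, n_infants)

    # Paired, one-sided t-test for the happy > neutral contrast across infants.
    t, p = stats.ttest_rel(hbt_happy, hbt_neutral, alternative="greater")
    print(f"happy > neutral: t({n_infants - 1}) = {t:.2f}, p = {p:.3f}")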

    Assessing the impact of emotion in dual pathway models of sensory processing.

    In our daily environment, we constantly encounter an endless stream of information, which we must sort and prioritize. Among the features that influence this prioritization are the emotional nature of stimuli and the emotional context of events. Emotional information is often given preferential access to neurocognitive resources, including within sensory processing systems. Interestingly, both the auditory and visual systems are divided into dual processing streams: a ventral object identity/perception stream and a dorsal object location/action stream. While the effects of emotion on the ventral streams are relatively well defined, its effects on dorsal-stream processes remain unclear. The present thesis investigated the impact of emotion on sensory systems within a dual-pathway framework of sensory processing. Study I investigated the role of emotion during auditory localization. While undergoing fMRI, participants indicated the location of an emotional or non-emotional sound within an auditory virtual environment. The neurocognitive structures whose activation was modulated by emotion were not the same as those modulated by sound location: emotion was represented in regions associated with the putative auditory ‘what’ stream but not the ‘where’ stream. Study II examined the impact of emotion on ostensibly similar localization behaviours mediated differentially by the dorsal versus ventral visual processing streams. Ventrally mediated behaviours were affected by the emotional context of a trial, while dorsally mediated behaviours were not. For Study III, a motion-aftereffect paradigm was used to investigate the impact of emotion on visual area V5/MT+. This area, traditionally believed to be involved in dorsal-stream processing, has a number of characteristics of a ventral-stream structure. V5/MT+ activity was modulated both by the presence of perceptual motion and by the emotional content of an image. In addition, this region displayed patterns of functional connectivity with the amygdala that were significantly modulated by emotion. Together, these results suggest that emotional information modulates neural processing within ventral sensory processing streams, but not dorsal processing streams. These findings are discussed with respect to current models of emotional and sensory processing, including amygdala connections to sensory cortices and emotional effects on cognition and behaviour.
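
    A minimal sketch of the emotion-modulated functional-connectivity comparison described for Study III: correlate amygdala and V5/MT+ time series separately within emotional and neutral volumes. The time series and block labels are simulated stand-ins, not the study's data.

    import numpy as np

    rng = np.random.default_rng(1)
    n_vols = 200
    emotional = rng.integers(0, 2, n_vols).astype(bool)  # True = emotional volume

    amygdala = rng.normal(size=n_vols)
    # Simulate V5/MT+ coupling that is stronger during emotional volumes.
    v5_mt = np.where(emotional, 0.6, 0.1) * amygdala + rng.normal(size=n_vols)

    r_emotional = np.corrcoef(amygdala[emotional], v5_mt[emotional])[0, 1]
    r_neutral = np.corrcoef(amygdala[~emotional], v5_mt[~emotional])[0, 1]
    # A group analysis would Fisher-z these and test the difference across subjects.
    print(f"r(emotional) = {r_emotional:.2f}, r(neutral) = {r_neutral:.2f}")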

    IMAGING EMOTIONAL SOUNDS PROCESSING AT 7T

    Emotional sounds and their localization are influential stimuli that we need to process throughout our lives. The affective information contained in sounds is essential for human social communication and interaction, and their accurate localization is important for identifying and reacting to environmental events. This thesis investigates the encoding of emotional sounds within auditory areas and the amygdala (AMY) using 7 Tesla fMRI. In a first experiment, we studied the encoding of emotion and vocalization, and their integration, in early-stage auditory areas, the voice area (VA) and the AMY. We found that the response of the early-stage auditory areas was modulated by vocalization and by the affective content of the sounds, and that this affective modulation was independent of the category of sounds. In contrast, AMY processes only the emotional content, while VA is responsible for processing emotional valence specifically for the human vocalization (HV) categories. Finally, we described a functional correlation between VA and AMY in the right hemisphere for positive vocalizations only. In a second experiment, we investigated how the spatial origin of an emotional sound (HV or non-vocalizations) modulated its processing within early-stage auditory areas and VA. We found a left-hemispace preference for positive vocalizations encoded bilaterally in the primary auditory cortex (PAC). Moreover, comparison with the first study indicated that the salience of emotional valence may be increased by spatial cues, but that the encoding of vocalization is not affected by the spatial context. Finally, we examined the functional correlations between early-stage auditory areas and VA, and how they are modulated by sound category, valence and lateralization. We documented a strong coupling between VA and early-stage auditory areas during the presentation of emotional HV, but not for other environmental sounds. The category of sound strongly modulated the functional correlations between VA, PAC and the auditory belt areas, whereas spatial positioning induced only a weak modulation and the affective content induced none. Overall, these studies demonstrate that affective load modulates the processing of sounds within VA only for HV, and that this preference for vocalizations shapes the functional correlations of VA with other auditory regions. This strengthens the importance of VA as a computational hub for the processing of emotional vocalizations.
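
    A minimal sketch of how a hemispace preference, like the left-hemispace bias reported for positive vocalizations in PAC, could be quantified; the index and all response values are hypothetical illustrations, not the thesis's analysis.

    import numpy as np

    # Hypothetical PAC responses (e.g., beta estimates) to positive vocalizations
    # presented in the left vs. right hemispace, one value per participant.
    beta_left = np.array([1.2, 0.9, 1.4, 1.1, 1.0])
    beta_right = np.array([0.8, 0.7, 1.0, 0.9, 0.6])

    # Normalized preference index in [-1, 1]; values > 0 mean a left-hemispace bias.
    pref = (beta_left - beta_right) / (beta_left + beta_right)
    print(f"mean left-hemispace preference index: {pref.mean():.2f}")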

    Emotional Experience, Paranoia, and Probabilistic Reasoning in Schizophrenia

    Schizophrenia (SZ) is a chronic mental disorder characterized by longstanding and severe social functioning deficits. To better understand the psychosocial factors that perpetuate these deficits, this dissertation comprised three studies examining cognitive and affective factors with the potential to improve functional outcomes in SZ: (1) emotional experience, (2) paranoia, and (3) reasoning. Study 1 examined negative/positive affect and social functioning with self-report measures among people with SZ, people with affective disorders, and the general population. Study 2 assessed paranoia and its relationship with the interpretation of the environment via affective sound localization in SZ. Study 3 compared probabilistic reasoning when estimating the likely source of threatening and non-threatening affective stimuli, while also examining the relationship between probabilistic reasoning and delusional thinking in SZ. The findings suggest that, for people with schizophrenia: (1) treatment of heightened negative affect and reduced positive affect may improve social functioning, (2) paranoia may aid localization of natural sounds that occur in the environment, and (3) promoting more conservative probabilistic reasoning may help to reduce delusional thinking.
    PhD, Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146039/1/tylerg_1.pd
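
    A rough illustration of what "conservative" probabilistic reasoning means in a beads-task-style source-estimation problem; the evidence weight and draw sequence below are hypothetical, not the dissertation's task.

    import math

    def posterior_source_a(draws, p_a=0.8, w=1.0):
        """P(source A | draws); each draw is True if it favours source A.
        w = 1 is the normative Bayesian observer; w < 1 under-weights each
        observation, i.e. updates conservatively."""
        log_odds = 0.0
        for favours_a in draws:
            lr = p_a / (1 - p_a) if favours_a else (1 - p_a) / p_a
            log_odds += w * math.log(lr)
        return 1 / (1 + math.exp(-log_odds))

    draws = [True, True, False, True]
    print(f"normative:    {posterior_source_a(draws, w=1.0):.2f}")
    print(f"conservative: {posterior_source_a(draws, w=0.5):.2f}")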

    Flexibly adapting to emotional cues: Examining the functional and structural correlates of emotional reactivity and emotion control in healthy and depressed individuals

    The ability of emotionally significant stimuli to bias our behaviour is an evolutionarily adaptive phenomenon. However, emotions sometimes become excessive, inappropriate, and even pathological, as in major depressive disorder (MDD). Emotional flexibility includes both the neural processes involved in reacting to, or representing, emotional significance and those involved in controlling emotional reactivity. MDD represents a potentially distinct form of emotion (in)flexibility, and therefore offers a unique perspective for understanding both the integration of conflicting emotional cues and the neural regions involved in actively controlling emotional systems. The present investigation of emotional flexibility began by considering the functional neural correlates of competing socio-emotional cues and effortful emotion regulation in MDD using both negative and positive emotions. Study 1 revealed greater amygdala activity in MDD relative to control participants when negative cues were centrally presented and task-relevant. No significant between-group differences were observed in the amygdala for peripheral, task-irrelevant negative distracters; however, controls showed greater recruitment of the ventrolateral (vlPFC) and dorsomedial (dmPFC) prefrontal cortices implicated in emotion control. Conversely, attenuated amygdala activity for task-relevant and task-irrelevant positive cues was observed in depressed participants. In Study 2, effortful emotion regulation using strategies adapted from cognitive behaviour therapy (CBT) revealed greater activity in regions of the dorsal and lateral prefrontal cortices in both MDD and control participants when attempting either to down-regulate negative or to up-regulate positive emotions. During the down-regulation of negative cues, only controls displayed a significant reduction of amygdala activity. In Study 3, an individual-differences approach using multiple regression revealed that while greater amygdala-ventromedial prefrontal cortex (vmPFC) structural connectivity was associated with lower trait anxiety, greater connectivity between the amygdala and regions of occipitotemporal and parietal cortices was associated with higher trait anxiety. These findings are discussed with respect to current models of emotional reactivity and emotion control derived from studies of both healthy individuals and those with emotional disorders, particularly depression. The focus is on amygdala variability in differing contexts, the role of the vmPFC in the modulation of amygdala activity via learning processes, and the modulation of emotion by attention or cognitive-control mechanisms initiated by regions of frontoparietal cortices.
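
    A minimal sketch of the individual-differences approach in Study 3, regressing trait anxiety on structural-connectivity estimates; all data are simulated, and a real analysis would add covariates and proper inference.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 40
    amyg_vmpfc = rng.normal(size=n)    # hypothetical tract-strength estimates
    amyg_occpar = rng.normal(size=n)
    anxiety = -0.5 * amyg_vmpfc + 0.4 * amyg_occpar + rng.normal(scale=0.8, size=n)

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n), amyg_vmpfc, amyg_occpar])
    beta, *_ = np.linalg.lstsq(X, anxiety, rcond=None)
    print(f"amygdala-vmPFC slope: {beta[1]:.2f}; amygdala-occipitoparietal slope: {beta[2]:.2f}")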

    Characterizing the Familiar-Voice Benefit to Intelligibility

    Everyday listening often occurs in the presence of background noise. Listeners with normal hearing can often successfully segregate competing sounds from the signal of interest. To do this, they exploit a variety of cues to separate simultaneous sounds into distinct sources and to group sequential sounds into intelligible speech streams. One cue shown to be an effective facilitator of speech intelligibility is familiarity with a talker’s voice. A recent study by Johnsrude et al. (2013) measured speech intelligibility for a naturally familiar voice (i.e., that of a long-term spouse) and showed a large improvement in intelligibility when the spouse’s voice served as either the target or the masker. This improvement is commensurate with another cue that is well understood to be a strong facilitator of intelligibility: spatially separating two speech streams. The goal of this thesis is therefore to extend the work of Johnsrude et al. (2013) by providing a clearer understanding of voice familiarity as a cue for improving intelligibility. Specifically, the aims are (1) to measure the magnitude of the intelligibility benefit for different types of naturally familiar voices, friends and spouses; (2) to quantify the familiar-voice benefit in terms of degrees of spatial separation; and (3) to compare the neural bases of voice familiarity and spatial release from masking, to determine whether these cues improve intelligibility by recruiting similar areas of the brain. The primary findings were that (1) the familiar-voice benefits for friends’ and spouses’ voices are comparable, and relationship duration does not affect the magnitude of the benefit; (2) participants gain a similar benefit from a familiar target as when an unfamiliar voice is separated from two symmetrical maskers by approximately 15° azimuth; and (3) familiar voices and spatial release from masking both activate known temporal voice areas, but attending to an unfamiliar target voice masked by a familiar voice also recruits attention areas. Taken together, this thesis illustrates the effectiveness of a naturally familiar target voice in improving intelligibility.
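
    A minimal sketch of aim (2), expressing the familiar-voice benefit in equivalent degrees of spatial separation by inverting a benefit-versus-separation curve; the curve and benefit value below are hypothetical, chosen only to land near the ~15° reported.

    import numpy as np

    separation_deg = np.array([0, 5, 10, 15, 20, 30])              # masker separation
    spatial_benefit_db = np.array([0.0, 1.0, 2.1, 3.0, 3.6, 4.2])  # SRT improvement

    familiar_benefit_db = 3.0  # hypothetical benefit from a familiar target voice

    # The benefit curve is monotonic, so invert it by linear interpolation.
    equiv_deg = np.interp(familiar_benefit_db, spatial_benefit_db, separation_deg)
    print(f"familiar-voice benefit ~ {equiv_deg:.0f} degrees of spatial separation")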