
    Reappraising the voices of wrath

    Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because past research has relied almost exclusively on visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated using psychophysiological interaction (PPI) analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extensive in the right hemisphere. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions.
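    As context for the PPI method used in the abstract above: a PPI analysis asks whether the coupling between a seed region's time series and a target region changes with the psychological condition, by testing the interaction term in a regression. Below is a minimal Python sketch on synthetic data; the variable names and the simplified design (no HRF deconvolution, which a full analysis would include) are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200                                                    # number of fMRI volumes (synthetic)
condition = np.where(np.arange(T) % 40 < 20, 1.0, -1.0)    # block design: +1 increase, -1 decrease
seed = rng.standard_normal(T)                              # seed-region BOLD series (e.g. auditory cortex)

# Target region whose coupling with the seed strengthens during "increase" blocks
target = 0.5 * seed + 0.5 * seed * (condition > 0) + 0.3 * rng.standard_normal(T)

# PPI design: intercept, physiological, psychological, and interaction regressors.
phys = seed - seed.mean()                                  # mean-centered physiological term
psych = condition - condition.mean()                       # mean-centered psychological term
X = np.column_stack([np.ones(T), phys, psych, phys * psych])

beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI (interaction) coefficient: {beta[3]:.3f}")     # > 0 => coupling rises with condition
```

    A significant coefficient on the interaction column is what is reported as condition-dependent functional coupling; as the abstract notes, this says nothing about the direction of influence between regions.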

    Specific Brain Networks during Explicit and Implicit Decoding of Emotional Prosody

    To better define the brain network underlying the decoding of emotional prosody, we recorded high-resolution brain scans during an implicit and an explicit decoding task of angry and neutral prosody. Several subregions in the right superior temporal gyrus (STG) and bilaterally in the inferior frontal gyrus (IFG) were sensitive to emotional prosody. Implicit processing of emotional prosody engaged regions in the posterior superior temporal gyrus (pSTG) and bilateral IFG subregions, whereas explicit processing relied more on the mid STG, left IFG, amygdala, and subgenual anterior cingulate cortex. Furthermore, whereas some bilateral pSTG regions and the amygdala showed general sensitivity to prosody-specific acoustical features during implicit processing, activity in inferior frontal brain regions was insensitive to these features. Together, the data suggest a differentiated STG, IFG, and subcortical network of brain regions, which varies with the level of processing and shows higher specificity during explicit decoding of emotional prosody.

    Acoustic and structural differences between musically portrayed subtypes of fear

    Fear is a frequently studied emotion category in music and emotion research. However, research in music theory suggests that music can convey finer-grained subtypes of fear, such as terror and anxiety. Previous research on musically expressed emotions has neglected to investigate subtypes of fearful emotions. This study seeks to fill this gap in the literature. To that end, 99 participants rated the emotional impression of short excerpts of horror film music predicted to convey terror and anxiety, respectively. Then, the excerpts that most effectively conveyed these target emotions were analyzed descriptively and acoustically to demonstrate the sonic differences between musically conveyed terror and anxiety. The results support the hypothesis that music conveys terror and anxiety with markedly different musical structures and acoustic features. Terrifying music has a brighter, rougher, harsher timbre, is musically denser, and may be faster and louder than anxious music. Anxious music has a greater degree of loudness variability. Both types of fearful music tend towards minor modalities and are rhythmically unpredictable. These findings further support the application of emotional granularity in music and emotion research.
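    The acoustic contrasts reported above (brightness, loudness, loudness variability, event density) map onto standard audio descriptors. The Python sketch below uses librosa with common proxies (spectral centroid for brightness, RMS for loudness, onset rate for density); these choices are illustrative assumptions, not the study's exact toolchain.

```python
import numpy as np
import librosa  # pip install librosa

def fear_subtype_features(y, sr):
    """Crude acoustic proxies for the features contrasted in the abstract."""
    rms = librosa.feature.rms(y=y)[0]                            # frame-wise energy (loudness proxy)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # spectral centroid (brightness proxy)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    duration = len(y) / sr
    return {
        "mean_loudness": float(rms.mean()),
        "loudness_variability": float(rms.std()),  # reported higher for "anxious" excerpts
        "brightness": float(centroid.mean()),      # reported higher for "terrifying" excerpts
        "event_density": len(onsets) / duration,   # detected onsets per second (density proxy)
    }

# Demo on a synthetic clip; with real excerpts, load audio via librosa.load(path, sr=None).
sr = 22050
y = np.random.default_rng(2).standard_normal(10 * sr).astype(np.float32)
print(fear_subtype_features(y, sr))
```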

    Talking in Fury: The Cortico-Subcortical Network Underlying Angry Vocalizations

    Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations to command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated with each other, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained task of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.

    Temporal dynamics of musical emotions examined through intersubject synchrony of brain activity

    To study emotional reactions to music, it is important to consider the temporal dynamics of both affective responses and the underlying brain activity. Here, we investigated emotions induced by music using functional magnetic resonance imaging (fMRI) with a data-driven approach based on intersubject correlations (ISC). This method allowed us to identify moments in the music that produced similar brain activity (i.e. synchrony) among listeners under relatively natural listening conditions. Continuous ratings of the subjective pleasantness and arousal elicited by the music were also obtained outside the scanner. Our results reveal synchronous activations in the left amygdala, left insula and right caudate nucleus that were associated with higher arousal, whereas positive valence ratings correlated with decreases in amygdala and caudate activity. Additional analyses showed that synchronous amygdala responses were driven by energy-related features in the music, such as root mean square energy and dissonance, while synchrony in the insula was additionally sensitive to acoustic event density. Intersubject synchrony also occurred in the left nucleus accumbens, a region critically implicated in reward processing. Our study demonstrates the feasibility and usefulness of an ISC-based approach to explore the temporal dynamics of music perception and emotion under naturalistic conditions.
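    The ISC approach itself is compact enough to sketch: for a given region, each subject's time series is correlated with the average of all remaining subjects, so that only stimulus-driven (shared) signal survives while idiosyncratic noise averages out. Below is a minimal leave-one-out implementation in Python; the array shapes and the synthetic demo are illustrative assumptions, not the study's data or exact pipeline.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation.

    data: array of shape (n_subjects, n_timepoints) holding one region's
    time series per subject. Returns one ISC value per subject: the Pearson
    correlation between that subject's series and the mean of all others.
    """
    n = data.shape[0]
    isc = np.empty(n)
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)  # group mean without subject s
        isc[s] = np.corrcoef(data[s], others)[0, 1]
    return isc

# Synthetic demo: a shared stimulus-driven signal plus subject-specific noise
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)                   # "the music" driving all listeners
subjects = shared + 0.8 * rng.standard_normal((12, 300))
print(leave_one_out_isc(subjects).mean())           # clearly positive => synchrony
```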

    Functional neuroimaging of human vocalizations and affective speech

    Neuroimaging studies have verified the important integrative role of the basal ganglia during affective vocalizations. However, they also point to additional regions supporting vocal monitoring, auditory-motor feedback processing, and online adjustments of vocal motor responses. For the case of affective vocalizations, we suggest partly extending the model to fully consider the link between primate-general and human-specific neural components.

    Psychopathic and autistic traits differentially influence the neural mechanisms of social cognition from communication signals

    Psychopathy is associated with severe deviations in social behavior and cognition. While previous research described such cognitive and neural alterations in the processing of rather specific social information from human expressions, open questions remain concerning the central and differential neurocognitive deficits underlying psychopathic behavior. Here we investigated three rather unexplored factors to explain these deficits: first, by assessing psychopathy subtypes in social cognition; second, by investigating the discrimination of social communication sounds (speech, non-speech) from other non-social sounds; and third, by determining the neural overlap in social cognition impairments with autistic traits, given potential common deficits in the processing of communicative voice signals. The study was exploratory, with a focus on how psychopathic and autistic traits differentially influence the function of social cognitive and affective brain networks in response to social voice stimuli. We used a parametric data analysis approach on a sample of 113 participants (47 male, 66 female) aged 18 to 40 years (mean 25.59, SD 4.79). Our data revealed four important findings. First, we found a phenotypical overlap between secondary, but not primary, psychopathy and autistic traits. Second, primary psychopathy showed various deficits in neural voice processing nodes (speech, non-speech voices) and in brain systems for social cognition (mirroring, mentalizing, empathy, emotional contagion). Primary psychopathy also showed deficits in the basal ganglia (BG) system that seem specific to the social decoding of communicative voice signals. Third, neural deviations in secondary psychopathy were restricted to social mirroring and mentalizing impairments, but with additional and so far undescribed deficits at the level of auditory sensory processing, potentially concerning ventral auditory stream mechanisms (auditory object identification). Fourth, high autistic traits also revealed neural deviations in sensory cortices, but rather in the dorsal auditory processing streams (communicative context encoding). Taken together, social cognition of voice signals shows considerable deviations in psychopathy, with differential and newly described deficits in the BG system in primary psychopathy and at the level of sensory processing in secondary psychopathy. These deficits seem especially triggered during social cognition from vocal communication signals.

    Affective speech modulates a cortico-limbic network in real time

    Affect signaling in human communication involves cortico-limbic brain systems for decoding affect information, such as that expressed in vocal intonations during affective speech. Both the affecto-acoustic speech profile of speakers and the cortico-limbic affect recognition network of listeners were previously identified using non-social and non-adaptive research protocols. However, these protocols neglected the inherently socio-dyadic nature of affective communication, thus underestimating the real-time adaptive dynamics of affective speech that maximize listeners' neural effects and affect recognition. To approximate this socio-adaptive and neural context of affective communication, we used an innovative real-time neuroimaging setup that linked speakers' live affective speech production with listeners' limbic brain signals, which served as a proxy for affect recognition. We show that affective speech communication is acoustically more distinctive, adaptive, and individualized in a live adaptive setting, and capitalizes more efficiently on neural affect decoding mechanisms in limbic and associated networks than non-adaptive affective speech communication. Only live affective speech produced in adaptation to listeners' limbic signals was closely linked to their emotion recognition, as quantified by correlations between speakers' acoustics and listeners' emotional ratings. Furthermore, while live and adaptive aggressive speaking directly modulated limbic activity in listeners, joyful speaking modulated limbic activity in connection with the ventral striatum, which is involved, among other functions, in the processing of pleasure. Thus, evolved neural mechanisms for affect decoding seem largely optimized for interactive and individually adaptive communicative contexts.

    A Randomized Controlled Trial Study of a Multimodal Intervention vs. Cognitive Training to Foster Cognitive and Affective Health in Older Adults.

    Research over the past few decades has shown the positive influence that cognitive, social, and physical activities have on older adults' cognitive and affective health. Interventions targeting health-related behaviors, such as cognitive activation, physical activity, social activity, nutrition, mindfulness, and creativity, have proven particularly beneficial. Whereas most intervention studies apply unimodal interventions, such as cognitive training (CT), this study investigates the potential of an autonomy-supportive multimodal intervention (MMI) to foster cognitive and affective health factors in older adults. The intervention integrates everyday-life recommendations for six evidence-based areas combined with psychoeducational information. This randomized controlled trial compares the effects of an MMI and CT against those of a waiting control group (WCG) on cognitive and affective factors, everyday memory performance, and activity in everyday life. Three groups, totaling 119 adults aged 65-86 years, attended a 5- or 10-week intervention. Specifically, one group completed a 10-week MMI, the second group completed 5 weeks of computer-based CT followed by a 5-week MMI, and the third group paused before completing the MMI for the last 5 weeks. All participants completed online surveys and cognitive tests at three test points. The findings showed an increase in the number and variability of activities in the everyday lives of all participants. Post hoc analyses comparing the cognitive performance of the MMI and CT groups indicate similar (classic memory and attention) or better (working memory) effects for the MMI. Furthermore, results on far-transfer variables showed interesting trends in favor of the MMI, such as increased well-being and a more positive attitude toward the aging brain. The MMI group also showed the largest perceived improvements of all groups on all self-reported personal variables (memory in everyday life and stress). The results indicate a positive trend in favor of the MMI on cognitive and affective factors in older adults. These tendencies show the potential of a multimodal approach compared with training a specific cognitive function. Moreover, the findings suggest that information about the MMI motivates participants to increase activity variability and frequency in everyday life. Finally, the results could also have implications for the primary prevention of neurocognitive deficits and degenerative diseases.