
    Affective speech modulates a cortico-limbic network in real time

    Affect signaling in human communication involves cortico-limbic brain systems for decoding affect information, such as that expressed in vocal intonations during affective speech. Both the affecto-acoustic speech profile of speakers and the cortico-limbic affect recognition network of listeners were previously identified using non-social and non-adaptive research protocols. However, these protocols neglected the inherently socio-dyadic nature of affective communication and thus underestimated the real-time adaptive dynamics of affective speech that maximize listeners' neural effects and affect recognition. To approximate this socio-adaptive and neural context of affective communication, we used an innovative real-time neuroimaging setup that linked speakers' live affective speech production with listeners' limbic brain signals, which served as a proxy for affect recognition. We show that affective speech communication is acoustically more distinctive, adaptive, and individualized in a live adaptive setting and capitalizes more efficiently on neural affect decoding mechanisms in limbic and associated networks than non-adaptive affective speech communication. Only live affective speech produced in adaptation to listeners' limbic signals was closely linked to their emotion recognition, as quantified by correlations between speakers' acoustics and listeners' emotional ratings. Furthermore, while live and adaptive aggressive speaking directly modulated limbic activity in listeners, joyful speaking modulated limbic activity in connection with the ventral striatum, which is involved, among other functions, in the processing of pleasure. Thus, evolved neural mechanisms for affect decoding seem largely optimized for interactive and individually adaptive communicative contexts.

    Neural competition between concurrent speech production and other speech perception

    Understanding others' speech while simultaneously producing one's own speech utterances implies neural competition and requires specific mechanisms for its neural resolution, given that previous studies proposed opposing signal dynamics for the two processes in the auditory cortex (AC). Here we used neuroimaging in humans to investigate this neural competition with lateralized presentations of other speech samples and ipsilateral or contralateral feedback of actively produced self speech utterances in the form of various speech vowels. In experiment 1, we show, first, that classifying others' speech during active self speech led to activity in the planum temporale (PTe) when self and other speech samples were presented together to only the left or the right ear. The contralateral PTe also seemed to respond indifferently to single self and other speech samples. Second, specific activity in the left anterior superior temporal cortex (STC) was found during dichotic stimulation (i.e., self and other speech presented to separate ears). Unlike in previous studies, this left anterior STC activity supported self speech rather than other speech processing. Furthermore, the right mid and anterior STC were more involved in other speech processing. These results indicate specific mechanisms for self and other speech processing in the left and right STC beyond more general speech processing in the PTe. Third, other speech recognition in the context of listening to recorded self speech in experiment 2 led to largely symmetric activity in the STC and, additionally, in inferior frontal subregions. The latter were previously reported to be generally relevant for other speech perception and classification, but we found frontal activity only when other speech classification was challenged by recorded, not by active, self speech samples. Altogether, unlike the brain networks previously established for other speech perception without competition, active self speech during other speech perception seemingly leads to a neural reordering, functional reassignment, and unusual lateralization of AC and frontal brain activations.

    Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls

    Across many species, scream calls signal the affective significance of events to other agents. Scream calls have often been thought to be of a generic alarming and fearful nature, to signal potential threats, and to be recognized instantaneously, involuntarily, and accurately by perceivers. However, scream calls are more diverse in their affective signaling than merely fearfully alarming of a threat, and the broader sociobiological relevance of various scream types is therefore unclear. Here we used 4 different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least 6 psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were discriminated worst overall, were responded to most slowly, and were associated with lower perceptual sensitivity for their recognition compared with non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat processing bias in the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed and that, in contrast to the commonly observed threat processing bias in perceptual discriminations and neural processes, non-alarm screams, and positive screams in particular, seem to be processed with higher efficiency in speeded discriminations and in the implicit neural processing of various scream types.
