698 research outputs found

    Affective iconic words benefit from additional sound–meaning integration in the left amygdala

    Recent studies have shown that a similarity between the sound and the meaning of a word (i.e., iconicity) can help listeners more readily access the meaning of that word, but the neural mechanisms underlying this beneficial role of iconicity in semantic processing remain largely unknown. In an fMRI study, we focused on the affective domain and examined whether affective iconic words (e.g., high arousal in both sound and meaning) activate additional brain regions that integrate emotional information from different domains (i.e., sound and meaning). In line with our hypothesis, affective iconic words, compared to their non‐iconic counterparts, elicited additional BOLD responses in the left amygdala, a region known for its role in the multimodal representation of emotions. Functional connectivity analyses revealed that the observed amygdalar activity was modulated by an interaction of iconic condition and activations in two hubs representative of processing the sound (left superior temporal gyrus) and the meaning (left inferior frontal gyrus) of words. These results provide a neural explanation for the facilitative role of iconicity in language processing and indicate that language users are sensitive to the interaction between the sound and meaning aspects of words, suggesting the existence of iconicity as a general property of human language.
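
    The connectivity result described here (amygdala activity modulated by the interaction of iconicity condition and activity in the STG and IFG hubs) is the kind of effect typically tested with a psychophysiological-interaction (PPI) style regression. Below is a minimal sketch of such a model in NumPy; the data, the variable names (seed_ts, amygdala_ts), and the omission of HRF convolution and nuisance regressors are simplifying assumptions, not details taken from the study.

```python
import numpy as np

# Illustrative data: one fMRI time course per region (n_scans volumes) and a
# condition regressor coding iconic (1) vs. non-iconic (0) trials/blocks.
rng = np.random.default_rng(0)
n_scans = 200
seed_ts = rng.standard_normal(n_scans)           # e.g., left STG time course
condition = np.repeat([1.0, 0.0], n_scans // 2)  # iconic vs. non-iconic
amygdala_ts = rng.standard_normal(n_scans)       # left amygdala time course

# PPI-style design: main effects plus their product (both mean-centred so
# the interaction term is not dominated by the means).
seed_c = seed_ts - seed_ts.mean()
cond_c = condition - condition.mean()
interaction = seed_c * cond_c

X = np.column_stack([np.ones(n_scans), seed_c, cond_c, interaction])
betas, *_ = np.linalg.lstsq(X, amygdala_ts, rcond=None)

# betas[3] estimates how much seed-amygdala coupling changes with condition.
print("interaction (PPI-style) beta:", betas[3])
```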

    Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody

    This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100) and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450–700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
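
    The ERP comparisons reported here reduce to averaging the EEG over fixed post-stimulus windows per component and condition. A minimal NumPy sketch of that step follows; the array shape, sampling rate, and condition labels are illustrative assumptions, the N100/P200 window limits are typical values rather than the study's, and only the LPC window follows the 450–700 ms range given in the abstract.

```python
import numpy as np

# Assumed layout: epochs of shape (n_trials, n_channels, n_samples),
# sampled at 1000 Hz, with the epoch starting at stimulus onset.
sfreq = 1000
rng = np.random.default_rng(1)
epochs = rng.standard_normal((120, 64, 800))             # illustrative data
expression = rng.choice(["vocalization", "speech"], 120)

# Approximate latency windows (ms) for the components named in the abstract.
windows = {"N100": (80, 130), "P200": (150, 250), "LPC": (450, 700)}

def mean_amplitude(data, t0_ms, t1_ms):
    """Mean amplitude per trial over a latency window, pooled over channels."""
    i0, i1 = int(t0_ms * sfreq / 1000), int(t1_ms * sfreq / 1000)
    return data[:, :, i0:i1].mean(axis=(1, 2))

for name, (t0, t1) in windows.items():
    amps = mean_amplitude(epochs, t0, t1)
    for label in ("vocalization", "speech"):
        print(name, label, round(float(amps[expression == label].mean()), 3))
```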

    The Neurocognition of Prosody

    Prosody is one of the most undervalued components of language, despite the manifold purposes it fulfills. It can, for instance, help assign the correct meaning to compounds such as “white house” (linguistic function), or help a listener understand how a speaker feels (emotional function). However, brain-based models that take into account the role prosody plays in dynamic speech comprehension are still rare, probably because it has proven difficult to fully delineate the neurocognitive architecture underlying prosody. This review discusses clinical and neuroscientific evidence regarding both linguistic and emotional prosody. It will become apparent that prosody processing is a multistage operation and that its temporally and functionally distinct processing steps are anchored in a functionally differentiated brain network.

    Valence, arousal, and task effects in emotional prosody processing.

    Previous research suggests that emotional prosody processing is a highly rapid and complex process. In particular, it has been shown that different basic emotions can be differentiated in an early event-related brain potential (ERP) component, the P200. The P200 is often followed by later, long-lasting ERPs such as the late positive complex. The current experiment set out to explore to what extent emotionality and arousal can modulate these previously reported ERP components. In addition, we also investigated the influence of task demands (implicit vs. explicit evaluation of stimuli). Participants listened to pseudo-sentences (sentences with no lexical content) spoken in six different emotions or in a neutral tone of voice while they rated either the arousal level of the speaker or their own arousal level. Results confirm that different emotional intonations are first differentiated in the P200 component, reflecting an initial emotional encoding of the stimulus, possibly including a valence-tagging process. A marginally significant arousal effect was also found in this time window, with high-arousing stimuli eliciting a stronger P200 than low-arousing stimuli. The P200 component was followed by a long-lasting positive ERP between 400 and 750 ms. In this late time window, both emotion and arousal effects were found. No effects of task were observed in either time window. Taken together, the results suggest that emotion-relevant details are robustly decoded during both early and late processing stages, whereas arousal information is only reliably taken into consideration at a later stage of processing.

    Neural correlates of the affective properties of spontaneous and volitional laughter types

    Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within the spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), with this region responding most strongly to laughs rated at the extremes of the authenticity scale. While previous studies have claimed a role for the right STG in a bipolar representation of emotional valence, we instead argue that this region may exhibit a relatively categorical response to emotional signals, whether positive or negative.
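
    The itemwise analysis described here corresponds to parametric modulation: trial-by-trial ratings enter the model as linear and quadratic terms so that both monotonic increases and the reported U-shaped (quadratic) profile can be detected. The sketch below illustrates the idea with ordinary least squares on made-up data; a real fMRI analysis would build these modulators into an event-related design and convolve them with an HRF.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items = 90
ratings = rng.uniform(1, 7, n_items)   # e.g., itemwise authenticity ratings
bold = rng.standard_normal(n_items)    # illustrative per-item responses

# Mean-centre the linear term, build a quadratic term, and orthogonalise the
# quadratic term against the linear one so the two effects are separable.
lin = ratings - ratings.mean()
quad = lin ** 2
quad = quad - lin * (quad @ lin) / (lin @ lin)
quad = quad - quad.mean()

X = np.column_stack([np.ones(n_items), lin, quad])
betas, *_ = np.linalg.lstsq(X, bold, rcond=None)

# A positive quadratic beta means stronger responses at both extremes of the
# rating scale, the profile reported here for right-dominant STG/STS.
print("linear beta:", betas[1], "quadratic beta:", betas[2])
```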

    Mark My Words: Tone of Voice Changes Affective Word Representations in Memory

    The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember them for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together, these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents.

    The Influence of Emotional Content on Event-Related Brain Potentials during Spoken Word Processing

    In our everyday lives, language is an indispensable means of communication and of carrying out social interactions. Language can be divided into two modalities, the auditory and the visual: the auditory modality comprises spoken language, whereas the visual modality is formed by the written part of language. Even though a day without speaking is unimaginable for most of us, previous research has neglected the processing of emotional content in spoken language, in contrast to the processing of written language. The processing of the emotional content of written words has been examined extensively in a large number of studies using event-related potentials (ERPs). By contrast, emotional content in spoken language has been investigated only occasionally, and mostly either in its interaction with emotional prosody or with a focus on the existence of one specific ERP component. It therefore remains an open question how, and at which processing stages, the emotional content of spoken language influences event-related potentials independently of emotional prosody, and whether there are commonalities with the processing of written emotional words. In this dissertation I examine the processing of single spoken words with positive, neutral, and negative content, guided by the question of whether the emotional content of spoken words elicits emotion effects in ERPs and whether these are comparable to those shown for written words. In the first study underlying this dissertation, spoken words with emotional and neutral content were presented to participants at two different volume levels, in order to account for possible interactions with bottom-up attention effects driven by stimulus size. For visual stimuli with emotional content, such as pictures or written words, stimulus size has been shown to enhance emotion-related ERPs, for example at the level of the early posterior negativity (EPN). The study examined whether this increased relevance of larger visual stimuli might transfer to the auditory modality. Negative emotional content led to an increased frontal positivity and a parieto-occipital negativity between 370 and 530 milliseconds. This component resembles the visual EPN, although the negativity extends to more central areas of the scalp, raising the question of whether it could represent the auditory counterpart of a visual EPN. Crucially, no interaction of this emotion-related ERP component with the volume factor was observed. The following points of comparison suggest broader differences between visual and auditory language processing: the missing interaction between stimulus size and emotion effects, the differences in the topographies of the emotion effects, and the latencies that differ from those of the visual EPN. The second part of this dissertation is aimed at a more direct comparison of emotion effects in the visual and auditory modalities.
    To this end, a second study was conducted in which participants were presented with the same words in written and spoken form. The spoken words were produced both by a computer voice (Experiment 1) and by a human voice (Experiment 2). This study was designed to examine the existence of an "auditory EPN" and its boundary conditions, and in addition to test the hypothesis that the higher social relevance of a human voice increases the emotion effects. Emotion effects were found in both experiments: for written words between 230 and 400 milliseconds, in the time range of the early posterior negativity, and for spoken words between 460 and 510 milliseconds. Interestingly, when the distribution of the ERP differences between emotional and neutral spoken words is taken into account, the effects show an even greater similarity to the visual EPN than the results of the first part of this dissertation. A source localisation yielded comparable neural generators in the superior parietal lobe (SPL) and inferior temporal lobe (IPL) in both the visual and the "auditory EPN" time window. These findings point to commonalities in the processing of emotional content across modalities that are supported, at least in part, by the same neural system. Nevertheless, these commonalities are surprising, since the visual EPN is assumed to reflect enhanced sensory encoding of emotional stimuli in visual areas. The emotion effects described above and demonstrated in these studies differ with respect to their latencies, their topographies, and the valence that elicits the effect (positive or negative). In the final part of the dissertation, differences between the studies were therefore examined systematically in order to identify potential causes of these differences in the emotion effects. Gender differences in the topographies emerged in Study 2, but they cannot explain the differences in emotion effects found between the two studies. Both studies are assumed to elicit the same auditory emotion-related component (AEK) in a comparable time window (Study 1: 477–530 ms; Study 2: 464–515 ms), which in the first study was preceded by an effect with an N400-like distribution. Although no interactions between emotional content and volume could be demonstrated, I assume that the volume manipulation in the first study changed the experimental context and thereby triggered the earlier effect. Even though no verifiable causes of the described differences between the emotion effects could be identified, this dissertation demonstrates the existence of an auditory emotion-related component that is elicited by emotional (compared to neutral) content during spoken language processing. This component is reflected in an anterior positivity and a posterior negativity between 460 and 520 milliseconds after word onset, and it appears consistently, regardless of the social significance of the speaker's voice or of the volume manipulation.
    Regarding a comparison of the underlying neural networks during the processing of the content of spoken and written words, it can be assumed that this processing activates brain areas that lie, at least in part, in the SPL and IPL. Although the distribution of the AEK shows a high similarity to the visual EPN, it cannot be assumed that this effect represents an auditory counterpart, because a typical EPN distribution emerges only when difference waves between emotional and neutral stimuli are computed. The resulting posterior negativity reflects an increased activation of visual areas elicited by emotional stimuli. The analysis of the underlying neural generators for the difference between auditory emotional and neutral stimuli yielded no significant results. Nevertheless, the underlying topographies of the individual emotion categories show that the similarity at the level of the difference waves results from completely different distributions. Future research would need to control the auditory stimulus material more strictly with respect to word length or word recognition point, in order to reduce the temporal jitter in the data and thus to better determine the neural generators of an auditory emotion-related component.
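
    A central point of the argument above is that the EPN-like pattern appears only in difference waves: the auditory emotion-related component (AEK) is defined by subtracting neutral from emotional ERPs and averaging the result over roughly 460–520 ms after word onset. A minimal sketch of that computation follows; the channel count, sampling rate, and data are assumed for illustration only.

```python
import numpy as np

# Assumed layout: condition-averaged ERPs of shape (n_channels, n_samples),
# sampled at 500 Hz, with the epoch starting at word onset.
sfreq = 500
rng = np.random.default_rng(3)
erp_emotional = rng.standard_normal((64, 600))   # illustrative grand averages
erp_neutral = rng.standard_normal((64, 600))

# Difference wave (emotional minus neutral), averaged over the AEK window.
diff = erp_emotional - erp_neutral
i0, i1 = int(0.460 * sfreq), int(0.520 * sfreq)
topography = diff[:, i0:i1].mean(axis=1)         # one value per channel

# The AEK pattern described above would show up here as positive values at
# anterior channels and negative values at posterior channels.
print("per-channel difference amplitudes:", topography.round(3))
```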

    Effects of Trait Anxiety on Threatening Speech Processing: Implications for Models of Emotional Language and Anxiety

    Speech can convey emotional meaning through different channels, two of which are regarded as particularly relevant in models of emotional language: prosody and semantics. These have been widely studied in terms of their production and processing, but individual differences between listeners are sometimes overlooked. The present thesis examines whether greater intrinsic levels of anxiety affect the processing of threatening speech. Trait anxiety is the predisposition to increased cognitions such as worry (over-thinking of the future) and emotions such as angst (a feeling of discomfort and tension), and can be reflected in an overactive behavioural inhibition system. As a result, according to models of emotional language and anxiety, emotional prosody/semantics and anxiety might have overlapping neural areas/routes and processing phases. Thus, threatening semantics or prosody could have differential effects on trait anxiety depending on the nature of this overlap. This problem is approached using behavioural and electroencephalographic (EEG) measures. Three dichotic listening experiments demonstrate that, at the behavioural level, trait anxiety does not modulate lateralisation when stimuli convey threatening prosody, threatening semantics, or both. However, these and another non-dichotic experiment indicate that greater anxiety induces substantially slower responses. An EEG experiment shows that this phenomenon has a very clear neural signature at late processing phases (~600 ms). Exploratory source localisation analyses indicate the involvement of areas predicted by the models, including portions of limbic, temporal and prefrontal cortex. The proposed explanation is that threatening speech can induce anxious people to over-engage with stimuli, and that this disrupts late-phase processes associated with orientation/deliberation, as proposed by anxiety models. This process is independent of information type until a later phase occurring after speech comprehension (e.g. response preparation/execution). Given this, a new model of threatening language processing is proposed, which extends models of emotional language processing by incorporating an orientation/deliberation phase from anxiety models.
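
    The dichotic-listening results summarised above rest on a lateralisation measure computed from performance for each ear. A common choice is the laterality index (R - L) / (R + L); the sketch below uses made-up accuracies, not data from the thesis.

```python
import numpy as np

# Illustrative counts of correctly reported targets per ear, one value per
# participant (values are made up).
right_ear_correct = np.array([34, 40, 29, 37])
left_ear_correct = np.array([28, 31, 30, 26])

# Laterality index: positive values indicate a right-ear (left-hemisphere)
# advantage, negative values a left-ear advantage.
li = (right_ear_correct - left_ear_correct) / (right_ear_correct + left_ear_correct)

print("laterality index per participant:", np.round(li, 3))
print("group mean:", round(float(li.mean()), 3))
```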

    The automatic processing of non-verbal emotional vocalizations: an electrophysiological investigation

    Master's dissertation in Psychology (specialization in Clinical and Health Psychology). The human voice is a critical channel for the exchange of information about the emotionality of a speaker. In this sense, it is important to investigate the neural correlates of the processing of non-verbal vocalizations, even when listeners are not attending to these events. We developed an oddball paradigm in which emotional (happy and angry) and neutral vocalizations were presented both as standard and deviant stimuli in four conditions: Happy, Angry, Neutral 1 (neutral vocalizations in an angry context), and Neutral 2 (neutral vocalizations in a happy context). To unfold the time course of the auditory change detection mechanisms indexed by the Mismatch Negativity (MMN) component, the event-related potentials (ERP) methodology was used. ERPs were recorded in 17 healthy subjects. The results showed that the Happy and Neutral 2 conditions elicited a more negative MMN amplitude than the Angry condition at midline (Fz, Cz) electrodes. Overall, the results suggest that automatic auditory change detection is enhanced for positive and neutral (in a happy context) vocalizations relative to negative stimuli.
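
    The MMN analysed here is obtained by subtracting the ERP to a vocalization presented as a frequent standard from the ERP to the same vocalization presented as a rare deviant, then averaging over midline electrodes in the MMN latency range. A minimal NumPy sketch follows; the array shapes, the latency window, and the Fz/Cz channel indices are assumptions for illustration.

```python
import numpy as np

# Assumed layout: condition-averaged ERPs of shape (n_channels, n_samples),
# sampled at 500 Hz, with the epoch starting at vocalization onset.
sfreq = 500
rng = np.random.default_rng(4)
erp_deviant = rng.standard_normal((32, 400))    # e.g., happy voice as deviant
erp_standard = rng.standard_normal((32, 400))   # same voice as standard

midline = [10, 15]                              # assumed indices of Fz and Cz
i0, i1 = int(0.10 * sfreq), int(0.25 * sfreq)   # typical MMN window (100-250 ms)

# MMN = deviant minus standard; more negative values indicate stronger
# automatic change detection.
mmn = erp_deviant - erp_standard
mmn_amplitude = mmn[midline, i0:i1].mean()

print("mean MMN amplitude at Fz/Cz:", round(float(mmn_amplitude), 3))
```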

    “It's Not What You Say, But How You Say it”: A Reciprocal Temporo-frontal Network for Affective Prosody

    Humans communicate emotion vocally by modulating acoustic cues such as pitch, intensity and voice quality. Research has documented how the relative presence or absence of such cues alters the likelihood of perceiving an emotion, but the neural underpinnings of acoustic cue-dependent emotion perception remain obscure. Using functional magnetic resonance imaging in 20 subjects, we examined a reciprocal circuit consisting of superior temporal cortex, amygdala and inferior frontal gyrus that may underlie affective prosodic comprehension. Results showed that increased saliency of emotion-specific acoustic cues was associated with increased activation in superior temporal cortex [planum temporale (PT), posterior superior temporal gyrus (pSTG), and posterior middle temporal gyrus (pMTG)] and amygdala, whereas decreased saliency of acoustic cues was associated with increased inferior frontal activity and temporo-frontal connectivity. These results suggest that sensory-integrative processing is facilitated when the acoustic signal is rich in affective information, yielding increased activation in temporal cortex and amygdala. Conversely, when the acoustic signal is ambiguous, greater evaluative processes are recruited, increasing activation in the inferior frontal gyrus (IFG) and IFG–STG connectivity. Auditory regions may thus integrate acoustic information with amygdala input to form emotion-specific representations, which are evaluated within inferior frontal regions.