Social power and recognition of emotional prosody: High power is associated with lower recognition accuracy than low power
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. While a growing body of research has explored how emotions are processed from speech in general, little is known about how psycho-social factors such as social power shape the perception of vocal emotional attributes. The present studies therefore explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results suggest for the first time that individuals experiencing high or low power perceive emotional language differently.
Histaminylation of glutamine residues is a novel posttranslational modification implicated in G-protein signaling
Posttranslational modifications (PTMs) have been shown to be essential for protein function and signaling. Here we report the identification of a novel modification, protein transfer of histamine, and provide evidence for its function in G protein signaling. Histamine, known as a neurotransmitter and mediator of the inflammatory response, was found incorporated into mastocytoma proteins. Histaminylation was dependent on transglutaminase II. Mass spectrometry confirmed histamine modification of the small and heterotrimeric G proteins Cdc42, Galphao1 and Galphaq. The modification was specific for glutamine residues in the catalytic core and triggered their constitutive activation. TGM2-mediated histaminylation is thus a novel PTM that functions in G protein signaling. Protein alpha-monoaminylations, which include histaminylation, serotonylation, dopaminylation and norepinephrinylation, hence emerge as a novel class of regulatory PTMs.
Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia
The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider that (i) emotional speech processing entails multiple processing steps, and (ii) the BG may engage in one rather than another of these steps. In the present study we investigated three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in qualitatively different ways. Specifically, we explored early implicit emotional speech processing (probe verification) in an ERP experiment, followed by an explicit behavioral emotion recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or to neutral sentences. In line with previous evidence, patients and healthy controls showed differentiation of emotional and neutral sentences in the P200 component (emotional salience detection) and in a following negative-going brain wave (meaning-related processing). However, behavioral recognition (the identification stage) of emotional sentences was impaired in BG patients but not in healthy controls. The current data provide further support that the BG are involved in late, explicit rather than early emotional speech processing stages.
Morphological encoding beyond slots and fillers: An ERP study of comparative formation in English
One important organizational property of morphology is competition: different means of expression compete for encoding the same grammatical function. In the current study, we examined the nature of this control mechanism by testing the formation of comparative adjectives in English during language production. Event-related brain potentials (ERPs) were recorded during cued silent production, the first study of this kind for comparative adjective formation. We specifically examined the ERP correlates of producing synthetic relative to analytic comparatives, e.g. angrier vs. more angry. A frontal, bilaterally distributed, enhanced negative-going waveform for analytic comparatives (vis-à-vis synthetic ones) emerged approximately 300 ms after the (silent) production cue. We argue that this ERP effect reflects a control mechanism that constrains grammatically based computational processes (viz. "more" comparative formation). We also address the possibility that this particular ERP effect belongs to a family of previously observed negativities reflecting cognitive control monitoring rather than morphological encoding processes per se.
Occurrence of trace elements in river sediments and in ground and surface water in the Gatumba mining region, Rwanda
Due to intensive land use by mining and agriculture, the water bodies in the Gatumba Mining District are strongly affected by substance discharge from spoil heaps and by erosion. Investigations of trace element concentrations during one dry and one rainy season showed that a health hazard for the local population cannot currently be assumed. As a rule, the water samples from the dry season tend to show higher concentrations than those from the rainy season. The sediment concentrations show no corresponding trend.
Recognizing Emotions in a Foreign Language
Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can likewise be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language (an "in-group advantage"). Our findings argue that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.
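The claim that decoding accuracy exceeded chance can be illustrated with a simple binomial test. A minimal sketch, assuming a five-alternative forced choice (chance = 1/5) and hypothetical trial counts; this is not the authors' actual analysis:

```python
from scipy.stats import binomtest

# Placeholder counts: one listener, 100 pseudo-utterance trials, 5 emotion
# categories, so chance accuracy is 1/5. These numbers are illustrative.
n_trials, n_correct = 100, 41
result = binomtest(n_correct, n_trials, p=1/5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4g}")
```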
Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces
Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior towards facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") spoken in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 ms of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows (0–1250 ms, 1250–2500 ms, 2500–5000 ms) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (an emotion-congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
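A minimal sketch of how looks might be aggregated into the three analysis windows; the fixation record format (onset in ms, duration in ms, fixated face) is a hypothetical structure, not the authors' data format:

```python
from collections import defaultdict

WINDOWS = [(0, 1250), (1250, 2500), (2500, 5000)]  # ms, as in the design

def bin_fixations(fixations):
    """Aggregate look frequency and total duration per (window, face)."""
    stats = defaultdict(lambda: {"count": 0, "duration": 0})
    for onset, duration, face in fixations:
        for i, (start, end) in enumerate(WINDOWS):
            if start <= onset < end:
                stats[(i, face)]["count"] += 1
                stats[(i, face)]["duration"] += duration
                break
    return dict(stats)

# Hypothetical usage: (onset_ms, duration_ms, face_emotion) tuples.
fix = [(300, 420, "fear"), (900, 200, "anger"), (2600, 800, "fear")]
print(bin_fixations(fix))
```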
Detection of emotions in Parkinson's disease using higher order spectral features from brain's electrical activity
Non-motor symptoms of Parkinson's disease (PD) involving cognition and emotion have been receiving progressively more attention in recent times. Electroencephalogram (EEG) signals, as a record of central nervous system activity, can reflect the underlying true emotional state of a person. This paper presents a computational framework for classifying PD patients against healthy controls (HC) using emotional information from the brain's electrical activity.
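Higher-order spectral analysis of EEG typically means bispectrum-derived measures. The sketch below shows one common way such features can be computed from an EEG epoch and fed to a classifier; the direct FFT estimator, the three summary features, the SVM, and all data are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.svm import SVC

def bispectrum(x, nfft=128, seg_len=128):
    """Average the direct (FFT-based) bispectrum estimate over segments."""
    n_seg = len(x) // seg_len
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for k in range(n_seg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        seg = seg - seg.mean()                      # remove DC offset
        X = np.fft.fft(seg, nfft)
        for f1 in range(nfft // 2):
            for f2 in range(f1 + 1):                # exploit symmetry f2 <= f1
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / max(n_seg, 1)

def hos_features(x):
    """Summary features of the bispectrum magnitude (mean, max, entropy)."""
    mag = bispectrum(x)
    p = mag / (mag.sum() + 1e-12)                   # normalize to a distribution
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))  # bispectral entropy
    return np.array([mag.mean(), mag.max(), entropy])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((20, 512))         # placeholder EEG epochs
    labels = rng.integers(0, 2, 20)                 # placeholder PD/HC labels
    feats = np.vstack([hos_features(e) for e in epochs])
    clf = SVC(kernel="rbf").fit(feats, labels)
    print("training accuracy:", clf.score(feats, labels))
```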
Emotional Cues during Simultaneous Face and Voice Processing: Electrophysiological Insights
Both facial expression and tone of voice represent key signals of emotional communication, but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, embedded within a monkey face and voice recognition task. To investigate the temporal unfolding of the processing of affective information from human face–voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 healthy subjects. N100, P200, N250, and P300 components were observed at electrodes in the frontal-central region, while P100, N170, and P270 components were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not in the parietal-occipital region (P100, N170, and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between the angry and happy conditions. The results suggest that a general effect of emotion on audiovisual processing can emerge as early as 200 ms (P200 peak latency) post stimulus onset, despite implicit affective processing task demands, and that this effect is mainly distributed over the frontal-central region.
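Component amplitudes and latencies of this kind are usually measured as window-constrained peaks on the averaged waveform. A minimal sketch under that assumption; the sampling rate, search windows, and placeholder data are hypothetical, not the study's parameters:

```python
import numpy as np

FS = 500  # assumed sampling rate in Hz; epoch starts at stimulus onset

def peak_measure(erp, window_ms, positive=True):
    """Return (amplitude, latency in ms) of the extreme value in a window."""
    lo, hi = (int(t * FS / 1000) for t in window_ms)
    segment = erp[lo:hi]
    idx = np.argmax(segment) if positive else np.argmin(segment)
    return segment[idx], (lo + idx) * 1000 / FS

# Hypothetical usage on a placeholder grand-average waveform.
rng = np.random.default_rng(1)
erp = rng.standard_normal(500)                      # 1 s of averaged data
p200_amp, p200_lat = peak_measure(erp, (150, 250), positive=True)
n250_amp, n250_lat = peak_measure(erp, (200, 300), positive=False)
print(f"P200: {p200_amp:.2f} uV at {p200_lat:.0f} ms")
print(f"N250: {n250_amp:.2f} uV at {n250_lat:.0f} ms")
```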