3 research outputs found

    Heart Rate Responses to Synthesized Affective Spoken Words

    The present study investigated the effects of brief synthesized spoken words with emotional content on ratings of emotion and on heart rate responses. Heart rate was measured while twenty participants listened to a set of emotionally negative, neutral, and positive words produced by speech synthesizers. At the end of the experiment, ratings of emotional experience were also collected. The results showed that the ratings of the words were in accordance with their valence. Heart rate deceleration was significantly strongest and most prolonged in response to the negative stimuli. The findings are the first to suggest that brief spoken emotionally toned words evoke a heart rate response pattern similar to that found earlier for more sustained emotional stimuli.

    Physically emotional: the role of embodied emotions from encoding to memory

    Theories of embodied cognition hold that the perception of an emotional stimulus can trigger a simulation of the corresponding state in the motor, somatosensory, and affective systems. Amongst other bodily reactions, such embodied simulations are thought to be reflected in facial expressions in accordance with the emotional connotation of the presented stimulus – a phenomenon also referred to as facial motor resonance. Chapter 1 reviews the theories of embodied cognition, in general, and facial motor resonance, in particular. The aim of the present thesis was to further define the function of embodied simulations, reconciling previously inconsistent results concerning the level at which embodied simulations affect the processing of emotional information, and to explore uncharted aspects of embodiment theories, such as their role in memory for emotional information. In Chapter 2, I investigated the hypothesis that embodied simulations play a key role only in processing emotional information (happy and sad sentences and faces) that is low in emotional intensity or difficult to encode. This hypothesis was tested in a behavioral experiment comparing a group of participants undergoing subcutaneous cosmetic injections of Botulinum Toxin-A (Botox) with a matched control group. The results confirmed the hypothesis: participants in the Botox group, but not those in the control group, rated emotional sentences and faces as less emotional after the Botox treatment. Furthermore, they were slower at identifying sad faces as sad after the treatment. The critical nuance of these findings was that only stimuli of moderate emotional intensity were affected. Given the findings of Chapter 2, the question arose as to whether facial motor resonance, in addition to playing a role in the initial processing and recognition of emotional content, also determines its retrieval.
This topic was investigated in the study reported in Chapter 3, in which eighty participants performed a memory task with emotional and neutral words. The task consisted of an encoding and a retrieval phase. Facial muscles were blocked by a hardening facial mask in one of four conditions: during encoding, during retrieval, during both encoding and retrieval, or never (control). The results showed that memory for emotional words decreased significantly if embodiment was blocked at any point during the experiment (during encoding, during retrieval, or during both), in contrast to the control condition. These results suggest that facial motor resonance is involved in both the encoding and the retrieval of emotional words. In Chapter 4, this line of research was extended to the processing of emotional content in a second language (L2). In a classical memory task involving an encoding and a retrieval phase, thirty-two Spanish/English late bilinguals were presented with emotional (happy and angry) and neutral words. Electromyographic (EMG) activity and skin conductance (SC) were recorded during the encoding phase. The results suggest that the emotionality of an L2 is not only reduced as compared with a first language (L1), but also less embodied. This was indicated both by the absence of the Enhanced Emotional Memory (EEM) effect in L2 and by partially decreased and delayed EMG and SC activity in response to emotional words in L2 as compared with L1. If facial motor resonance is involved in the recollection of emotional information, what is its role in forgetting emotional information? This question was pursued in the study reported in Chapter 5, employing the directed forgetting (DF) paradigm, which involves the presentation of a stimulus (e.g. a word), followed by a cue to “remember” (R-cue) or to “forget” (F-cue).
Twenty-one participants were instructed to remember or to intentionally forget neutral, negative, and positive words. EMG from the zygomaticus and corrugator muscles was recorded simultaneously with event-related potentials (ERPs). The behavioral results showed that neutral and emotional words were forgotten at equal rates. However, the type of word and the cue instruction interactively modulated facial motor resonance, as measured by EMG. Upon R-cues, the muscle activation patterns for both negative and positive words were significantly enhanced, in contrast to the facial motor resonance evoked by F-cues. It was speculated that the increase in facial motor resonance reflects active rehearsal, whereas the decrease is associated with active suppression mechanisms. This assumption was supported by the ERP data, which indicated that the successful forgetting of affective words required more active suppression, as indexed by enhanced frontal positivities. In contrast, intentional encoding of emotional words followed by R-cues appeared to be facilitated by enhanced P3 and late positive potential (LPP) components emerging from centro-parietal areas. These components have been hypothesized to reflect rehearsal and memory consolidation processes. Overall, the present results suggest that embodied simulations help with the processing of indefinite emotional information and assist with the formation of enduring representations of emotional stimuli. The implications of these findings for theories of embodied cognition, in general, and for emotion processing, in particular, are discussed in Chapter 6.

    The Perception of Emotion from Acoustic Cues in Natural Speech

    Knowledge of how humans perceive emotional speech is essential for the development of emotion recognition in speech systems and of emotional speech synthesis. Given the growing trend towards research on spontaneous, real-life data, the aim of the present thesis is to examine human perception of emotion in naturalistic speech. Although many emotional speech corpora are available, most contain simulated expressions. There therefore remains a compelling need for naturalistic speech corpora that are appropriate and freely available for research. Accordingly, our initial aim was to acquire suitable naturalistic material and examine its emotional content based on listener perceptions. A web-based listening tool was developed to collect ratings from large-scale listening groups. The emotional content present in the speech material was demonstrated by performing perception tests on conveyed levels of Activation and Evaluation. As a result, labels were determined that signified the emotional content and thus contributed to the construction of a naturalistic emotional speech corpus. In line with the literature, the ratings obtained from the perception tests suggested that Evaluation (or hedonic valence) is not identified as reliably as Activation. Emotional valence can be conveyed through both semantic and prosodic information, and the meaning of one may facilitate, modify, or conflict with the meaning of the other, particularly in naturalistic speech. The subsequent experiments investigated this by comparing ratings from perception tests of non-verbal speech with those of verbal speech. Non-verbal speech was rendered by low-pass filtering, for which suitable filtering conditions were determined in preliminary perception tests. The results suggested that non-verbal naturalistic speech provides sufficiently discernible levels of Activation and Evaluation.
It appears that the perception of Activation and Evaluation is affected by low-pass filtering, but the effect is relatively small. Moreover, the results suggest a similar trend in agreement levels between verbal and non-verbal speech. To date, it remains difficult to determine unique acoustic patterns for the hedonic valence of emotion, which may be due to inadequate labels or an incorrect selection of acoustic parameters. This study has implications for the labelling of emotional speech data and for the determination of salient acoustic correlates of emotion.
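The low-pass filtering used to render speech non-verbal works by attenuating the higher frequencies that carry most phonetic detail, while preserving low-frequency prosodic cues such as pitch and intensity contours. The thesis does not specify its filter design or cutoff, so the first-order IIR filter and the 400 Hz cutoff below are illustrative assumptions, not the method actually used:

```python
import math

def lowpass(samples, cutoff_hz, sample_rate_hz):
    """Apply a first-order IIR low-pass filter to a list of samples.

    Frequencies well above cutoff_hz are strongly attenuated, so a
    speech signal filtered this way keeps its prosodic contour but
    loses most of its intelligible phonetic content.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / sample_rate_hz                # sampling interval
    alpha = dt / (rc + dt)                   # smoothing factor in (0, 1)
    out = []
    prev = 0.0
    for x in samples:
        # Exponential smoothing: each output moves a fraction alpha
        # toward the current input, suppressing fast oscillations.
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# Demonstration: a 100 Hz tone passes nearly unchanged, while a
# 4000 Hz tone (speech-formant range) is strongly attenuated.
fs = 16000  # assumed sampling rate in Hz
low_tone  = [math.sin(2 * math.pi * 100  * n / fs) for n in range(fs)]
high_tone = [math.sin(2 * math.pi * 4000 * n / fs) for n in range(fs)]
filtered_low  = lowpass(low_tone,  400, fs)
filtered_high = lowpass(high_tone, 400, fs)
```

In practice a higher-order filter (e.g. a Butterworth design) would give a sharper cutoff; the first-order form is shown only because it makes the attenuation mechanism explicit.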