
    The Look of Fear from the Eyes Varies with the Dynamic Sequence of Facial Actions

    Most research on the ability to interpret expressions from the eyes has used static information. This research investigates whether the dynamic sequence of facial actions in the eye region influences perceivers' judgments. Dynamic fear expressions involving the eye region and eyebrows were created that systematically differed in the sequential occurrence of facial actions. Participants rated the intensity of sequential fear expressions, either in addition to a simultaneous, full-blown expression (Experiment 1) or in combination with different levels of eye gaze (Experiment 2). The results showed that the degree of attributed emotion and the appraisal ratings differed as a function of the sequence of facial expressions of fear, with direct gaze resulting in stronger subjective responses. The findings challenge current notions surrounding the study of static facial displays from the eyes and suggest that emotion perception is a dynamic process shaped by the time course of the facial actions of an expression. Possible implications for affective computing and clinical research are discussed.

    Introducing the GEneva Music-Induced Affect Checklist (GEMIAC): A Brief Instrument for the Rapid Assessment of Musically Induced Emotions

    The systematic study of music-induced emotions requires standardized measurement instruments to reliably assess the nature of affective reactions to music, which tend to go beyond garden-variety basic emotions. We describe the development and conceptual validation of a checklist for the rapid assessment of music-induced affect, designed to extend and complement the Geneva Emotional Music Scale. The checklist contains a selection of affect and emotion categories that are frequently used in the literature to refer to emotional reactions to music. Its development focused on an empirical investigation of the semantic structure of the relevant terms, combined with fuzzy classes derived from a series of hierarchical cluster analyses. Two versions of the checklist, for assessing the intensity and frequency of affective responses to music, are proposed.
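
    A minimal sketch of the kind of hierarchical cluster analysis referred to above, applied to pairwise similarity ratings of emotion terms. The terms, the similarity matrix, and the cut-off threshold are illustrative assumptions, not the GEMIAC data or procedure.

```python
# Illustrative hierarchical clustering of emotion terms from a hypothetical
# similarity matrix; the terms, ratings, and threshold are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

terms = ["joy", "tenderness", "nostalgia", "sadness", "tension", "power"]

# Hypothetical mean similarity ratings in [0, 1] (symmetric, 1 on the diagonal).
sim = np.array([
    [1.0, 0.7, 0.4, 0.1, 0.2, 0.5],
    [0.7, 1.0, 0.6, 0.2, 0.1, 0.3],
    [0.4, 0.6, 1.0, 0.5, 0.2, 0.2],
    [0.1, 0.2, 0.5, 1.0, 0.4, 0.1],
    [0.2, 0.1, 0.2, 0.4, 1.0, 0.6],
    [0.5, 0.3, 0.2, 0.1, 0.6, 1.0],
])

dist = 1.0 - sim                     # convert similarity to distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method="average")    # agglomerative clustering

labels = fcluster(Z, t=0.6, criterion="distance")  # cut the dendrogram at 0.6
for cluster_id in sorted(set(labels)):
    members = [t for t, c in zip(terms, labels) if c == cluster_id]
    print(f"cluster {cluster_id}: {', '.join(members)}")
```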

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can likewise be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings suggest that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.

    Emotion-Antecedent Appraisal Checks: EEG and EMG datasets for Goal Conduciveness, Control and Power

    This document describes the full details of the second data set (Study 2) used in Coutinho et al. (to appear). The electroencephalography (EEG) and facial electromyography (EMG) signals included in this data set, now made public, were collected in the context of a previous study by Gentsch, Grandjean, and Scherer (2013) that addressed three fundamental questions regarding the mechanisms underlying the appraisal process: whether appraisal criteria are processed (1) in a fixed sequence, (2) independently of each other, and (3) by different neural structures or circuits. In that study, a gambling task was used in which feedback stimuli simultaneously manipulated information about goal conduciveness, control, and power appraisals. EEG was recorded during task performance, together with facial EMG, to measure, respectively, cognitive processing and efferent responses stemming from the appraisal manipulations.

    The Munich LSTM-RNN Approach to the MediaEval 2014 "Emotion in Music" Task

    In this paper, we describe TUM's approach to the MediaEval 2014 "Emotion in Music" task. The goal of this task is to automatically estimate the emotions expressed by music (in terms of Arousal and Valence) in a time-continuous fashion. Our system uses Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for dynamic Arousal and Valence regression. We used two different sets of acoustic and psychoacoustic features that have previously proven effective for emotion prediction in music and speech. The best model yielded an average Pearson's correlation coefficient of 0.354 (Arousal) and 0.198 (Valence), and an average Root Mean Squared Error of 0.102 (Arousal) and 0.079 (Valence).
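
    A minimal sketch of time-continuous Arousal/Valence regression with an LSTM, evaluated with Pearson's correlation and RMSE as above. The feature dimensionality, network size, synthetic data, and training settings are assumptions for illustration; they are not the TUM system's features, architecture, or results.

```python
# Sketch of per-frame arousal/valence regression with an LSTM (PyTorch).
# All dimensions and the synthetic data below are illustrative assumptions.
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_features=65, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)      # per-frame arousal and valence

    def forward(self, x):                    # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.out(h)                   # (batch, time, 2)

def pearson_r(pred, target):
    pred, target = pred - pred.mean(), target - target.mean()
    return float((pred * target).sum() / (pred.norm() * target.norm() + 1e-8))

# Synthetic stand-in data: 32 clips, 60 frames each, 65 acoustic features.
torch.manual_seed(0)
X = torch.randn(32, 60, 65)
Y = torch.tanh(X.mean(dim=-1, keepdim=True)).repeat(1, 1, 2)  # fake targets in [-1, 1]

model = EmotionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):                      # full-batch training, illustration only
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()

with torch.no_grad():                        # evaluate with Pearson's r and RMSE
    pred = model(X)
    for i, dim in enumerate(["Arousal", "Valence"]):
        p, t = pred[..., i].reshape(-1), Y[..., i].reshape(-1)
        rmse = torch.sqrt(((p - t) ** 2).mean()).item()
        print(f"{dim}: r = {pearson_r(p, t):.3f}, RMSE = {rmse:.3f}")
```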

    FACSGen 2.0 animation software: Generating 3D FACS-valid facial expressions for emotion research

    In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with the FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.

    Emotion-Antecedent Appraisal Checks: EEG and EMG datasets for Novelty and Pleasantness [Data set]

    This document describes the full details of the first data set (Study 1) used in Coutinho et al. (to appear). The electroencephalography (EEG) and facial electromyography (EMG) signals included in this data set, now made public, were collected in the context of a previous study by Peer, Grandjean, and Scherer (2014) that addressed three fundamental questions regarding the mechanisms underlying the appraisal process: whether appraisal criteria are processed (a) in a fixed sequence, (b) independently of each other, and (c) by different neural structures or circuits. In that study, an oddball paradigm with affective pictures was used to experimentally manipulate novelty and intrinsic pleasantness appraisals. EEG was recorded during task performance, together with facial EMG, to measure, respectively, cognitive processing and efferent responses stemming from the appraisal manipulations.