5 research outputs found

    On the Perception of Dynamic Emotional Expressions: A Cross-Cultural Comparison

    No full text
    This work represents the first comprehensive study, with original new results, on the perception of dynamic visual and auditory emotional information. It explores the power of the visual and vocal channels in conveying emotional cues, exploiting realistic, dynamic and mutually related emotional vocal and facial stimuli, and reports a cross-cultural comparison of how people from different Western countries perceive dynamic emotional stimuli. By evaluating the subjective perception of emotional states in the single channels (either visual or auditory) and in the combined channels, the authors attempt to answer the following questions:
    - In a body-to-body interaction, the addressee exploits both the verbal and non-verbal communication modes to infer the speaker's emotional state. Is such informational content redundant?
    - Is the amount of information conveyed by each communication mode the same, or is it different?
    - How much information about the speaker's emotional state is conveyed by each mode, and is there a preferential communication mode for a given emotional state?
    - To what extent does cultural specificity affect the decoding of emotional information?
    The results are interpreted in terms of cognitive load, language expertise and stimulus dynamics. This book will be of interest to researchers and scholars in the fields of Human Computer Interaction, Affective Computing, Psychology, and the Social Sciences

    Emotional vocal expressions recognition using the COST 2102 Italian database of emotional speech

    No full text
    The present paper proposes a new speaker-independent approach to the classification of emotional vocal expressions, using the COST 2102 Italian database of emotional speech. The audio recordings, extracted from video clips of Italian movies, possess a certain degree of spontaneity and are either noisy or slightly degraded by interruptions, making the collected stimuli more realistic than those in available emotional databases containing utterances recorded under studio conditions. The audio stimuli represent six basic emotional states: happiness, sarcasm/irony, fear, anger, surprise, and sadness. Under these more realistic conditions, and using a speaker-independent approach, the proposed system classifies the emotions under examination with 60.7% accuracy, using a hierarchical structure consisting of a Perceptron and fifteen Gaussian Mixture Models (GMMs), each trained to distinguish between one pair of emotions. The features with the highest discriminative power were selected from a large number of spectral, prosodic and voice-quality features using the Sequential Floating Forward Selection (SFFS) algorithm. The results were compared with the subjective evaluation of the stimuli provided by human subjects
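    The pairwise structure described above (fifteen GMMs for six emotions, one per emotion pair) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes pre-extracted feature vectors (the paper uses SFFS-selected spectral, prosodic and voice-quality features; here synthetic clusters stand in for real speech data), and it resolves the fifteen pairwise decisions by majority vote rather than the paper's Perceptron stage.

    ```python
    # Sketch of a pairwise-GMM emotion classifier (illustrative assumptions:
    # synthetic features, majority vote instead of the paper's Perceptron).
    from itertools import combinations
    import numpy as np
    from sklearn.mixture import GaussianMixture

    EMOTIONS = ["happiness", "sarcasm", "fear", "anger", "surprise", "sadness"]

    # Synthetic stand-in for SFFS-selected acoustic features:
    # one well-separated Gaussian cluster per emotion.
    rng = np.random.default_rng(0)
    X = {e: rng.normal(loc=i, scale=0.5, size=(40, 4)) for i, e in enumerate(EMOTIONS)}

    # One GMM per emotion per pair: C(6, 2) = 15 pairs in total.
    pair_models = {}
    for a, b in combinations(EMOTIONS, 2):
        pair_models[(a, b)] = (
            GaussianMixture(n_components=2, random_state=0).fit(X[a]),
            GaussianMixture(n_components=2, random_state=0).fit(X[b]),
        )

    def classify(x):
        """Majority vote over the fifteen pairwise GMM likelihood comparisons."""
        votes = {e: 0 for e in EMOTIONS}
        for (a, b), (gm_a, gm_b) in pair_models.items():
            winner = a if gm_a.score(x[None, :]) > gm_b.score(x[None, :]) else b
            votes[winner] += 1
        return max(votes, key=votes.get)
    ```

    Each test vector is scored by both GMMs of every pair, and the emotion winning the most pairwise comparisons is returned; the paper instead feeds the pairwise outputs to a Perceptron, which can learn to weight the pairs unevenly.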

    Effects of emotional visual scenes on the ability to decode emotional melodies

    No full text
    An effective change in Human Computer Interaction requires accounting for how communication practices are transformed in different contexts, how users experience the interaction with a machine, and how sensitively the machine interprets users' communicative signals and activities. To these aims, the present paper investigates whether and how positive and negative visual scenes may alter listeners' ability to decode emotional melodies. Emotional tunes were played alone and together with either positive, negative, or neutral emotional scenes. Afterwards, subjects (8 groups of 38 subjects each, balanced by gender) were asked to decode the emotional feeling aroused by the melodies, ascribing to them either emotional valences (positive, negative, I don't know) or emotional labels (happy, sad, fear, anger, another emotion, I don't know). It was found that dimensional emotional features, rather than emotional labels, strongly affect cognitive judgements of emotional melodies. Musical emotional information is most effectively retained when the task is to assign labels rather than valence values to melodies. In addition, significant misperception effects are observed when happy or positively judged melodies are played concurrently with negative scenes