    “Magic mirror in my hand, what is the sentiment in the lens?”: An action unit based approach for mining sentiments from multimedia contents

    In psychology and philosophy, emotion is a subjective, conscious experience characterized primarily by psychophysiological expressions, biological reactions, and mental states. Emotion can also be considered a "positive or negative experience" associated with a particular pattern of physiological activity. The extraction and recognition of emotions from multimedia contents is therefore becoming one of the most challenging research topics in human–computer interaction. Facial expressions, posture, gestures, speech, and emotive changes in physical parameters (e.g., body temperature, blushing, and changes in the tone of the voice) can reflect changes in the user's emotional state, and all of these parameters can be detected and interpreted by a computer, leading to the so-called "affective computing". In this paper an approach for the extraction of emotions from images and videos is introduced. In particular, it involves the extraction of action units from facial expressions according to the Ekman theory. The proposed approach has been tested on standard and real datasets with interesting and promising results.
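    The general idea behind an action-unit-based pipeline can be sketched as follows. This is an illustrative example, not the paper's implementation: it assumes a set of FACS action units has already been detected in a face image, maps it to the best-matching of Ekman's six basic emotions using commonly cited AU prototypes, and then assigns a sentiment polarity. The prototype sets and the Jaccard-overlap matching are simplifying assumptions for illustration.

    ```python
    # Illustrative sketch (not the paper's method): map detected FACS
    # action units (AUs) to one of Ekman's six basic emotions, then to
    # a sentiment polarity. AU prototypes follow commonly cited
    # EMFACS-style correspondences; real systems use learned models.

    EMOTION_AUS = {
        "happiness": {6, 12},              # cheek raiser + lip corner puller
        "sadness":   {1, 4, 15},           # inner brow raiser, brow lowerer, lip corner depressor
        "surprise":  {1, 2, 5, 26},        # brow raisers, upper lid raiser, jaw drop
        "fear":      {1, 2, 4, 5, 7, 20, 26},
        "anger":     {4, 5, 7, 23},        # brow lowerer, lid tightener, lip tightener
        "disgust":   {9, 15, 16},          # nose wrinkler, lip corner depressor
    }

    SENTIMENT = {
        "happiness": "positive",
        "surprise":  "neutral",
        "sadness":   "negative",
        "fear":      "negative",
        "anger":     "negative",
        "disgust":   "negative",
    }

    def classify(detected_aus):
        """Return (emotion, sentiment) for a set of detected AU numbers,
        choosing the emotion whose AU prototype has the highest Jaccard
        overlap with the detected set."""
        detected = set(detected_aus)

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        emotion = max(EMOTION_AUS, key=lambda e: jaccard(EMOTION_AUS[e], detected))
        return emotion, SENTIMENT[emotion]

    # Example: AU6 + AU12 (a Duchenne-style smile) -> positive sentiment
    print(classify({6, 12}))  # ('happiness', 'positive')
    ```

    In practice the detected-AU step would come from a facial landmark or AU detector, and the hard prototype matching would typically be replaced by a trained classifier over AU intensities.
    
    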
