
    Towards the improvement of self-service systems via emotional virtual agents

    Affective computing and emotional agents have been found to have a positive effect on human-computer interactions. In order to develop an acceptable emotional agent for use in a self-service interaction, two stages of research were identified and carried out: the first to determine which facial expressions are present in such an interaction, and the second to determine which emotional agent behaviours are perceived as appropriate during a problematic self-service shopping task. In the first stage, facial expressions associated with negative affect were found to occur during self-service shopping interactions, indicating that facial expression detection is suitable for detecting negative affective states during self-service interactions. In the second stage, user perceptions of the emotional facial expressions displayed by an emotional agent during a problematic self-service interaction were gathered. Overall, the expression of disgust was perceived as inappropriate while emotionally neutral behaviour was perceived as appropriate; however, gender differences suggested that females perceived surprise as inappropriate. The results suggest that agents should change their behaviour and appearance based on user characteristics such as gender.

    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy-to-use and flexible, but also validated, mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested the recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of virtual character (VC) distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and confusion details of the basic emotions are compatible with findings in psychology.
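    As an illustration of how a Facial Action Coding-based mechanism might map emotions to expression parameters, the following is a minimal Python sketch using common textbook action-unit associations; it is not the authors' downloadable code, and the AU sets and intensity handling are assumptions:

        # Hypothetical sketch: map emotion labels to FACS action-unit (AU) activations.
        # AU sets below are common textbook associations, not the report's validated mapping.
        BASIC_EMOTION_AUS = {
            "joy":      {6: 1.0, 12: 1.0},                       # cheek raiser, lip corner puller
            "anger":    {4: 1.0, 5: 1.0, 7: 1.0, 23: 1.0},
            "sadness":  {1: 1.0, 4: 1.0, 15: 1.0},
            "surprise": {1: 1.0, 2: 1.0, 5: 1.0, 26: 1.0},
            "disgust":  {9: 1.0, 15: 1.0, 16: 1.0},
            "fear":     {1: 1.0, 2: 1.0, 4: 1.0, 5: 1.0, 20: 1.0, 26: 1.0},
        }

        def expression(emotion, intensity):
            """Scale a basic emotion's AU activations by an intensity in [0, 1]."""
            return {au: w * intensity for au, w in BASIC_EMOTION_AUS[emotion].items()}

        def blend(*parts):
            """Combine several expressions by taking the maximum activation per AU."""
            out = {}
            for part in parts:
                for au, w in part.items():
                    out[au] = max(out.get(au, 0.0), w)
            return out

        # e.g. a "furious"-style blend combining anger and disgust at different intensities
        furious = blend(expression("anger", 0.9), expression("disgust", 0.4))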

    On combining the facial movements of a talking head

    We present work on Obie, an embodied conversational agent framework. An embodied conversational agent, or talking head, consists of three main components. The graphical part consists of a face model and a facial muscle model. Besides the graphical part, we have implemented an emotion model and a mapping from emotions to facial expressions. The animation part of the framework focuses on the temporal combination of different facial movements. In this paper we propose a scheme for combining facial movements on a 3D talking head.
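    A minimal sketch of temporal combination, assuming hypothetical channels (a sustained expression and a periodic blink) blended per frame by a weighted, clamped sum; this is an illustration only, not the combination scheme actually used in Obie:

        # Hypothetical sketch: combine several facial-movement channels frame by frame.
        # Each channel maps a time (in seconds) to facial parameter values in [0, 1].
        def combine(channels, weights, t):
            """Weighted sum of all channels at time t, clamped to [0, 1] per parameter."""
            frame = {}
            for name, channel in channels.items():
                w = weights.get(name, 1.0)
                for param, value in channel(t).items():
                    frame[param] = frame.get(param, 0.0) + w * value
            return {p: min(max(v, 0.0), 1.0) for p, v in frame.items()}

        # Example channels: a sustained smile and a periodic eye blink.
        smile = lambda t: {"lip_corner_puller": 0.6}
        blink = lambda t: {"eyelid_close": 1.0 if (t % 4.0) < 0.15 else 0.0}

        frame = combine({"smile": smile, "blink": blink}, {"smile": 1.0, "blink": 1.0}, t=2.0)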

    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychological state are reflected in their behaviour and physiology, so recognition of such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherited game-related challenges in terms of data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation currently provided by graphics and animation.
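    As a sketch of the multimodal idea, the snippet below fuses facial and physiological features at the feature level and trains a single classifier; the feature sizes, synthetic labels and scikit-learn model are assumptions for illustration, not the system described above:

        # Hypothetical sketch: feature-level fusion of two affect modalities.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n = 200
        facial = rng.normal(size=(n, 17))    # e.g. action-unit intensities per time window
        physio = rng.normal(size=(n, 4))     # e.g. heart rate, skin conductance features
        labels = rng.integers(0, 3, size=n)  # e.g. bored / engaged / frustrated (synthetic)

        X = np.hstack([facial, physio])      # simple feature-level fusion
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[:150], labels[:150])
        print("held-out accuracy:", clf.score(X[150:], labels[150:]))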

    A Trip to the Moon: Personalized Animated Movies for Self-reflection

    Self-tracking of physiological and psychological data poses the challenge of presentation and interpretation. Insightful narratives for self-tracking data can motivate the user towards constructive self-reflection. One powerful form of narrative that engages audiences across cultures and age groups is the animated movie. We collected a week of self-reported mood and behavior data from each user and created, in Unity, a personalized animation based on their data. We evaluated the impact of each user's video in a randomized controlled trial with a non-personalized animated video as the control. We found that personalized videos tend to be more emotionally engaging, encouraging more and lengthier writing indicative of self-reflection about moods and behaviors, compared to non-personalized control videos.

    Synthesis and Control of High Resolution Facial Expressions for Visual Interactions

    The synthesis of facial expressions with control of intensity and personal style is important in intelligent and affective human-computer interaction, especially in face-to-face interaction between humans and intelligent agents. We present a facial expression animation system that facilitates control of expressiveness and style. We learn a decomposable generative model for the nonlinear deformation of facial expressions by analyzing the mapping space between a low-dimensional embedded representation and high-resolution tracking data. Bilinear analysis of the mapping space provides a compact representation of the nonlinear generative model for facial expressions. The decomposition allows synthesis of new facial expressions by control of geometry and expression style. The generative model provides control of expressiveness, preserving nonlinear deformation in the expressions, with simple parameters, and allows synthesis of stylized facial geometry. In addition, we can directly extract the MPEG-4 Facial Animation Parameters (FAPs) from the synthesized data, which allows any animation engine that supports FAPs to animate newly synthesized expressions.
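    A minimal sketch of bilinear (style x content) synthesis in the spirit of the decomposable generative model above, with a placeholder interaction tensor standing in for the learned model; the dimensions and the final FAP step are assumptions:

        # Hypothetical sketch: bilinear synthesis of facial geometry from separate
        # style and expression (content) coefficients.
        import numpy as np

        n_style, n_content, n_geom = 3, 5, 60              # styles, expression factors, output dims
        rng = np.random.default_rng(1)
        W = rng.normal(size=(n_geom, n_style, n_content))  # placeholder for the learned tensor

        def synthesize(style, content):
            """geometry[g] = sum over s, c of W[g, s, c] * style[s] * content[c]"""
            return np.einsum("gsc,s,c->g", W, style, content)

        style = np.array([1.0, 0.0, 0.0])                  # choose a personal style
        content = np.array([0.0, 0.8, 0.0, 0.0, 0.2])      # mix of expression factors
        geometry = synthesize(style, content)               # could then be mapped to MPEG-4 FAPs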

    Evaluating Engagement in Digital Narratives from Facial Data

    Engagement researchers indicate that the level of engagement people have with a narrative influences their subsequent story-related attitudes and beliefs, which helps psychologists understand people's social behaviours and personal experience. With the arrival of multimedia, digital narratives combine multimedia features (e.g. varying images, music and voiceover) with traditional storytelling. Digital narratives have been widely used to help students gain problem-solving and presentation skills, and to support child psychologists investigating children's social understanding, such as family and peer relationships, through the completion of their own digital narratives. However, there has been little study of the effect of multimedia features in digital narratives on people's level of engagement. This research focuses on measuring people's levels of engagement with digital narratives and specifically on understanding the effect of the media features of digital narratives on those engagement levels. Measurement tools are developed and validated through analyses of facial data from different age groups (children and young adults) watching stories with different media features. Data sources used in this research include a questionnaire with a Smileyometer scale and observation of each participant's facial behaviours.