
    Behavioural facial animation using motion graphs and mind maps

We present a new behavioural animation method that combines motion graphs for the synthesis of animation and mind maps as behaviour controllers for the choice of motions, significantly reducing the cost of animating secondary characters. Motion graphs are created for each facial region from the analysis of a motion database, while synthesis occurs by minimizing the path distance that connects automatically chosen nodes. A mind map is a hierarchical graph built on top of the motion graphs, where the user visually chooses how a stimulus affects the character's mood, which in turn triggers motion synthesis. Different personality traits add more emotional complexity to the chosen reactions. Combining behaviour simulation and procedural animation leads to more empathic and autonomous characters that react differently in each interaction, shifting the task of animating a character to one of defining its behaviour.
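    As a rough illustration of the control flow described in this abstract (not the authors' implementation), the sketch below maps a stimulus through trait-weighted mood activations and lets the winning mood trigger motion synthesis. All stimulus names, moods, trait multipliers, and the `synthesize` hook are invented for the example.

    ```python
    import random

    # Hypothetical mind-map sketch: a stimulus activates mood nodes, personality
    # traits rescale those activations, and the sampled mood triggers motion
    # synthesis on the underlying motion graphs. Values are illustrative only.

    MIND_MAP = {
        # stimulus -> {mood: base activation}
        "greeting": {"happy": 0.8, "neutral": 0.2},
        "insult":   {"angry": 0.7, "sad": 0.3},
        "surprise": {"happy": 0.4, "afraid": 0.6},
    }

    PERSONALITY = {
        # trait -> mood multipliers (an assumed weighting scheme)
        "cheerful":  {"happy": 1.5, "sad": 0.5},
        "irritable": {"angry": 1.5, "happy": 0.7},
    }

    def react(stimulus: str, traits: list[str]) -> str:
        """Pick the mood a stimulus evokes, biased by personality traits."""
        activations = dict(MIND_MAP.get(stimulus, {"neutral": 1.0}))
        for trait in traits:
            for mood, factor in PERSONALITY.get(trait, {}).items():
                if mood in activations:
                    activations[mood] *= factor
        # Sample proportionally so repeated interactions can react differently.
        moods, weights = zip(*activations.items())
        return random.choices(moods, weights=weights, k=1)[0]

    def synthesize(mood: str) -> None:
        """Placeholder for motion-graph synthesis triggered by the chosen mood."""
        print(f"synthesizing facial motion for mood: {mood}")

    if __name__ == "__main__":
        synthesize(react("insult", traits=["irritable"]))
    ```

    Because the mood is sampled rather than picked deterministically, the same stimulus can yield different reactions across interactions, matching the autonomy the abstract describes.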

    Experimental studies of the interaction between people and virtual humans with a focus on social anxiety

Psychotherapy has been one of the major applications of Virtual Reality technology; examples include fear of flying, heights, spiders, and post-traumatic stress disorder. Virtual reality has also been shown to be useful in exposure therapy for the treatment of social anxiety, such as fear of public speaking, where clients learn to conquer their anxiety through interactions with Virtual Characters (avatars). This thesis is concerned with the interaction between human participants and avatars in a Virtual Environment (VE), with the main focus being on social anxiety. Our hypothesis is that interactions between people and avatars can evoke in people behaviours that correspond to their degree of social anxiety or confidence. Moreover, the responses of people to avatars will also depend on the degree of social anxiety the avatar exhibits: they will react differently to a shy avatar than to a confident one. The research started with an experimental study of the reactions of shy and confident male volunteers to an approach by an attractive and friendly virtual woman in a VE. The results show that the participants responded towards the avatar according to expectations at an emotional, physiological, and behavioural level. The research then studied a particular cue that represents shyness: blushing. Experiments were carried out on how participants respond to a blushing avatar. The results suggested that, even without consciously noticing the avatar's blushing, the participants had an improved relationship with her when she was blushing. Finally, the research investigated how people respond to a shy avatar as opposed to a confident one. The results show that participants commented more positively on the personality of the avatar displaying signs of shyness.

    Psychophysical investigation of facial expressions using computer animated faces

The human face is capable of producing a large variety of facial expressions that supply important information for communication. As shown in previous studies using unmanipulated video sequences, movements of single regions like the mouth, eyes, and eyebrows, as well as rigid head motion, play a decisive role in the recognition of conversational facial expressions. Here, flexible yet realistic computer-animated faces were used to systematically investigate the spatiotemporal coaction of facial movements. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer-animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar is in general a useful tool for the investigation of facial expressions, although improvements are needed to reach higher recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge, the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.

    Easy Generation of Facial Animation Using Motion Graphs

Facial animation is a time-consuming and cumbersome task that requires years of experience and/or a complex and expensive set-up. This becomes an issue especially when animating the multitude of secondary characters required, e.g. in films or video games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. Common poses are identified using a Euclidean-based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; we simplify it by instead optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique fits easily within video-game and crowd-animation contexts, allowing characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
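    To make the pipeline concrete, here is a minimal sketch under stated assumptions, not the authors' code: poses are landmark vectors, frames within `threshold` of an existing node are merged into it, consecutive frames create weighted edges, and Dijkstra's algorithm returns the synthesis path. The noise step (swapping path nodes with graph neighbours) and the expression labels are omitted for brevity.

    ```python
    import heapq
    import math

    def euclidean(a, b):
        """Euclidean distance between two landmark pose vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def build_graph(poses, threshold):
        """Merge poses closer than `threshold` into one node; link consecutive
        frames with an edge weighted by the pose distance between nodes."""
        nodes, frame_to_node = [], []
        for pose in poses:
            for i, rep in enumerate(nodes):
                if euclidean(pose, rep) < threshold:
                    frame_to_node.append(i)  # reuse the existing similar node
                    break
            else:
                nodes.append(pose)
                frame_to_node.append(len(nodes) - 1)
        edges = {i: {} for i in range(len(nodes))}
        for u, v in zip(frame_to_node, frame_to_node[1:]):
            if u != v:  # keep the cheapest transition observed in the data
                w = euclidean(nodes[u], nodes[v])
                edges[u][v] = min(w, edges[u].get(v, float("inf")))
        return nodes, edges

    def dijkstra(edges, start, goal):
        """Shortest node path from start to goal (assumes goal is reachable)."""
        dist, prev = {start: 0.0}, {}
        queue = [(0.0, start)]
        while queue:
            d, u = heapq.heappop(queue)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in edges[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(queue, (nd, v))
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        path.append(start)
        return path[::-1]

    if __name__ == "__main__":
        # Toy 2-D landmark sequence standing in for a mocap database.
        poses = [(0, 0), (0.1, 0), (2, 2), (2.05, 2), (4, 0), (0.05, 0.05)]
        nodes, edges = build_graph(poses, threshold=0.5)
        print(dijkstra(edges, 0, 2))  # e.g. [0, 1, 2]
    ```

    In the paper's formulation the threshold itself is chosen by optimizing for a target graph compression rather than fixed by hand; here it is a plain parameter to keep the sketch short.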

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made in tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics.

    Affective Computing

This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.