
    Cultural dialects of real and synthetic emotional facial expressions

    In this article we discuss the aspects of designing facial expressions for virtual humans (VHs) with a specific culture. First we explore the notion of culture and its relevance for applications with a VH. Then we give a general scheme for designing emotional facial expressions, and identify the stages where a human is involved, either as a real person with some specific role, or as a VH displaying facial expressions. We discuss how the display and the emotional meaning of facial expressions may be measured in objective ways, and how the culture of the displayers and the judges may influence the process of analyzing human facial expressions and evaluating synthesized ones. We review psychological experiments on cross-cultural perception of emotional facial expressions. By identifying the culturally critical issues of data collection and interpretation with both real humans and VHs, we aim to provide a methodological reference and inspiration for further research.

    Preface to Computational Humor 2012

    Like its predecessors in 1996 (University of Twente, the Netherlands) and 2002 (ITC-irst, Trento, Italy), this Third International Workshop on Computational Humor (IWCH 2012) focuses on the possibility of finding algorithms that allow the understanding and generation of humor. There is the general aim of modeling humor, and if we can do that, it will provide us with a great deal of information about our cognitive abilities in general, such as reasoning, remembering, understanding situations, and understanding conversational partners. It also provides us with information about being creative, making associations, storytelling, and language use. Many more subtleties of face-to-face and multiparty interaction can be added, such as using humor to persuade and dominate, to soften or avoid a face-threatening act, to ease a tense situation, or to establish a friendly or romantic relationship. One issue to consider is: when is a humorous act appropriate?

    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy-to-use and flexible, but also validated, mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters (VCs) with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested the recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of VC distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and confusion details of the basic emotions are compatible with findings in psychology.
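    A minimal sketch of how such a FACS-based mechanism might turn an emotion label into a dynamic expression is given below. The emotion-to-action-unit table, the linear onset ramp, and all names are illustrative assumptions made for this listing, not the authors' released code.

    # Illustrative sketch (not the authors' released code): map emotion labels to
    # facial action-unit (AU) activations and ramp them linearly from neutral to
    # the peak to obtain a dynamic onset. The AU choices follow commonly cited
    # associations (e.g. joy ~ AU6 + AU12) but are assumptions here.

    # Emotion -> {AU number: peak activation in [0, 1]}
    EMOTION_TO_AUS = {
        "joy":      {6: 0.8, 12: 1.0},
        "sadness":  {1: 0.7, 4: 0.6, 15: 0.8},
        "surprise": {1: 0.9, 2: 0.9, 5: 0.7, 26: 0.8},
        "anger":    {4: 1.0, 5: 0.6, 7: 0.7, 23: 0.6},
        "disgust":  {9: 0.9, 15: 0.5},
        "fear":     {1: 0.8, 2: 0.6, 4: 0.5, 5: 0.8, 20: 0.7},
    }

    def expression_frames(emotion, intensity=1.0, n_frames=30):
        """Yield per-frame AU activations ramping linearly from neutral to the peak."""
        peak = EMOTION_TO_AUS[emotion]
        for frame in range(1, n_frames + 1):
            t = frame / n_frames  # onset progress in [0, 1]
            yield {au: value * intensity * t for au, value in peak.items()}

    if __name__ == "__main__":
        # A blend such as "frustrated" could be approximated by combining two peak
        # dictionaries; how the report defines its blends is not specified here.
        for frame in expression_frames("joy", intensity=0.75, n_frames=5):
            print(frame)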

    Teaching Virtual Characters to use Body Language

    Non-verbal communication, or "body language", is a critical component in constructing believable virtual characters. Most often, body language is implemented by a set of ad-hoc rules. We propose a new method for authors to specify and refine their character's body-language responses. Using our method, the author watches the character acting in a situation, and provides simple feedback on-line. The character then learns to use its body language to maximize the rewards, based on a reinforcement learning algorithm.
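    A minimal sketch of the kind of reinforcement-learning loop the abstract describes is shown below, assuming a simple tabular, epsilon-greedy learner; the situation and gesture labels, the update rule, and all names are illustrative assumptions, not the paper's implementation.

    # Illustrative sketch (assumed, not the paper's implementation): the character
    # picks a body-language gesture for a situation, the author gives on-line
    # feedback as a scalar reward, and the value estimate for that
    # (situation, gesture) pair is nudged toward the observed reward.
    import random
    from collections import defaultdict

    GESTURES = ["nod", "shrug", "lean_forward", "cross_arms", "open_palms"]

    class BodyLanguageLearner:
        def __init__(self, epsilon=0.2, learning_rate=0.3):
            self.values = defaultdict(float)  # (situation, gesture) -> estimated reward
            self.epsilon = epsilon            # exploration probability
            self.learning_rate = learning_rate

        def choose(self, situation):
            """Epsilon-greedy choice of a gesture for the given situation."""
            if random.random() < self.epsilon:
                return random.choice(GESTURES)
            return max(GESTURES, key=lambda g: self.values[(situation, g)])

        def give_feedback(self, situation, gesture, reward):
            """Move the value estimate toward the author's reward signal."""
            key = (situation, gesture)
            self.values[key] += self.learning_rate * (reward - self.values[key])

    if __name__ == "__main__":
        learner = BodyLanguageLearner()
        for _ in range(100):
            gesture = learner.choose("greeting")
            # Stand-in for the author's on-line feedback: reward nodding in greetings.
            learner.give_feedback("greeting", gesture, 1.0 if gesture == "nod" else 0.0)
        print(learner.choose("greeting"))  # most likely "nod" after training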

    A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities

    Embodied avatars as virtual agents have many applications and provide benefits over disembodied agents, allowing non-verbal social and interactional cues to be leveraged, in a similar manner to how humans interact with each other. We present an open embodied avatar built upon the Unreal Engine that can be controlled via a simple Python programming interface. The avatar has lip syncing (phoneme control), head gesture, and facial expression (using either facial action units or cardinal emotion categories) capabilities. We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). GitHub link: https://github.com/danmcduff/AvatarSim. Comment: International Conference on Multimodal Interaction (ICMI 2019).
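    The sketch below shows the general shape such puppet-style control could take (phonemes for lip sync, facial action units, head gestures). The class, message format, and socket transport are hypothetical and may not match the actual API of the linked repository.

    # Hypothetical sketch only: the interface of the linked AvatarSim repository
    # may differ. It illustrates puppet-style control by sending JSON messages
    # (visemes for lip sync, facial action units, head gestures) to a rendering
    # process listening on a local socket.
    import json
    import socket

    class AvatarPuppet:
        """Toy client that sends JSON control messages to a rendering process."""

        def __init__(self, host="127.0.0.1", port=9000):
            self.sock = socket.create_connection((host, port))

        def _send(self, message):
            self.sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

        def speak_phonemes(self, phonemes):
            # e.g. [("HH", 0.08), ("AH", 0.12), ("L", 0.10), ("OW", 0.20)] for "hello"
            self._send({"type": "visemes", "sequence": phonemes})

        def set_action_units(self, aus):
            # e.g. {6: 0.8, 12: 1.0} for a smile (AU6 cheek raiser + AU12 lip corner puller)
            self._send({"type": "facs", "aus": aus})

        def head_gesture(self, name):
            self._send({"type": "gesture", "name": name})  # e.g. "nod", "shake"

    if __name__ == "__main__":
        puppet = AvatarPuppet()
        puppet.set_action_units({6: 0.8, 12: 1.0})
        puppet.head_gesture("nod")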

    Preface


    Artificial Intelligence for Human Computing

    This book constitutes the thoroughly refereed post-proceedings of two events discussing AI for human computing: a Special Session during the Eighth International ACM Conference on Multimodal Interfaces (ICMI 2006), held in Banff, Canada, in November 2006, and a Workshop organized in conjunction with the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), held in Hyderabad, India, in January 2007. A large number of the contributions in this state-of-the-art survey are updated and extended versions of the papers presented during these two events. In order to give a more complete overview of research efforts in the field of human computing, a number of additional invited contributions are also included. The 17 revised papers were carefully selected from numerous submissions to and presentations made at the two events, and include invited articles to round off coverage of all relevant topics of this emerging field. The papers are organized in three parts: a part on foundational issues of human computing, a part on sensing humans and their activities, and a part on anthropocentric interaction models.