
    Facial actions as visual cues for personality

    What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores this question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space based on the orthogonal dimensions of Desire for Affiliation and Displays of Social Dominance. The main result of the personality rating data was that human viewers associated individual facial actions and emotional expressions with specific personality characteristics very reliably. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the Dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the Affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived Social Dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on Dominance and Affiliation as two parameters that control the facial actions of autonomous animated characters.
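    The two-parameter control scheme the abstract describes can be sketched as a minimal mapping from a (Dominance, Affiliation) profile to facial-action settings. All function names, parameter names, and scaling factors below are illustrative assumptions, not the paper's implementation, and the direction of the gaze-aversion effect is likewise assumed for illustration:

    ```python
    def personality_to_actions(dominance: float, affiliation: float) -> dict:
        """Map a (Dominance, Affiliation) profile, each in [-1, 1],
        to normalised facial-action parameters (hypothetical sketch)."""
        d = max(-1.0, min(1.0, dominance))
        a = max(-1.0, min(1.0, affiliation))
        return {
            # Head actions spread ratings along the Dominance dimension;
            # higher frequency/intensity reads as more dominant.
            "head_tilt_frequency": 0.5 + 0.5 * d,   # normalised rate (assumed units)
            "head_turn_intensity": 0.5 + 0.5 * d,   # normalised amplitude
            # Assumed here: gaze aversion signals lower Dominance.
            "gaze_aversion_rate": 0.5 - 0.5 * d,
            # Smiling and contempt spread ratings along the Affiliation dimension.
            "smile_intensity": max(0.0, a),
            "contempt_intensity": max(0.0, -a),
        }
    ```

    A highly dominant, low-affiliation profile would then yield frequent head actions, no smiling, and a strong contempt expression.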

    Multispace behavioral model for face-based affective social agents

    This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models for personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4 compatible set of parameters for low-level control, the behavioral extensions available through the three spaces provide flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
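    The layering the abstract describes (knowledge, personality, and mood spaces driving a lower-level geometry space) can be sketched roughly as follows. Class names, fields, and the combination weights are assumptions for illustration; the paper's actual MPEG-4 parameter set and XML task language are not reproduced here:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Personality:
        # Two-dimensional personality model, as described in the abstract.
        dominance: float = 0.0
        affiliation: float = 0.0

    @dataclass
    class Mood:
        # Two-dimensional emotion model (axes assumed for illustration).
        valence: float = 0.0
        arousal: float = 0.0

    @dataclass
    class Agent:
        personality: Personality = field(default_factory=Personality)
        mood: Mood = field(default_factory=Mood)
        # A plain dict stands in for the paper's XML-based knowledge language.
        knowledge: dict = field(default_factory=dict)

        def geometry_params(self) -> dict:
            """Combine the higher-level spaces into low-level facial-feature
            parameters (stand-ins for MPEG-4 parameters; weights assumed)."""
            return {
                "brow_raise": 0.5 * self.mood.arousal + 0.25 * self.personality.dominance,
                "smile": 0.5 * self.mood.valence + 0.25 * self.personality.affiliation,
            }
    ```

    The design point this illustrates is the separation of concerns: personality and mood change independently, and only the geometry layer needs to know how they combine into feature-level parameters.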

    Advancements in AI-driven multilingual comprehension for social robot interactions: An extensive review

    In the digital era, human-robot interaction is rapidly expanding, emphasizing the need for social robots to fluently understand and communicate in multiple languages. It is not merely about decoding words but about establishing connections and building trust. However, many current social robots are limited to popular languages, serving in fields like language teaching, healthcare, and companionship. This review examines the AI-driven language abilities of social robots, providing a detailed overview of their applications and the challenges faced, from nuanced linguistic understanding to data quality and cultural adaptability. Finally, we discuss the future of integrating advanced language models into robots to move beyond basic interactions and towards deeper emotional connections. Through this endeavor, we hope to provide a beacon for researchers, steering them towards a path where linguistic adeptness in robots is seamlessly melded with their capacity for genuine emotional engagement.

    The Last Decade of HCI Research on Children and Voice-based Conversational Agents

    Voice-based Conversational Agents (CAs) are increasingly being used by children. Through a review of 38 research papers, this work maps trends, themes, and methods of empirical research on children and CAs in HCI research over the last decade. A thematic analysis of the research found that work in this domain focuses on seven key topics: ascribing human-like qualities to CAs, CAs' support of children's learning, the use and role of CAs in the home and family context, CAs' support of children's play, children's storytelling with CAs, issues concerning the collection of information revealed to CAs, and CAs designed for children with differing abilities. Based on our findings, we identify the need to account for children's intersectional identities, linguistic and cultural diversity, and theories from multiple disciplines in the design of CAs; to develop heuristics for child-centric interaction with CAs; to investigate the implications of CAs for social cognition and interpersonal relationships; and to examine and design for multi-party interactions with CAs across different domains and contexts.