3,918 research outputs found
Facial actions as visual cues for personality
What visual cues do human viewers use to assign personality characteristics to animated characters?
While most facial animation systems associate facial actions with a limited set of emotional states or with speech content,
the present paper explores the above question by relating the perception of personality to a wide variety of
facial actions (e.g., head tilting/turning and eyebrow raising) and emotional expressions (e.g., smiles and
frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in
brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective
rating system borrowed from the psychological literature. These personality descriptors are organized in a
multidimensional space that is based on the orthogonal dimensions of Desire for Affiliation and Displays of
Social Dominance. The main result of the personality rating data was that human viewers associated
individual facial actions and emotional expressions with specific personality characteristics very reliably. In
particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the
Dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along
the Affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased
the perceived Social Dominance of the characters. We interpret these results as pointing to a reliable link
between animated facial actions/expressions and the personality attributions they evoke in human viewers.
The paper shows how these findings are used in our facial animation system to create perceptually valid
personality profiles based on Dominance and Affiliation as two parameters that control the facial actions of
autonomous animated characters
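
To illustrate the closing point, here is a minimal sketch of how Dominance and Affiliation might serve as two control parameters for facial actions. The class, action names, and mappings are illustrative assumptions rather than the paper's implementation; the signs follow the reported findings (more frequent/intense head actions raise perceived Dominance; smiling raises, and contempt lowers, perceived Affiliation).

```python
# A minimal sketch, assuming hypothetical names: the paper reports the
# perceptual link, not this API.

from dataclasses import dataclass


@dataclass
class PersonalityProfile:
    dominance: float    # -1.0 (submissive) .. +1.0 (dominant)
    affiliation: float  # -1.0 (cold)       .. +1.0 (warm)

    def action_intensities(self) -> dict[str, float]:
        """Map the two personality parameters to facial-action
        intensities in [0, 1]."""
        d = (self.dominance + 1.0) / 2.0    # normalize to [0, 1]
        a = (self.affiliation + 1.0) / 2.0
        return {
            "head_tilt": d,        # head actions track Dominance
            "head_turn": d,
            "smile": a,            # expressions track Affiliation
            "contempt": 1.0 - a,
        }


profile = PersonalityProfile(dominance=0.6, affiliation=0.2)
print(profile.action_intensities())
```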
Multispace behavioral model for face-based affective social agents
This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces:
knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature
level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional
states to the facial actions and expressions through two-dimensional models for personality and emotion. Knowledge encapsulates
the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry
space provides an MPEG-4 compatible set of parameters for low-level control, the behavioral extensions available through the
triple spaces provide flexible means of designing complex personality types, facial expressions, and dynamic interactive scenarios
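
As a rough illustration of the three-space architecture, the following sketch blends personality and mood into low-level facial parameters. The dataclasses, the plain-list stand-in for the XML-based knowledge language, and the output parameter names (chosen to resemble MPEG-4 FAP names) are assumptions for exposition, not the paper's API.

```python
from dataclasses import dataclass, field


@dataclass
class Personality:
    dominance: float    # -1.0 .. +1.0
    affiliation: float  # -1.0 .. +1.0


@dataclass
class Mood:
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    arousal: float  # -1.0 (calm)     .. +1.0 (excited)


@dataclass
class Knowledge:
    # The paper encodes tasks and decisions in an XML-based language;
    # a list of pending task names stands in for it here.
    tasks: list[str] = field(default_factory=list)


def geometry_parameters(p: Personality, m: Mood) -> dict[str, float]:
    """Blend the higher-level spaces into low-level, MPEG-4-style
    facial feature parameters, clamped to [0, 1]."""
    def clamp(x: float) -> float:
        return min(1.0, max(0.0, x))

    return {
        "raise_l_i_eyebrow": clamp(p.dominance),
        "stretch_l_cornerlip": clamp((p.affiliation + m.valence) / 2.0),
    }


print(geometry_parameters(Personality(0.4, 0.7), Mood(0.5, 0.1)))
```

Keeping the three spaces independent, as the paper describes, means a designer can change a character's personality without touching its task knowledge or its momentary mood.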
Advances in Human-Robot Interaction
Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers
Context-based multimodal interpretation: an integrated approach to multimodal fusion and discourse processing
This thesis is concerned with the context-based interpretation of verbal and nonverbal contributions to interactions in multimodal multiparty dialogue systems. On the basis of a detailed analysis of context-dependent multimodal discourse phenomena, a comprehensive context model is developed. This context model supports the resolution of a variety of referring and elliptical expressions as well as the processing and reactive generation of turn-taking signals and the identification of the intended addressee(s) of a contribution. A major goal of this thesis is the development of a generic component for multimodal fusion and discourse processing. Based on the integration of this component into three distinct multimodal dialogue systems, the generic applicability of the approach is shown.
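
To make the kind of context model described above concrete, here is a minimal sketch of a recency-based discourse-context store. The class names, the pronoun set, and the resolution heuristic are assumptions for illustration, not the thesis's actual component.

```python
from dataclasses import dataclass, field


@dataclass
class Entity:
    name: str
    last_mentioned: int  # turn index of the most recent mention


@dataclass
class DialogueContext:
    turn: int = 0
    salient: list[Entity] = field(default_factory=list)

    def mention(self, name: str) -> None:
        """Record a new mention, making the entity most salient."""
        self.turn += 1
        self.salient.insert(0, Entity(name, self.turn))

    def resolve(self, expr: str) -> str | None:
        """Resolve a pronominal referring expression to the most
        recently mentioned entity (a crude recency heuristic)."""
        if expr.lower() in {"it", "that", "this"} and self.salient:
            return self.salient[0].name
        return None


ctx = DialogueContext()
ctx.mention("the red block")
print(ctx.resolve("it"))  # -> the red block
```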