
    Evaluating Perceived Trust From Procedurally Animated Gaze

    Adventure role-playing games (RPGs) provide players with increasingly expansive worlds, compelling storylines, and meaningful fictional character interactions. Despite the fast-growing richness of these worlds, the majority of interactions between the player and non-player characters (NPCs) still remain scripted. In this paper we propose using an NPC’s animations to reflect how the character feels towards the player and, as a proof of concept, investigate the potential for a straightforward gaze model to convey trust. Through two perceptual experiments, we find that viewers can distinguish between high- and low-trust animations, that viewers associate the gaze differences specifically with trust and not with an unrelated attitude (aggression), and that the effect can hold for different facial expressions and scene contexts, even when viewed for a short (five-second) clip length. With an additional experiment, we explore the extent to which trust is uniquely conveyed over other attitudes associated with gaze, such as interest, unfriendliness, and admiration.
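    As a rough illustration of the idea (a minimal sketch with hypothetical parameters, not the paper’s actual gaze model), a single trust value could modulate how long the NPC holds mutual gaze versus looking away:

        # Hypothetical trust-modulated gaze scheduler: higher trust yields
        # longer mutual gaze and shorter gaze aversions.
        import random

        def gaze_schedule(trust, total_time=5.0):
            """Return a list of (target, duration) gaze events covering total_time seconds."""
            assert 0.0 <= trust <= 1.0
            mutual = 1.0 + 2.0 * trust      # seconds spent looking at the player
            aversion = 1.5 - 1.0 * trust    # seconds spent looking away
            events, t = [], 0.0
            while t < total_time:
                events.append(("player", mutual))
                t += mutual
                if t >= total_time:
                    break
                events.append((random.choice(["down", "left", "right"]), aversion))
                t += aversion
            return events

        print(gaze_schedule(trust=0.9))   # mostly mutual gaze: reads as high trust
        print(gaze_schedule(trust=0.1))   # frequent aversion: reads as low trust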

    Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces

    Due to varied personal, social, or even cultural situations, people sometimes conceal or mask their true emotions. These suppressed emotions can be expressed in a very subtle way by brief movements called microexpressions. We investigate human subjects’ perception of hidden emotions in virtual faces, inspired by recent psychological experiments. We created animations of virtual faces showing facial expressions and inserted brief secondary expressions into some sequences, in order to convey a subtle second emotion in the character. Our evaluation methodology consists of two sets of experiments, with three different sets of questions. The first experiment verifies that the accuracy and concordance of the participants’ responses with synthetic faces match the empirical results obtained with photographs of real people by X.-b. Shen, Q. Wu, and X.-l. Fu, 2012, “Effects of the duration of expressions on the recognition of microexpressions,” Journal of Zhejiang University Science B, 13(3), 221–230. The second experiment verifies whether participants could perceive and identify primary and secondary emotions in virtual faces. The third experiment evaluates the participants’ perception of realism, deceit, and valence of the emotions. Our results show that most participants recognized the foreground (macro) emotion and, most of the time, perceived the presence of the second (micro) emotion in the animations, although they did not identify it correctly in some samples. This experiment exposes the benefits of conveying microexpressions in computer graphics characters, as they may visually enhance a character’s emotional depth through subliminal microexpression cues and consequently increase the perceived social complexity and believability.
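    To make the manipulation concrete, here is a toy sketch (a hypothetical blend-weight representation, not the paper’s animation pipeline) of inserting a brief secondary expression into a macroexpression timeline:

        # Each frame holds blend weights for facial expression targets; the
        # concealed emotion is mixed in for only a fraction of a second.
        def expression_track(macro, micro, fps=30, length_s=4.0,
                             micro_onset_s=2.0, micro_duration_s=0.2):
            frames = []
            micro_start = int(fps * micro_onset_s)
            micro_end = micro_start + int(fps * micro_duration_s)
            for f in range(int(fps * length_s)):
                weights = {macro: 1.0}
                if micro_start <= f < micro_end:
                    weights = {macro: 0.6, micro: 0.4}   # brief, partial-intensity leak
                frames.append(weights)
            return frames

        track = expression_track(macro="happiness", micro="disgust")
        micro_frames = [i for i, w in enumerate(track) if "disgust" in w]
        print(f"micro frames {micro_frames[0]}..{micro_frames[-1]} of {len(track)}")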

    Human Model Reaching, Grasping, Looking and Sitting Using Smart Objects

    Manually creating convincing animated human motion in a 3D ergonomic test environment is tedious and time-consuming. However, procedural motion generators help animators efficiently produce complex and realistic motions. Using the concept of a Human Modeling Software Testbed (HMST), we created novel procedural methods for animating reaching, grasping, looking, and sitting using the environmental context of ‘smart’ objects that parametrically guide human model ergonomic motions. This approach enabled complicated procedures such as collision-free leg reach and contextual sitting motion generation. By procedurally adding small secondary details to the animation, such as head/eye vision constraints and prehensile grasps, the animated motions look more natural with minimal animator input. A ‘smart’ object in the scene graph provides specific parameters to produce proper motions and final positions. These parameters are applied to the desired figure procedurally to create any secondary motions, and further generalize to any environment. Our system allows users to proceed with any required ergonomic analyses with confidence in the visual validity of the automated motions.
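    As a loose sketch of the ‘smart’ object idea (field names here are hypothetical, not the HMST API), the object in the scene graph carries the parameters a procedural motion generator needs, so the human model can query them rather than having every pose hand-authored:

        from dataclasses import dataclass, field
        from typing import Dict, Tuple

        @dataclass
        class SmartObject:
            name: str
            grasp_site: Tuple[float, float, float]     # where the hand should land
            approach_dir: Tuple[float, float, float]   # preferred approach direction
            action_params: Dict[str, float] = field(default_factory=dict)

        def plan_reach(obj: SmartObject) -> dict:
            # a real system would solve IK and check collisions; here we only
            # gather what the object itself specifies
            return {
                "target": obj.grasp_site,
                "approach": obj.approach_dir,
                "wrist_pronation": obj.action_params.get("wrist_pronation", 0.0),
            }

        chair = SmartObject("chair", grasp_site=(0.0, 0.45, 0.2),
                            approach_dir=(0.0, 0.0, -1.0),
                            action_params={"seat_height": 0.45})
        print(plan_reach(chair))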

    Anthropometry for Computer Graphics Human Figures

    Anthropometry as it applies to computer graphics is examined in this report, which documents the anthropometry work done in the Computer Graphics Research Laboratory at the University of Pennsylvania from 1986 to 1988. A detailed description of the basis for this work is given, along with examples of the variability of computer graphics human figures resulting from it. Also discussed is the unique and versatile user interface developed to allow easy manipulation of the data used to describe the anthropometric parameters required to define human figure models. The many appendices contain the specifics of our models as well as much of the data used to define them.
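    As a toy illustration of anthropometric parameterization (the segment names and proportions below are illustrative placeholders, not the report’s data), a figure can be defined by segment lengths derived from a small set of parameters such as overall stature:

        # Segment length expressed as a fraction of stature (illustrative values).
        BASE_PROPORTIONS = {
            "head": 0.13, "torso": 0.30, "upper_arm": 0.19,
            "forearm": 0.15, "thigh": 0.24, "shank": 0.22,
        }

        def figure_from_stature(stature_m: float) -> dict:
            """Return per-segment lengths (metres) for a figure of the given stature."""
            return {seg: round(frac * stature_m, 3) for seg, frac in BASE_PROPORTIONS.items()}

        print(figure_from_stature(1.65))   # a shorter figure
        print(figure_from_stature(1.85))   # a taller figure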

    Simulated Casualties and Medics for Emergency Training

    The MediSim system extends virtual environment technology to allow medical personnel to interact with and train on simulated casualties. The casualty model employs a three-dimensional animated human body that displays appropriate physical and behavioral responses to injury and/or treatment. Medical corpsman behaviors were developed so that the actions of simulated medical personnel conform to both military practice and medical protocols during patient assessment and stabilization. A trainee may initiate medic actions through a mouse and menu interface; a VR interface has also been created by Stansfield's research group at Sandia National Labs.
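    A highly simplified sketch of how injury and treatment responses might be driven (the states and rules below are invented for illustration, not MediSim’s actual protocols):

        # Casualty state machine: (current_state, event) -> next_state.
        TRANSITIONS = {
            ("stable",   "hemorrhage"):   "bleeding",
            ("bleeding", "tourniquet"):   "stabilized",
            ("bleeding", "no_treatment"): "shock",
            ("shock",    "iv_fluids"):    "stabilized",
        }

        class Casualty:
            def __init__(self):
                self.state = "stable"

            def apply(self, event: str) -> str:
                self.state = TRANSITIONS.get((self.state, event), self.state)
                return self.state

        c = Casualty()
        for event in ["hemorrhage", "no_treatment", "iv_fluids"]:
            print(event, "->", c.apply(event))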

    Kinematics and dynamics for computer animation

    This tutorial will focus on the physical principles of kinematics and dynamics. After explaining the basic equations for point masses and rigid bodies, a new approach for the dynamic simulation of multi-linked models with wobbling mass is presented, which has led to new insights in the field of biomechanics but which has not been used in computer animation so far.
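    For reference, the point-mass case such a tutorial starts from reduces to integrating Newton’s second law; a minimal sketch (semi-implicit Euler, not the tutorial’s own formulation; a wobbling mass would add a second mass coupled to the rigid segment through a spring–damper):

        def step(pos, vel, force, mass, dt):
            """Advance a 1D point mass one time step under F = m*a."""
            acc = force / mass
            vel = vel + acc * dt      # semi-implicit Euler: update velocity first
            pos = pos + vel * dt
            return pos, vel

        # drop a 1 kg point mass under gravity for one second
        pos, vel = 10.0, 0.0
        for _ in range(100):
            pos, vel = step(pos, vel, force=-9.81, mass=1.0, dt=0.01)
        print(round(pos, 2), round(vel, 2))   # roughly 5.05 m and -9.81 m/s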
