
    Automatic modeling of virtual humans and body clothing

    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shapes and requiring labor-intensive processes to build, pose the problem of automatic modeling. The problem of, and solutions to, the automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling the static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans, are classified, summarized, and compared. Finally, methods for clothed virtual humans are reviewed.
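Parameterized static-shape modeling of the kind surveyed here is often built on linear statistical (PCA) shape models. A minimal sketch with synthetic data (the scan matrix, its dimensions, and the number of retained parameters are invented for illustration):

```python
import numpy as np

# Illustrative sketch, not the survey's specific method: a linear
# statistical shape model for static body shape.  Each scanned body is
# a flattened vertex vector; PCA yields compact shape parameters.
rng = np.random.default_rng(0)
scans = rng.normal(size=(20, 30))      # 20 hypothetical scans, 10 vertices (x, y, z)

mean_shape = scans.mean(axis=0)
centered = scans - mean_shape
# Principal components via SVD; rows of Vt are shape basis vectors.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:5]                         # keep 5 shape parameters

def synthesize(params):
    """Reconstruct a body shape from low-dimensional parameters."""
    return mean_shape + params @ basis

# Projecting a scan into the model and back approximates it.
params = (scans[0] - mean_shape) @ basis.T
approx = synthesize(params)
```

New bodies can then be generated by sampling or interpolating the low-dimensional parameter vector instead of editing thousands of vertices directly.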

    Representing and Parameterizing Agent Behaviors

    The last few years have seen great maturation in understanding how to use computer graphics technology to portray 3D embodied characters or virtual humans. Unlike the off-line, animator-intensive methods used in the special effects industry, real-time embodied agents are expected to exist and interact with us live. They can represent other people or function as autonomous helpers, teammates, or tutors, enabling novel interactive educational and training applications. We should be able to interact and communicate with them through modalities we already use, such as language, facial expressions, and gesture. Various aspects of and issues in real-time virtual humans are discussed, including consistent parameterizations for gesture and facial actions using movement observation principles, and the representational basis for character believability, personality, and affect. We also describe a Parameterized Action Representation (PAR) that allows an agent to act, plan, and reason about its own actions or the actions of others. Besides embodying the semantics of human action, the PAR is designed for building future behaviors into autonomous agents and controlling the animation parameters that portray personality, mood, and affect in an embodied agent.
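The PAR described above can be pictured as a structured action template. A hypothetical sketch (the field names and the `applicable` check are illustrative, not the actual PAR schema):

```python
from dataclasses import dataclass, field

# Hypothetical PAR-like structure (illustrative, not the real schema):
# an action template with participants, applicability conditions, and
# optional subactions an agent could plan over.
@dataclass
class PAR:
    name: str
    agent: str
    objects: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)   # callables: state -> bool
    subactions: list = field(default_factory=list)      # nested PARs

    def applicable(self, state):
        """Can this action be attempted in the given world state?"""
        return all(cond(state) for cond in self.preconditions)

open_door = PAR(
    name="open",
    agent="virtual_human",
    objects=["door"],
    preconditions=[lambda s: s.get("door") == "closed"],
)
```

A planner can test `applicable` against the simulated world state before scheduling the action's subactions for animation.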

    Exploiting visual salience for the generation of referring expressions

    In this paper we present a novel approach to generating referring expressions (GRE) that is tailored to a model of the visual context the user is attending to. The approach integrates a new computational model of visual salience in simulated 3-D environments with Dale and Reiter’s (1995) Incremental Algorithm. The advantages of our GRE framework are: (1) the context set used by the GRE algorithm is dynamically computed by the visual saliency algorithm as a user navigates through a simulation; (2) the integration of visual salience into the generation process means that in some instances underspecified but sufficiently detailed descriptions of the target object are generated that are shorter than those generated by GRE algorithms which focus purely on adjectival and type attributes; (3) the integration of visual saliency into the generation process means that our GRE algorithm will in some instances succeed in generating a description of the target object in situations where GRE algorithms which focus purely on adjectival and type attributes fail.
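The two-stage idea, a salience-computed context set feeding an incremental attribute selector, can be sketched as follows (the scene, salience scores, threshold, and preferred-attribute order are all assumed for illustration):

```python
# Sketch combining a salience-filtered context set with Dale &
# Reiter-style incremental attribute selection.  Salience scores and
# attribute order are assumed, not taken from the paper.
def generate_description(target, scene, salience, threshold=0.3,
                         preferred=("type", "colour", "size")):
    # (1) context set: sufficiently salient objects other than the target
    distractors = {o for o in scene
                   if o != target and salience[o] >= threshold}
    description = {}
    for attr in preferred:
        value = scene[target][attr]
        ruled_out = {o for o in distractors if scene[o].get(attr) != value}
        if ruled_out:                       # keep attributes that discriminate
            description[attr] = value
            distractors -= ruled_out
        if not distractors:
            break
    return description

scene = {
    "b1": {"type": "ball", "colour": "red",  "size": "small"},
    "b2": {"type": "ball", "colour": "blue", "size": "small"},
    "c1": {"type": "cube", "colour": "red",  "size": "large"},
}
salience = {"b1": 0.9, "b2": 0.8, "c1": 0.1}   # c1 is barely visible
desc = generate_description("b1", scene, salience)
```

Because the barely visible cube never enters the context set, "red" alone suffices here, illustrating point (2): salience filtering can shorten descriptions.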

    Facial actions as visual cues for personality

    What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores the above question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning, and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space that is based on the orthogonal dimensions of Desire for Affiliation and Displays of Social Dominance. The main result of the personality rating data was that human viewers reliably associated individual facial actions and emotional expressions with specific personality characteristics. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the Dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the Affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived Social Dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on Dominance and Affiliation as two parameters that control the facial actions of autonomous animated characters.
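The final point, driving facial actions from Dominance and Affiliation parameters, could look like this in outline (all coefficients are invented; the paper reports the qualitative directions, not these numbers):

```python
# Hypothetical mapping (coefficients invented for illustration) from the
# two personality parameters to facial-action controls, in the spirit of
# the finding that head actions load on Dominance and smiles/contempt on
# Affiliation.
def personality_to_actions(dominance, affiliation):
    """dominance, affiliation in [-1, 1] -> facial-action controls in [0, 1]."""
    clamp = lambda x: max(-1.0, min(1.0, x))
    d, a = clamp(dominance), clamp(affiliation)
    return {
        "head_tilt_rate":     0.5 + 0.5 * d,   # frequent head actions read as dominant
        "gaze_aversion":      0.5 - 0.5 * d,   # averted gaze reads as submissive
        "smile_intensity":    max(0.0, a),     # smiling signals affiliation
        "contempt_intensity": max(0.0, -a),    # contempt signals disaffiliation
    }

profile = personality_to_actions(dominance=0.8, affiliation=-0.4)
```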

    Support Vector Machines for Anatomical Joint Constraint Modelling

    The accurate simulation of anatomical joint models is becoming increasingly important for both realistic animation and diagnostic medical applications. Recent models have exploited unit quaternions to eliminate singularities when modeling orientations between limbs at a joint. This has led to the development of quaternion-based joint constraint validation and correction methods. In this paper, a novel method for implicitly modeling unit quaternion joint constraints using Support Vector Machines (SVMs) is proposed that attempts to address the limitations of current constraint validation approaches. Initial results show that the resulting SVMs are capable of modeling regular spherical constraints on the rotation of the limb.
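One way to realize such an implicit constraint model is a one-class SVM trained on sampled valid rotations. A sketch using scikit-learn with synthetic quaternion data (the cone of valid rotations and all hyperparameters are assumed, not taken from the paper):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Sketch with assumed data: learn an implicit "valid region" on unit
# quaternions from sampled valid joint rotations, here a synthetic cone
# of small rotations about varying axes.
rng = np.random.default_rng(1)

def random_valid_quaternions(n, max_angle=0.6):
    """Unit quaternions for rotations up to max_angle radians."""
    axes = rng.normal(size=(n, 3))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    angles = rng.uniform(0.0, max_angle, size=n)
    w = np.cos(angles / 2.0)
    xyz = axes * np.sin(angles / 2.0)[:, None]
    return np.column_stack([w, xyz])

valid = random_valid_quaternions(300)
# The one-class SVM learns the support of the valid-rotation distribution.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(valid)

test = random_valid_quaternions(50)
inlier_rate = (model.predict(test) == 1).mean()
```

At runtime, a candidate joint orientation predicted as an outlier would be flagged as violating the constraint and passed to a correction step.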

    Device-based decision-making for adaptation of three-dimensional content

    The goal of this research was the creation of an adaptation mechanism for the delivery of three-dimensional content. The adaptation of content for various network and terminal capabilities, as well as for different user preferences, is a key feature that needs to be investigated. Current state-of-the-art research on adaptation shows promising results for specific tasks and limited types of content, but is still not well suited for massive heterogeneous environments. In this research, we present a method for transmitting adapted three-dimensional content to multiple target devices. This paper presents some theoretical and practical methods for adapting three-dimensional content, including shapes and animation. We also discuss practical details of the integration of our methods into the MPEG-21 and MPEG-4 architectures.
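A device-based decision step of the kind described might reduce geometry to a per-device vertex budget before transmission. A toy sketch (the device classes and budgets are invented, not the paper's actual decision logic):

```python
# Hypothetical device profiles illustrating a device-based adaptation
# decision: pick a target level of detail for 3-D content from terminal
# capabilities before transmission.
def choose_lod(device, full_vertex_count):
    """Return the target vertex budget for a device profile."""
    budgets = {                  # assumed capability classes
        "mobile":  5_000,
        "tablet": 20_000,
        "desktop": None,         # no reduction needed
    }
    budget = budgets.get(device, 5_000)   # unknown devices treated conservatively
    if budget is None or full_vertex_count <= budget:
        return full_vertex_count
    return budget

lod = choose_lod("mobile", 80_000)
```

In an MPEG-21 setting the device profile would come from the usage-environment description rather than a hard-coded table.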

    Subjective Quality Assessment of the Impact of Buffer Size in Fine-Grain Parallel Video Encoding

    Fine-grain parallelism is essential for real-time video encoding performance. This usually implies setting a fixed buffer size for each encoded block. The choice of this parameter is critical for both performance and hardware cost. In this paper we analyze the impact of buffer size on subjective image quality, and its relation to other encoding parameters. We explore the consequences for visual quality of minimizing the buffer size to the point of discarding quantized coefficients for the highest frequencies. Finally, we propose some guidelines for the choice of buffer size, which has proven to depend heavily on, among other parameters, the type of sequence being encoded. These guidelines are useful for the design of efficient real-time encoders, both hardware and software.
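The discard mechanism under study can be illustrated with a toy fixed-size coefficient buffer (the representation is assumed: quantized coefficients of one block in zig-zag, low-to-high-frequency order):

```python
# Toy sketch of a fixed per-block buffer: it keeps only the first
# `buffer_size` nonzero quantized coefficients in zig-zag order, so a
# too-small buffer drops the highest-frequency detail.
def fill_buffer(zigzag_coeffs, buffer_size):
    kept = []
    for idx, c in enumerate(zigzag_coeffs):
        if c != 0:
            if len(kept) == buffer_size:
                break          # buffer full: remaining high frequencies discarded
            kept.append((idx, c))
    return kept

coeffs = [12, 7, 0, 3, 0, 0, 2, 1]     # low -> high frequency
kept = fill_buffer(coeffs, buffer_size=3)
```

Here the last two nonzero coefficients (the highest frequencies) are lost, which is exactly the kind of degradation whose visual impact the paper assesses.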

    Evolved Topology Generalized Multi-layer Perceptron (GMLP) for Anatomical Joint Constraint Modelling

    The accurate simulation of anatomical joint models is becoming increasingly important for both medical diagnosis and realistic animation applications. Quaternion algebra has been increasingly applied to model rotations, providing a compact representation while avoiding singularities. We propose the use of Artificial Neural Networks to accurately simulate joint constraints based on recorded data. This paper describes the application of Genetic Algorithm approaches to neural network training in order to model the corrective piece-wise linear/discontinuous functions required to maintain valid joint configurations. The results show that Artificial Neural Networks are capable of modeling constraints on the rotation of and around a virtual limb.
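The training scheme, a genetic algorithm evolving network weights to fit a piece-wise linear corrective function, can be sketched on a toy one-dimensional problem (the architecture, target function, and GA settings are all invented; the paper's GMLP evolves its topology as well):

```python
import numpy as np

# Toy sketch of GA-trained network weights (not the paper's GMLP):
# evolve a one-hidden-layer ReLU net to fit a piece-wise linear target,
# the kind of corrective function the paper models.
rng = np.random.default_rng(2)
X = np.linspace(-1, 1, 64)[:, None]
y = np.abs(X).ravel()                      # piece-wise linear target

def forward(w, x):
    """1 input -> 4 hidden ReLU units -> 1 output; w has 13 parameters."""
    h = np.maximum(0.0, x @ w[:4].reshape(1, 4) + w[4:8])
    return (h @ w[8:12] + w[12]).ravel()

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # negative MSE

pop = rng.normal(size=(60, 13))
for _ in range(100):                            # generations
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-15:]]     # truncation selection (elitist)
    children = parents[rng.integers(0, 15, size=45)] \
               + rng.normal(scale=0.1, size=(45, 13))   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
err = np.mean((forward(best, X) - y) ** 2)
```

Because the fitness landscape of such corrective functions is discontinuous in places, gradient-free search of this kind is a natural fit, which is the motivation the abstract gives for the GA approach.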