
    Automatic modeling of virtual humans and body clothing

    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models typically have complex shapes and require a labor-intensive construction process, which makes automatic modeling a challenging problem. The problem of automatically modeling animatable virtual humans, and solutions to it, are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling both the static shape of virtual humans (the variety of human body shapes) and their dynamic shape (how the body shape changes as it moves), are classified, summarized, and compared. Finally, methods for clothing virtual humans are reviewed.
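
    As an illustration of the static-shape parameterization idea (not a specific method from the survey), one common approach is a linear shape space learned with PCA over registered body scans. The sketch below assumes scans are given as an (N, 3V) array of flattened vertex coordinates; the function names are made up for this example:

```python
import numpy as np

def build_shape_space(scans: np.ndarray, n_components: int = 10):
    """PCA shape space: scans is an (N, 3V) array of registered meshes
    (V vertices, xyz per vertex, all meshes in correspondence)."""
    mean = scans.mean(axis=0)
    centered = scans - mean
    # SVD of the centered data yields the principal shape directions.
    _, sigma, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components], sigma[:n_components]

def synthesize_body(mean, components, coeffs):
    """New body shape = mean mesh + weighted sum of shape components."""
    return mean + coeffs @ components
```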

    Representing and Parameterizing Agent Behaviors

    The last few years have seen great maturation in understanding how to use computer graphics technology to portray 3D embodied characters or virtual humans. Unlike the off-line, animator-intensive methods used in the special effects industry, real-time embodied agents are expected to exist and interact with us live . They can be represent other people or function as autonomous helpers, teammates, or tutors enabling novel interactive educational and training applications. We should be able to interact and communicate with them through modalities we already use, such as language, facial expressions, and gesture. Various aspects and issues in real-time virtual humans will be discussed, including consistent parameterizations for gesture and facial actions using movement observation principles, and the representational basis for character believability, personality, and affect. We also describe a Parameterized Action Representation (PAR) that allows an agent to act, plan, and reason about its actions or actions of others. Besides embodying the semantics of human action, the PAR is designed for building future behaviors into autonomous agents and controlling the animation parameters that portray personality, mood, and affect in an embodied agent
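
    To make the idea concrete, here is a minimal sketch of what a PAR-like structure could look like in code; the field names and example values are assumptions for illustration, not the authors' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class PAR:
    """Illustrative Parameterized Action Representation. Field names
    are assumptions, not the published PAR schema."""
    action: str                     # e.g. "open"
    agent: str                      # who performs the action
    objects: list = field(default_factory=list)        # participants
    preconditions: list = field(default_factory=list)  # must hold before
    effects: list = field(default_factory=list)        # hold afterwards
    manner: dict = field(default_factory=dict)  # movement qualities that
                                                # portray mood/personality

# Hypothetical instance: a tutor agent opening a door, with made-up
# movement-quality parameters standing in for personality/affect control.
door_open = PAR(
    action="open", agent="tutor", objects=["door"],
    preconditions=["agent_near(door)", "closed(door)"],
    effects=["open(door)"],
    manner={"effort": 0.3, "speed": 0.8},
)
```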

    Facial actions as visual cues for personality

    What visual cues do human viewers use to assign personality characteristics to animated characters? While most facial animation systems associate facial actions with limited emotional states or speech content, the present paper explores the above question by relating the perception of personality to a wide variety of facial actions (e.g., head tilting/turning and eyebrow raising) and emotional expressions (e.g., smiles and frowns). Animated characters exhibiting these actions and expressions were presented to human viewers in brief videos. Human viewers rated the personalities of these characters using a well-standardized adjective rating system borrowed from the psychological literature. These personality descriptors are organized in a multidimensional space based on the orthogonal dimensions of Desire for Affiliation and Displays of Social Dominance. The main result of the personality rating data was that human viewers associated individual facial actions and emotional expressions with specific personality characteristics very reliably. In particular, dynamic facial actions such as head tilting and gaze aversion tended to spread ratings along the Dominance dimension, whereas facial expressions of contempt and smiling tended to spread ratings along the Affiliation dimension. Furthermore, increasing the frequency and intensity of the head actions increased the perceived Social Dominance of the characters. We interpret these results as pointing to a reliable link between animated facial actions/expressions and the personality attributions they evoke in human viewers. The paper shows how these findings are used in our facial animation system to create perceptually valid personality profiles based on Dominance and Affiliation as two parameters that control the facial actions of autonomous animated characters.
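
    As a hedged illustration of that final point, the sketch below maps a two-parameter (Dominance, Affiliation) profile to facial action controls in the directions the study reports; the specific linear mappings, numeric constants, and parameter names are invented for illustration and are not taken from the paper:

```python
def personality_to_facial_actions(dominance: float, affiliation: float):
    """Map a (dominance, affiliation) profile in [-1, 1]^2 to facial
    action controls in [0, 1]. The linear mappings are illustrative
    assumptions; the study reports only the direction of each link."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return {
        # Head tilts/turns were associated with Dominance: more
        # frequent and intense head actions read as more dominant.
        "head_action_frequency": clamp(0.5 + 0.5 * dominance),
        "head_action_intensity": clamp(0.5 + 0.5 * dominance),
        # Gaze aversion read as low Dominance.
        "gaze_aversion": clamp(0.5 - 0.5 * dominance),
        # Smiles vs. contempt spread ratings along Affiliation.
        "smile_intensity": clamp(affiliation),
        "contempt_intensity": clamp(-affiliation),
    }

# e.g. a dominant, affiliative character:
print(personality_to_facial_actions(0.8, 0.6))
```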

    Films, Affective Computing and Aesthetic Experience: Identifying Emotional and Aesthetic Highlights from Multimodal Signals in a Social Setting.

    Over the last few years, affective computing has been strengthening its ties with the humanities, exploring and building understanding of people’s responses to specific artistic multimedia stimuli. “Aesthetic experience” is acknowledged to be the subjective part of an artistic exposure, namely, the inner affective state of a person exposed to some artistic object. In this work, we describe ongoing research activities for studying the aesthetic experience of people exposed to movie artistic stimuli. To do so, this work focuses on the definition of emotional and aesthetic highlights in movies and studies people’s responses to them using physiological and behavioral signals in a social setting. In order to examine the suitability of multimodal signals for detecting highlights, we initially evaluate a supervised highlight detection system. Further, in order to provide insight into the reactions of people in a social setting during emotional and aesthetic highlights, we study two unsupervised systems. These systems are able to (a) measure the distance among the captured signals of multiple people using the dynamic time warping algorithm and (b) create a reaction profile for a group of people that indicates whether that group reacts at a given time. The results indicate that the proposed systems are suitable for detecting highlights in movies and for capturing some form of social interaction across different movie genres. Moreover, similar social interactions can be observed during exposure to emotional highlights and to some types of aesthetic highlights, such as those corresponding to technical or lighting choices of the director. Utilizing electrodermal activity measurements yields better performance than using acceleration measurements, whereas fusing the modalities does not appear to be beneficial in the majority of cases.
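
    The first unsupervised system rests on dynamic time warping. Below is a minimal textbook DTW distance between two 1-D signals (e.g., two viewers' electrodermal activity traces); it is a sketch of the standard algorithm, not the paper's exact implementation, which would likely add windowing constraints and use an optimized library:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])
```

    Averaging such pairwise distances over a time window for all pairs in a group gives one plausible way to build the kind of group reaction profile the abstract describes: smaller average distance would suggest synchronized reactions.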

    Computer Science at the University of Helsinki 1998


    Web-Based Dynamic Paintings: Real-Time Interactive Artworks in Web Using a 2.5D Pipeline

    In this work, we present a 2.5D pipeline approach to creating dynamic paintings that can be re-rendered interactively in real time on the Web. Using this 2.5D approach, any existing simple painting, such as a portrait, can be turned into an interactive, dynamic, web-based artwork. Our interactive system provides most global illumination effects, such as reflection, refraction, shadow, and subsurface scattering, by processing images. In our system, the scene is defined only by a set of images: (1) a shape image, (2) two diffuse images, (3) a background image, (4) a foreground image, and (5) a transparency image. The shape image is either a normal map or a height map. The two diffuse images are usually hand-painted; they are interpolated using illumination information. The transparency image defines the transparent and reflective regions that can reflect the foreground image and refract the background image, both of which are also hand-drawn. This framework, which mainly uses hand-drawn images, provides qualitatively convincing painterly global illumination effects such as reflection and refraction. We also include parameters that provide additional artistic controls. For instance, using our piecewise linear Fresnel function, it is possible to control the ratio of reflection to refraction. While this system is the result of a long line of research contributions, the art-directed Fresnel function, which provides physically plausible compositing of reflection and refraction with artistic control, is completely new, as are the art-directed warping equations that provide qualitatively convincing refraction and reflection effects with linearized artistic control. You can try our web-based system for interactive dynamic real-time paintings at http://mock3d.tamu.edu/.
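
    To suggest what an art-directed, piecewise linear Fresnel function might look like, here is a speculative sketch; the parameter names (f0, knee), the two-segment shape, and the compositing line are assumptions for illustration, not the paper's actual formula:

```python
def art_fresnel(cos_theta: float, f0: float = 0.04, knee: float = 0.5) -> float:
    """Piecewise-linear Fresnel-like blend weight (a guess at the idea,
    not the paper's exact function). Returns the reflection weight;
    (1 - weight) goes to refraction.
      f0   -- reflection at normal incidence (artistic control)
      knee -- cos(theta) below which reflection ramps toward 1
    """
    if cos_theta >= knee:
        # Flat segment: surface faces the viewer, mostly refraction.
        return f0
    # Linear ramp from f0 at the knee to full reflection at grazing angles.
    t = (knee - cos_theta) / knee
    return f0 + (1.0 - f0) * t

# Hypothetical compositing per pixel:
#   color = w * reflected_color + (1 - w) * refracted_color
# where w = art_fresnel(cos_theta); raising f0 or the knee biases the
# image toward reflection, which is the kind of artistic knob described.
```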

    Support Vector Machines for Anatomical Joint Constraint Modelling

    The accurate simulation of anatomical joint models is becoming increasingly important for both realistic animation and diagnostic medical applications. Recent models have exploited unit quaternions to eliminate singularities when modeling orientations between limbs at a joint. This has led to the development of quaternion-based joint constraint validation and correction methods. In this paper, a novel method for implicitly modeling unit quaternion joint constraints using Support Vector Machines (SVMs) is proposed, which attempts to address the limitations of current constraint validation approaches. Initial results show that the resulting SVMs are capable of modeling regular spherical constraints on the rotation of the limb.
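
    A minimal sketch of the general approach, assuming an RBF-kernel SVM from scikit-learn and a synthetic 60-degree spherical joint limit standing in for real motion data (the paper's actual training setup may differ):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def random_unit_quaternions(n: int) -> np.ndarray:
    q = rng.normal(size=(n, 4))
    return q / np.linalg.norm(q, axis=1, keepdims=True)

# Synthetic stand-in for a spherical joint limit: call a rotation
# "valid" when its angle stays under 60 degrees, i.e. the scalar part
# satisfies |w| = cos(angle / 2) > cos(30 degrees).
quats = random_unit_quaternions(2000)
labels = (np.abs(quats[:, 0]) > np.cos(np.radians(30))).astype(int)

# An RBF-kernel SVM learns the valid region implicitly from samples;
# this mirrors the idea of an implicit constraint model.
clf = SVC(kernel="rbf", gamma="scale").fit(quats, labels)

test = random_unit_quaternions(5)
print(clf.predict(test))  # 1 = within the joint limit, 0 = violates it
```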

    Three dimensional facial model adaptation

    This paper addresses the problem of adapting a generic 3D face model to a human face of which the frontal and profile views are given. Assuming that a set of feature points has been detected in both views, the adaptation procedure begins with a rigid transformation of the model that aims to minimize the distances of the 3D model feature nodes from the 3D coordinates calculated from the 2D feature points. Then, a non-rigid transformation displaces the feature nodes optimally close to their exact calculated positions, dragging their neighbors along in a way that does not deform the facial model unnaturally.
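
    The rigid initialization is a point-set alignment problem. A standard least-squares solution is the Kabsch/Procrustes fit sketched below; this is one common way to compute such a transform, not necessarily the authors' exact solver:

```python
import numpy as np

def rigid_fit(model_pts: np.ndarray, target_pts: np.ndarray):
    """Least-squares rigid transform (Kabsch): find rotation R and
    translation t minimizing sum ||R @ p + t - q||^2 over corresponding
    (N, 3) point sets p (model feature nodes) and q (3D feature points)."""
    mu_p = model_pts.mean(axis=0)
    mu_q = target_pts.mean(axis=0)
    H = (model_pts - mu_p).T @ (target_pts - mu_q)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```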