
    Sketching-out virtual humans: From 2D storyboarding to immediate 3D character animation

    Virtual beings play a remarkable role in today’s public entertainment, yet ordinary users remain confined to the audience because they lack the expertise, equipment, and computer skills to create them. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis through almost pure 2D sketching. A “creative model-based method”, which emulates the human perception process, is developed to generate 3D human bodies of varied sizes, shapes, and fat distributions. The current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. The system has been formally tested by a range of users on a Tablet PC; after minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
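    The “3D pose reconstruction” step of such a pipeline can be pictured with a classic scaled-orthography lift: if a bone has true length L and its 2D projection in the sketch has length l <= L, the depth offset between its endpoints is sqrt(L^2 - l^2). The following Python sketch is a minimal illustration under that assumption only; the joint names, skeleton, and bone lengths are hypothetical and not taken from the paper.

    import math
    from dataclasses import dataclass

    @dataclass
    class Joint:
        name: str
        x: float
        y: float
        z: float = 0.0

    # Skeleton as (parent, child, true bone length) triples, ordered so that
    # every parent is lifted before its children. Illustrative values only.
    BONES = [
        ("pelvis", "spine", 0.5),
        ("spine", "head", 0.4),
        ("spine", "l_hand", 0.7),
        ("spine", "r_hand", 0.7),
        ("pelvis", "l_foot", 0.9),
        ("pelvis", "r_foot", 0.9),
    ]

    def lift_to_3d(sketch):
        """Lift a 2D stick-figure sketch {joint: (x, y)} to a 3D pose under
        scaled orthography: a bone of true length L whose 2D projection has
        length l gets a depth offset of sqrt(L**2 - l**2) between its ends."""
        pose = {name: Joint(name, x, y) for name, (x, y) in sketch.items()}
        for parent, child, length in BONES:
            p, c = pose[parent], pose[child]
            proj = math.hypot(c.x - p.x, c.y - p.y)   # projected bone length
            c.z = p.z + math.sqrt(max(length ** 2 - proj ** 2, 0.0))
        return pose

    The sign of each depth offset is genuinely ambiguous in a single sketch, so a full system would resolve it with joint-limit heuristics or user interaction before fleshing out the body and mapping the skin.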

    Can gender categorization influence the perception of animated virtual humans?

    Animations have become increasingly realistic with the evolution of Computer Graphics (CG); human models and behaviors are now represented through animated virtual humans, sometimes with a high level of realism. Gender, in particular, is a characteristic closely tied to human identification, so virtual humans assigned a specific gender are generally given stereotyped representations through movements, clothes, hair, and colors, so that users read them as the designers intend. An important question is whether participants' perceptions change depending on how a virtual human is visually presented; findings in this area can help the industry guide the modeling and animation of virtual humans to deliver the intended impact on the audience. In this paper, we reproduce, through CG, a perceptual study that assesses gender bias in relation to a simulated baby. In the original study, two groups of people watched the same video of a baby reacting to the same stimuli, but one group was told the baby was female and the other group was told the same baby was male, producing different perceptions. The results of our study with virtual babies were similar to the findings with real babies: people's emotional responses changed depending on the character's gender attribute, even though the only difference between conditions was the baby's name. Our research indicates that merely stating the name of a virtual human can be enough to create a gender perception that affects participants' emotional responses.

    What is So Special About Contemporary CG Faces? Semiotics of MetaHumans

    This paper analyses the features of “MetaHuman Creator”, the 2021 software for the creation of ultra-realistic digital characters, and reflects on the causes of its perceived realism in order to understand whether the faces produced with such software represent an actual novelty from an academic standpoint. This realism is first defined as the result of semio-cognitive processes that trigger interpretative habits specifically related to faces. These habits are then related to the main properties of any realistic face: being face-looking, face-meaning, and face-acting. These properties, in turn, are put in relation with our interactions with faces in terms of face detection, face recognition, face reading, and face agency. Within this theoretical framework, we relate the characteristics of these artificial faces to such interpretative habits. To do so, we first examine the technological features behind both the software and the digital faces it produces. This analysis highlights four main points of interest: the mathematical accuracy, the scanned database, the high level of detail, and the transformative capacities of these artificial faces. We then relate these characteristics to the cultural and cognitive aspects involved in recognising and granting meaning to faces. This reveals how MetaHuman faces differ from previous artificial faces in terms of indexicality, intersubjectivity, informativity, and irreducibility, but it also reveals some limits of this reality effect in terms of intentionality and historical context. This examination brings us to conclude that MetaHuman faces are qualitatively different from previous artificial faces and, in light of their potentials and limits, to highlight four main lines of future research based on our findings.

    Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces

    Due to varied personal, social, or even cultural situations, people sometimes conceal or mask their true emotions. These suppressed emotions can be expressed in a very subtle way through brief movements called microexpressions. Inspired by recent psychological experiments, we investigate human subjects’ perception of hidden emotions in virtual faces. We created animations of virtual faces showing facial expressions and inserted brief secondary expressions into some sequences, in order to convey a subtle second emotion in the character. Our evaluation methodology consists of two sets of experiments with three different sets of questions. The first experiment verifies that the accuracy and concordance of participants’ responses to synthetic faces match the empirical results obtained with photographs of real people by X.-b. Shen, Q. Wu, and X.-l. Fu (2012), “Effects of the duration of expressions on the recognition of microexpressions,” Journal of Zhejiang University Science B, 13(3), 221–230. The second experiment verifies whether participants could perceive and identify primary and secondary emotions in virtual faces. The third experiment evaluates participants’ perception of the realism, deceit, and valence of the emotions. Our results show that most participants recognised the foreground (macro) emotion and, most of the time, perceived the presence of the second (micro) emotion in the animations, although they did not identify it correctly in some samples. This experiment exposes the benefits of conveying microexpressions in computer graphics characters, as they may visually enhance a character’s emotional depth through subliminal microexpression cues and consequently increase its perceived social complexity and believability.
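    At the animation level, inserting a brief secondary expression like those used in these stimuli can be thought of as splicing a short burst into a blend-shape weight timeline. The Python sketch below is a hypothetical illustration, not the authors’ pipeline; it assumes simple per-frame blend-shape weights and a burst length in the roughly 1/25 to 1/2 second range usually cited for microexpressions.

    import math

    def expression_timeline(duration_s, fps, macro, micro,
                            micro_onset_s, micro_len_s=0.2):
        """Per-frame blend-shape weight curves for a macro emotion with a
        brief secondary (micro) expression spliced in. All parameter values
        are illustrative."""
        frames = int(duration_s * fps)
        weights = {macro: [1.0] * frames, micro: [0.0] * frames}
        start = int(micro_onset_s * fps)
        end = min(frames, start + max(int(micro_len_s * fps), 1))
        for f in range(start, end):
            t = (f - start) / max(end - start - 1, 1)  # 0..1 across the burst
            burst = math.sin(math.pi * t)              # ease in and out
            weights[micro][f] = burst
            weights[macro][f] = 1.0 - burst            # keep total weight at 1
        return weights

    # e.g. 3 s of "happiness" at 30 fps with a 0.2 s "contempt" flash at 1.5 s:
    curves = expression_timeline(3.0, 30, "happiness", "contempt", 1.5)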

    Virtual humans: thirty years of research, what next?

    In this paper, we present research results and future challenges in creating realistic and believable Virtual Humans. To realise these modelling goals, real-time realistic representation is essential, but we also need interactive and perceptive Virtual Humans to populate the Virtual Worlds. Three levels of modelling should be considered to create believable Virtual Humans: 1) realistic appearance modelling, 2) realistic, smooth, and flexible motion modelling, and 3) realistic high-level behaviour modelling. First, the issues of creating virtual humans with better skeletons and realistic deformable bodies are illustrated. To reach a believable level of behaviour, the challenge lies in generating, on the fly, flexible motion and complex behaviours of Virtual Humans inside their environments, using a realistic perception of the environment. Interactivity and group behaviour are also important to believable Virtual Humans, with challenges in creating believable relationships between real and virtual humans based on emotion and personality, and in simulating realistic and believable behaviours of groups and crowds. Finally, issues in generating realistically clothed and haired virtual people are presented.

    Three Dimensional Modeling and Animation of Facial Expressions

    Facial expression and animation are important aspects of 3D environments featuring human characters. Such animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed, animated facial expressions; the system delivers realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes shape as it drips down the face; the other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face while remaining an individual object. Both methods broaden CG technique and increase the realism of facial expressions. A new method to automatically place bones on facial/head models, speeding up the rigging of a human face, is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between its parts, are grouped; the average distance between pairs of vertices is used to place the head bones, and, to set the bones in facial regions of varying mesh density, the mean value of the vertices in each group is measured. The time saved with this method is significant. Finally, a novel method produces realistic expressions and animations by transferring an existing expression to a new facial model. The approach transforms the source model into the target model, giving the target the same topology as the source; displacement vectors are then calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
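    The displacement-vector transfer described in the last step can be illustrated compactly. The Python sketch below is a minimal reading of that idea, not the dissertation’s implementation: it assumes the source and target meshes are already in vertex correspondence (the step the abstract describes as transforming the source into the target) and simply carries per-vertex displacements across.

    import numpy as np

    def transfer_expression(src_neutral: np.ndarray,  # (Ns, 3) source, neutral
                            src_expr: np.ndarray,     # (Ns, 3) source, expressive
                            tgt_neutral: np.ndarray,  # (Nt, 3) target, neutral
                            corr: np.ndarray          # (Nt,) source index per
                            ) -> np.ndarray:          #        target vertex
        """Carry an expression from source to target via displacement
        vectors: each source vertex's displacement between its neutral and
        expressive poses is added to the corresponding target vertex. The
        correspondence array `corr` is assumed given here."""
        disp = src_expr - src_neutral      # per-vertex displacement field
        return tgt_neutral + disp[corr]    # expressive target, shape (Nt, 3)

    A raw copy of displacements only behaves well when the two faces have similar proportions; a fuller implementation would rescale or reorient each displacement to the target’s local geometry, which is presumably the role of the constrained spatial relationships mentioned above.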