
    Animating Virtual Human for Virtual Batik Modeling

    This paper describes the development of an animated virtual human for a virtual batik modeling project. The objectives are to animate the virtual human, to map the cloth onto the virtual human body, to present the batik cloth, and to evaluate the application in terms of realism of the virtual human's look, realism of the virtual human's movement, realism of the 3D scene, application suitability, application usability, fashion suitability, and user acceptance. The final goal is an animated virtual human for virtual batik modeling. The project comprises three essential phases: research and analysis (data collection on modeling and animation techniques), development (modeling and animating the virtual human, mapping the cloth to the body, and adding music), and evaluation (of realism of the virtual human's look, realism of the virtual human's movement, realism of the props, application suitability, application usability, fashion suitability, and user acceptance). Application usability received the highest score, at 90%, indicating that the application is useful to its users. In conclusion, the project met its objectives, and realism was achieved by using suitable modeling and animation techniques.
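
    As a rough, hypothetical sketch of the cloth-mapping step described above (the paper does not specify its technique), the following Python binds each cloth vertex to its nearest body joint with rigid skinning; the joint names and vertex data are invented.

    import numpy as np

    # Hypothetical joint positions for a simplified virtual human (not from the paper).
    joints = {
        "spine":  np.array([0.0, 1.2, 0.0]),
        "hip":    np.array([0.0, 0.9, 0.0]),
        "knee_l": np.array([-0.1, 0.5, 0.0]),
        "knee_r": np.array([0.1, 0.5, 0.0]),
    }

    def bind_cloth_to_body(cloth_vertices):
        """Assign each cloth vertex to its nearest joint (rigid skinning)."""
        names = list(joints)
        positions = np.stack([joints[n] for n in names])
        bindings = []
        for v in cloth_vertices:
            distances = np.linalg.norm(positions - v, axis=1)
            bindings.append(names[int(np.argmin(distances))])
        return bindings

    # A few sample batik-cloth vertices draped around the body.
    cloth = np.array([[0.0, 1.1, 0.1], [0.05, 0.6, 0.1]])
    print(bind_cloth_to_body(cloth))  # e.g. ['spine', 'knee_r']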

    Virtual humans and Photorealism: The effect of photorealism of interactive virtual humans in clinical virtual environment on affective responses

    The ability of realistic versus stylized representations of virtual characters to elicit emotions in users has been an open question for researchers and artists alike. We designed and performed a between-subjects experiment using a medical virtual reality simulation to study the differences in the emotions aroused in participants while interacting with realistic and stylized virtual characters. The experiment included three conditions, each presenting a different representation of the virtual character: photo-realistic, non-photorealistic cartoon-shaded, and non-photorealistic charcoal-sketch. The simulation used for the experiment, the Rapid Response Training System, was developed to train nurses to identify symptoms of rapid deterioration in patients. The emotional impact of interacting with the simulation was measured via both subjective and objective metrics. Quantitative objective measures were gathered using skin Electrodermal Activity (EDA) sensors, and quantitative subjective measures included the Differential Emotion Survey (DES-IV), the Positive and Negative Affect Schedule (PANAS), and a co-presence (social presence) questionnaire. The emotional state of the participants was analyzed across four distinct time steps, during which the medical condition of the virtual patient deteriorated, and was contrasted with a baseline affective state. The EDA data indicated that the mean level of arousal was highest in the charcoal-sketch condition and lowest in the realistic condition, with responses in the cartoon-shaded condition in between. Mean arousal responses were also consistent across all time steps in both the cartoon-shaded and charcoal-sketch conditions, while the mean arousal response of participants in the realistic condition dropped significantly from time step 1 to time step 2, corresponding to the deterioration of the virtual patient. Mean DES scores suggest that participants in the realistic condition had a stronger emotional response than participants in either non-realistic condition; within the non-realistic conditions, participants in the cartoon-shaded condition appeared to respond more strongly than those in the charcoal-sketch condition.
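
    A minimal sketch of the arousal analysis described above, assuming per-participant EDA readings arranged by condition and time step; the numbers and layout are invented, not the study's data.

    import numpy as np

    # Hypothetical EDA arousal readings: participants x 4 time steps, per condition.
    eda = {
        "realistic":       np.array([[0.62, 0.41, 0.43, 0.40], [0.58, 0.44, 0.42, 0.41]]),
        "cartoon-shaded":  np.array([[0.55, 0.54, 0.56, 0.55], [0.57, 0.56, 0.55, 0.56]]),
        "charcoal-sketch": np.array([[0.71, 0.70, 0.72, 0.71], [0.69, 0.70, 0.71, 0.70]]),
    }
    baseline = 0.35  # assumed resting-state arousal level

    for condition, readings in eda.items():
        # Mean arousal per time step, relative to the baseline affective state.
        mean_per_step = readings.mean(axis=0) - baseline
        print(condition, np.round(mean_per_step, 3))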

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. Such animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed, animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes shape as it drips down the face; the other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face while remaining an individual object. Both methods broaden computer graphics and increase the realism of facial expressions. A new method to automatically place bones on facial/head models, speeding up the rigging of a human face, is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between its parts, are grouped. The average distance between pairs of vertices is used to place the head bones, and the mean position of the vertices in each group is used to place the facial bones at multiple densities. The time saved with this method is significant. Finally, a method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach transforms the source model into the target model, which then has the same topology as the source model; displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
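
    A minimal sketch of the displacement-vector transfer described above, assuming the target has already been transformed to share the source topology; the vertex arrays are invented.

    import numpy as np

    def transfer_expression(source_neutral, source_expr, target_neutral):
        """Transfer an expression by applying the source model's per-vertex
        displacement vectors to a target model with the same topology."""
        displacement = source_expr - source_neutral  # per-vertex offsets
        return target_neutral + displacement

    # Tiny invented face patches with identical vertex ordering.
    src_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    src_smile   = np.array([[0.0, 0.1, 0.0], [1.0, 0.15, 0.05]])
    tgt_neutral = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0]])
    print(transfer_expression(src_neutral, src_smile, tgt_neutral))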

    Senescence: An Aging based Character Simulation Framework

    The 'Senescence' framework is a character simulation plug-in for Maya that can be used for rigging and skinning muscle-deformer-based humanoid characters with support for aging. The framework was developed using Python, Maya Embedded Language, and PyQt. Its main target users are Character Technical Directors, Technical Artists, Riggers, and Animators in the production pipelines of visual effects studios. Characters simulated with 'Senescence' were studied via a survey to understand how well the intended age was perceived by the audience. The survey results could not reject one of our null hypotheses, meaning that the difference between the simulated age groups of the character was not perceived well by the participants. However, there was a difference in the perception of the character's simulated age between Animators and Non-Animators: the untrained audience perceived a difference in the simulated character's age but was unable to relate it to a specific age group.
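
    The abstract does not specify the statistical test used; below is a plausible sketch of the Animator versus Non-Animator comparison, assuming a Mann-Whitney U test on invented perceived-age ratings.

    from scipy.stats import mannwhitneyu

    # Invented perceived-age ratings (1 = youngest age group ... 5 = oldest).
    animators     = [4, 5, 4, 3, 5, 4, 4]
    non_animators = [3, 2, 4, 3, 2, 3, 3]

    # Two-sided test of whether the two audiences rate simulated age differently.
    stat, p_value = mannwhitneyu(animators, non_animators, alternative="two-sided")
    print(f"U = {stat}, p = {p_value:.3f}")  # a small p would suggest a perception gap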

    Laughter and smiling facial expression modelling for the generation of virtual affective behavior

    Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for the generation of facial expressions associated with laughter and smiling, in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is presented, listing the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.
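
    The paper's model itself is not reproduced here; as a generic illustration of synthesizing a smile/laugh expression on a virtual character, this sketch blends a neutral face toward a laugh blendshape target by an intensity parameter (all data invented).

    import numpy as np

    def blend_expression(neutral, laugh_target, intensity):
        """Linearly blend a neutral face toward a laugh/smile blendshape target.
        intensity in [0, 1]: 0 = neutral, 1 = full laugh."""
        intensity = np.clip(intensity, 0.0, 1.0)
        return (1.0 - intensity) * neutral + intensity * laugh_target

    # Invented mouth-corner vertices for a neutral face and a full laugh.
    neutral = np.array([[-0.3, 0.0, 0.0], [0.3, 0.0, 0.0]])
    laugh   = np.array([[-0.35, 0.08, 0.0], [0.35, 0.08, 0.0]])
    print(blend_expression(neutral, laugh, 0.5))  # a half-intensity smile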

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Beyond LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware, and we address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
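
    A minimal sketch of the distance-based runtime LoD selection the survey discusses, choosing between the polygonal and image-based representations mentioned above; the thresholds and labels are invented.

    import math

    # Invented distance thresholds (metres) separating LoD representations:
    # full polygonal mesh, simplified mesh, and image-based impostor.
    LOD_THRESHOLDS = [(10.0, "polygon-high"), (30.0, "polygon-low"), (float("inf"), "impostor")]

    def select_lod(character_pos, camera_pos):
        """Pick a level of detail from the camera-to-character distance."""
        distance = math.dist(character_pos, camera_pos)
        for threshold, lod in LOD_THRESHOLDS:
            if distance <= threshold:
                return lod

    crowd = [(2.0, 0.0, 5.0), (12.0, 0.0, 20.0), (50.0, 0.0, 80.0)]
    camera = (0.0, 1.7, 0.0)
    print([select_lod(c, camera) for c in crowd])  # ['polygon-high', 'polygon-low', 'impostor']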

    Life-Sized Audiovisual Spatial Social Scenes with Multiple Characters: MARC & SMART-I²

    With the increasing use of virtual characters in virtual and mixed reality settings, coordinating realism in audiovisual rendering with expressive virtual characters becomes a key issue. In this paper we introduce a new system that combines two existing systems to tackle the issue of realism and high quality in audiovisual rendering and life-sized expressive characters. The goal of the resulting SMART-MARC platform is to investigate the impact of realism on multiple levels: spatial audiovisual rendering of a scene, and the appearance and expressive behaviors of virtual characters. Potential interactive applications include mediated communication in virtual worlds, therapy, games, the arts, and e-learning. Future experimental studies will focus on 3D audio/visual coherence, social perception, and ecologically valid interaction scenes.

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region, obtained from off-the-shelf methods. From this partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely on synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
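
    A simplified sketch of the final step described above, applying an estimated vector displacement map to a low-resolution smooth body model; the map, mesh, and nearest-texel lookup are invented stand-ins for the paper's pipeline.

    import numpy as np

    def apply_vector_displacement(vertices, uvs, displacement_map):
        """Offset each vertex of a smooth body model by the displacement vector
        sampled at its UV coordinate (nearest-texel lookup for simplicity)."""
        h, w, _ = displacement_map.shape
        texels = (uvs * [w - 1, h - 1]).round().astype(int)
        offsets = displacement_map[texels[:, 1], texels[:, 0]]
        return vertices + offsets

    # Invented 2x2 displacement map and a two-vertex mesh patch.
    disp_map = np.array([[[0.0, 0.0, 0.01], [0.0, 0.0, 0.03]],
                         [[0.0, 0.0, 0.02], [0.0, 0.0, 0.05]]])
    verts = np.array([[0.0, 1.0, 0.1], [0.2, 1.0, 0.1]])
    uvs   = np.array([[0.0, 0.0], [1.0, 1.0]])
    print(apply_vector_displacement(verts, uvs, disp_map))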

    Real-time simulation and visualisation of cloth using edge-based adaptive meshes

    Real-time rendering and animation of realistic virtual environments and characters have progressed at a great pace, following advances in computer graphics hardware over the last decade. Cloth simulation plays an ever more important role in the quest to improve the realism of virtual environments, and the real-time simulation of cloth and clothing is important for many applications such as virtual reality, crowd simulation, games, and software for online clothes shopping. A large number of polygons is necessary to depict the highly flexible nature of cloth, with its wrinkling and frequent changes in curvature. Combined with the physical calculations that model the deformations, the effort required to simulate cloth in detail is computationally expensive, making realistic simulation at interactive frame rates difficult. Real-time cloth simulations can therefore lack quality and realism compared to their offline counterparts, since coarse meshes must often be employed for performance reasons. The focus of this thesis is to develop techniques for the real-time simulation of realistic cloth and clothing. Adaptive meshes have previously been developed to act as a bridge between low- and high-polygon meshes, aiming to adaptively exploit variations in the shape of the cloth: mesh complexity is dynamically increased or refined to balance quality against computational cost during a simulation. A limitation of many approaches is that they do not consider the decimation or coarsening of previously refined areas, or are otherwise not fast enough for real-time applications. This thesis develops a novel edge-based adaptive mesh for the fast incremental refinement and coarsening of a triangular mesh. A mass-spring network is integrated into the mesh, permitting real-time adaptive simulation of cloth, and techniques are developed for simulating clothing on an animated character.
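
    A minimal sketch of one step of the mass-spring cloth network mentioned above, using Verlet integration; the constants and particle data are invented, and the thesis's edge-based adaptive refinement is not reproduced.

    import numpy as np

    # Invented 3-particle cloth strip: positions, previous positions, and springs.
    pos      = np.array([[0.0, 1.0, 0.0], [0.1, 1.0, 0.0], [0.2, 1.0, 0.0]])
    prev_pos = pos.copy()
    springs  = [(0, 1), (1, 2)]  # edges of the mass-spring network
    rest_len, stiffness = 0.1, 50.0
    gravity, dt = np.array([0.0, -9.81, 0.0]), 1.0 / 60.0

    def step():
        """One Verlet integration step of the mass-spring cloth (unit masses)."""
        global pos, prev_pos
        forces = np.tile(gravity, (len(pos), 1))
        for i, j in springs:
            d = pos[j] - pos[i]
            length = np.linalg.norm(d)
            f = stiffness * (length - rest_len) * d / length  # Hooke's law
            forces[i] += f
            forces[j] -= f
        pos, prev_pos = 2 * pos - prev_pos + forces * dt**2, pos
        pos[0] = prev_pos[0]  # pin the first particle in place

    step()
    print(pos)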