2,442 research outputs found

    Fully Automatic Expression-Invariant Face Correspondence

    We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions. Our fully automatic approach does not require any manually placed markers on the scan. Instead, the approach learns the locations of a set of landmarks present in a database and uses this knowledge to automatically predict the locations of these landmarks on a newly available scan. The predicted landmarks are then used to compute point-to-point correspondences between a template model and the newly available scan. To accurately fit the expression of the template to the expression of the scan, we use a blendshape model as the template. Our algorithm was tested on a database of human faces of different ethnic groups with strongly varying expressions. Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models.
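A blendshape template like the one described above has a compact linear form: the fitted face is the neutral mesh plus a weighted sum of per-expression displacement fields. A minimal NumPy sketch of evaluating such a model (the array shapes and toy data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def blend_shapes(neutral, deltas, weights):
    """Evaluate a linear blendshape model: the neutral mesh plus a
    weighted sum of per-expression vertex offsets."""
    # neutral: (V, 3) rest-pose vertex positions
    # deltas:  (K, V, 3) expression offsets from the neutral face
    # weights: (K,) blend weights, typically in [0, 1]
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 2 vertices, 1 hypothetical "smile" shape at half strength
neutral = np.zeros((2, 3))
deltas = np.array([[[0.0, 1.0, 0.0],
                    [0.0, -1.0, 0.0]]])       # shape (1, 2, 3)
fitted = blend_shapes(neutral, deltas, np.array([0.5]))
```

Fitting the template expression to a scan then reduces to solving for the weight vector that best matches the predicted landmark positions.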

    Senescence: An Aging based Character Simulation Framework

    The 'Senescence' framework is a character simulation plug-in for Maya that can be used for rigging and skinning muscle-deformer-based humanoid characters with support for aging. The framework was developed using Python, the Maya Embedded Language, and PyQt. Its main targeted users are the Character Technical Directors, Technical Artists, Riggers, and Animators in the production pipeline of visual effects studios. The characters simulated using 'Senescence' were studied through a survey to understand how well the intended age was perceived by the audience. The survey results could not reject one of our null hypotheses, which means that the difference between the simulated age groups of the character was not perceived well by the participants. However, there is a difference between Animators and Non-Animators in how the character's simulated age is perceived. Therefore, the difference in the simulated character's age was perceived by an untrained audience, but the audience was unable to relate it to a specific age group.

    A tutorial on motion capture driven character animation

    Motion capture (MoCap) is an increasingly important technique for creating realistic human motion for animation. However, MoCap data are noisy, and without elaborate manual processing the resulting animation is often inaccurate and unrealistic. In this paper, we discuss practical issues in MoCap-driven character animation, particularly when using commercial toolkits, and highlight open topics in this field for future research. MoCap animations created in this project will be demonstrated at the conference.

    Easy Rigging of Face by Automatic Registration and Transfer of Skinning Parameters

    Preparing a facial mesh for animation requires a laborious manual rigging process. The rig specifies how the input animation data deforms the surface and allows artists to manipulate a character. We present a method that automatically rigs a facial mesh based on Radial Basis Functions and a linear blend skinning approach. Our approach transfers the skinning parameters (feature points and their envelopes, i.e. point-vertex weights) of an already-rigged reference facial mesh (source) to the chosen facial mesh (target) by computing an automatic registration between the two meshes. There is no need to manually mark correspondences between the source and target meshes. As a result, inexperienced artists can automatically rig facial meshes and start animating their 3D characters right away, driven for instance by motion capture data.
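The linear blend skinning approach named above deforms each vertex by a weighted combination of the feature points' (bones') transforms, so transferring the per-vertex weights is what makes the target mesh animatable. A minimal NumPy sketch of the skinning step itself (the array shapes and the toy transform are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def linear_blend_skin(vertices, weights, transforms):
    """Linear blend skinning: deform each vertex by a convex
    combination of homogeneous 4x4 transforms."""
    # vertices:   (V, 3) rest-pose positions
    # weights:    (V, J) per-vertex weights; each row sums to 1
    # transforms: (J, 4, 4) one transform per feature point / bone
    homog = np.hstack([vertices, np.ones((len(vertices), 1))])  # (V, 4)
    blended = np.einsum('vj,jab->vab', weights, transforms)     # (V, 4, 4)
    out = np.einsum('vab,vb->va', blended, homog)
    return out[:, :3]

# Hypothetical: one vertex fully bound to a translation by (1, 0, 0)
T = np.eye(4)
T[0, 3] = 1.0
v = linear_blend_skin(np.zeros((1, 3)), np.ones((1, 1)), T[None])
```

In the transfer setting, the registration supplies the source-to-target vertex mapping, and the weight rows are carried across (e.g. interpolated with the same Radial Basis Functions) rather than painted by hand.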

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
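Runtime LoD selection of the kind surveyed here is often driven by camera distance: near agents get the full animated mesh, distant ones a simplified mesh or image-based impostor. A minimal sketch of distance-thresholded selection (the thresholds and LoD labels are illustrative assumptions, not values from the survey):

```python
def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Map camera distance to a level of detail:
    0 = full animated mesh, 1 = simplified mesh,
    2 = image-based impostor, 3 = culled entirely.
    Thresholds are hypothetical tuning values."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

# One LoD decision per agent, e.g. once per frame for the whole crowd
lods = [select_lod(d) for d in (5.0, 20.0, 50.0, 200.0)]  # → [0, 1, 2, 3]
```

Real systems typically refine this with screen-space projected size and hysteresis to avoid popping when agents hover near a threshold.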

    A Survey of Computer Graphics Facial Animation Methods: Comparing Traditional Approaches to Machine Learning Methods

    Human communication relies on facial expression to denote mood, sentiment, and intent. Realistic facial animation of computer graphics models of human faces can be difficult to achieve because of the many details that must be approximated to generate believable facial expressions. Many theoretical approaches have been researched and implemented to create increasingly accurate animations that can effectively portray human emotions. Even though many of these approaches can generate realistic-looking expressions, they typically require substantial artistic intervention to achieve a believable result. To reduce this effort, new approaches that utilize machine learning are being researched. This survey paper summarizes over 20 research papers related to facial animation, compares traditional animation approaches to newer machine learning methods, and highlights the strengths, weaknesses, and use cases of each approach.

    The Opportunity: A 3D Animation About Karma

    The Opportunity is a three-dimensional animation that illustrates the idea that by helping others you can inadvertently help yourself. The story is told using an animated sunflower, bees, a tulip, and a farmer. In a sunflower garden, a sunflower tries to entice a bee to pollinate it, but a tulip (that is not supposed to be in the sunflower garden) attracts all of the bees. The tulip sees the farmer coming to weed out any unwanted plants. Seeing the fear in the tulip's eyes, the sunflower helps disguise the tulip. The farmer is tricked and the tulip is saved. With the tulip disguised, the bee chooses to pollinate the sunflower instead. Through this story, the goal is to show that the good in helping others does not stop at the recipient getting what they want; eventually the good deed is repaid without that being the original intent. According to Dictionary.com, karma is defined as "an action, seen as bringing upon oneself inevitable results, good or bad, either in this life or in a reincarnation." This is the basic underlying principle of the story. http://markreisch.blogspot.com The final animation exhibits a four-minute visual story.