
    Learning from the Artist: Theory and Practice of Example-Based Character Deformation

    Movie and game production is very laborious, frequently involving hundreds of person-years for a single project. At present this work is difficult to fully automate, since it involves subjective and artistic judgments. Broadly speaking, in this thesis we explore an approach that works with the artist, accelerating their work without attempting to replace them. More specifically, we describe an “example-based” approach, in which artists provide examples of the desired shapes of the character, and the results gradually improve as more examples are given. Since a character’s skin shape deforms as the pose or expression changes, our particular problem will be termed character deformation. The overall goal of this thesis is to contribute a complete investigation and development of an example-based approach to character deformation. A central observation guiding this research is that character animation can be formulated as a high-dimensional problem, rather than the two- or three-dimensional viewpoint that is commonly adopted in computer graphics. A second observation guiding our inquiry is that statistical learning concepts are relevant. We show that example-based character animation algorithms can be informed, developed, and improved using these observations. This thesis provides definitive surveys of example-based facial and body skin deformation. It analyzes the two leading families of example-based character deformation algorithms from the point of view of statistical regression, showing that a wide variety of existing tools in machine learning are applicable to our problem. We also identify several techniques that are not suitable due to the nature of the training data and the high-dimensional nature of this regression problem. We evaluate the design decisions underlying these example-based algorithms, thus providing the groundwork for a “best practice” choice of specific algorithms. The thesis develops several new algorithms for accelerating example-based facial animation. The first algorithm allows unspecified degrees of freedom to be automatically determined based on the style of previous, completed animations. A second algorithm allows rapid editing and control of the process of transferring motion capture of a human actor to a computer graphics character. The thesis identifies and develops several unpublished relations between the underlying mathematical techniques. Lastly, it provides novel tutorial derivations of several mathematical concepts, using only the linear algebra tools that are likely to be familiar to experts in computer graphics. Portions of the research in this thesis have been published in eight papers, with two appearing in premier forums in the field.
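    As an illustration of the regression viewpoint, pose-space deformation (one of the example-based families the thesis surveys) can be written as scattered-data interpolation from pose parameters to sculpted vertex corrections. The following minimal Python sketch uses Gaussian radial basis functions; the kernel width and toy data are assumptions, and this is a sketch of the general technique rather than the thesis's own implementation.

```python
import numpy as np

def fit_psd(poses, deltas, sigma=1.0):
    """Fit RBF weights mapping example poses to vertex corrections.

    poses:  (n, d) array of example pose parameters (e.g. joint angles).
    deltas: (n, 3v) array of sculpted corrections at those poses.
    """
    d2 = ((poses[:, None, :] - poses[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma**2))    # Gaussian kernel matrix
    weights = np.linalg.solve(K, deltas)  # interpolate the examples exactly
    return weights

def eval_psd(pose, poses, weights, sigma=1.0):
    """Interpolated correction at a new pose."""
    d2 = ((poses - pose) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * sigma**2))
    return k @ weights

# Toy example: two 1-D poses, corrections for two vertices (6 numbers each).
poses = np.array([[0.0], [1.0]])
deltas = np.array([[0.0] * 6, [0.1] * 6])
w = fit_psd(poses, deltas)
print(eval_psd(np.array([0.5]), poses, w))
```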

    Research on Methods for Improving the Representation of 3DCG Characters and Their Real-Time Manipulation

    Waseda University degree record number: Shin 8176 (Waseda University)

    Realtime Face Tracking and Animation

    Capturing and processing human geometry, appearance, and motion is at the core of computer graphics, computer vision, and human-computer interaction. The high complexity of human geometry and motion dynamics, and the high sensitivity of the human visual system to variations and subtleties in faces and bodies, make the 3D acquisition and reconstruction of humans in motion a challenging task. Digital humans are often created through a combination of 3D scanning, appearance acquisition, and motion capture, leading to stunning results in recent feature films. However, these methods typically require complex acquisition systems and substantial manual post-processing. As a result, creating and animating high-quality digital avatars entails long turn-around times and substantial production costs. Recent technological advances in RGB-D devices, such as Microsoft Kinect, brought new hope for realtime, portable, and affordable systems for capturing facial expressions as well as hand and body motions. RGB-D devices typically capture an image and a depth map, which makes it possible to formulate the motion tracking problem as a 2D/3D non-rigid registration of a deformable model to the input data. We introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization, leading to unprecedented face tracking quality on a low-cost consumer-level device. The main drawback of this approach in the context of consumer applications is the need for offline user-specific training: robust and efficient tracking is achieved by building an accurate 3D expression model of the user's face, scanned in a predefined set of facial expressions. We extended this approach to remove the need for user-specific training or calibration, or any other form of manual assistance, by building a user-specific dynamic 3D face model online. To complement the realtime face tracking and modeling algorithm, we developed a novel system for animation retargeting that learns a high-quality mapping between motion capture data and arbitrary target characters. We addressed one of the main challenges of existing example-based retargeting methods, the need for a large number of accurate training examples to define the correspondence between source and target expression spaces. We showed that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e. facial expressions in the source or target space without corresponding poses. Finally, we present a novel realtime physics-based animation technique that can simulate a large range of deformable materials such as fat, flesh, hair, or muscles. This approach could be used to produce more lifelike animations by enhancing animated avatars with secondary effects. We believe that the realtime face tracking and animation pipeline presented in this thesis has the potential to inspire much future research in the area of computer-generated animation. Several ideas presented in this thesis have already been successfully used in industry, and this work gave birth to the startup company faceshift AG.
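    The single optimization mentioned above can be summarized as a nonlinear least-squares energy that sums geometric registration, texture registration, and animation-prior terms. The sketch below is a hedged illustration of such an energy, not the thesis's actual formulation; the residual functions, weights, and the Gaussian form of the prior are assumptions.

```python
import numpy as np

def tracking_energy(x, depth_residual, color_residual, prior_mean, prior_prec,
                    w_geom=1.0, w_tex=0.5, w_prior=0.1):
    """Combined 2D/3D registration energy for one frame (illustrative).

    x: parameter vector being optimized (e.g. blendshape weights + rigid pose).
    depth_residual(x): distances of the deformed model to the captured depth map.
    color_residual(x): per-pixel intensity differences against the RGB image.
    prior_mean, prior_prec: hypothetical Gaussian animation prior (mean and
        precision) learned from pre-recorded sequences.
    """
    e_geom = w_geom * np.sum(depth_residual(x) ** 2)
    e_tex = w_tex * np.sum(color_residual(x) ** 2)
    d = x - prior_mean
    e_prior = w_prior * d @ prior_prec @ d
    return e_geom + e_tex + e_prior

# Toy usage with zero residuals and an identity prior.
x0 = np.array([0.2, -0.1, 0.0])
E = tracking_energy(x0, lambda x: np.zeros(10), lambda x: np.zeros(10),
                    prior_mean=np.zeros(3), prior_prec=np.eye(3))
print(E)  # only the prior term contributes here
```

    In practice such an energy would be minimized per frame with Gauss-Newton or a similar solver, warm-started from the previous frame's solution.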

    PDE surface-represented facial blendshapes

    Partial differential equation (PDE)-based geometric modelling and computer animation have been extensively investigated over the last three decades. However, PDE surface-represented facial blendshapes have not been investigated. In this paper, we propose a new method for facial blendshapes using curve-defined and Fourier series-represented PDE surfaces. To develop this new method, we first design a curve template and use it to extract curves from polygon facial models. Then, we propose a second-order partial differential equation and combine it with the constraints of the extracted curves as boundary curves to develop a mathematical model of curve-defined PDE surfaces. After that, we introduce a generalized Fourier series representation to solve the second-order partial differential equation subject to the constraints of the extracted boundary curves, and obtain an analytical mathematical expression for curve-defined and Fourier series-represented PDE surfaces. This expression is used to develop a new PDE surface-based interpolation method for creating new facial models from one source facial model and one target facial model, and a new PDE surface-based blending method for creating more new facial models from one source facial model and many target facial models. Examples are presented to demonstrate the effectiveness and applications of the proposed method in 3D facial blendshapes.
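    For concreteness, the following is an illustrative instance of such a construction (the paper's exact equation and parameterization are not reproduced here): a second-order PDE over surface parameters (u, v), subject to extracted boundary curves, whose solution under v-periodic boundary conditions takes a generalized Fourier series form.

```latex
% Illustrative second-order PDE for a surface patch X(u,v), subject to
% extracted boundary curves X(0,v) = C_1(v) and X(1,v) = C_2(v):
\frac{\partial^2 \mathbf{X}}{\partial u^2}
  + a^2 \frac{\partial^2 \mathbf{X}}{\partial v^2} = \mathbf{0}

% Separation of variables with v-periodic boundary curves gives a
% generalized Fourier series solution, whose vector-valued coefficient
% functions A_n(u), B_n(u) are fixed by the boundary conditions:
\mathbf{X}(u,v) = \mathbf{A}_0(u)
  + \sum_{n=1}^{N} \left[ \mathbf{A}_n(u)\cos(nv) + \mathbf{B}_n(u)\sin(nv) \right]
```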

    Personality and Mood for Non-Player Characters: A Method for Behavior Simulation in a Maze Environment

    When it comes to video games, immersion is key. All types of games aim to keep the player immersed in some form or another. A common aspect of the immersive world in most role-playing games, though not exclusive to the genre, is the non-playable character (NPC). At their best, NPCs play an integral role in the player's sense of immersion by behaving in a way that feels believable and fits within the world of the game. At their worst, however, due to a lack of innovation in this area, NPCs can jar the player out of the immersive state of flow with unnatural behavior. In an effort toward making NPCs in games smarter, more believable, and more immersive, we developed a method grounded in psychological theory for controlling NPC behavior. Like the behavior models in most modern games, our model traverses a behavior tree. We introduce a novel method that uses the five-factor model of personality (also known as the big-five personality traits) and the circumplex model of affect (a model of emotion) to inform the traversal of an NPC's behavior tree. This behavior model has two main benefits. The first is emergent gameplay: unplanned, unpredictable experiences that feel closer to natural behavior and thus increase immersion; information about an NPC's personality can also feed the narrative of a game for complex storytelling. Second, the model provides the emotional status of an NPC in real time, allowing developers to programmatically drive facial and body expression and eschew the time-consuming approach of artist-choreographed animation. Finally, a maze simulation environment was constructed to test our behavior model and procedural animation. Data collected from 100 iterations in the maze environment showed a correlation between traits and actions, demonstrating that emergent gameplay can be achieved by varying personality traits. Additionally, by incorporating a novel method for procedural animation based on real-time emotion data, a more realistic representation of human behavior is achieved.
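    A minimal sketch of the idea, assuming hypothetical action names and affinity weightings (the paper's actual tree and scoring are not reproduced here): trait and mood values bias which branch a selector node of the behavior tree chooses, so different personalities produce different action distributions.

```python
import random

# Big-five traits and a circumplex mood (valence, arousal), each in [0, 1].
npc = {
    "openness": 0.7, "conscientiousness": 0.4, "extraversion": 0.8,
    "agreeableness": 0.5, "neuroticism": 0.3,
    "valence": 0.6, "arousal": 0.7,
}

# Candidate branches of a selector node, scored by trait/mood affinity.
actions = {
    "explore_new_corridor": lambda s: s["openness"] * s["arousal"],
    "greet_player":         lambda s: s["extraversion"] * s["valence"],
    "retreat_to_safety":    lambda s: s["neuroticism"] * (1 - s["valence"]),
}

def select_action(state):
    """Weighted random choice: higher affinity => more likely branch."""
    weights = {name: max(f(state), 1e-6) for name, f in actions.items()}
    r = random.uniform(0, sum(weights.values()))
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name

print(select_action(npc))
```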

    Description-based visualisation of ethnic facial types

    This study reports on the design and evaluation of a tool to assist in the description and visualisation of the human face and of the variations in facial shape and proportion characteristic of different ethnicities. It presents a comprehensive set of local shape features (sulci, folds, prominences, slopes, fossae, etc.) that constitute a visually-discernible ‘vocabulary’ for facial description. Each feature has one or more continuous-valued attributes, some of which are dimensional and correspond directly to conventional anthropometric distance measurements between facial landmarks, while other attributes capture the shape or topography of the feature. These attributes, distributed over six facial regions (eyes, nose, etc.), control a morphable model of facial shape that can approximate individual faces as well as the averaged faces of various ethnotypes. Clues to ethnic origin are often conveyed more effectively by shape attributes than by differences in anthropometric measurements, owing to large individual differences in facial dimensions within each ethnicity. Individual faces of representative ethnicities (European, East Asian, etc.) can then be modelled to establish the range of variation of the attributes (each represented by a corresponding three-dimensional ‘basis shape’). The attributes are designed to be quasi-orthogonal, in that the model can assume attribute values in arbitrary combination with minimal undesired interaction; they can thus serve as a set of dimensions or degrees of freedom. The resulting space of variation in facial shape defines an ethnicity face space (EFS), suitable for the human appreciation of facial variation across ethnicities, in contrast to a conventional identity face space (IFS) intended for the automated detection of individual faces within a sample drawn from a single, homogeneous population. The dimensions comprising an IFS are based on holistic measurements and are usually not interpretable in terms of local facial dimensions or shape (i.e., they are not ‘semantic’). In contrast, for an EFS to facilitate our understanding of ethnic variation across faces (as opposed to ethnicity recognition), the underlying dimensions should correspond to visibly-discernible attributes. A shift from quantitative landmark-based anthropometric comparisons to local shape comparisons is demonstrated. Ethnic variation can be visually appreciated by observing changes in the model through animation, tracked at different levels of complexity: across the whole face, by selected facial region, by isolated feature, or by isolated attribute of a given feature. The study demonstrates that an intuitive feature set, derived by artistically-informed visual observation, can provide a workable descriptive basis. While neither mathematically complete nor strictly orthogonal, the feature space permits close surface fits between the morphable model and face scan data. This study is intended for the human visual appreciation of facial shape, the characteristics of differing ethnicities, and the quantification of those differences. It presumes a basic understanding of standard practice in digital facial animation.
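    The attribute-controlled morphable model described above amounts to adding attribute-weighted ‘basis shapes’ to an average face. Below is a minimal illustrative sketch with hypothetical attribute names and toy shapes; because the attributes are designed to be quasi-orthogonal, the weights can be combined freely with minimal interaction.

```python
import numpy as np

def morph_face(mean_shape, basis_shapes, attributes):
    """Morphable model: mean face plus attribute-weighted basis shapes.

    mean_shape:   (v, 3) average face vertices.
    basis_shapes: dict mapping attribute name -> (v, 3) displacement basis.
    attributes:   dict mapping attribute name -> scalar weight.
    """
    face = mean_shape.copy()
    for name, weight in attributes.items():
        face += weight * basis_shapes[name]
    return face

# Toy example with two hypothetical attributes on a 4-vertex "face".
mean = np.zeros((4, 3))
bases = {"nasal_bridge_height": np.random.randn(4, 3),
         "epicanthal_fold":     np.random.randn(4, 3)}
print(morph_face(mean, bases, {"nasal_bridge_height": 0.5,
                               "epicanthal_fold": -0.2}))
```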

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.
    Comment: 10 pages, 19 figures