10 research outputs found

    2.5D cartoon models

    We present a way to bring cartoon objects and characters into the third dimension by giving them the ability to rotate and be viewed from any angle. We show how 2D vector art drawings of a cartoon from different views can be used to generate a novel structure, the 2.5D cartoon model, which can be used to simulate 3D rotations and generate plausible renderings of the cartoon from any view. 2.5D cartoon models are easier to create than full 3D models, and they retain the 2D nature of hand-drawn vector art, supporting a wide range of stylizations that need not correspond to any real 3D shape.
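
    As a rough illustrative sketch of the idea only (not the authors' actual method, which fits a 3D anchor point per shape and projects it), one can picture each part of the drawing storing its 2D anchor in a few drawn reference views and blending between the nearest views for a novel angle; all names below are made up for illustration:

        # Toy 2.5D part: 2D anchors recorded at a few reference yaw angles.
        class Part:
            def __init__(self, name, anchors):
                self.name = name
                self.anchors = anchors  # {yaw_degrees: (x, y)} from the drawn views

            def anchor_at(self, yaw):
                """Blend the anchor between the two reference views nearest to yaw."""
                yaws = sorted(self.anchors)
                lo = max((a for a in yaws if a <= yaw), default=yaws[0])
                hi = min((a for a in yaws if a >= yaw), default=yaws[-1])
                if lo == hi:
                    return self.anchors[lo]
                t = (yaw - lo) / (hi - lo)
                (x0, y0), (x1, y1) = self.anchors[lo], self.anchors[hi]
                return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

        # Front (0 deg) and side (90 deg) drawings; the 45 deg view is synthesized.
        nose = Part("nose", {0: (0.0, 0.5), 90: (0.4, 0.5)})
        print(nose.anchor_at(45.0))  # (0.2, 0.5)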

    3D performance capture for facial animation

    This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two different ways. First, a high-level description of the movements is extracted, which can be used as input to a procedural animation package (e.g. CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models which have the structure of the drawn model but the movement of the captured sequence.
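
    As a hedged sketch of the second use (conformation), the fragment below moves each vertex of the drawn model by the displacement of its nearest captured landmark; a real system would likely use a smoother scattered-data interpolation, and all names here are assumptions rather than the paper's API:

        import numpy as np

        def conform(model_verts, landmarks_ref, landmarks_frame):
            """Deform model_verts so they follow the captured landmark motion.

            model_verts:     (V, 3) vertices of the drawn model to animate
            landmarks_ref:   (L, 3) landmark positions in the reference pose
            landmarks_frame: (L, 3) the same landmarks in one captured frame
            """
            disp = landmarks_frame - landmarks_ref              # per-landmark motion
            d2 = ((model_verts[:, None, :] - landmarks_ref[None, :, :]) ** 2).sum(-1)
            nearest = d2.argmin(axis=1)                         # nearest landmark per vertex
            # Each vertex copies its nearest landmark's displacement, so the new
            # sequence keeps the drawn model's structure but the captured movement.
            return model_verts + disp[nearest]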

    Generation of 3D characters from existing cartoons and a unified pipeline for animation and video games.

    Despite the remarkable growth of 3D animation over the last twenty years, 2D is still popular today and is often employed for both films and video games; in fact, 2D offers important economic and artistic advantages in production. This thesis introduces an innovative system for generating 3D characters from 2D cartoons while maintaining important 2D features in 3D as well. Handling 2D characters and animation in a 3D environment is not a trivial task, however, as they do not possess any depth information. Three different solutions are proposed: a 2.5D modelling method, which exploits billboarding, parallax scrolling and 2D shape interpolation to simulate the depth between the different body parts of the characters, and two full 3D solutions, one based on inflation and supported by a surface registration method, and one that produces more accurate approximations by using information from the side views to solve an optimization problem. These methods are integrated into a new unified pipeline, built around a game engine, that can be used for both animation and video game production. A unified pipeline brings several benefits to the production of both 2D and 3D content: on one hand, assets can be shared across productions and media; on the other, real-time rendering for animated films allows immediate previews of scenes and lets artists experiment more while a scene is being made.
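
    To give a flavour of the 2.5D parallax idea (an illustration under assumed names, not the thesis's implementation), each body-part billboard can be offset in proportion to an assigned depth, so nearer parts shift more as the camera pans:

        def parallax_offsets(camera_x, layers):
            """Fake inter-part depth by shifting each 2D layer with the camera.

            layers: {part_name: depth}, larger depth = farther from the camera.
            Nearer layers translate more, which reads as depth without geometry.
            """
            return {name: camera_x / depth for name, depth in layers.items()}

        body = {"far_arm": 3.0, "torso": 2.0, "near_arm": 1.0}
        print(parallax_offsets(0.5, body))
        # {'far_arm': 0.166..., 'torso': 0.25, 'near_arm': 0.5}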

    Vector Graphics Animation with Time-Varying Topology

    We introduce the Vector Animation Complex (VAC), a novel data structure for vector graphics animation, designed to support the modeling of time-continuous topological events. This allows features of a connected drawing to merge, split, appear, or disappear at desired times via keyframes that introduce the desired topological change. Because the resulting space-time complex directly captures the time-varying topological structure, features are readily edited in both space and time in a way that reflects the intent of the drawing. A formal description of the data structure is provided, along with topological and geometric invariants. We illustrate our modeling paradigm with experimental results on various examples.
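
    The paper defines the VAC formally; as a loose toy analogy only (it ignores the incidence structure that makes the real complex useful, and every name is invented here), one can think of cells that exist over time intervals, with a topological event at the keyframe where one cell's lifespan ends and others begin:

        class Cell:
            """A space-time cell: one drawing feature alive over a time interval."""
            def __init__(self, name, t_start, t_end):
                self.name, self.t_start, self.t_end = name, t_start, t_end

        class Complex:
            def __init__(self, cells):
                self.cells = cells

            def slice_at(self, t):
                """The drawing at time t: every cell whose lifespan contains t."""
                return [c.name for c in self.cells if c.t_start <= t <= c.t_end]

        # One stroke splits into two at t = 10: a topological event at a keyframe.
        vac = Complex([Cell("stroke", 0, 10),
                       Cell("stroke_a", 10, 20),
                       Cell("stroke_b", 10, 20)])
        print(vac.slice_at(5))   # ['stroke']
        print(vac.slice_at(15))  # ['stroke_a', 'stroke_b']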

    Animation of a hierarchical image based facial model and perceptual analysis of visual speech

    In this thesis a hierarchical image-based 2D talking head model is presented, together with robust automatic and semi-automatic animation techniques, and a novel perceptual method for evaluating visual speech based on the McGurk effect. The novelty of the hierarchical facial model stems from the fact that sub-facial areas are modelled individually. To produce a facial animation, animations for a set of chosen facial areas are first produced, either by key-framing sub-facial parameter values or by using a continuous input speech signal, and then combined into a full facial output. Modelling hierarchically has several attractive qualities. It isolates variation in sub-facial regions from the rest of the face, and therefore provides a high degree of control over different facial parts, along with meaningful image-based animation parameters. Animations may be synthesised automatically using speech not originally included in the training set. The model is also able to automatically animate pauses, hesitations and non-verbal (or non-speech-related) sounds and actions. To automatically produce visual speech, two novel analysis and synthesis methods are proposed. The first method utilises a Speech-Appearance Model (SAM), and the second uses a Hidden Markov Coarticulation Model (HMCM), based on a Hidden Markov Model (HMM). To evaluate synthesised animations (whether rendered semi-automatically or from speech), a new perceptual analysis approach based on the McGurk effect is proposed. This measure provides an unbiased and quantitative method for evaluating talking-head visual speech quality and overall perceptual realism. A combination of this new approach and other objective and perceptual evaluation techniques is employed for a thorough evaluation of hierarchical model animations.
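
    As an illustrative sketch of the composition step only (the thesis's model is parametric and image-based; every name below is an assumption), independently animated sub-facial regions can be blended into one output frame with per-region masks:

        import numpy as np

        def composite_face(base, regions):
            """Blend independently animated facial regions into one frame.

            base:    (H, W, 3) neutral face image
            regions: list of (patch, mask) pairs, where patch is an (H, W, 3)
                     render of one animated sub-facial area (mouth, eyes, ...)
                     and mask is an (H, W) weight in [0, 1] selecting that area.
            """
            out = base.astype(float)
            for patch, mask in regions:
                m = mask[..., None]                     # broadcast over channels
                out = out * (1.0 - m) + patch.astype(float) * m
            return out.astype(base.dtype)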

    GPU-based volume deformation.

    Automatic in-betweening in computer assisted animation by exploiting 2.5D modelling techniques

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems