
    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
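
    As a rough illustration of the runtime LoD selection the survey reviews, the sketch below picks a representation from a character's approximate screen coverage; the level names, thresholds and camera model are invented for the example, not taken from the survey.

```python
# A minimal sketch (not from the survey) of runtime LoD selection for crowd
# members, using projected screen-space size as the selection criterion.
# Thresholds, LoD names, and the camera model are illustrative assumptions.
import numpy as np

LOD_LEVELS = ["full_mesh", "reduced_mesh", "impostor"]  # hypothetical representations
THRESHOLDS = [0.05, 0.01]                               # fraction of screen height; assumed

def select_lod(character_pos, character_height, cam_pos, fov_y_rad, screen_h):
    """Pick an LoD index from the character's approximate on-screen height."""
    dist = np.linalg.norm(np.asarray(character_pos) - np.asarray(cam_pos))
    if dist < 1e-6:
        return 0
    # Projected height in pixels for a simple pinhole camera.
    pixels = character_height / (2.0 * dist * np.tan(fov_y_rad / 2.0)) * screen_h
    coverage = pixels / screen_h
    for i, t in enumerate(THRESHOLDS):
        if coverage >= t:
            return i
    return len(LOD_LEVELS) - 1

# Example: an agent 40 m away, 1.8 m tall, 60 degree vertical FOV, 1080p screen.
lod = select_lod([0, 0, 40], 1.8, [0, 0, 0], np.radians(60), 1080)
print(LOD_LEVELS[lod])   # -> "reduced_mesh"
```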

    Real-Time Character Animation for Computer Games

    The importance of real-time character animation in computer games has increased considerably over the past decade. Due to advances in computer hardware and great increases in computational speed, the demand for more realism in computer games is continuously growing. This paper presents and discusses various methods of 3D character animation and the prospects of their real-time application, ranging from the animation of simple articulated objects to real-time deformable object meshes.
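
    As a concrete example of the kind of technique the paper covers, the sketch below implements plain linear blend skinning with NumPy; the bone transforms and weights are made-up toy data, and the paper itself is not tied to this particular formulation.

```python
# A minimal, self-contained sketch of linear blend skinning (LBS), a standard
# real-time technique for animating deformable meshes with an articulated
# skeleton. All data below is invented for the example.
import numpy as np

def skin_vertices(rest_verts, bone_mats, weights):
    """Deform rest-pose vertices by a weighted blend of bone matrices.

    rest_verts: (V, 3) rest-pose positions
    bone_mats:  (B, 4, 4) bone transforms (already premultiplied by the
                inverse bind matrices)
    weights:    (V, B) per-vertex bone weights, rows summing to 1
    """
    homo = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])   # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_mats, homo)            # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)            # (V, 4)
    return blended[:, :3]

# Tiny example: two vertices, two bones (identity and a +1 translation in x).
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
bones = np.stack([np.eye(4), np.eye(4)])
bones[1, 0, 3] = 1.0
w = np.array([[1.0, 0.0], [0.5, 0.5]])
print(skin_vertices(rest, bones, w))   # second vertex moves half-way: [1.5, 0, 0]
```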

    Sketching-out virtual humans: A smart interface for human modelling and animation

    In this paper, we present a fast and intuitive interface for sketching out 3D virtual humans and animation. The user draws stick figure key frames first and chooses one for “fleshing out” with freehand body contours. The system automatically constructs a plausible 3D skin surface from the rendered figure, and maps it onto the posed stick figures to produce the 3D character animation. A “creative model-based method” is developed, which performs a human perception process to generate 3D human bodies of various body sizes, shapes and fat distributions. In this approach, an anatomical 3D generic model has been created with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface fitting to match the original 2D sketch. An auto-beautification function is also offered to regularise the asymmetric 3D bodies that result from users’ imperfect figure sketches. Our current system delivers character animation in various forms, including articulated figure animation, 3D mesh model animation, 2D contour figure animation, and even 2D NPR animation with personalised drawing styles. The system has been formally tested by various users on a Tablet PC. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
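
    The sketch below is a deliberately toy reading of the three-layer (skeleton, fat tissue, skin) idea: skin points are offset from skeleton-driven base points by a fat thickness that a single "fatness" scalar scales. The data and the one-parameter fatness morph are assumptions for illustration only, not the paper's actual model.

```python
# Toy layered model: skin = skeleton-driven base points + fat-layer offset
# along outward normals. The generic-model data and the single fatness scalar
# are invented for this illustration.
import numpy as np

def layered_skin(base_points, normals, base_thickness, fatness):
    """Fatness morphing: scale the fat-layer thickness, then offset the skin."""
    thickness = base_thickness * fatness            # (N,) per-point fat layer
    return base_points + normals * thickness[:, None]

# Cross-section of a limb: base points on a circle of radius 4 cm,
# 1 cm of fat in the generic model, morphed to a 1.5x fatter body.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
base = np.stack([4 * np.cos(angles), 4 * np.sin(angles), np.zeros(8)], axis=1)
normals = np.stack([np.cos(angles), np.sin(angles), np.zeros(8)], axis=1)
skin = layered_skin(base, normals, np.full(8, 1.0), 1.5)
print(np.linalg.norm(skin[:, :2], axis=1))   # skin radius grows from 5.0 to 5.5
```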

    Embedded Implicit Stand-ins for Animated Meshes: a Case of Hybrid Modelling

    In this paper we address shape modelling problems encountered in computer animation and computer games development that are difficult to solve using polygonal meshes alone. Our approach is based on a hybrid modelling concept that combines polygonal meshes with implicit surfaces. A hybrid model consists of an animated polygonal mesh and an approximation of this mesh by a convolution surface stand-in that is embedded within it or attached to it. The motions of both objects are synchronised using a rigging skeleton. This approach is used to model the interaction between an animated mesh object and a viscoelastic substance, normally modelled in implicit form. The adhesive behaviour of the viscous object is modelled using geometric blending operations on the corresponding implicit surfaces. Another application of this approach is the creation of metamorphosing implicit surface parts that are attached to an animated mesh. A prototype implementation of the proposed approach and several examples of modelling and animation with near real-time preview times are presented.
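
    To make the implicit-surface side concrete, the sketch below blends two simple implicit fields with a generic displacement blend (union plus a smoothing bump); the field and blend definitions are textbook choices rather than the paper's exact convolution-surface formulation.

```python
# A minimal sketch of blending two implicit objects. Positive field values
# lie inside an object, zero on its surface, negative outside. The blend
# operator below is a generic choice, not the paper's exact formulation.
import numpy as np

def sphere_field(p, centre, radius):
    """Signed field of a sphere: positive inside, zero on the surface."""
    return radius - np.linalg.norm(p - np.asarray(centre), axis=-1)

def blend_union(f1, f2, a0=1.0, a1=1.0, a2=1.0):
    """Displacement blend: set union plus a bump that smooths the joint."""
    union = np.maximum(f1, f2)
    bump = a0 / (1.0 + (f1 / a1) ** 2 + (f2 / a2) ** 2)
    return union + bump

# Sample the blended field along the x axis between two unit spheres.
xs = np.linspace(-2.5, 2.5, 11)
pts = np.stack([xs, np.zeros_like(xs), np.zeros_like(xs)], axis=1)
f = blend_union(sphere_field(pts, [-1.2, 0, 0], 1.0),
                sphere_field(pts, [1.2, 0, 0], 1.0))
print(np.round(f, 2))   # positive values between the spheres show the smooth bridge
```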

    Capture, Learning, and Synthesis of 3D Speaking Styles

    Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.
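
    The sketch below illustrates the conditioning idea only: a window of audio features is concatenated with a one-hot subject label and decoded to per-vertex offsets on a template face. All dimensions, the random weights and the tiny decoder are assumptions; this is not the released VOCA model.

```python
# Illustrative conditioning sketch (not the released VOCA): audio features plus
# a one-hot speaking-style label are decoded to per-vertex displacements that
# are added to a template mesh. Sizes and weights are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_SUBJECTS, AUDIO_DIM, HIDDEN, N_VERTS = 12, 29, 64, 5023   # assumed sizes

W1 = rng.standard_normal((AUDIO_DIM + N_SUBJECTS, HIDDEN)) * 0.01
W2 = rng.standard_normal((HIDDEN, N_VERTS * 3)) * 0.01

def animate_frame(audio_feat, subject_id, template_verts):
    """One animation frame: audio window + style label -> deformed vertices."""
    one_hot = np.eye(N_SUBJECTS)[subject_id]
    x = np.concatenate([audio_feat, one_hot])
    h = np.tanh(x @ W1)
    offsets = (h @ W2).reshape(N_VERTS, 3)
    return template_verts + offsets

template = np.zeros((N_VERTS, 3))                 # stand-in for a neutral face mesh
frame = animate_frame(rng.standard_normal(AUDIO_DIM), subject_id=3,
                      template_verts=template)
print(frame.shape)   # (5023, 3): one deformed face mesh per audio window
```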

    Automatic Cage Construction for Retargeted Muscle Fitting

    The animation of realistic characters necessitates the construction of complicated anatomical structures such as muscles, which allow subtle shape variation of the character's outer surface to be displayed believably. Unfortunately, despite numerous efforts, the modelling of muscle structures is still left to an animator, who has to painstakingly build them up piece by piece, making it a very tedious process. What is even more frustrating is that the animator has to build the same muscle structure for every new character. We propose a muscle retargeting technique that helps an animator automatically construct a muscle structure by reusing an already built and tested model (the template model). Our method defines a spatial transfer between the template model and a new model based on the skin surface and the rigging structure. To ensure that the retargeted muscles are tightly packed inside the new character, we define a novel spatial optimization based on spherical parameterization. Our method requires no manual input, meaning that an animator does not need anatomical knowledge to create realistic, accurate musculature models.
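
    A heavily simplified sketch of the spatial-transfer idea is given below: each template muscle point is encoded relative to a bone segment and re-expressed on the corresponding bone of the new character. The real method also uses the skin surface and a spherical-parameterization optimization, which the sketch omits; the bone and point coordinates are invented.

```python
# Simplified rigging-based transfer: encode muscle points as (fraction along
# a bone, perpendicular offset), both normalized by bone length, then decode
# them on the target character's corresponding bone. Toy data only.
import numpy as np

def encode(points, bone_start, bone_end):
    axis = bone_end - bone_start
    length = np.linalg.norm(axis)
    axis = axis / length
    rel = points - bone_start
    t = rel @ axis                      # distance along the bone
    perp = rel - np.outer(t, axis)      # offset perpendicular to the bone
    return t / length, perp / length    # normalize by bone length

def decode(t, perp, bone_start, bone_end):
    axis = bone_end - bone_start
    length = np.linalg.norm(axis)
    return bone_start + np.outer(t * length, axis / length) + perp * length

# Template upper-arm bone and two muscle points, transferred to a longer arm.
src_pts = np.array([[0.3, 0.05, 0.0], [0.6, 0.04, 0.0]])
t, perp = encode(src_pts, np.array([0.0, 0, 0]), np.array([1.0, 0, 0]))
tgt_pts = decode(t, perp, np.array([0.0, 0, 0]), np.array([1.3, 0, 0]))
print(tgt_pts)   # points stretch with the longer bone: x -> 0.39 and 0.78
```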

    A framework for natural animation of digitized models

    We present a novel versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between human characters of different shape and proportions. It can handle different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
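
    The sketch below shows the per-triangle building block behind this kind of mesh-based transfer: a source triangle's rest-to-deformed change is expressed as a 3x3 deformation gradient and applied to a corresponding target triangle. A full system additionally enforces consistency between neighbouring triangles, which is omitted here; the triangles are toy data.

```python
# Per-triangle deformation transfer sketch: build a local 3x3 frame from two
# edge vectors and the unit normal, compute the source deformation gradient,
# and apply it to the corresponding target triangle. Toy data only.
import numpy as np

def triangle_frame(v0, v1, v2):
    """Two edge vectors plus a unit normal span a local 3x3 frame."""
    e1, e2 = v1 - v0, v2 - v0
    n = np.cross(e1, e2)
    n = n / np.linalg.norm(n)
    return np.column_stack([e1, e2, n])

def transfer_triangle(src_rest, src_def, tgt_rest):
    """Apply the source triangle's deformation gradient to the target triangle."""
    S = triangle_frame(*src_rest)
    D = triangle_frame(*src_def)
    Q = D @ np.linalg.inv(S)                    # deformation gradient
    deformed_frame = Q @ triangle_frame(*tgt_rest)
    v0 = tgt_rest[0]
    return np.stack([v0,
                     v0 + deformed_frame[:, 0],
                     v0 + deformed_frame[:, 1]])

# Source triangle stretched 2x in x; the target triangle inherits the stretch.
src_rest = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
src_def  = np.array([[0., 0, 0], [2, 0, 0], [0, 1, 0]])
tgt_rest = np.array([[0., 0, 0], [0.5, 0, 0], [0, 0.5, 0]])
print(transfer_triangle(src_rest, src_def, tgt_rest))
```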