    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications that use 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and with other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
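
    As a concrete illustration of combining motions on different body parts, the sketch below blends two poses with per-part weights. The joint names, the body-part grouping, and the blend() helper are illustrative assumptions, not the paper's actual mechanism; a real system would interpolate joint rotations with quaternion slerp rather than the linear mix used here.

```python
import numpy as np

# Illustrative body-part grouping; joint names are placeholders.
BODY_PARTS = {
    "upper": ["spine", "l_shoulder", "r_shoulder", "head"],
    "lower": ["l_hip", "r_hip", "l_knee", "r_knee"],
}

def blend(pose_a, pose_b, weights_per_part):
    """Combine two poses, using a different blend weight per body part.

    pose_a / pose_b map joint names to rotation parameters (mixed linearly
    here for brevity; a real system would slerp quaternions instead).
    """
    out = {}
    for part, joints in BODY_PARTS.items():
        w = weights_per_part[part]  # 0.0 -> pose_a only, 1.0 -> pose_b only
        for joint in joints:
            out[joint] = (1.0 - w) * np.asarray(pose_a[joint]) + w * np.asarray(pose_b[joint])
    return out

# e.g. keep a keyframed upper body while the lower body follows a procedural walk:
# combined = blend(keyframe_pose, walk_pose, {"upper": 0.0, "lower": 1.0})
```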

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation framework that creates new types of animations from raw 3D surface or point-cloud sequences captured from real performances. The framework takes as input time-incoherent 3D observations of a moving shape and is thus particularly suitable for the output of performance-capture platforms. From the real captures, our system builds a virtual representation of the actor that allows seamless combination and simulation with virtual external forces and objects, so that the original captured actor can be reshaped, disassembled or reassembled under user-specified virtual physics. Instead of the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits centroidal Voronoi tessellation decompositions as a unified volumetric representation of the captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for the virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
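
    A minimal sketch of the centroidal Voronoi tessellation idea behind such a volumetric representation, assuming the captured actor has already been densely point-sampled into an (N, 3) float array; it runs plain Lloyd iterations with discrete nearest-site cells, whereas the paper's actual pipeline is considerably more involved.

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(sample_points, n_sites=256, n_iters=50, seed=0):
    """Approximate a CVT of a point-sampled volume via Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    # Start from a random subset of the volume samples as initial sites.
    sites = sample_points[rng.choice(len(sample_points), n_sites, replace=False)]
    for _ in range(n_iters):
        # Assign every volume sample to its nearest site (discrete Voronoi cells).
        nearest = cKDTree(sites).query(sample_points)[1]
        # Move each site to the centroid of its cell.
        for i in range(n_sites):
            cell = sample_points[nearest == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)
    return sites
```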

    Automatic Cage Construction for Retargeted Muscle Fitting

    The animation of realistic characters requires the construction of complicated anatomical structures such as muscles, which allow subtle shape variations of the character's outer surface to be displayed believably. Unfortunately, despite numerous efforts, the modelling of muscle structures is still left to an animator, who has to build them up painstakingly piece by piece, making it a very tedious process. Even more frustrating, the animator has to build the same muscle structure for every new character. We propose a muscle retargeting technique that helps an animator automatically construct a muscle structure by reusing an already built and tested model (the template model). Our method defines a spatial transfer between the template model and a new model based on the skin surface and the rigging structure. To ensure that the retargeted muscles are tightly packed inside the new character, we define a novel spatial optimization based on spherical parameterization. Our method requires no manual input, meaning that an animator does not need anatomical knowledge to create realistic, accurate musculature models.
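
    The sketch below illustrates one simple form of skin-relative spatial transfer, assuming the template and target skin meshes share vertex correspondence: each template muscle point is encoded by its nearest skin vertex and its signed depth along that vertex's normal, then re-created on the target. The function and its inputs are hypothetical; the paper's actual transfer also uses the rigging structure and a spherical-parameterization-based optimization, neither of which appears here.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_muscle(muscle_pts, src_skin_v, src_skin_n, dst_skin_v, dst_skin_n):
    """Hypothetical transfer of muscle points between two skin meshes that
    share vertex correspondence (vertices and unit normals are (n, 3) arrays)."""
    # Nearest template skin vertex for every muscle point.
    idx = cKDTree(src_skin_v).query(muscle_pts)[1]
    # Signed depth of each muscle point along that vertex's outward normal
    # (negative for points lying under the skin surface).
    depth = np.einsum("ij,ij->i", muscle_pts - src_skin_v[idx], src_skin_n[idx])
    # Re-create each point under the corresponding target skin vertex,
    # preserving its depth; a crude stand-in for the paper's optimization.
    return dst_skin_v[idx] + depth[:, None] * dst_skin_n[idx]
```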

    A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint

    3D shape editing is widely used in a range of applications such as movie production, computer games and computer-aided design, and is a popular research topic in computer graphics and computer vision. In past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by the optimal transformation and weights for an energy term. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve the editing results. More recently, as deep neural networks became popular, many deep-learning-based editing methods, which are naturally data-driven, have been developed. We survey recent research, from the geometric viewpoint to emerging neural deformation techniques, and categorize it into organic shape editing methods and man-made model editing methods. Both traditional methods and recent neural-network-based methods are reviewed.
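
    To make the "optimal transformation and weights for an energy term" formulation concrete, the sketch below performs a classic uniform-Laplacian editing step: it solves, in the least-squares sense, for positions that preserve the rest-pose differential coordinates while softly satisfying user handle constraints. It is a simplified stand-in (uniform weights, no rotation handling) rather than any specific method from the survey, and all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def laplacian_edit(V, edges, handles, weight=10.0):
    """V: (n, 3) rest positions, edges: (m, 2) vertex index pairs,
    handles: dict mapping vertex index -> desired new position."""
    n = len(V)
    handle_ids = np.array(list(handles))
    handle_pos = np.array([handles[i] for i in handle_ids])

    # Uniform graph Laplacian L = D - A of the edge graph.
    A = sp.coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(n, n))
    A = A + A.T
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

    # Differential coordinates of the rest shape, to be preserved in the edit.
    delta = L @ V

    # Soft positional constraints stacked under the Laplacian system.
    C = sp.csr_matrix((np.full(len(handle_ids), weight),
                       (np.arange(len(handle_ids)), handle_ids)),
                      shape=(len(handle_ids), n))
    M = sp.vstack([L, C])

    V_new = np.empty_like(V, dtype=float)
    for c in range(3):  # one least-squares solve per coordinate
        b = np.concatenate([delta[:, c], weight * handle_pos[:, c]])
        V_new[:, c] = lsqr(M, b)[0]
    return V_new
```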

    Sketch-based character prototyping by deformation

    Master's thesis (Master of Science)

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate expressions directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden CG and increase the realism of facial expressions. A new method to automatically place the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean value of the vertices in a group is measured. The time saved with this method is significant. Finally, a novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
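
    A minimal sketch of the displacement-vector idea for expression transfer described above, assuming the source and target models have already been put into one-to-one vertex correspondence; the constraint on the spatial relationships of the mapped vertices is omitted, and all names are illustrative.

```python
import numpy as np

def transfer_expression(src_neutral, src_expression, tgt_neutral, scale=1.0):
    """All inputs are (n, 3) vertex arrays with one-to-one correspondence."""
    # Per-vertex displacements that encode the source expression.
    displacement = src_expression - src_neutral
    # Apply the (optionally scaled) displacements to the target's neutral face.
    return tgt_neutral + scale * displacement
```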

    3D mesh metamorphosis from spherical parameterization for conceptual design

    Engineering product design is an information-intensive decision-making process that consists of several phases, including design specification definition, design concept generation, detailed design and analysis, and manufacturing. Generating geometric models for visualization is usually a major challenge in early-stage conceptual design. The complexity of existing computer-aided design packages limits the participation of people from various backgrounds in the design process, and many design processes do not take advantage of the rich legacy information available for creating new concepts. The research presented here explores the use of advanced graphical techniques to quickly and efficiently merge legacy information with new design concepts and rapidly create new conceptual product designs. A 3D mesh metamorphosis framework, 3DMeshMorpher, was created to construct new models by navigating a shape space of registered design models. The framework is composed of: i) a fast spherical parameterization method that maps a genus-0 geometric model onto a unit sphere; ii) a geometric feature identification and picking technique based on 3D skeleton extraction; and iii) an LOD-controllable 3D remeshing scheme with spherical mesh subdivision based on the developed spherical parameterization. This efficient software framework enables designers to create numerous geometric concepts in real time with a simple graphical user interface. The spherical parameterization method focuses on closed genus-zero meshes and is based on barycentric coordinates with a convex boundary. Unlike most existing approaches, which treat every vertex in the mesh equally, the method developed in this research focuses primarily on resolving overlapping areas, which helps speed up the parameterization process. The algorithm starts by normalizing the source mesh onto a unit sphere, followed by initial relaxation via Gauss-Seidel iterations. Because it concentrates on solving only the challenging overlapping regions, this parameterization process is much faster than existing spherical mapping methods. To ensure the correspondence of features between different models, we introduce a skeleton-based feature identification and picking method for feature alignment. Unlike traditional methods that align a single point per feature, this method provides alignment for complete feature areas, which helps users create more plausible intermediate morphing results with preserved topological features. This skeleton-based feature framework could potentially be extended to automatic feature alignment for geometries with similar topologies, and the extracted skeleton could also be used in other applications such as skeleton-based animation. The 3D remeshing algorithm with spherical mesh subdivision is developed to generate a common connectivity for different mesh models. The local recursive subdivision can be set to match the desired level of detail (LOD) of the source spherical mesh; this LOD is controllable, allowing outputs at different resolutions. The recursive subdivision is followed by a triangular correction process that ensures valid triangulations, and the final mesh merging and reconstruction process produces the remeshed model at the LOD specified by the user. The final merged model usually contains all the geometric details from each model with a reasonable number of vertices, unlike other existing methods that produce a large number of vertices in the merged model. Such multi-resolution outputs with controllable LOD could also be applied in various other computer graphics applications such as computer games.
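
    A minimal sketch of the relaxation step of such a spherical parameterization: vertices are first normalized onto the unit sphere, then repeatedly pulled toward the centroid of their neighbours in an in-place (Gauss-Seidel-style) sweep and re-projected onto the sphere. The overlap detection and correction that the dissertation emphasizes is not included, and the function signature is an assumption.

```python
import numpy as np

def spherical_relaxation(V, neighbors, n_sweeps=100, step=0.5):
    """V: (n, 3) mesh vertices, neighbors: per-vertex lists of neighbour indices."""
    # Initial normalization onto the unit sphere, centred at the mesh centroid.
    S = V - V.mean(axis=0)
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    for _ in range(n_sweeps):
        for i, nbrs in enumerate(neighbors):  # in-place (Gauss-Seidel) update
            target = S[nbrs].mean(axis=0)     # centroid of the neighbour ring
            p = (1.0 - step) * S[i] + step * target
            S[i] = p / np.linalg.norm(p)      # re-project onto the sphere
    return S
```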