
    Physics-based Reconstruction and Animation of Humans

    Get PDF
    Creating digital representations of humans is of utmost importance for applications ranging from entertainment (video games, movies) to human-computer interaction and even psychiatric treatment. What makes building credible digital doubles difficult is that the human visual system is very sensitive to the complex expressivity of, and potential anomalies in, body structure and motion. This thesis presents several projects that tackle these problems from two different perspectives: lightweight acquisition and physics-based simulation. It starts by describing a complete pipeline that allows users to reconstruct fully rigged 3D facial avatars from video data captured with a handheld device (e.g., a smartphone). The avatars use a novel two-scale representation composed of blendshapes and dynamic detail maps, and are constructed through an optimization that integrates feature tracking, optical flow, and shape from shading. Continuing along the lines of accessible acquisition systems, we discuss a framework for simultaneous tracking and modeling of articulated human bodies from RGB-D data, and show how semantic information can be extracted from the scanned body shapes. In the second half of the thesis, we deviate from standard linear reconstruction and animation models and instead exploit physics-based techniques that can incorporate complex phenomena such as dynamics, collision response, and material incompressibility. The first approach we propose assumes that each 3D scan of an actor records the body in a physical steady state and uses a process called inverse physics to extract a volumetric, physics-ready anatomical model of the actor. By using biologically inspired growth models for the bones, muscles, and fat, our method obtains realistic anatomical reconstructions that can later be animated using external tracking data, such as motion capture markers. This is then extended to a novel physics-based approach for facial reconstruction and animation. We propose a facial animation model that simulates biomechanical muscle contractions in a volumetric head model in order to create the facial expressions seen in the input scans. We then show how this approach opens new avenues for dynamic artistic control, simulation of corrective facial surgery, and interaction with external forces and objects.
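
    The facial avatars above build on the standard linear blendshape model, with dynamic detail layered on top. A minimal sketch of that two-scale evaluation, assuming illustrative array shapes and function names (not the thesis code):

        import numpy as np

        # Coarse scale: v = b0 + sum_i w_i * (b_i - b0)
        def evaluate_blendshapes(neutral, deltas, weights):
            # neutral: (V, 3) neutral face b0
            # deltas:  (K, V, 3) blendshape offsets b_i - b0
            # weights: (K,) expression weights, typically in [0, 1]
            return neutral + np.tensordot(weights, deltas, axes=1)

        # Fine scale: dynamic detail as scalar displacements along vertex normals
        def add_detail(coarse, normals, detail_map):
            # detail_map: (V,) per-vertex displacement from the detail maps
            return coarse + detail_map[:, None] * normals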

    Neural Volumetric Blendshapes: Computationally Efficient Physics-Based Facial Blendshapes

    Full text link
    Computationally weak systems and demanding graphical applications still mostly depend on linear blendshapes for facial animation. The accompanying artifacts, such as self-intersections, loss of volume, or missing soft-tissue elasticity, can be avoided by using physics-based animation models. However, these are cumbersome to implement and require immense computational effort. We propose neural volumetric blendshapes, an approach that combines the advantages of physics-based simulation with real-time performance even on consumer-grade CPUs. To this end, we present a neural network that efficiently approximates the involved volumetric simulations and generalizes across human identities as well as facial expressions. Our approach can be used on top of any linear blendshape system and hence can be deployed straightforwardly. Furthermore, it requires only a single neutral face mesh as input in the minimal setting. Along with the design of the network, we introduce a pipeline for the challenging creation of anatomically and physically plausible training data. Part of the pipeline is a novel hybrid regressor that densely positions a skull within a skin surface while avoiding intersections. The fidelity of all parts of the data generation pipeline, as well as the accuracy and efficiency of the network, are evaluated in this work. Upon publication, the trained models and associated code will be released.
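
    As a rough illustration of the idea, a corrective network can sit on top of any linear blendshape rig: it takes the blendshape weights (plus an identity code derived from the neutral mesh) and predicts per-vertex displacements that emulate the volumetric simulation. The architecture below is an assumed sketch, not the paper's network:

        import torch
        import torch.nn as nn

        class CorrectiveNet(nn.Module):
            # Hypothetical stand-in: maps blendshape weights and an identity
            # code to per-vertex corrections added to the linear result.
            def __init__(self, n_weights, id_dim, n_vertices, hidden=256):
                super().__init__()
                self.n_vertices = n_vertices
                self.mlp = nn.Sequential(
                    nn.Linear(n_weights + id_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, n_vertices * 3),
                )

            def forward(self, weights, identity_code, linear_result):
                # weights: (B, K), identity_code: (B, D), linear_result: (B, V, 3)
                x = torch.cat([weights, identity_code], dim=-1)
                correction = self.mlp(x).view(-1, self.n_vertices, 3)
                return linear_result + correction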

    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    Get PDF
    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, we discuss here the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, and sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars, covering the current main alternatives for face and body animation as well as upcoming capture methods. The second part presents the scientific evidence of the utility of rigged avatars, both for embodiment and for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.

    Analysis of Design Principles and Requirements for Procedural Rigging of Bipeds and Quadrupeds Characters with Custom Manipulators for Animation

    Full text link
    Character rigging is the process of endowing a character with a set of custom manipulators and controls that make it easy for animators to animate. These controls consist of simple joints, handles, or even separate character-selection windows. This research paper presents an automated rigging system for quadruped characters with custom controls and manipulators for animation. The full character rigging mechanism is procedurally driven, based on various principles and requirements used by riggers and animators. The automation is achieved by initially creating widgets according to the character type. These widgets can then be customized by the rigger according to the character's shape, height, and proportions. Joint locations for each body part are then calculated and the widgets are replaced programmatically. Finally, a complete and fully operational procedurally generated character control rig is created and attached to the underlying skeletal joints. The functionality and feasibility of the rig were analyzed against various sources of actual character motion, and the requirements criteria were met. The final rigged character provides an efficient and easy-to-manipulate control rig with no lag at high frame rates. Comment: 21 pages, 24 figures, 4 algorithms, Journal Paper
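
    The widget-to-joint step described above can be pictured roughly as follows; the data layout and names are hypothetical simplifications of the paper's procedure:

        # Each rigger-adjusted widget contributes a joint location; joints are
        # then parented to form the control skeleton.
        def place_joints(widgets, hierarchy):
            # widgets:   {name: (x, y, z)} widget centers set by the rigger
            # hierarchy: [(child, parent)] pairs, parent is None for the root
            joints = {}
            for child, parent in hierarchy:
                joints[child] = {"pos": widgets[child], "parent": parent}
            return joints

        spine_chain = place_joints(
            {"root": (0, 1.0, 0), "spine": (0, 1.1, 0.3), "head": (0, 1.3, 0.7)},
            [("root", None), ("spine", "root"), ("head", "spine")],
        )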

    OPEN SOURCE RIGGING IN BLENDER: A MODULAR APPROACH

    Get PDF
    Character rigs control characters in the traditional CG pipeline. This thesis examines the rig creation process, discusses several problems inherent in the traditional workflow (excessive time spent and a lack of uniformity), and proposes a software plugin which solves these issues. This thesis describes the creation of a tool for Blender 3D which automates the rigging process yet keeps the creativity and control in the hands of the user. A character rig designed by this tool will be fully functional, yet capable of being split into its component parts and reconstructed as the user determines. These body parts are individually scripted with the intent of maximizing reusability, and the code and rigs are distributed to the open source community for vetting. The final tool has been downloaded many times by the Blender community and has met with very positive responses.
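
    A modular rig component of this kind might be scripted against Blender's Python API roughly as below; this is a minimal sketch of the general pattern (create an armature, build bones per module, parent them), not the thesis plugin itself:

        import bpy

        def build_bone(edit_bones, name, head, tail, parent=None):
            # One reusable "body part": a bone with an optional parent
            bone = edit_bones.new(name)
            bone.head, bone.tail = head, tail
            if parent is not None:
                bone.parent = parent
            return bone

        # Assemble modules into a single armature object
        arm_data = bpy.data.armatures.new("ModularRig")
        rig = bpy.data.objects.new("ModularRig", arm_data)
        bpy.context.collection.objects.link(rig)
        bpy.context.view_layer.objects.active = rig
        bpy.ops.object.mode_set(mode='EDIT')

        upper = build_bone(arm_data.edit_bones, "upper_arm.L",
                           (0, 0, 1.4), (0.3, 0, 1.4))
        lower = build_bone(arm_data.edit_bones, "forearm.L",
                           (0.3, 0, 1.4), (0.55, 0, 1.4), parent=upper)

        bpy.ops.object.mode_set(mode='OBJECT')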

    Automatic rigging and animation of 3D characters

    Get PDF
    Animating an articulated 3D character currently requires manual rigging to specify its internal skeletal structure and to define how the input motion deforms its surface. We present a method for animating characters automatically. Given a static character mesh and a generic skeleton, our method adapts the skeleton to the character and attaches it to the surface, allowing skeletal motion data to animate the character. Because a single skeleton can be used with a wide range of characters, our method, in conjunction with a library of motions for a few skeletons, enables a user-friendly animation system for novices and children. Our prototype implementation, called Pinocchio, typically takes under a minute to rig a character on a modern midrange PC.

    Solidworks Corporation
    National Science Foundation (U.S.) Graduate Research Fellowship
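
    Once the generic skeleton is embedded and attached, skeletal motion deforms the surface through standard skinning. A minimal NumPy sketch of linear blend skinning, shown here only as the deformation step (Pinocchio's attachment computes the weights themselves with a heat-diffusion method):

        import numpy as np

        def linear_blend_skinning(rest_verts, weights, bone_transforms):
            # rest_verts:      (V, 3) rest-pose vertices
            # weights:         (V, B) per-vertex bone weights, rows sum to 1
            # bone_transforms: (B, 4, 4) rest-to-posed bone matrices
            V = rest_verts.shape[0]
            homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)
            # Per-bone transformed positions: (B, V, 4)
            per_bone = np.einsum("bij,vj->bvi", bone_transforms, homo)
            # Blend by skinning weights: (V, 4)
            blended = np.einsum("vb,bvi->vi", weights, per_bone)
            return blended[:, :3]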

    Tool for spatial and dynamic representation of artistic performances

    Get PDF
    This proposal aims to explore the use of available technologies for video representation of sets and performers, in order to support composition processes and artistic rehearsals, focusing on representing the performer's body and movements and their relation to objects in the three-dimensional space of the performance. The project's main goal is to design and develop a system that can spatially represent the performer and their movements, by means of capture and reconstruction processes using a camera device, and that can enhance the three-dimensional space where the performance occurs by allowing interaction with virtual objects and by adding a video component, either for documentary purposes or for live performance effects (for example, using video-mapping techniques on captured video or projections during a performance).

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Get PDF
    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to manipulate it directly and see immediate results. Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden CG capabilities and increase the realism of facial expressions. A new method to automatically place the bones on facial/head models, speeding up the rigging process of a human face, is also developed. To accomplish this, vertices that describe the face/head, as well as relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in facial regions of varying density, the mean position of the vertices in each group is used. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is also developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
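
    Two of the steps above lend themselves to a compact illustration: placing a bone at the mean of its vertex group, and transferring an expression via displacement vectors once a source-to-target vertex map exists. Shapes and names are assumptions, not the dissertation's code:

        import numpy as np

        def place_bone(group_vertices):
            # Place a bone at the mean of its vertex group, an assumed
            # simplification of the described distance/mean heuristics.
            return np.mean(group_vertices, axis=0)

        def transfer_expression(src_neutral, src_expr, tgt_neutral, vertex_map):
            # src_neutral, src_expr: (Vs, 3) source neutral and expression
            # tgt_neutral:           (Vt, 3) target neutral face
            # vertex_map:            (Vt,) source index for each target vertex
            displacement = src_expr - src_neutral   # per-vertex offsets
            return tgt_neutral + displacement[vertex_map]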