
    RE@CT - Immersive Production and Delivery of Interactive 3D Content

    This paper describes the aims and concepts of the FP7 RE@CT project. Building upon the latest advances in 3D capture and free-viewpoint video, RE@CT aims to revolutionise the production of realistic characters and significantly reduce costs by developing an automated process to extract and represent animated characters from actor performance capture in a multiple-camera studio. The key innovation is the development of methods for the analysis and representation of 3D video that allow its reuse for real-time interactive animation. This will enable efficient authoring of interactive characters with video-quality appearance and motion.

    Learning to Dress {3D} People in Generative Clothing

    Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de. (CVPR 2020 camera-ready.)
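
    As a rough illustration of how such a model is consumed downstream, the sketch below draws a latent sample and decodes it into per-vertex clothing displacements added onto a SMPL body, mirroring the abstract's idea of clothing as an additive term; the decoder interface, latent size, and function names are hypothetical assumptions, not CAPE's actual API.

```python
# Minimal sketch (hypothetical API, not CAPE's actual interface): dress a
# SMPL body by sampling a clothing-displacement generator conditioned on
# pose and clothing type, treating clothing as an additive vertex offset.
import numpy as np

NUM_VERTS = 6890   # SMPL mesh resolution
LATENT_DIM = 64    # assumed latent size for the generator

def sample_clothing_displacements(pose, clothing_type, decoder):
    """Draw a latent code and decode (NUM_VERTS, 3) displacements."""
    z = np.random.randn(LATENT_DIM)
    cond = np.concatenate([pose, clothing_type])  # conditioning vector
    return decoder(z, cond)

def dress_body(body_verts, pose, clothing_type, decoder):
    # Clothing enters as an additive offset on the minimally-clothed body,
    # so one clothing sample can dress different body shapes and poses.
    return body_verts + sample_clothing_displacements(pose, clothing_type, decoder)
```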

    Behavioural facial animation using motion graphs and mind maps

    We present a new behavioural animation method that combines motion graphs for the synthesis of animation with mind maps as behaviour controllers for the choice of motions, significantly reducing the cost of animating secondary characters. Motion graphs are created for each facial region from the analysis of a motion database, while synthesis occurs by minimizing the path distance that connects automatically chosen nodes. A mind map is a hierarchical graph built on top of the motion graphs, in which the user visually specifies how a stimulus affects the character's mood, which in turn triggers motion synthesis. Different personality traits add more emotional complexity to the chosen reactions. Combining behaviour simulation and procedural animation leads to more empathic and autonomous characters that react differently in each interaction, shifting the task of animating a character to one of defining its behaviour.
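
    The synthesis step described above amounts to a cheapest-path search over the motion graph. The sketch below shows one plausible formulation using Dijkstra's algorithm over transition costs; the graph layout, node names, and cost values are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: synthesis as a cheapest-path search between the current
# node and the node chosen by the behaviour (mind-map) layer.
import heapq

def cheapest_transition_path(graph, start, goal):
    """graph: {node: [(neighbour, cost), ...]} -> list of nodes, or None."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None

# Example: a tiny eyebrow-region graph with motion clips as nodes.
g = {"neutral": [("raise", 0.2), ("frown", 0.5)],
     "raise": [("neutral", 0.2)],
     "frown": [("neutral", 0.5)]}
print(cheapest_transition_path(g, "neutral", "frown"))  # ['neutral', 'frown']
```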

    High-Quality Human 3D Body Modeling, Tracking and Application

    Geometric reconstruction of dynamic objects is a fundamental task in computer vision and graphics, and high-fidelity modeling of the human body is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, limiting working space and portability. This dissertation designs a complete pipeline, from geometric modeling of the detailed 3D human body and capture of its shape dynamics over time with a flexible setup, through to guiding clothes/person re-targeting with the resulting data-driven models. Since the mechanical movement of the human body can be treated as articulated motion, which readily drives skin animation but makes the inverse problem of recovering parameters from images without manual intervention difficult, we present a novel parametric model, GMM-BlendSCAPE, which jointly combines a linear skinning model with the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople), and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show increased accuracy of joint and skin-surface estimation with our model compared to skeleton-based motion tracking. To model the detailed body, we begin by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we then reconstruct multiple complete static models across large pose differences via a novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking within a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Finally, we demonstrate a novel virtual clothes try-on application based on our personalized model that utilizes both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
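
    To make the role of the Gaussian Mixture Model concrete, the sketch below shows the E-step of a generic GMM point-set fit: each observed scan point is softly assigned to model vertices, with a uniform outlier term absorbing missing or noisy data. This is a textbook formulation offered for illustration, not the dissertation's implementation.

```python
# Minimal sketch of a GMM E-step for fitting a body model to a partial scan:
# soft correspondences between scan points and model vertices, with a small
# uniform outlier weight so incomplete observations do not derail the fit.
import numpy as np

def gmm_responsibilities(scan_pts, model_verts, sigma2, outlier_w=1e-3):
    """scan_pts: (N, 3), model_verts: (M, 3) -> (N, M) soft assignments."""
    d2 = ((scan_pts[:, None, :] - model_verts[None, :, :]) ** 2).sum(-1)
    p = np.exp(-0.5 * d2 / sigma2)                     # Gaussian likelihoods
    denom = p.sum(axis=1, keepdims=True) + outlier_w   # uniform outlier term
    return p / denom

# An M-step would then update pose/shape parameters by minimizing the
# responsibility-weighted distances, rather than hard nearest-neighbour ICP.
```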

    Interactive visualization tools for topological exploration

    Thesis (Ph.D.) - Indiana University, Computer Science, 1992. This thesis concerns using computer graphics methods to visualize mathematical objects. Abstract mathematical concepts are extremely difficult to visualize, particularly when higher dimensions are involved; I therefore concentrate on subject areas such as the topology and geometry of four dimensions, which provide a very challenging domain for visualization techniques. In the first stage of this research, I applied existing three-dimensional computer graphics techniques to visualize projected four-dimensional mathematical objects in an interactive manner. I carried out experiments with direct object manipulation and constraint-based interaction and implemented tools for visualizing mathematical transformations. As an application, I applied these techniques to visualizing the conjecture known as Fermat's Last Theorem. Four-dimensional objects would best be perceived through four-dimensional eyes. Even though we do not have four-dimensional eyes, we can use computer graphics techniques to simulate the effect of a virtual four-dimensional camera viewing a scene where four-dimensional objects are being illuminated by four-dimensional light sources. I extended standard three-dimensional lighting and shading methods to work in the fourth dimension. This involved replacing the standard "z-buffer" algorithm with a "w-buffer" algorithm for handling occlusion, and replacing the standard "scan-line" conversion method with a new "scan-plane" conversion method. Furthermore, I implemented a new "thickening" technique that made it possible to illuminate surfaces correctly in four dimensions. Our new techniques generate smoothly shaded, highlighted view-volume images of mathematical objects as they would appear from a four-dimensional viewpoint. These images reveal fascinating structures of mathematical objects that could not be seen with standard 3D computer graphics techniques. As applications, we generated still images and animation sequences for mathematical objects such as the Steiner surface, the four-dimensional torus, and a knotted 2-sphere. The images of surfaces embedded in 4D that have been generated using our methods are unique in the history of mathematical visualization. Finally, I adapted these techniques to visualize volumetric data (3D scalar fields) generated by other scientific applications. Compared to other volume visualization techniques, this method provides a new approach that researchers can use to look at and manipulate certain classes of volume data.
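
    The core of the occlusion change is easy to state: where a z-buffer keeps, per pixel, the fragment nearest the camera in z, a w-buffer keeps, per voxel of a 3D view volume, the fragment nearest the 4D camera in w. A minimal sketch of that test follows; the resolution and data layout are assumptions for illustration.

```python
# Minimal sketch of a "w-buffer": per voxel of a 3D view volume, keep the
# fragment nearest the 4D camera in w, exactly as a z-buffer does per pixel
# one dimension lower. Resolution and layout are illustrative assumptions.
import numpy as np

RES = 64
w_buffer = np.full((RES, RES, RES), np.inf)   # nearest w seen so far
volume = np.zeros((RES, RES, RES, 3))         # RGB view-volume image

def write_fragment(x, y, z, w, color):
    """Overwrite a voxel only if this fragment is closer in w."""
    if w < w_buffer[x, y, z]:
        w_buffer[x, y, z] = w
        volume[x, y, z] = color

write_fragment(10, 10, 10, 0.5, (1.0, 0.0, 0.0))  # visible fragment
write_fragment(10, 10, 10, 0.9, (0.0, 1.0, 0.0))  # occluded: further in w
```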

    Exploring the use of skeletal tracking for cheaper motion graphs and on-set decision making in free-viewpoint video production

    In free-viewpoint video (FVV), the motion and surface appearance of a real-world performance is captured as an animated mesh. While this technology can produce high-fidelity recreations of actors, the required 3D reconstruction step has substantial processing demands. This makes FVV experiences expensive to produce, and the processing delay hampers on-set decisions through a lack of timely feedback. This work explores the possibility of using RGB-camera-based skeletal tracking to reduce the amount of content that must be 3D reconstructed, as well as to aid on-set decision making. One particularly relevant application is the construction of motion graphs, where state-of-the-art techniques require large amounts of content to be 3D reconstructed before a graph can be built, resulting in large amounts of wasted processing effort. Here, we propose the use of skeletons to assess which clips of FVV content to process, resulting in substantial cost savings with a limited impact on performance accuracy. Additionally, we explore how this technique could be utilised on set to reduce the possibility of requiring expensive reshoots.
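
    One plausible shape for the clip-selection step is sketched below: cheap skeletal tracking summarizes each clip's pose, and only clips whose poses add coverage the selected set lacks are queued for full 3D reconstruction. The distance measure, threshold, and greedy policy are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (illustrative, not the paper's method): greedily pick FVV
# clips for full 3D reconstruction based on skeletal pose novelty, so that
# redundant clips are never reconstructed.
import numpy as np

def pose_distance(a, b):
    """Mean joint-position distance between two (J, 3) skeleton poses."""
    return np.linalg.norm(a - b, axis=-1).mean()

def select_clips(clip_poses, budget, novelty_thresh=0.15):
    """clip_poses: list of (J, 3) mean skeleton poses, one per captured clip.
    Returns indices of clips worth reconstructing, up to the budget."""
    selected = []
    for i, pose in enumerate(clip_poses):
        if len(selected) >= budget:
            break
        if all(pose_distance(pose, clip_poses[j]) > novelty_thresh
               for j in selected):
            selected.append(i)   # pose is novel: worth reconstructing
    return selected
```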