
    RE@CT - Immersive Production and Delivery of Interactive 3D Content

    No full text
    This paper describes the aims and concepts of the FP7 RE@CT project. Building upon the latest advances in 3D capture and free-viewpoint video, RE@CT aims to revolutionise the production of realistic characters and significantly reduce costs by developing an automated process to extract and represent animated characters from actor performance capture in a multiple-camera studio. The key innovation is the development of methods for the analysis and representation of 3D video that allow reuse for real-time interactive animation. This will enable efficient authoring of interactive characters with video-quality appearance and motion.

    3D video performance segmentation

    Full text link
    We present a novel approach that achieves segmentation of subject body parts in 3D videos. 3D video consists of a free-viewpoint video of real-world subjects in motion immersed in a virtual world. Each 3D video frame is composed of one or several 3D models. A topology dictionary is used to cluster 3D video sequences with respect to model topology and shape. The topology is characterized using Reeb graph-based descriptors, and no prior explicit model of the subject shape is necessary to perform the clustering process. In this framework, the dictionary consists of a set of training input poses with a priori segmentation and labels. As a consequence, all identified frames of 3D video sequences can be automatically segmented. Finally, motion flows computed between consecutive frames are used to transfer segmented region labels to unidentified frames. Our method allows us to perform robust body part segmentation and tracking in 3D cinema sequences. Index Terms — 3D video, topology dictionary, shape matching, body segmentation
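The labeling pipeline the abstract describes (match each frame against a dictionary of pre-segmented training poses, then transfer labels to unmatched frames via motion flow) can be sketched roughly as follows. This is a simplified illustration, not the paper's implementation: the descriptors are assumed to be plain feature vectors standing in for the Reeb graph-based topology descriptors, and motion-flow label transfer is reduced to filling along the sequence.

```python
import numpy as np

def label_frames(frame_descs, dict_descs, dict_labels, threshold=0.5):
    """Assign each 3D video frame the segmentation labels of its nearest
    dictionary pose; frames with no sufficiently close match remain
    unidentified (None). Descriptors here are hypothetical fixed-length
    feature vectors, not the paper's actual Reeb-graph descriptors."""
    labels = []
    for d in frame_descs:
        dists = np.linalg.norm(dict_descs - d, axis=1)
        i = int(np.argmin(dists))
        labels.append(dict_labels[i] if dists[i] < threshold else None)
    return labels

def propagate_labels(labels):
    """Transfer labels from identified frames to unidentified neighbours.
    The paper uses motion flows between consecutive frames; this sketch
    simplifies that to a forward then backward fill along the sequence."""
    out = list(labels)
    for i in range(1, len(out)):           # forward pass
        if out[i] is None:
            out[i] = out[i - 1]
    for i in range(len(out) - 2, -1, -1):  # backward pass
        if out[i] is None:
            out[i] = out[i + 1]
    return out
```

Frames whose descriptor lies close to a training pose inherit its segmentation directly; the remaining frames are covered by the propagation step, which mirrors the paper's claim that only identified frames need dictionary matches.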

    SPA: Sparse Photorealistic Animation using a single RGB-D camera

    Get PDF
    Photorealistic animation is a desirable technique for computer games and movie production. We propose a new method to synthesize plausible videos of human actors with new motions using a single cheap RGB-D camera. A small database is captured in an ordinary office environment; this capture happens only once and is reused to synthesize different motions. We propose a markerless performance capture method using sparse deformation to obtain the geometry and pose of the actor at each time instance in the database. We then synthesize an animation video of the actor performing a new, user-defined motion. An adaptive model-guided texture synthesis method based on weighted low-rank matrix completion is proposed to reduce sensitivity to noise and outliers, which enables us to easily create photorealistic animation videos with new motions that differ from the motions in the database. Experimental results on a public dataset and our captured dataset verify the effectiveness of the proposed method.
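The weighted low-rank matrix completion that underlies the texture synthesis step can be illustrated with a generic iterative truncated-SVD solver. This is a minimal sketch of the general technique, assuming per-entry confidence weights in [0, 1] (0 for missing or unreliable texture samples); it is not the paper's exact solver or weighting scheme.

```python
import numpy as np

def weighted_low_rank_complete(M, W, rank=2, iters=100):
    """Fill missing or unreliable entries of M by repeatedly projecting
    onto the set of rank-`rank` matrices (truncated SVD) and blending the
    result with the observed data according to the weights W.
    W[i, j] = 1 means fully trusted observation, 0 means missing."""
    X = np.where(W > 0, M, 0.0)                    # initialize unknowns to 0
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = W * M + (1.0 - W) * L                  # keep observed entries by weight
    return X
```

Down-weighting noisy entries (W between 0 and 1) lets the low-rank model override outliers, which is the property the abstract credits for robustness to noise.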

    Human motion synthesis from 3D video

    Get PDF
    Multiple-view 3D video reconstruction of actor performance captures a level of detail for body and clothing movement which is time-consuming to produce using existing animation tools. In this paper we present a framework for concatenative synthesis from multiple 3D video sequences according to user constraints on movement, position and timing. Multiple 3D video sequences of an actor performing different movements are automatically constructed into a surface motion graph, which represents the possible transitions with similar shape and motion between sequences without unnatural movement artefacts. Shape similarity over an adaptive temporal window is used to identify transitions between 3D video sequences. Novel 3D video sequences are synthesized by finding the optimal path in the surface motion graph between user-specified key-frames for control of movement, location and timing. The optimal path, which satisfies the user constraints whilst minimizing the total transition cost between 3D video sequences, is found using integer linear programming. Results demonstrate that this framework allows flexible production of novel 3D video sequences which preserve the detailed dynamics of the captured movement for an actress with loose clothing and long hair, without visible artefacts.
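The core search problem above (find the minimum-cost path through the surface motion graph between two user-specified key-frames) can be sketched with a toy graph. The paper solves it with integer linear programming so that timing and location constraints can be enforced; the sketch below handles only the unconstrained special case with plain Dijkstra, and the node names and costs are illustrative, not from the paper.

```python
import heapq

def optimal_path(graph, start, goal):
    """Minimum-transition-cost path through a motion graph.
    `graph` maps each node (a 3D video segment) to a list of
    (successor, transition_cost) pairs. Returns (path, total_cost)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal                     # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

With constraints such as a required total duration, the shortest-path formulation no longer suffices, which is why the paper turns to integer linear programming over edge-selection variables.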