
    Robust Non-Rigid Registration with Reweighted Position and Transformation Sparsity

    Non-rigid registration is challenging because it is ill-posed with high degrees of freedom and is thus sensitive to noise and outliers. We propose a robust non-rigid registration method using reweighted sparsities on position and transformation to estimate the deformations between 3-D shapes. We formulate the energy function with position and transformation sparsity on both the data term and the smoothness term, and define the smoothness constraint using local rigidity. The double-sparsity-based non-rigid registration model is enhanced with a reweighting scheme, and solved by decomposing the model into four alternately-optimized subproblems which have exact solutions and guaranteed convergence. Experimental results on both public datasets and real scanned datasets show that our method outperforms the state-of-the-art methods and is more robust to noise and outliers than conventional non-rigid registration methods. (IEEE Transactions on Visualization and Computer Graphics)
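    The reweighting idea can be illustrated with a generic iteratively reweighted least-squares (IRLS) sketch for a robust L1-style data term: large residuals (outliers) receive small weights and stop dominating the fit. This is an illustrative stand-in for the general technique, not the paper's four-subproblem solver; the function name and parameters are hypothetical.

    ```python
    import numpy as np

    def irls_l1(A, b, n_iters=50, eps=1e-6):
        """IRLS sketch for min ||Ax - b||_1.

        Each iteration solves a weighted least-squares problem whose
        weights down-weight large residuals, mimicking a reweighted
        sparsity scheme.
        """
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 initialisation
        for _ in range(n_iters):
            r = A @ x - b
            w = 1.0 / (np.abs(r) + eps)            # reweighting step
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        return x
    ```

    With a handful of gross outliers in b, the reweighted solution stays close to the inlier model while an ordinary least-squares fit is pulled away, which is the robustness the abstract refers to.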

    RECREATING AND SIMULATING DIGITAL COSTUMES FROM A STAGE PRODUCTION OF MEDEA

    This thesis investigates a technique to effectively construct and simulate costumes from a stage production of Medea, in a dynamic cloth simulation application like Maya's nDynamics. This was done by using data collected from real-world fabric tests and costume construction in the theatre's costume studio. Fabric tests were conducted and recorded, by testing costume fabrics for drape and behavior with two collision objects. These tests were recreated digitally in Maya to derive appropriate parameters for the digital fabric, by comparing with the original reference. Basic mannequin models were created using the actors' measurements and skeleton-rigged to enable animation. The costumes were then modeled and constrained according to the construction process observed in the costume studio to achieve the same style and stitch as the real costumes. Scenes selected and recorded from Medea were used as reference to animate the actors' models. The costumes were assigned the parameters derived from the fabric tests to produce the simulations. Finally, the scenes were lit and rendered out to obtain the final videos, which were compared to the original recordings to ascertain the accuracy of the simulation. By obtaining and refining simulation parameters from simple fabric collision tests, and modeling the digital costumes following the procedures derived from real-life costume construction, realistic costume simulation was achieved.

    Template-based Monocular 3-D Shape Reconstruction And Tracking Using Laplacian Meshes

    This thesis addresses the problem of recovering the 3-D shape of a deformable object in single images, or image sequences acquired by a monocular video camera, given that a 3-D template shape and a template image of the object are available. While this is a very challenging problem in computer vision, the ability to reconstruct and track 3-D deformable objects in videos enables many potential applications, ranging from sports and entertainment to engineering and medical imaging. This thesis extends the scope of deformable object modeling to real-world applications of fully 3-D modeling of deformable objects from video streams with a number of contributions. We show that by extending the Laplacian formalism, which was first introduced in the Graphics community to regularize 3-D meshes, we can turn the monocular 3-D shape reconstruction of a deformable object given correspondences with a reference image into a much better-posed problem with far fewer degrees of freedom than the original one. This has proved key to achieving real-time performance while preserving both sufficient flexibility and robustness. Our real-time 3-D reconstruction and tracking system for deformable objects can very quickly reject outlier correspondences and accurately reconstruct the object shape in 3D. Frame-to-frame tracking is exploited to track the object under difficult settings such as large deformations, occlusions, illumination changes, and motion blur. We present an approach to solving the problem of dense image registration and 3-D shape reconstruction of deformable objects in the presence of occlusions and minimal texture. A key ingredient is the pixel-wise relevancy score that we use to weigh the influence of the image information from a pixel in the image energy cost function. A careful design of the framework is essential for obtaining state-of-the-art results in recovering 3-D deformations of both well- and poorly-textured objects in the presence of occlusions.
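    The core of Laplacian mesh regularization can be sketched in its simplest least-squares form: keep each vertex's Laplacian coordinate (its offset from the average of its neighbours) close to that of the template, while pulling a few anchored vertices towards observed positions. This sketch uses a uniform graph Laplacian and soft anchors, a deliberate simplification of the thesis's formulation; all names are hypothetical.

    ```python
    import numpy as np

    def uniform_laplacian(n_verts, edges):
        """Uniform graph Laplacian L; (L @ x)[i] measures how far vertex i
        sits from the average of its neighbours (up to degree scaling)."""
        L = np.zeros((n_verts, n_verts))
        for i, j in edges:
            L[i, j] -= 1.0
            L[j, i] -= 1.0
            L[i, i] += 1.0
            L[j, j] += 1.0
        return L

    def deform(x0, edges, anchors, weight=10.0):
        """Least-squares Laplacian editing: preserve local shape
        (L x ~ L x0) while softly constraining anchored vertices.

        x0:      (N, 3) template vertex positions
        anchors: {vertex index: target 3-D position}
        """
        n = len(x0)
        L = uniform_laplacian(n, edges)
        rows, rhs = [L], [L @ x0]
        for idx, pos in anchors.items():
            e = np.zeros((1, n))
            e[0, idx] = weight                     # soft positional constraint
            rows.append(e)
            rhs.append(weight * np.asarray(pos, dtype=float)[None, :])
        A = np.vstack(rows)
        b = np.vstack(rhs)
        return np.linalg.lstsq(A, b, rcond=None)[0]
    ```

    Because the Laplacian term couples every vertex to its neighbours, a few reliable correspondences are enough to position the whole mesh, which is why the formulation is so much better posed than optimizing each vertex independently.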
We study the problem of reconstructing 3-D deformable objects interacting with rigid ones. Imposing real physical constraints allows us to model the interactions of objects in the real world more accurately and more realistically. In particular, we study the problem of a ball colliding with a bat observed by high speed cameras. We provide quantitative measurements of the impact that are compared with simulation-based methods to evaluate which simulation predictions most accurately describe a physical quantity of interest and to improve the models. Based on the diffuse property of the tracked deformable object, we propose a method to estimate the environment irradiance map represented by a set of low frequency spherical harmonics. The obtained irradiance map can be used to realistically illuminate 2-D and 3-D virtual contents in the context of augmented reality on deformable objects. The results compare favorably with baseline methods. In collaboration with Disney Research, we develop an augmented reality coloring book application that runs in real-time on mobile devices. The app lets children see their coloring come to life: animated characters are displayed with texture lifted from the colors they applied to the drawing. Deformations of the book page are explicitly modeled by our 3-D tracking and reconstruction method. As a result, accurate color information is extracted to synthesize the character's texture.
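The low-frequency irradiance estimate mentioned above can be sketched as a linear fit: for a diffuse surface, observed shading is approximately a linear combination of order-2 spherical-harmonic basis functions of the surface normal, so the 9 coefficients fall out of a least-squares solve. This is a generic sketch of that standard technique, not the thesis's exact pipeline; the function names are hypothetical.

```python
import numpy as np

def sh_basis(n):
    """Order-2 real spherical-harmonics basis (9 terms) evaluated at
    unit normals n of shape (N, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),                 # Y00
        0.488603 * y, 0.488603 * z, 0.488603 * x,  # Y1-1, Y10, Y11
        1.092548 * x * y, 1.092548 * y * z,        # Y2-2, Y2-1
        0.315392 * (3 * z**2 - 1),                 # Y20
        1.092548 * x * z,                          # Y21
        0.546274 * (x**2 - y**2),                  # Y22
    ], axis=1)

def fit_irradiance(normals, intensities):
    """Least-squares fit of 9 SH coefficients to the observed shading
    of a diffuse (Lambertian) surface."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs
```

The recovered coefficients form a compact environment-light representation that can then be used to shade virtual content consistently with the real scene.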

    Animated statues


    Live Texturing of Augmented Reality Characters from Colored Drawings

    Coloring books capture the imagination of children and provide them with one of their earliest opportunities for creative expression. However, given the proliferation and popularity of digital devices, real-world activities like coloring can seem unexciting, and children become less engaged in them. Augmented reality holds unique potential to impact this situation by providing a bridge between real-world activities and digital enhancements. In this paper, we present an augmented reality coloring book app in which children color characters in a printed coloring book and inspect their work using a mobile device. The drawing is detected and tracked, and the video stream is augmented with an animated 3-D version of the character that is textured according to the child's coloring. This is possible thanks to several novel technical contributions. We present a texturing process that applies the captured texture from a 2-D colored drawing to both the visible and occluded regions of a 3-D character in real time. We develop a deformable surface tracking method designed for colored drawings that uses a new outlier rejection algorithm for real-time tracking and surface deformation recovery. We present a content creation pipeline to efficiently create the 2-D and 3-D content. And, finally, we validate our work with two user studies that examine the quality of our texturing algorithm and the overall app experience.
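    The texture-lifting step can be illustrated in miniature: once the tracker has warped the colored page into the template frame, each mesh vertex carries UV coordinates into that drawing, and its color is a simple image lookup. This is a minimal sketch, assuming a pre-registered image and nearest-pixel sampling; the real system handles occluded regions and filtering, and the names here are hypothetical.

    ```python
    import numpy as np

    def lift_texture(drawing, uvs):
        """Sample per-vertex colours from a registered drawing image.

        drawing: (H, W, 3) float image of the coloured page, already
                 warped into the template frame by the tracker.
        uvs:     (N, 2) coordinates in [0, 1] mapping each mesh vertex
                 into the drawing.
        Returns (N, 3) colours via nearest-pixel lookup.
        """
        h, w = drawing.shape[:2]
        px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
        py = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
        return drawing[py, px]
    ```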

    MagicPony: Learning Articulated 3D Animals in the Wild

    We consider the problem of learning a function that can estimate the 3D shape, articulation, viewpoint, texture, and lighting of an articulated animal like a horse, given a single test image. We present a new method, dubbed MagicPony, that learns this function purely from in-the-wild single-view images of the object category, with minimal assumptions about the topology of deformation. At its core is an implicit-explicit representation of articulated shape and appearance, combining the strengths of neural fields and meshes. In order to help the model understand an object's shape and pose, we distil the knowledge captured by an off-the-shelf self-supervised vision transformer and fuse it into the 3D model. To overcome common local optima in viewpoint estimation, we further introduce a new viewpoint sampling scheme that comes at no added training cost. Compared to prior works, we show significant quantitative and qualitative improvements on this challenging task. The model also demonstrates excellent generalisation in reconstructing abstract drawings and artefacts, despite the fact that it is only trained on real images. (Project page: https://3dmagicpony.github.io)