
    Creating Simplified 3D Models with High Quality Textures

    This paper presents an extension to the KinectFusion algorithm that allows creating simplified 3D models with high-quality RGB textures. This is achieved by (i) creating model textures using images from an HD RGB camera calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model. The proposed method runs in real time by means of GPU parallel processing. Visualization via ray casting of both the geometry and colour volumes provides users with real-time feedback on the currently scanned 3D model. Experimental results show that the proposed method preserves model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
    Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -
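    A minimal sketch of the quadric-based decimation step, using Open3D's CPU implementation rather than the paper's GPU pipeline; the file names and target triangle count below are placeholders, and the per-polygon texture mapping from the higher-resolution colour volume is not reproduced here.

```python
# Sketch only: quadric-based decimation of a dense reconstructed mesh.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("fused_scan.ply")   # placeholder input mesh
mesh.compute_vertex_normals()

# Heavy decimation; in the paper, texture quality is preserved separately by
# mapping 2D textures onto each remaining polygon, not by vertex colours.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
o3d.io.write_triangle_mesh("fused_scan_simplified.ply", simplified)
```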

    Morphological operators for very low bit rate video coding

    This paper deals with the use of some morphological tools for video coding at very low bit rates. Rather than describing a complete coding algorithm, the purpose of this paper is to focus on morphological connected operators and segmentation tools that have proved to be attractive for compression.
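    To illustrate the kind of connected operator the paper discusses, a minimal sketch using scikit-image's area opening/closing on a single greyscale frame; the synthetic frame and the area threshold are placeholders and this is not the paper's coding scheme.

```python
# Sketch only: area opening/closing are classic connected operators.
# They simplify an image by removing small bright/dark components while never
# creating new contours, which keeps the resulting partition cheap to code.
import numpy as np
from skimage import morphology

frame = np.random.randint(0, 256, size=(288, 352), dtype=np.uint8)   # placeholder CIF-sized frame
simplified = morphology.area_opening(frame, area_threshold=64)       # drop small bright components
simplified = morphology.area_closing(simplified, area_threshold=64)  # drop small dark components
```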

    Discrete curvature approximations and segmentation of polyhedral surfaces

    The segmentation of digitized data to divide a free-form surface into patches is one of the key steps in the reverse engineering of an object. To this end, discrete curvature approximations are introduced as the basis of a segmentation process that leads to a decomposition of digitized data into areas that help the construction of parametric surface patches. The proposed approach relies on a polyhedral representation of the object built from the digitized input data. It is then shown how noise reduction, edge-swapping techniques and adapted remeshing schemes can contribute to different preparation phases to provide a geometry that highlights characteristics useful for the segmentation process. The segmentation is performed with various approximations of discrete curvatures evaluated on the polyhedron produced during the preparation phases. The proposed segmentation process involves two phases: the identification of characteristic polygonal lines and the identification of polyhedral areas useful for a patch construction process. Discrete curvature criteria are adapted to each phase, and the concept of invariant evaluation of curvatures is introduced to generate criteria that are constant over equivalent meshes. A description of the segmentation procedure is provided together with examples of results for free-form object surfaces.
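    A minimal sketch of one common discrete curvature approximation on a triangulated polyhedron, the angle-deficit estimate of Gaussian curvature at each vertex; this is an illustrative stand-in and not necessarily the exact criterion used in the paper.

```python
# Sketch only: angle-deficit estimate of discrete Gaussian curvature,
# K(v) = (2*pi - sum of incident face angles at v) / (1/3 of incident face area).
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """vertices: (n, 3) float array, faces: (m, 3) int array of triangle indices."""
    n = len(vertices)
    angle_sum = np.zeros(n)
    area = np.zeros(n)
    for tri in faces:
        p = vertices[tri]                                  # the three corner positions
        for k in range(3):
            a, b, c = p[k], p[(k + 1) % 3], p[(k + 2) % 3]
            u, v = b - a, c - a
            cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            angle_sum[tri[k]] += np.arccos(np.clip(cos_t, -1.0, 1.0))
            area[tri[k]] += 0.5 * np.linalg.norm(np.cross(u, v)) / 3.0   # one third of the triangle area
    return (2.0 * np.pi - angle_sum) / np.maximum(area, 1e-12)
```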

    Generating 3D faces using Convolutional Mesh Autoencoders

    Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation capturing non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://github.com/anuragranj/com
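    A minimal sketch, under assumed architectural details rather than the authors' released code, of the kind of Chebyshev spectral convolution layer such a mesh autoencoder stacks between mesh down- and up-sampling operations; `laplacian` is assumed to be the rescaled sparse mesh Laplacian (2L/lambda_max - I).

```python
# Sketch only: K-order Chebyshev spectral convolution on mesh vertex features.
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    def __init__(self, in_channels, out_channels, K):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(K, in_channels, out_channels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x, laplacian):
        # x: (n_vertices, in_channels); laplacian: sparse (n_vertices, n_vertices)
        Tx = [x]                                            # T_0(L) x = x
        if self.weight.shape[0] > 1:
            Tx.append(torch.sparse.mm(laplacian, x))        # T_1(L) x = L x
        for _ in range(2, self.weight.shape[0]):
            Tx.append(2 * torch.sparse.mm(laplacian, Tx[-1]) - Tx[-2])  # Chebyshev recurrence
        out = sum(t @ w for t, w in zip(Tx, self.weight))
        return out + self.bias
```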