
    A comparison of hole-filling methods in 3D

    This paper presents a review of the most relevant current techniques that deal with hole-filling in 3D models. Contrary to earlier reports, which approach mesh repairing in a sparse and global manner, the objective of this review is twofold. First, a specific and comprehensive review of hole-filling techniques (as a relevant part of the field of mesh repairing) is carried out. We present a brief summary of each technique, with attention paid to its algorithmic essence, main contributions and limitations. Second, a solid comparison between 34 methods is established. To do this, we define 19 meaningful features and properties that can be found in a generic hole-filling process. Then, we use these features to assess the virtues and deficiencies of each method and to build comparative tables. The purpose of this review is to make a comparative hole-filling state of the art available to researchers, showing pros and cons in a common framework. Funding: Ministerio de Economía y Competitividad, Proyecto DPI2013-43344-R (I+D+i); Gobierno de Castilla-La Mancha, Proyecto PEII-2014-017-P. Peer reviewed.

    Hierarchical Hole-Filling for Depth-Based View Synthesis in FTV and 3D Video

    Methods for hierarchical hole-filling and depth adaptive hierarchical hole-filling and error correcting in 2D images, 3D images, and 3D wrapped images are provided. Hierarchical hole-filling can comprise reducing an image that contains holes, expanding the reduced image, and filling the holes in the image with data obtained from the expanded image. Depth adaptive hierarchical hole-filling can comprise preprocessing the depth map of a 3D wrapped image that contains holes, reducing the preprocessed image, expanding the reduced image, and filling the holes in the 3D wrapped image with data obtained from the expanded image. These methods can efficiently reduce errors in images and produce 3D images from 2D images and/or depth map information. Georgia Tech Research Corporation.
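    A minimal sketch of the reduce/expand/fill idea summarized above (an illustration, not the patented implementation): hole pixels are assumed to be marked as NaN, the block-averaging reduce and nearest-neighbour expand are deliberate simplifications, and the pyramid depth of 4 is an arbitrary choice.

    import numpy as np

    def reduce_ignoring_holes(img):
        """Halve the resolution, averaging each 2x2 block over its valid (non-NaN) pixels only."""
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
        valid = ~np.isnan(blocks)
        total = np.where(valid, blocks, 0.0).sum(axis=(1, 3))
        count = valid.sum(axis=(1, 3))
        out = np.full(total.shape, np.nan)
        np.divide(total, count, out=out, where=count > 0)  # NaN survives only where all 4 pixels are holes
        return out

    def expand(img, shape):
        """Upsample back to `shape` by pixel replication (nearest neighbour), padding odd borders."""
        out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
        pad = ((0, max(shape[0] - out.shape[0], 0)), (0, max(shape[1] - out.shape[1], 0)))
        return np.pad(out, pad, mode="edge")[:shape[0], :shape[1]]

    def hierarchical_hole_fill(img, levels=4):
        """Fill NaN holes with data taken from expanded, progressively coarser copies of the image."""
        pyramid = [img]
        for _ in range(levels):
            pyramid.append(reduce_ignoring_holes(pyramid[-1]))
        filled = pyramid[-1]
        for level in reversed(pyramid[:-1]):
            up = expand(filled, level.shape)
            level = level.copy()
            holes = np.isnan(level)
            level[holes] = up[holes]  # copy the expanded coarse data into the holes only
            filled = level
        return filled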

    Extreme 3D Face Reconstruction: Seeing Through Occlusions

    Existing single-view 3D face reconstruction methods can produce beautifully detailed 3D results, but typically only for near-frontal, unobstructed viewpoints. We describe a system designed to provide detailed 3D reconstructions of faces viewed under extreme conditions, out-of-plane rotations, and occlusions. Motivated by the concept of bump mapping, we propose a layered approach which decouples estimation of a global shape from its mid-level details (e.g., wrinkles). We estimate a coarse 3D face shape which acts as a foundation and then separately layer this foundation with details represented by a bump map. We show how a deep convolutional encoder-decoder can be used to estimate such bump maps. We further show how this approach naturally extends to generate plausible details for occluded facial regions. We test our approach and its components extensively, quantitatively demonstrating the invariance of our estimated facial details. We further provide numerous qualitative examples showing that our method produces detailed 3D face shapes in viewing conditions where existing state-of-the-art methods often break down. Comment: Accepted to CVPR'18. Previously titled: "Extreme 3D Face Reconstruction: Looking Past Occlusions".
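    As a rough, hypothetical illustration of the layering idea (not the authors' pipeline): the coarse face acts as a smooth foundation in depth-map form, the predicted bump map is added on top as per-pixel depth offsets, and normals can then be recovered from the detailed depth. The array names and the finite-difference normal recovery are assumptions made only for this sketch.

    import numpy as np

    def layer_bump_onto_depth(coarse_depth, bump):
        """Add mid-level detail (per-pixel offsets from a bump map) on top of a smooth coarse depth map."""
        assert coarse_depth.shape == bump.shape
        return coarse_depth + bump

    def depth_to_normals(depth):
        """Recover per-pixel surface normals from the detailed depth via finite differences."""
        dz_dy, dz_dx = np.gradient(depth)
        n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
        return n / np.linalg.norm(n, axis=2, keepdims=True)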

    Towards recovery of complex shapes in meshes using digital images for reverse engineering applications

    When an object has complex shapes, or when its outer surfaces are simply inaccessible, some of its parts may not be captured during its reverse engineering. These deficiencies in the point cloud result in a set of holes in the reconstructed mesh. This paper deals with the use of information extracted from digital images to recover missing areas of a physical object. The proposed algorithm fills in these holes by solving an optimization problem that combines two kinds of information: (1) the geometric information available in the surroundings of the holes, and (2) the information contained in an image of the real object. The constraints come from the image irradiance equation, a first-order non-linear partial differential equation that links the positions of the mesh vertices to the light intensity of the image pixels. The blending conditions are satisfied by using an objective function based on a mechanical model of a bar network that simulates the curvature evolution over the mesh. The shortcomings inherent both in current hole-filling algorithms and in the resolution of the image irradiance equation are thus overcome.
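    For reference, a standard Lambertian form of the image irradiance equation referred to above (a generic textbook formulation, not necessarily the paper's exact one) is, in LaTeX notation:

    E(x,y) \;=\; R\big(\mathbf{n}(x,y)\big) \;=\; \rho \,\max\!\big(0,\; \mathbf{n}(x,y)\cdot\mathbf{s}\big)

    where E(x,y) is the image irradiance at pixel (x,y), \mathbf{n}(x,y) is the unit normal of the surface point projecting onto that pixel, \mathbf{s} is the direction of a distant light source and \rho is the albedo. Because \mathbf{n}(x,y) depends non-linearly on the unknown vertex positions, this is the first-order non-linear partial differential equation that constrains the hole-filling optimization.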

    Learning quadrangulated patches for 3D shape parameterization and completion

    We propose a novel 3D shape parameterization by surface patches that are oriented by a 3D mesh quadrangulation of the shape. By encoding 3D surface detail on local patches, we learn a patch dictionary that identifies principal surface features of the shape. Unlike previous methods, we are able to encode surface patches of variable size as determined by the user. We propose novel methods for dictionary learning and patch reconstruction based on the query of a noisy input patch with holes. We evaluate the patch dictionary towards various applications in 3D shape inpainting, denoising and compression. Our method is able to predict missing vertices and inpaint moderately sized holes. We demonstrate a complete pipeline for reconstructing the 3D mesh from the patch encoding. We validate our shape parameterization and reconstruction methods on both synthetic shapes and real-world scans. We show that our patch dictionary performs successful shape completion of complicated surface textures. Comment: To be presented at the International Conference on 3D Vision 2017.
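    A minimal sketch of the dictionary-learning and sparse-reconstruction step, assuming the mesh patches have already been encoded as fixed-length descriptors; the quadrangulation, variable patch sizes and mesh-specific encoding of the paper are abstracted away, and the toy data and parameter values are arbitrary.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(500, 64))        # toy training set: rows are flattened patch descriptors

    # Learn an overcomplete dictionary whose atoms capture principal surface features.
    dico = DictionaryLearning(n_components=96, alpha=1.0, max_iter=100,
                              transform_algorithm="omp", transform_n_nonzero_coefs=8,
                              random_state=0)
    dico.fit(patches)
    D = dico.components_                        # learned atoms, shape (96, 64)

    # Query with a noisy patch: its sparse code over a few atoms yields a cleaned reconstruction.
    noisy_query = patches[0] + 0.1 * rng.normal(size=64)
    code = dico.transform(noisy_query[None, :])
    reconstruction = code @ D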

    Dictionary Learning-based Inpainting on Triangular Meshes

    The problem of inpainting consists of filling missing or damaged regions in images and videos in such a way that the filling pattern does not produce artifacts that deviate from the original data. In addition to restoring the missing data, inpainting can also be used to remove undesired objects. In this work, we address the problem of inpainting on surfaces through a new method based on dictionary learning and sparse coding. Our method learns the dictionary through the subdivision of the mesh into patches and rebuilds the mesh via a reconstruction method inspired by Non-local Means, applied to the computed sparse codes. One of the advantages of our method is that it is capable of filling the missing regions while simultaneously removing noise and enhancing important features of the mesh. Moreover, the inpainting result is globally coherent, as the representation based on the dictionaries captures all the geometric information in the transformed domain. We present two variations of the method: a direct one, in which the model is reconstructed and restored directly from the representation in the transformed domain, and an adaptive one, in which the missing regions are recreated iteratively through the successive propagation of the sparse codes computed at the hole boundaries, which guide the local reconstructions. The second method produces better results for large regions because the sparse codes of the patches are adapted according to the sparse codes of the boundary patches. Finally, we present and analyze experimental results that demonstrate the performance of our method compared to the literature.
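    A minimal sketch of the direct idea of restoring a damaged patch from its sparse code, under the assumption that a patch dictionary has already been learned over intact patches and that each patch is a fixed-length descriptor; the adaptive, boundary-driven propagation of the second variant is not shown, and all names and sizes below are illustrative.

    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.default_rng(1)
    D = rng.normal(size=(96, 64))               # stand-in for a learned dictionary (atoms x patch samples)
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    patch = rng.normal(size=64)                 # one damaged patch descriptor
    known = np.ones(64, dtype=bool)
    known[25:40] = False                        # samples lost inside the hole

    # Sparse-code the patch using only its known samples against the matching dictionary columns...
    coder = SparseCoder(dictionary=D[:, known], transform_algorithm="omp",
                        transform_n_nonzero_coefs=6)
    code = coder.transform(patch[known][None, :])

    # ...then evaluate the full dictionary with that code to predict the missing samples.
    restored = (code @ D).ravel()
    patch_filled = np.where(known, patch, restored)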