6,891 research outputs found
Repairing triangle meshes built from scanned point cloud
The Reverse Engineering process consists of a succession of operations that aim at creating a digital representation of a physical model. The reconstructed geometric model is often a triangle mesh built from a point cloud acquired with a scanner. Depending on both the object complexity and the scanning process, some areas of the object outer surface may never be accessible, thus inducing some deficiencies in the point cloud and, as a consequence, some holes in the resulting mesh. This is simply not acceptable in an integrated design process where the geometric models are often shared between the various applications (e.g. design, simulation, manufacturing). In this paper, we propose a complete toolbox to fill in these undesirable holes. The hole contour is first cleaned to remove badly-shaped triangles that are due to the scanner noise. A topological grid is then inserted and deformed to satisfy blending conditions with the surrounding mesh. In our approach, the shape of the inserted mesh results from the minimization of a quadratic function based on a linear mechanical model that is used to approximate the curvature variation between the inner and surrounding meshes. Additional geometric constraints can also be specified to further shape the inserted mesh. The proposed approach is illustrated with some examples coming from our prototype software
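The quadratic minimization the abstract describes can be pictured, in much reduced form, as a discrete membrane problem: positions of the inserted hole vertices are solved so that each sits at the average of its neighbors, with the surrounding mesh held fixed. The sketch below is an illustrative stand-in (plain NumPy, with function and variable names of my own choosing), not the authors' linear mechanical model:

```python
import numpy as np

def fill_hole_laplacian(vertices, neighbors, interior):
    """Place each unknown (interior) vertex at the average of its
    neighbors by minimizing a quadratic membrane energy: a linear
    least-squares analogue of curvature-based hole filling.

    vertices : dict  vertex id -> np.array(3,)  (fixed boundary ring)
    neighbors: dict  interior vertex id -> list of neighbor ids
    interior : list  ids of the unknown vertices to solve for
    """
    idx = {v: i for i, v in enumerate(interior)}
    k = len(interior)
    A = np.zeros((k, k))
    b = np.zeros((k, 3))
    for v in interior:
        i = idx[v]
        nbrs = neighbors[v]
        A[i, i] = len(nbrs)          # degree of the vertex
        for u in nbrs:
            if u in idx:
                A[i, idx[u]] -= 1.0  # unknown neighbor: goes in the system
            else:
                b[i] += vertices[u]  # fixed boundary neighbor: right-hand side
    return np.linalg.solve(A, b)    # (k, 3) solved positions
```

For a single hole vertex ringed by four boundary vertices, the solver simply returns their centroid; on a larger topological grid the same system produces a smooth blend with the surrounding mesh.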
Towards recovery of complex shapes in meshes using digital images for reverse engineering applications
When an object has complex shapes, or when its outer surfaces are simply inaccessible, some of its parts may not be captured during its reverse engineering. These deficiencies in the point cloud result in a set of holes in the reconstructed mesh. This paper deals with the use of information extracted from digital images to recover missing areas of a physical object. The proposed algorithm fills in these holes by solving an optimization problem that combines two kinds of information: (1) the geometric information available in the surroundings of the holes, and (2) the information contained in an image of the real object. The constraints come from the image irradiance equation, a first-order non-linear partial differential equation that links the position of the mesh vertices to the light intensity of the image pixels. The blending conditions are satisfied by using an objective function based on a mechanical model of a bar network that simulates the curvature evolution over the mesh. The shortcomings inherent both in current hole-filling algorithms and in the resolution of the image irradiance equation are overcome
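The image irradiance equation links pixel intensity to the surface normal, which in turn depends on the vertex positions being optimized. Under the common Lambertian assumption it reduces to I(x, y) = ρ (n · s). A minimal sketch (the Lambertian model and the names are my simplifying assumptions, not the paper's full formulation):

```python
import numpy as np

def irradiance_lambertian(normal, light_dir, albedo=1.0):
    """Lambertian instance of the image irradiance equation
    I(x, y) = R(n(x, y)): predicted pixel intensity as a function
    of the local surface normal and the light direction."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    s = np.asarray(light_dir, float)
    s /= np.linalg.norm(s)
    # Clamp at zero: surfaces facing away from the light receive none.
    return albedo * max(0.0, float(n @ s))
```

In a hole-filling optimization of this kind, the residual between this predicted intensity and the observed pixel value would act as the image-based constraint on the mesh vertices.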
Dictionary Learning-based Inpainting on Triangular Meshes
The problem of inpainting consists of filling missing or damaged regions in
images and videos in such a way that the filling pattern does not produce
artifacts that deviate from the original data. In addition to restoring the
missing data, the inpainting technique can also be used to remove undesired
objects. In this work, we address the problem of inpainting on surfaces through
a new method based on dictionary learning and sparse coding. Our method learns
the dictionary through the subdivision of the mesh into patches and rebuilds
the mesh via a method of reconstruction inspired by the Non-local Means method
on the computed sparse codes. One of the advantages of our method is that it is
capable of filling the missing regions while simultaneously removing noise and
enhancing important features of the mesh. Moreover, the inpainting result is
globally coherent as the representation based on the dictionaries captures all
the geometric information in the transformed domain. We present two variations
of the method: a direct one, in which the model is reconstructed and restored
directly from the representation in the transformed domain and a second one,
adaptive, in which the missing regions are recreated iteratively through the
successive propagation of the sparse code computed in the hole boundaries,
which guides the local reconstructions. The second method produces better
results for large regions because the sparse codes of the patches are adapted
according to the sparse codes of the boundary patches. Finally, we present and
analyze experimental results that demonstrate the performance of our method
compared to the literature
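Sparse coding of mesh patches, as the abstract describes, means expressing each patch as a combination of a few dictionary atoms. A standard way to compute such codes is orthogonal matching pursuit; the greedy sketch below is a generic illustration in NumPy (the abstract does not specify the authors' solver, so this is an assumed stand-in):

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit: repeatedly pick the
    dictionary atom most correlated with the residual, then re-fit
    all selected coefficients by least squares.

    D : (d, k) dictionary with unit-norm columns (atoms)
    x : (d,)   signal (e.g. a vectorized mesh patch)
    """
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the current support by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

The adaptive variant in the abstract would then propagate codes like these inward from the hole boundary, using boundary-patch codes to condition each local reconstruction.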
Self-correction of 3D reconstruction from multi-view stereo images
We present a self-correction approach to improving the
3D reconstruction of a multi-view 3D photogrammetry system.
The self-correction approach has been able to repair
the reconstructed 3D surface damaged by depth discontinuities.
Due to self-occlusion, multi-view range images
have to be acquired and integrated into a watertight nonredundant
mesh model in order to cover the extended surface
of an imaged object. The integrated surface often suffers
from "dent" artifacts produced by depth discontinuities
in the multi-view range images. In this paper we propose
a novel approach to correcting the 3D integrated surface
such that the dent artifacts can be repaired automatically.
We show examples of 3D reconstruction to demonstrate the
improvement that can be achieved by the self-correction
approach. This self-correction approach can be extended
to integrate range images obtained from alternative range
capture devices
Fractal Holography: a geometric re-interpretation of cosmological large scale structure
The fractal dimension of large-scale galaxy clustering has been demonstrated
to be roughly from a wide range of redshift surveys. If correct,
this statistic is of interest for two main reasons: fractal scaling is an
implicit representation of information content, and also the value itself is a
geometric signature of area. It is proposed that the fractal distribution of
galaxies may thus be interpreted as a signature of holography ("fractal
holography"), providing more support for current theories of holographic
cosmologies. Implications for entropy bounds are addressed. In particular,
because of spatial scale invariance in the matter distribution, it is shown
that violations of the spherical entropy bound can be removed. This holographic
condition instead becomes a rigid constraint on the nature of the matter
density and distribution in the Universe. Inclusion of a dark matter
distribution is also discussed, based on theoretical considerations of possible
universal CDM density profiles. Comment: 13 pp, LaTeX. Revised version; to appear in JCA
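The statistic at the heart of this abstract, the fractal dimension of a point distribution, is commonly estimated by box counting: count the occupied cells N(ε) at several scales and fit the slope of log N against log(1/ε). A generic sketch (my own illustrative implementation, not taken from the paper):

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a point set:
    at each box size eps, count the distinct grid cells the points
    occupy, then fit log N(eps) ~ D * log(1/eps)."""
    points = np.asarray(points, float)
    counts = []
    for eps in sizes:
        cells = np.floor(points / eps)            # assign points to grid cells
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope
```

Points sampled densely along a line recover a dimension near 1, a filled square near 2; galaxy surveys are analyzed the same way in three dimensions, with the fitted slope playing the role of the clustering dimension.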
Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis
We introduce a data-driven approach to complete partial 3D shapes through a
combination of volumetric deep neural networks and 3D shape synthesis. From a
partially-scanned input shape, our method first infers a low-resolution -- but
complete -- output. To this end, we introduce a 3D-Encoder-Predictor Network
(3D-EPN) which is composed of 3D convolutional layers. The network is trained
to predict and fill in missing data, and operates on an implicit surface
representation that encodes both known and unknown space. This allows us to
predict global structure in unknown areas at high accuracy. We then correlate
these intermediary results with 3D geometry from a shape database at test time.
In a final pass, we propose a patch-based 3D shape synthesis method that
imposes the 3D geometry from these retrieved shapes as constraints on the
coarsely-completed mesh. This synthesis process enables us to reconstruct
fine-scale detail and generate high-resolution output while respecting the
global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms
state-of-the-art completion methods, the main contribution of our work lies in
the combination of a data-driven shape predictor and analytic 3D shape
synthesis. In our results, we show extensive evaluations on a newly-introduced
shape completion benchmark for both real-world and synthetic data
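The implicit representation the abstract mentions, one that "encodes both known and unknown space", can be pictured as a multi-channel voxel grid: one channel for surface evidence and one marking which voxels the scanner actually observed. The toy sketch below simplifies the distance field to binary occupancy and uses names of my own choosing; it is not the 3D-EPN's actual input pipeline:

```python
import numpy as np

def encode_partial_scan(res, occupied, observed):
    """Build a two-channel voxel grid in the spirit of the 3D-EPN
    input encoding: channel 0 holds surface/occupancy evidence
    (here binarized instead of a true signed distance field),
    channel 1 flags voxels as known (observed) vs unknown, so a
    network can treat unseen space differently from empty space."""
    sdf = np.zeros((res, res, res), np.float32)
    known = np.zeros((res, res, res), bool)
    for v in occupied:
        sdf[v] = 1.0          # voxel carries surface evidence
    for v in observed:
        known[v] = True       # voxel was seen by the scanner
    return np.stack([sdf, known.astype(np.float32)])
```

A completion network consumes a grid like this and predicts occupancy for the unknown voxels; the known/unknown channel is what lets it distinguish "observed empty" from "never seen".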
Heritage Reproduction in the Age of High-Resolution Scanning: A Critical Evaluation of Digital Infilling Methods for Historic Preservation
High-definition digital scanning has established itself as a useful tool for documenting cultural heritage in the twenty-first century. Proponents of surveying technology hail fact-based digital 3D models as valuable tools for recording, analyzing and safeguarding items of cultural importance. Methods for digitally filling holes, however, have not yet been considered through the lens of historic preservation. No modeling technique is error-free, and understanding how heritage professionals address lacunae is vital for understanding the digital heritage objects that result from 3D scanning hardware. Frameworks exist for working with scanned data, but they define general principles for a broad range of applications and do not provide guidelines or strategies for complying with them in practice. This thesis is a comparative evaluation of current practices for in-filling digital lacunae that attempts to establish which methods are best suited to the following historic preservation practices: documentation, interpretation graphics, long-term monitoring, digital restoration, and physical fabrication