Normals estimation for digital surfaces based on convolutions
In this paper, we present a method that we call on-surface convolution, which extends the classical notion of a 2D digital filter to the case of digital surfaces (following the cuberille model). We also define an averaging mask with local support which, when applied with the iterated convolution operator, behaves like an averaging with large support. The interesting property of the latter averaging is the way the resulting weights are distributed: given a digital surface obtained by discretization of a differentiable surface of R^3, the mask's isocurves are close to the Riemannian isodistance curves from the center of the mask. We eventually use the iterated averaging followed by convolutions with differentiation masks to estimate partial derivatives and then normal vectors over a surface. The number of iterations required to achieve a good estimate is determined experimentally on digitized spheres and tori. The precision of the normal estimation is also investigated according to the digitization step.
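The pipeline described above (iterate a small averaging mask, then apply differentiation masks) can be sketched in a simplified setting. The abstract's method operates on cuberille digital surfaces; the toy version below, with hypothetical mask choices, applies the same idea to a 2D height field: iterated 3x3 averaging emulates a large-support smoothing, and Sobel-like masks then give partial derivatives from which a normal is assembled.

```python
import numpy as np

# Hypothetical masks: a local averaging mask and an x-differentiation mask.
AVG = np.array([[1, 2, 1],
                [2, 4, 2],
                [1, 2, 1]], dtype=float) / 16.0
DX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float) / 8.0

def conv3(f, k):
    """Naive 3x3 correlation with replicated borders."""
    g = np.pad(f, 1, mode="edge")
    out = np.zeros_like(f, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * g[i:i + f.shape[0], j:j + f.shape[1]]
    return out

def estimate_normals(height, n_iter=5):
    """Iterated local averaging (large-support smoothing), then
    differentiation masks, then normals n = (-h_x, -h_y, 1) normalized."""
    h = height.astype(float)
    for _ in range(n_iter):          # iterated convolution operator
        h = conv3(h, AVG)
    hx = conv3(h, DX)                # partial derivative along axis 1
    hy = conv3(h, DX.T)              # partial derivative along axis 0
    n = np.stack([-hx, -hy, np.ones_like(h)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

On a planar ramp the estimate is exact away from the border, which mirrors the paper's experimental check on simple digitized shapes.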
Convolutions on digital surfaces: on the way iterated convolutions behave and preliminary results about curvature estimation
In [FoureyMalgouyres09] the authors present a generalized convolution operator for functions defined on digital surfaces. We provide here some extra material related to this notion: some of it concerns the relative isotropy of the way a convolution kernel (or mask) grows when the convolution operator is iterated. We also provide preliminary results about a way to estimate curvatures on a digital surface, using the same convolution operator.
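Curvature from the same smoothing-then-differentiation machinery can be illustrated on a height field. The sketch below is not the paper's surface-convolution operator: it uses ordinary finite differences (`np.gradient`) as a stand-in, and evaluates the standard mean-curvature formula for a graph z = f(x, y).

```python
import numpy as np

def mean_curvature(height):
    """Mean curvature of a height field z = f(x, y):
    H = ((1+fy^2) fxx - 2 fx fy fxy + (1+fx^2) fyy) / (2 (1+fx^2+fy^2)^1.5).
    Derivatives via central differences (a stand-in for surface convolutions).
    """
    fy, fx = np.gradient(height)   # np.gradient returns (d/axis0, d/axis1)
    fxy, fxx = np.gradient(fx)
    fyy, _ = np.gradient(fy)
    num = (1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy
    den = 2 * (1 + fx**2 + fy**2) ** 1.5
    return num / den
```

At the apex of the paraboloid z = (x^2 + y^2)/2 both principal curvatures are 1, so the mean curvature there is 1, which the central differences recover exactly.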
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. Based on our experiments,
compared to other methods, such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, and better preservation of topology and shape structure.
Comment: 3DV 2017 (oral)
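The consolidation step (fusing per-view depth maps into one point cloud) can be sketched independently of the learned encoder-decoder. The paper solves a joint optimization over depth and normals; the simplified version below, with assumed pinhole intrinsics `K` and camera-to-world poses, only performs the geometric back-projection and concatenation that such a fusion starts from.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift a depth map into a world-space point cloud.

    depth: (H, W) depths along the camera z axis
    K: 3x3 pinhole intrinsics; cam_to_world: 4x4 pose matrix
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T              # camera-space ray directions
    pts_cam = rays * depth.reshape(-1, 1)        # scale each ray by its depth
    ones = np.ones((pts_cam.shape[0], 1))
    return (np.hstack([pts_cam, ones]) @ cam_to_world.T)[:, :3]

def fuse_views(depths, Ks, poses):
    """Naive multi-view consolidation: concatenate all back-projected points
    (the paper additionally optimizes for consistency with the normal maps)."""
    return np.concatenate([backproject(d, K, P)
                           for d, K, P in zip(depths, Ks, poses)])
```

With identity intrinsics and pose, a unit-depth map maps each pixel (u, v) to the point (u, v, 1), which makes the convention easy to check.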
Piecewise smooth reconstruction of normal vector field on digital data
We propose a novel method to regularize a normal vector field defined on a digital surface (boundary of a set of voxels). When the digital surface is a digitization of a piecewise smooth manifold, our method localizes sharp features (edges) while regularizing the input normal vector field at the same time. It relies on the optimisation of a variant of the Ambrosio-Tortorelli functional, originally defined for denoising and contour extraction in image processing [AT90]. We reformulate this functional for digital surface processing thanks to discrete calculus operators. Experiments show that the output normal field is very robust to digitization artifacts or noise, and also fairly independent of the sampling resolution. The method allows the user to choose independently the amount of smoothing and the length of the set of discontinuities. Sharp and vanishing features are correctly delineated even on extremely damaged data. Finally, our method can be used to enhance considerably the output of state-of-the-art normal field estimators like Voronoi Covariance Measure [MOG11] or Randomized Hough Transform [BM12].
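The Ambrosio-Tortorelli idea (jointly estimate a regularized field u and an edge indicator v in [0, 1], with v dropping to 0 across discontinuities) is easiest to see in one dimension. The toy below is not the paper's digital-surface formulation: it regularizes a 1D scalar signal instead of a normal field, with hypothetical weights `alpha`, `lam`, `eps`, using the standard alternating minimization (each subproblem is a small linear solve).

```python
import numpy as np

def at_regularize(g, alpha=1.0, lam=0.05, eps=0.5, n_iter=10):
    """Alternating minimization of a discrete 1D Ambrosio-Tortorelli energy
      E(u, v) = alpha |u - g|^2 + sum_i v_i^2 (Du)_i^2
                + lam * (eps |Dv|^2 + |1 - v|^2 / (4 eps)).
    Returns (u, v): the regularized signal and the edge indicator
    (v ~ 0 marks a discontinuity)."""
    n = len(g)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)          # forward differences
    Dv = np.eye(n - 2, n - 1, k=1) - np.eye(n - 2, n - 1)  # differences on edges
    u, v = g.astype(float).copy(), np.ones(n - 1)
    for _ in range(n_iter):
        # u-step: (alpha I + D^T diag(v^2) D) u = alpha g
        A = alpha * np.eye(n) + D.T @ np.diag(v**2) @ D
        u = np.linalg.solve(A, alpha * g)
        # v-step: (diag((Du)^2) + lam/(4 eps) I + lam eps Dv^T Dv) v = lam/(4 eps) 1
        d2 = (D @ u) ** 2
        B = np.diag(d2) + lam / (4 * eps) * np.eye(n - 1) + lam * eps * Dv.T @ Dv
        v = np.linalg.solve(B, lam / (4 * eps) * np.ones(n - 1))
    return u, v
```

On a step signal, v collapses at the jump while staying near 1 elsewhere, and u keeps the discontinuity sharp instead of blurring it, which is the behavior the abstract describes for sharp features on digital surfaces.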
MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning
Estimating surface normals from 3D point clouds is critical for various
applications, including surface reconstruction and rendering. While existing
methods for normal estimation perform well in regions where normals change
slowly, they tend to fail where normals vary rapidly. To address this issue, we
propose a novel approach called MSECNet, which improves estimation in normal
varying regions by treating normal variation modeling as an edge detection
problem. MSECNet consists of a backbone network and a multi-scale edge
conditioning (MSEC) stream. The MSEC stream achieves robust edge detection
through multi-scale feature fusion and adaptive edge detection. The detected
edges are then combined with the output of the backbone network using the edge
conditioning module to produce edge-aware representations. Extensive
experiments show that MSECNet outperforms existing methods on both synthetic
(PCPNet) and real-world (SceneNN) datasets while running significantly faster.
We also conduct various analyses to investigate the contribution of each
component in the MSEC stream. Finally, we demonstrate the effectiveness of our
approach in surface reconstruction.
Comment: Accepted for ACM MM 202
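The core framing above, treating normal variation as an edge-detection problem at multiple scales, can be mimicked without any learning. The sketch below is a hand-crafted stand-in for the (learned) MSEC stream: it scores each point's normal variation within neighborhoods of several radii and fuses the scales by taking the maximum; the radii and the scoring rule are assumptions, not the paper's.

```python
import numpy as np

def edge_scores(points, normals, radii=(0.5, 1.0, 2.0)):
    """Multi-scale normal-variation score per point: at each scale, measure
    1 - mean cosine similarity between a point's unit normal and those of its
    neighbors, then fuse scales with a max (high score = likely sharp edge)."""
    dist = np.linalg.norm(points[:, None] - points[None], axis=-1)
    scores = np.zeros(len(points))
    for r in radii:
        for i in range(len(points)):
            nb = dist[i] <= r                  # neighborhood at this scale
            cos = normals[nb] @ normals[i]
            scores[i] = max(scores[i], 1.0 - cos.mean())
    return scores
```

Along a right-angle crease (two flat strips with perpendicular normals), points near the crease mix both normal populations and score high, while points deep inside either strip score zero.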
Diffusion is All You Need for Learning on Surfaces
We introduce a new approach to deep learning on 3D surfaces such as meshes or
point clouds. Our key insight is that a simple learned diffusion layer can
spatially share data in a principled manner, replacing operations like
convolution and pooling which are complicated and expensive on surfaces. The
only other ingredients in our network are a spatial gradient operation, which
uses dot-products of derivatives to encode tangent-invariant filters, and a
multi-layer perceptron applied independently at each point. The resulting
architecture, which we call DiffusionNet, is remarkably simple, efficient, and
scalable. Continuously optimizing for spatial support avoids the need to pick
neighborhood sizes or filter widths a priori, or worry about their impact on
network size/training time. Furthermore, the principled, geometric nature of
these networks makes them agnostic to the underlying representation and
insensitive to discretization. In practice, this means significant robustness
to mesh sampling, and even the ability to train on a mesh and evaluate on a
point cloud. Our experiments demonstrate that these networks achieve
state-of-the-art results for a variety of tasks on both meshes and point
clouds, including surface classification, segmentation, and non-rigid
correspondence.
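The central building block, a diffusion layer with a learnable time t, can be sketched with standard discrete heat diffusion. Assuming a Laplacian L and mass matrix M for the surface (cotangent operators on a mesh, or a point-cloud Laplacian), one implicit Euler step of the heat equation diffuses per-vertex features for time t; larger learned t means larger spatial support, which is why no neighborhood size has to be picked a priori.

```python
import numpy as np

def diffusion_layer(features, L, M, t):
    """Diffuse per-vertex features for time t by one implicit Euler step of
    the heat equation: solve (M + t L) u = M u0. L: (stiffness) Laplacian,
    M: mass matrix, features: (n_vertices, n_channels)."""
    return np.linalg.solve(M + t * L, M @ features)
```

On a path graph with the combinatorial Laplacian and identity mass matrix, diffusing a one-hot feature spreads it to every vertex while conserving its total mass, the expected heat-equation behavior.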
GCN-Denoiser: Mesh Denoising with Graph Convolutional Networks
In this paper, we present GCN-Denoiser, a novel feature-preserving mesh
denoising method based on graph convolutional networks (GCNs). Unlike previous
learning-based mesh denoising methods that exploit hand-crafted or voxel-based
representations for feature learning, our method explores the structure of a
triangular mesh itself and introduces a graph representation followed by graph
convolution operations in the dual space of triangles. We show such a graph
representation naturally captures the geometry features while being lightweight
for both training and inference. To facilitate effective feature learning, our
network exploits both static and dynamic edge convolutions, which allow us to
learn information from both the explicit mesh structure and potential implicit
relations among unconnected neighbors. To better approximate an unknown noise
function, we introduce a cascaded optimization paradigm to progressively
regress the noise-free facet normals with multiple GCNs. GCN-Denoiser achieves
the new state-of-the-art results in multiple noise datasets, including CAD
models often containing sharp features and raw scan models with real noise
captured from different devices. We also create a new dataset called PrintData
containing 20 real scans with their corresponding ground-truth meshes for the
research community. Our code and data are available in
https://github.com/Jhonve/GCN-Denoiser.Comment: Accepted by ACM Transactions on Graphics 202
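The basic operation underlying the cascade above, a graph convolution on the dual graph of the mesh (one node per triangle, edges between adjacent facets, facet normals as node features), can be sketched with the standard normalized-adjacency form. The weight matrix here stands in for learned parameters; the actual network additionally uses dynamic edge convolutions and multiple cascaded GCNs.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer H' = relu(A_norm @ H @ W) on the dual graph
    of a triangle mesh. H: (n_facets, n_features) node features (e.g. facet
    normals), A: binary facet-adjacency matrix, W: weight matrix."""
    A_hat = A + np.eye(len(A))                   # add self loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))     # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)       # aggregate, transform, ReLU
```

With identity weights, the layer reduces to a degree-normalized neighborhood average, so on a fully connected 3-facet graph it replaces each noisy facet normal with the mean, a crude, single-step analogue of regressing noise-free normals.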