109 research outputs found
Discovering Causal Relations and Equations from Data
Physics is a field of science that has traditionally used the scientific
method to answer questions about why natural phenomena occur and to make
testable models that explain the phenomena. Discovering equations, laws and
principles that are invariant, robust and causal explanations of the world has
been fundamental in physical sciences throughout the centuries. Discoveries
emerge from observing the world and, when possible, performing interventional
studies in the system under study. With the advent of big data and the use of
data-driven methods, causal and equation discovery fields have grown and made
progress in computer science, physics, statistics, philosophy, and many applied
fields. All these domains are intertwined and can be used to discover causal
relations, physical laws, and equations from observational data. This paper
reviews the concepts, methods, and relevant works on causal and equation
discovery in the broad field of Physics and outlines the most important
challenges and promising future lines of research. We also provide a taxonomy
for observational causal and equation discovery, point out connections, and
showcase a complete set of case studies in Earth and climate sciences, fluid
dynamics and mechanics, and the neurosciences. This review demonstrates that
discovering fundamental laws and causal relations by observing natural
phenomena is being revolutionised with the efficient exploitation of
observational data, modern machine learning algorithms and the interaction with
domain knowledge. Exciting times are ahead with many challenges and
opportunities to improve our understanding of complex systems.
Comment: 137 pages
MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning
Estimating surface normals from 3D point clouds is critical for various
applications, including surface reconstruction and rendering. While existing
methods for normal estimation perform well in regions where normals change
slowly, they tend to fail where normals vary rapidly. To address this issue, we
propose a novel approach called MSECNet, which improves estimation in
normal-varying regions by treating normal variation modeling as an edge detection
problem. MSECNet consists of a backbone network and a multi-scale edge
conditioning (MSEC) stream. The MSEC stream achieves robust edge detection
through multi-scale feature fusion and adaptive edge detection. The detected
edges are then combined with the output of the backbone network using the edge
conditioning module to produce edge-aware representations. Extensive
experiments show that MSECNet outperforms existing methods on both synthetic
(PCPNet) and real-world (SceneNN) datasets while running significantly faster.
We also conduct various analyses to investigate the contribution of each
component in the MSEC stream. Finally, we demonstrate the effectiveness of our
approach in surface reconstruction.
Comment: Accepted for ACM MM 2023
Point cloud geometry compression using neural implicit representations
In recent years, the increasing prominence of 3D point clouds in various applications has led to an escalating need for efficient storage and transmission methods. The sheer size of these point cloud datasets presents challenges in rendering, transmission, and general usability. This thesis introduces a novel approach to point cloud geometry compression leveraging neural implicit representations, specifically through the use of a DiGS network model. By training this model on a single point cloud, we achieve a compact neural representation of its geometry. Notably, this representation allows for the reconstruction of the point cloud at an arbitrary resolution. After training the reconstruction network, dynamic quantization is applied to the trained weights, significantly reducing the overall bitrate without strongly compromising the quality of the reconstructed point cloud. Dequantization is then used to rebuild a high-fidelity representation of the original point cloud. Our experimental results demonstrate the efficacy of this approach in terms of compression ratios and reconstruction quality, assessed using PSNR relative to the bitrate. This research provides a promising direction for efficient point cloud geometry storage and transmission, addressing some of the growing demands of the 3D data era.
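The quantization step described above can be illustrated with a minimal sketch. This is not the thesis's actual pipeline: the bit width and the stand-in "weights" below are assumptions for illustration only. It shows how uniform quantization and dequantization of trained weights trade bitrate against reconstruction error:

```python
import random

def quantize(weights, bits=8):
    """Map floats to integer codes on a uniform grid over [min, max]."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Rebuild approximate floats from the integer codes."""
    return [lo + c * scale for c in codes]

random.seed(0)
weights = [random.gauss(0.0, 0.1) for _ in range(1000)]  # stand-in weights
codes, lo, scale = quantize(weights, bits=8)
restored = dequantize(codes, lo, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Uniform quantization bounds the error by half a grid step.
assert max_err <= 0.5 * scale + 1e-12
```

Storing 8-bit codes instead of 64-bit floats cuts the bitrate by roughly 8x; the dynamic quantization used in the thesis follows the same principle with per-layer ranges.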
Multi-Sample Consensus Driven Unsupervised Normal Estimation for 3D Point Clouds
Deep normal estimators have made great strides on synthetic benchmarks.
Unfortunately, their performance drops dramatically on real scan data, since
they are supervised only on synthetic datasets. Point-wise annotation of
ground-truth normals is inefficient and error-prone, making it impractical to
build accurate real-world datasets for supervised deep learning. To overcome
this challenge, we propose a multi-sample consensus
paradigm for unsupervised normal estimation. The paradigm consists of
multi-candidate sampling, candidate rejection, and mode determination. The
latter two are driven by neighbor point consensus and candidate consensus
respectively. Two primary implementations of the paradigm, MSUNE and MSUNE-Net,
are proposed. MSUNE minimizes a candidate consensus loss in mode determination.
As a robust optimization method, it outperforms the cutting-edge supervised
deep learning methods on real data at the cost of longer runtime for sampling
enough candidate normals for each query point. MSUNE-Net, to our knowledge the
first unsupervised deep normal estimator, pushes the multi-sample consensus
paradigm further: it moves the three online stages of MSUNE into offline
training, making inference 100 times faster. It is also more accurate, since
the candidates of query points from similar patches implicitly form a
sufficiently large candidate set. Comprehensive experiments demonstrate that the two
proposed unsupervised methods are noticeably superior to some supervised deep
normal estimators on the most common synthetic dataset. More importantly, they
show better generalization ability and outperform all the SOTA conventional and
deep methods on three real datasets: NYUV2, KITTI, and a dataset from PCV [1].
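The sampling-rejection-consensus paradigm can be sketched in a few lines. The synthetic noisy plane, sample count, and inlier threshold below are illustrative assumptions, not the actual MSUNE implementation:

```python
import random, math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(u, v):
    return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])

def consensus_normal(query, neighbors, samples=200, tol=0.02):
    """Sample candidate normals from neighbor triples; keep the one whose
    plane through the query point agrees with the most neighbors."""
    best, best_score = None, -1
    for _ in range(samples):
        a, b, c = random.sample(neighbors, 3)
        u = tuple(b[i] - a[i] for i in range(3))
        v = tuple(c[i] - a[i] for i in range(3))
        n = cross(u, v)
        if sum(x * x for x in n) < 1e-12:   # reject degenerate triples
            continue
        n = normalize(n)
        # consensus: count neighbors close to the candidate plane
        score = sum(1 for p in neighbors
                    if abs(sum(n[i] * (p[i] - query[i]) for i in range(3))) < tol)
        if score > best_score:
            best, best_score = n, score
    return best

random.seed(1)
# noisy samples of the plane z = 0 around the origin
pts = [(random.uniform(-1, 1), random.uniform(-1, 1), random.gauss(0, 0.005))
       for _ in range(100)]
n = consensus_normal((0.0, 0.0, 0.0), pts)
assert abs(abs(n[2]) - 1.0) < 0.05   # recovered normal is close to +/- z
```

MSUNE-Net amortizes exactly this per-point sampling loop into a single offline-trained network, which is where the reported 100x inference speedup comes from.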
Alternately denoising and reconstructing unoriented point sets
We propose a new strategy to bridge point cloud denoising and surface
reconstruction by alternately updating the denoised point clouds and the
reconstructed surfaces. In Poisson surface reconstruction, the implicit
function is generated by a set of smooth basis functions centered at the
octnodes. When the octree depth is properly selected, the reconstructed surface
is a good smooth approximation of the noisy point set. Our method projects
the noisy points onto that surface and then alternates between reconstruction
and projection. We use iterative Poisson surface reconstruction (iPSR) to
support unoriented input, so our method acts as an outer loop around iPSR.
Because the octree depth significantly affects the reconstruction results, we
propose an adaptive depth selection strategy to ensure an appropriate choice.
To counter the oversmoothing near sharp features, we propose a weighted
projection method that projects each noisy point onto the surface with an
individual control coefficient.
The coefficients are determined through a Voronoi-based feature detection
method. Experimental results show that our method achieves high performance in
point cloud denoising and unoriented surface reconstruction within different
noise scales, and exhibits well-rounded performance in various types of inputs.
The source code is available
at \url{https://github.com/Submanifold/AlterUpdate}.
Comment: Accepted by Computers & Graphics (CAD/Graphics 2023)
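The outer alternating loop can be illustrated with a toy stand-in: an analytic sphere fit plays the role of iPSR (an assumption made purely for brevity), and projection moves each noisy point onto the current reconstruction:

```python
import math, random

def fit_radius(pts):
    """Stand-in 'reconstruction': best origin-centered sphere = mean radius."""
    return sum(math.sqrt(x*x + y*y + z*z) for x, y, z in pts) / len(pts)

def project(pts, r):
    """Stand-in 'projection': move each point radially onto the fitted sphere."""
    out = []
    for x, y, z in pts:
        d = math.sqrt(x*x + y*y + z*z)
        out.append((x * r / d, y * r / d, z * r / d))
    return out

random.seed(2)
pts = []
for _ in range(200):
    # noisy samples of the unit sphere
    v = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
    d = math.sqrt(sum(c * c for c in v))
    noise = 1.0 + random.gauss(0, 0.05)
    pts.append(tuple(c / d * noise for c in v))

for _ in range(3):            # outer loop: reconstruct, then project
    r = fit_radius(pts)
    pts = project(pts, r)

spread = max(abs(math.sqrt(x*x + y*y + z*z) - r) for x, y, z in pts)
assert spread < 1e-9          # all points now lie on the reconstructed surface
```

With a rigid sphere the loop converges after one step; in the actual method each iPSR reconstruction changes as the points are denoised, so the reconstruction and the point set improve each other over several iterations.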
Neural-Singular-Hessian: Implicit Neural Representation of Unoriented Point Clouds by Enforcing Singular Hessian
Neural implicit representation is a promising approach for reconstructing
surfaces from point clouds. Existing methods combine various regularization
terms, such as the Eikonal and Laplacian energy terms, to enforce the learned
neural function to possess the properties of a Signed Distance Function (SDF).
However, inferring the actual topology and geometry of the underlying surface
from poor-quality unoriented point clouds remains challenging. According to
differential geometry, the Hessian of an SDF is singular for points within
the differential thin-shell space surrounding the surface. Our approach
enforces the Hessian of the neural implicit function to have a zero determinant
for points near the surface. This technique aligns the gradients for a
near-surface point and its on-surface projection point, producing a rough but
faithful shape within just a few iterations. By annealing the weight of the
singular-Hessian term, our approach ultimately produces a high-fidelity
reconstruction result. Extensive experimental results demonstrate that our
approach effectively suppresses ghost geometry and recovers details from
unoriented point clouds with better expressiveness than existing fitting-based
methods.
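The geometric fact the method exploits, that an exact SDF has a singular Hessian near the surface, is easy to check numerically. The finite-difference code below is an illustrative check of that property, not the paper's training loss:

```python
import math

def hessian_det(f, p, h=1e-3):
    """Determinant of the numerical Hessian of f at p (central differences)."""
    def d2(i, j):
        e = [0.0, 0.0, 0.0]; e[i] = h
        g = [0.0, 0.0, 0.0]; g[j] = h
        pp = lambda s, t: f([p[k] + s * e[k] + t * g[k] for k in range(3)])
        return (pp(1, 1) - pp(1, -1) - pp(-1, 1) + pp(-1, -1)) / (4 * h * h)
    H = [[d2(i, j) for j in range(3)] for i in range(3)]
    return (H[0][0] * (H[1][1]*H[2][2] - H[1][2]*H[2][1])
          - H[0][1] * (H[1][0]*H[2][2] - H[1][2]*H[2][0])
          + H[0][2] * (H[1][0]*H[2][1] - H[1][1]*H[2][0]))

sdf = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0  # exact unit-sphere SDF
not_sdf = lambda p: sum(c * c for c in p) - 1.0         # squared-distance field

q = [0.9, 0.3, 0.2]  # a point near the surface
assert abs(hessian_det(sdf, q)) < 1e-3            # SDF: Hessian is singular
assert abs(hessian_det(not_sdf, q) - 8.0) < 1e-3  # non-SDF: Hessian = 2I, det = 8
```

Penalizing this determinant for near-surface samples therefore steers a neural implicit function toward SDF-like behavior without requiring oriented normals.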
CircNet: Meshing 3D Point Clouds with Circumcenter Detection
Reconstructing 3D point clouds into triangle meshes is a key problem in
computational geometry and surface reconstruction. Point cloud triangulation
solves this problem by providing edge information to the input points. Since no
vertex interpolation is involved, sharp details on the surface are preserved.
Existing learning-based triangulation methods enumerate the complete set of
candidate triangle combinations, which is both complex and inefficient. In
this paper, we leverage the duality
between a triangle and its circumcenter, and introduce a deep neural network
that detects the circumcenters to achieve point cloud triangulation.
Specifically, we introduce multiple anchor priors to divide the neighborhood
space of each point. The neural network then learns to predict the presence
and location of circumcenters under the guidance of those anchors. We extract
the triangles dual to the detected circumcenters to form a primitive mesh, from
which an edge-manifold mesh is produced via simple post-processing. Unlike
existing learning-based triangulation methods, the proposed method bypasses an
exhaustive enumeration of triangle combinations and local surface
parameterization. We validate the efficiency, generalization, and robustness of
our method on prominent datasets of both watertight and open surfaces. The code
and trained models are provided at https://github.com/Ruitao-L/CircNet.
Comment: Accepted to ICLR 2023
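The triangle-circumcenter duality underlying the method can be verified directly: a non-degenerate triangle determines a unique circumcenter equidistant from its three vertices, so detecting circumcenters is enough to recover triangles. A minimal sketch (this is the classical geometry, not CircNet's network):

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of a 3D triangle via a 2x2 solve in the triangle's plane:
    x = a + alpha*u + beta*v with x.u = |u|^2/2 and x.v = |v|^2/2."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    uu = sum(x * x for x in u)
    vv = sum(x * x for x in v)
    uv = sum(u[i] * v[i] for i in range(3))
    det = uu * vv - uv * uv            # zero iff the triangle is degenerate
    alpha = (vv * uu / 2 - uv * vv / 2) / det
    beta = (uu * vv / 2 - uv * uu / 2) / det
    return [a[i] + alpha * u[i] + beta * v[i] for i in range(3)]

a, b, c = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)
cc = circumcenter(a, b, c)
dist = lambda p, q: math.sqrt(sum((p[i] - q[i]) ** 2 for i in range(3)))
r = dist(cc, a)
# the circumcenter is equidistant from all three vertices
assert abs(dist(cc, b) - r) < 1e-9 and abs(dist(cc, c) - r) < 1e-9
```

Predicting one point per triangle instead of scoring every candidate triple is what lets the network sidestep the exhaustive enumeration criticized above.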
Neural Gradient Learning and Optimization for Oriented Point Normal Estimation
We propose Neural Gradient Learning (NGL), a deep learning approach to learn
gradient vectors with consistent orientation from 3D point clouds for normal
estimation. It has excellent gradient approximation properties for the
underlying geometry of the data. We utilize a simple neural network to
parameterize the objective function to produce gradients at points using a
global implicit representation. However, the derived gradients usually drift
away from the ground-truth oriented normals due to the lack of local detail
descriptions. Therefore, we introduce Gradient Vector Optimization (GVO) to
learn an angular distance field based on local plane geometry to refine the
coarse gradient vectors. Finally, we formulate our method with a two-phase
pipeline of coarse estimation followed by refinement. Moreover, we integrate
two weighting functions, i.e., an anisotropic kernel and an inlier score,
into the optimization to improve robustness and detail preservation. Our
method efficiently performs global gradient approximation while achieving
better accuracy and generalization in local feature description. This
leads to a state-of-the-art normal estimator that is robust to noise, outliers
and point density variations. Extensive evaluations show that our method
outperforms previous works in both unoriented and oriented normal estimation on
widely used benchmarks. The source code and pre-trained models are available at
https://github.com/LeoQLi/NGLO.
Comment: Accepted by SIGGRAPH Asia 2023
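The premise of the first phase, that the gradient of a global implicit representation yields consistently oriented normals, can be illustrated with an analytic implicit function standing in for the learned one (an assumption for brevity; NGL learns this field from the point cloud):

```python
import math

def grad(f, p, h=1e-5):
    """Central-difference gradient of a scalar field f at point p."""
    g = []
    for i in range(3):
        q1 = list(p); q1[i] += h
        q2 = list(p); q2[i] -= h
        g.append((f(q1) - f(q2)) / (2 * h))
    return g

def oriented_normal(f, p):
    g = grad(f, p)
    n = math.sqrt(sum(x * x for x in g))
    return [x / n for x in g]

# implicit unit sphere: negative inside, positive outside, so the gradient
# points outward and the recovered normal carries a consistent orientation
f = lambda p: sum(c * c for c in p) - 1.0
n = oriented_normal(f, [1.0, 0.0, 0.0])
assert abs(n[0] - 1.0) < 1e-6 and abs(n[1]) < 1e-6 and abs(n[2]) < 1e-6
```

Because the sign convention of the field fixes inside versus outside globally, every gradient inherits the same orientation; the paper's second phase (GVO) then refines these coarse directions with local plane geometry.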
What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives
Intelligent Mesh Generation (IMG) represents a novel and promising field of
research, utilizing machine learning techniques to generate meshes. Despite its
relative infancy, IMG has significantly broadened the adaptability and
practicality of mesh generation techniques, delivering numerous breakthroughs
and unveiling potential future pathways. However, a noticeable void exists in
the contemporary literature concerning comprehensive surveys of IMG methods.
This paper endeavors to fill this gap by providing a systematic and thorough
survey of the current IMG landscape. With a focus on 113 preliminary IMG
methods, we undertake a meticulous analysis from various angles, encompassing
core algorithm techniques and their application scope, agent learning
objectives, data types, targeted challenges, as well as advantages and
limitations. We have curated and categorized the literature, proposing three
unique taxonomies based on key techniques, output mesh unit elements, and
relevant input data types. This paper also underscores several promising future
research directions and challenges in IMG. To augment reader accessibility, a
dedicated IMG project page is available at
\url{https://github.com/xzb030/IMG_Survey}.
Phase transition for the vacant set of random walk and random interlacements
We consider the set of points visited by the random walk on the discrete
torus $(\mathbb{Z}/N\mathbb{Z})^d$, for $d \geq 3$, at times of order $uN^d$,
for a parameter $u > 0$, in the large-$N$ limit. We prove that the vacant set
left by the walk undergoes a phase transition across a non-degenerate critical
value $u_*$, as follows. For all $u < u_*$, the vacant set contains a giant
connected component with high probability, which has a non-vanishing asymptotic
density and satisfies a certain local uniqueness property. In stark contrast,
for all $u > u_*$ the vacant set scatters into tiny connected components. Our
results further imply that the threshold precisely equals the critical
value, introduced by Sznitman in arXiv:0704.2560, which characterizes the
percolation transition of the corresponding local limit, the vacant set of
random interlacements on $\mathbb{Z}^d$. Our findings also yield the analogous
infinite-volume result, i.e. the long purported equality of three critical
parameters $\bar{u}$, $u_*$ and $u_{**}$ naturally associated to the vacant set
of random interlacements.
Comment: 94 pages, 2 figures