CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations
The long runtime of high-fidelity partial differential equation (PDE) solvers
makes them unsuitable for time-critical applications. We propose to accelerate
PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches
reduce the dimensionality of discretized vector fields, our continuous
reduced-order modeling (CROM) approach builds a smooth, low-dimensional
manifold of the continuous vector fields themselves, not their discretization.
We represent this reduced manifold using continuously differentiable neural
fields, which may train on any and all available numerical solutions of the
continuous system, even when they are obtained using diverse methods or
discretizations. We validate our approach on an extensive range of PDEs with
training data from voxel grids, meshes, and point clouds. Compared to prior
discretization-dependent ROM methods, such as linear subspace proper orthogonal
decomposition (POD) and nonlinear manifold neural-network-based autoencoders,
CROM features higher accuracy, lower memory consumption, dynamically adaptive
resolutions, and applicability to any discretization. For equal latent space
dimension, CROM exhibits 79× and 49× better accuracy, and 39× and 132× smaller
memory footprint, than POD and autoencoder methods, respectively. Experiments
demonstrate 109× and 89× wall-clock speedups over unreduced models on CPUs and
GPUs, respectively.
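A minimal sketch of the core construction, assuming a plain MLP neural field in PyTorch; the class name, layer sizes, and activation are illustrative, not the authors' implementation. The reduced state is a latent vector z, and the decoder evaluates the continuous field at arbitrary query points, which is what frees the ROM from any fixed discretization.

```python
import torch
import torch.nn as nn

class NeuralFieldDecoder(nn.Module):
    """Evaluate a continuous field u(x; z) from a latent code z.

    The low-dimensional manifold is parameterized by z; the decoder can be
    queried at arbitrary spatial locations, so it is not tied to any fixed
    grid, mesh, or point-cloud discretization.
    """
    def __init__(self, spatial_dim=3, latent_dim=16, field_dim=1, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(spatial_dim + latent_dim, width), nn.ELU(),
            nn.Linear(width, width), nn.ELU(),
            nn.Linear(width, field_dim),
        )

    def forward(self, x, z):
        # x: (N, spatial_dim) query points, z: (latent_dim,) reduced state.
        z_tiled = z.unsqueeze(0).expand(x.shape[0], -1)
        return self.net(torch.cat([x, z_tiled], dim=-1))

decoder = NeuralFieldDecoder()
x = torch.rand(1024, 3)   # arbitrary query locations (grid, mesh, or point cloud)
z = torch.zeros(16)       # one point on the reduced manifold
u = decoder(x, z)         # (1024, 1) continuous field sampled at x
```

In this sketch the decoder is trained jointly with per-snapshot latent codes on whatever solution samples are available, so snapshots from different solvers or discretizations can share one reduced model.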
Non-intrusive reduced-order modeling using convolutional autoencoders
The use of reduced-order models (ROMs) in physics-based modeling and
simulation almost always involves the use of linear reduced basis (RB) methods
such as the proper orthogonal decomposition (POD). For some nonlinear problems,
linear RB methods perform poorly, failing to provide an efficient subspace for
the solution space. The use of nonlinear manifolds for ROMs has gained traction
in recent years, showing increased performance for certain nonlinear problems
over linear methods. Deep learning has been popular to this end through the use
of autoencoders for providing a nonlinear trial manifold for the solution
space. In this work, we present a non-intrusive ROM framework for steady-state
parameterized partial differential equations (PDEs) that uses convolutional
autoencoders (CAEs) to provide a nonlinear solution manifold and is augmented
by Gaussian process regression (GPR) to approximate the expansion coefficients
of the reduced model. When applied to a numerical example involving the steady
incompressible Navier-Stokes equations for a lid-driven cavity problem, the
proposed ROM predicts full-order states more accurately than a popular method
employing POD and GPR across a range of ROM dimensions.
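A minimal sketch of the non-intrusive pipeline under stated assumptions: a toy convolutional autoencoder in PyTorch stands in for the trained CAE, placeholder snapshots and a single scalar parameter stand in for the full-order data, and scikit-learn's GaussianProcessRegressor maps parameters to latent expansion coefficients. It illustrates the structure of the method, not the paper's actual architecture or training.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy convolutional autoencoder over a 2D field on a 64x64 grid.
class CAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

# After training the CAE on full-order snapshots (omitted), encode them and
# regress latent coefficients against the PDE parameter: this is the
# non-intrusive step, since the governing equations are never touched.
cae = CAE()
snapshots = torch.rand(20, 1, 64, 64)        # placeholder FOM solutions
mu_train = np.linspace(100.0, 1000.0, 20)    # placeholder parameter values
with torch.no_grad():
    latents = cae.encoder(snapshots).numpy()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=100.0))
gpr.fit(mu_train.reshape(-1, 1), latents)

def rom_predict(mu):
    """GPR gives latent coefficients; the decoder lifts them to the full grid."""
    z = torch.from_numpy(gpr.predict([[mu]])).float()
    with torch.no_grad():
        return cae.decoder(z)

u_pred = rom_predict(550.0)   # approximate full-order state at an unseen parameter
```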
Diffusion is All You Need for Learning on Surfaces
We introduce a new approach to deep learning on 3D surfaces such as meshes or
point clouds. Our key insight is that a simple learned diffusion layer can
spatially share data in a principled manner, replacing operations like
convolution and pooling which are complicated and expensive on surfaces. The
only other ingredients in our network are a spatial gradient operation, which
uses dot-products of derivatives to encode tangent-invariant filters, and a
multi-layer perceptron applied independently at each point. The resulting
architecture, which we call DiffusionNet, is remarkably simple, efficient, and
scalable. Optimizing the spatial support of diffusion as a continuous parameter
avoids the need to pick neighborhood sizes or filter widths a priori, or to
worry about their impact on network size and training time. Furthermore, the
principled, geometric nature of
these networks makes them agnostic to the underlying representation and
insensitive to discretization. In practice, this means significant robustness
to mesh sampling, and even the ability to train on a mesh and evaluate on a
point cloud. Our experiments demonstrate that these networks achieve
state-of-the-art results for a variety of tasks on both meshes and point
clouds, including surface classification, segmentation, and non-rigid
correspondence.
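A minimal sketch of the kind of learned diffusion layer described above, assuming features are diffused in a precomputed, truncated Laplace-Beltrami eigenbasis with one learnable diffusion time per channel; the class name and basis sizes are illustrative, not the released DiffusionNet code.

```python
import torch
import torch.nn as nn

class LearnedDiffusion(nn.Module):
    """Spectral heat diffusion with a learned time per feature channel.

    Features are diffused over the surface by solving the heat equation in a
    truncated eigenbasis of the Laplacian, so the operation depends only on
    intrinsic geometry, not on the particular mesh or point-cloud sampling.
    """
    def __init__(self, n_channels):
        super().__init__()
        # One learnable diffusion time per channel (kept positive via softplus).
        self.t = nn.Parameter(torch.zeros(n_channels))

    def forward(self, feats, evals, evecs, mass):
        # feats: (V, C) per-vertex features, evals: (K,) eigenvalues,
        # evecs: (V, K) eigenvectors, mass: (V,) lumped mass-matrix diagonal.
        coeffs = evecs.T @ (mass[:, None] * feats)                   # project to basis
        decay = torch.exp(-evals[:, None] * nn.functional.softplus(self.t))
        return evecs @ (decay * coeffs)                              # diffuse, lift back

# Usage with a precomputed spectral basis (random placeholders here; in practice
# computed once per shape, e.g. with a cotangent or point-cloud Laplacian):
V, K, C = 5000, 128, 32
feats = torch.rand(V, C)
evals = torch.linspace(0.0, 50.0, K)
evecs = torch.rand(V, K)
mass = torch.ones(V)
out = LearnedDiffusion(C)(feats, evals, evecs, mass)   # (V, C) smoothed features
```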
3D Shape Understanding and Generation
In recent years, Machine Learning techniques have revolutionized solutions to longstanding image-based problems, like image classification, generation, semantic segmentation, object detection and many others. However, if we want to be able to build agents that can successfully interact with the real world, those techniques need to be capable of reasoning about the world as it truly is: a three-dimensional space. There are two main challenges in handling 3D information in machine learning models. First, it is not clear what the best 3D representation is. For images, convolutional neural networks (CNNs) operating on raster images yield the best results in virtually all image-based benchmarks. For 3D data, the best combination of model and representation is still an open question. Second, 3D data is not available on the same scale as images – taking pictures is a common procedure in our daily lives, whereas capturing 3D content is an activity usually restricted to specialized professionals. This thesis is focused on addressing both of these issues. Which model and representation should we use for generating and recognizing 3D data? What are efficient ways of learning 3D representations from a few examples? Is it possible to leverage image data to build models capable of reasoning about the world in 3D?
Our research findings show that it is possible to build models that efficiently generate 3D shapes as irregularly structured representations. Those models require significantly less memory while generating higher-quality shapes than ones based on voxel and multi-view representations. We start by developing techniques to generate shapes represented as point clouds. This class of models leads to high-quality reconstructions and better unsupervised feature learning. However, since point clouds are not amenable to editing and human manipulation, we also present models capable of generating shapes as sets of shape handles -- simpler primitives that summarize complex 3D shapes and were specifically designed for high-level tasks and user interaction. Despite their effectiveness, those approaches require some form of 3D supervision, which is scarce. We present multiple alternatives to this problem. First, we investigate how approximate convex decomposition techniques can be used as self-supervision to improve recognition models when only a limited number of labels are available. Second, we study how neural network architectures induce shape priors that can be used in multiple reconstruction tasks -- using both volumetric and manifold representations. In this regime, reconstruction is performed from a single example -- either a sparse point cloud or multiple silhouettes. Finally, we demonstrate how to train generative models of 3D shapes without using any 3D supervision by combining differentiable rendering techniques and Generative Adversarial Networks.