    CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations

    The long runtime of high-fidelity partial differential equation (PDE) solvers makes them unsuitable for time-critical applications. We propose to accelerate PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches reduce the dimensionality of discretized vector fields, our continuous reduced-order modeling (CROM) approach builds a smooth, low-dimensional manifold of the continuous vector fields themselves, not their discretization. We represent this reduced manifold using continuously differentiable neural fields, which can be trained on any and all available numerical solutions of the continuous system, even when they are obtained using diverse methods or discretizations. We validate our approach on an extensive range of PDEs with training data from voxel grids, meshes, and point clouds. Compared to prior discretization-dependent ROM methods, such as linear subspace proper orthogonal decomposition (POD) and nonlinear manifold neural-network-based autoencoders, CROM features higher accuracy, lower memory consumption, dynamically adaptive resolutions, and applicability to any discretization. For equal latent space dimension, CROM exhibits 79× and 49× better accuracy, and 39× and 132× smaller memory footprint, than POD and autoencoder methods, respectively. Experiments demonstrate 109× and 89× wall-clock speedups over unreduced models on CPUs and GPUs, respectively.
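The core idea above (a neural field decoder that maps a spatial coordinate plus a low-dimensional latent code to a field value, independent of any mesh or grid) can be sketched as follows. This is a minimal illustration, not CROM's actual architecture: the layer sizes, the 1D domain, and the `decode` helper are all hypothetical, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder g(x, z) -> u(x): a tiny MLP taking a spatial
# coordinate x and a 4-dimensional latent code z (a point on the reduced
# manifold) and returning the field value at x.
W1 = rng.standard_normal((1 + 4, 32)) * 0.5   # input: 1D coord + 4D latent
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.5       # output: scalar field value
b2 = np.zeros(1)

def decode(x, z):
    """Evaluate the continuous field at coordinates x for latent code z."""
    inp = np.concatenate([x[:, None], np.tile(z, (len(x), 1))], axis=1)
    h = np.tanh(inp @ W1 + b1)   # smooth activation keeps g differentiable in x
    return (h @ W2 + b2).ravel()

z = rng.standard_normal(4)                      # one latent state
coarse = decode(np.linspace(0.0, 1.0, 8), z)    # evaluate on 8 points...
fine = decode(np.linspace(0.0, 1.0, 512), z)    # ...or 512, same latent code
print(coarse.shape, fine.shape)                 # (8,) (512,)
```

Because the decoder consumes raw coordinates, the same latent code can be evaluated at any resolution or sample layout, which is what makes the representation discretization-agnostic.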

    Non-intrusive reduced-order modeling using convolutional autoencoders

    The use of reduced-order models (ROMs) in physics-based modeling and simulation almost always involves linear reduced basis (RB) methods such as the proper orthogonal decomposition (POD). For some nonlinear problems, linear RB methods perform poorly, failing to provide an efficient subspace for the solution space. Nonlinear manifolds for ROMs have gained traction in recent years, outperforming linear methods on certain nonlinear problems. Deep learning has become a popular tool to this end, with autoencoders providing a nonlinear trial manifold for the solution space. In this work, we present a non-intrusive ROM framework for steady-state parameterized partial differential equations (PDEs) that uses convolutional autoencoders (CAEs) to provide a nonlinear solution manifold, augmented by Gaussian process regression (GPR) to approximate the expansion coefficients of the reduced model. When applied to a numerical example involving the steady incompressible Navier-Stokes equations for a lid-driven cavity problem, the proposed ROM predicts full-order states more accurately than a popular method employing POD and GPR, across a range of ROM dimensions.
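The online stage of this kind of non-intrusive ROM reduces to a regression from the PDE parameter to the latent (expansion) coefficients, followed by decoding. A minimal sketch of the GPR half is below; the training parameters `mu_train`, the stand-in latent codes `z_train`, and the kernel length-scale are all hypothetical placeholders for what an actual CAE offline stage would produce.

```python
import numpy as np

# Hypothetical offline data: PDE parameters and the latent codes a trained
# autoencoder assigned to the corresponding full-order snapshots.
mu_train = np.linspace(0.0, 1.0, 6)[:, None]               # 6 training parameters
z_train = np.column_stack([np.sin(3 * mu_train[:, 0]),
                           np.cos(3 * mu_train[:, 0])])    # stand-in 2D latents

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel between parameter arrays a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

# GPR posterior mean: one independent GP per latent dimension, shared kernel.
K = rbf(mu_train, mu_train) + 1e-8 * np.eye(len(mu_train))  # jitter for stability
alpha = np.linalg.solve(K, z_train)                         # (6, 2) weights

def predict_latent(mu_new):
    """Non-intrusive online stage: regress latent codes for new parameters."""
    return rbf(mu_new, mu_train) @ alpha                    # (m, 2)

z_hat = predict_latent(np.array([[0.5]]))
print(z_hat.shape)   # (1, 2); the CAE decoder would map this to a full state
```

The framework is "non-intrusive" because this online stage never touches the PDE operators: it only needs snapshot data, the autoencoder, and the regression.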

    Diffusion is All You Need for Learning on Surfaces

    We introduce a new approach to deep learning on 3D surfaces such as meshes or point clouds. Our key insight is that a simple learned diffusion layer can spatially share data in a principled manner, replacing operations like convolution and pooling which are complicated and expensive on surfaces. The only other ingredients in our network are a spatial gradient operation, which uses dot-products of derivatives to encode tangent-invariant filters, and a multi-layer perceptron applied independently at each point. The resulting architecture, which we call DiffusionNet, is remarkably simple, efficient, and scalable. Continuously optimizing for spatial support avoids the need to pick neighborhood sizes or filter widths a priori, or worry about their impact on network size/training time. Furthermore, the principled, geometric nature of these networks makes them agnostic to the underlying representation and insensitive to discretization. In practice, this means significant robustness to mesh sampling, and even the ability to train on a mesh and evaluate on a point cloud. Our experiments demonstrate that these networks achieve state-of-the-art results for a variety of tasks on both meshes and point clouds, including surface classification, segmentation, and non-rigid correspondence.
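The learned diffusion layer described above amounts to solving the heat equation on the surface for a learnable time t, which controls spatial support. A minimal sketch, assuming a toy point cloud and a simple k-nearest-neighbor graph Laplacian in place of the geometric Laplacians DiffusionNet actually uses; `diffusion_layer` and the fixed t are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((20, 3))                       # toy point cloud, 20 points in 3D

# Stand-in Laplacian: symmetric k-nearest-neighbor adjacency, combinatorial L.
d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
k = 5
W = np.zeros((20, 20))
for i in range(20):
    for j in np.argsort(d2[i])[1:k + 1]:        # skip self (distance 0)
        W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(1)) - W                       # symmetric positive semidefinite

def diffusion_layer(u, t):
    """Diffuse per-point features by solving (I + t L) v = u (implicit Euler).
    In DiffusionNet, t is a learnable per-channel parameter: larger t means
    wider spatial support, so support need not be picked a priori."""
    return np.linalg.solve(np.eye(len(u)) + t * L, u)

u = rng.standard_normal((20, 4))                # 4 feature channels per point
v = diffusion_layer(u, t=0.5)
print(v.shape)                                  # (20, 4), smoothed features
```

Because the layer is defined through the Laplacian rather than fixed neighborhoods, swapping the point-cloud Laplacian for a mesh (cotangent) Laplacian changes nothing else in the network, which is the source of the representation-agnostic behavior.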