The long runtime of high-fidelity partial differential equation (PDE) solvers
makes them unsuitable for time-critical applications. We propose to accelerate
PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches
reduce the dimensionality of discretized vector fields, our continuous
reduced-order modeling (CROM) approach builds a smooth, low-dimensional
manifold of the continuous vector fields themselves, not their discretizations.
We represent this reduced manifold with continuously differentiable neural
fields, which can be trained on any and all available numerical solutions of
the continuous system, even when those solutions are obtained using diverse methods or
discretizations. We validate our approach on an extensive range of PDEs with
training data from voxel grids, meshes, and point clouds. Compared to prior
discretization-dependent ROM methods, such as linear subspace proper orthogonal
decomposition (POD) and nonlinear manifold neural-network-based autoencoders,
CROM features higher accuracy, lower memory consumption, dynamically adaptive
resolutions, and applicability to any discretization. For equal latent-space
dimension, CROM exhibits 79× and 49× better accuracy and a
39× and 132× smaller memory footprint than POD and autoencoder
methods, respectively. Experiments demonstrate 109× and 89×
wall-clock speedups over unreduced models on CPUs and GPUs, respectively.
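
To make the architecture concrete, the following is a minimal sketch, assuming a PyTorch implementation, of the kind of continuously differentiable neural-field decoder described above: a network that maps a spatial query point x and a low-dimensional latent code z to the field value at x, independent of any particular discretization. All names and sizes here (NeuralFieldDecoder, latent_dim, hidden) are illustrative assumptions, not the paper's code.

    import torch
    import torch.nn as nn

    class NeuralFieldDecoder(nn.Module):
        """Maps (spatial point x, latent code z) -> field value u(x).

        Hypothetical sketch of a CROM-style decoder; all sizes are
        illustrative assumptions.
        """
        def __init__(self, spatial_dim=3, latent_dim=16, field_dim=3, hidden=128):
            super().__init__()
            # A smooth activation keeps the represented field continuously
            # differentiable, so the spatial derivatives appearing in the
            # PDE can be evaluated with autograd.
            self.net = nn.Sequential(
                nn.Linear(spatial_dim + latent_dim, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, field_dim),
            )

        def forward(self, x, z):
            # x: (N, spatial_dim) arbitrary query points (grid nodes, mesh
            # vertices, or scattered samples); z: (latent_dim,) reduced state.
            z = z.expand(x.shape[0], -1)  # share one latent code across points
            return self.net(torch.cat([x, z], dim=-1))

    decoder = NeuralFieldDecoder()
    x = torch.rand(1024, 3, requires_grad=True)  # sample points at any resolution
    z = torch.zeros(16)                          # reduced state at one time step
    u = decoder(x, z)                            # continuous field evaluated at x

Because the decoder is queried pointwise, the same trained model can be evaluated on voxel grids, meshes, or point clouds at any resolution, consistent with the discretization independence claimed above.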