
    Local, Smooth, and Consistent Jacobi Set Simplification

    The relation between two Morse functions defined on a common domain can be studied in terms of their Jacobi set. The Jacobi set contains points in the domain where the gradients of the functions are aligned. Both the Jacobi set itself and the segmentation of the domain it induces have been shown to be useful in various applications. Unfortunately, in practice functions often contain noise and discretization artifacts, causing their Jacobi set to become unmanageably large and complex. While there exist techniques to simplify Jacobi sets, these are unsuitable for most applications as they lack fine-grained control over the process and heavily restrict the types of simplification possible. In this paper, we introduce a new framework that generalizes critical point cancellations in scalar functions to Jacobi sets in two dimensions. We focus on simplifications that can be realized by smooth approximations of the corresponding functions and show how this implies simultaneously simplifying contiguous subsets of the Jacobi set. These extended cancellations form the atomic operations in our framework, and we introduce an algorithm to successively cancel subsets of the Jacobi set with minimal modifications according to some user-defined metric. We prove that the algorithm is correct and terminates only once no more local, smooth, and consistent simplifications are possible. We disprove a previous claim on the minimal Jacobi set for manifolds with arbitrary genus and show that for simply connected domains, our algorithm reduces a given Jacobi set to its simplest configuration. Comment: 24 pages, 19 figures
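    The object above is defined by where the two gradients align. As a rough illustration of that defining condition only (not of the simplification framework itself), the sketch below marks candidate Jacobi set points of two sampled functions on a regular grid; the function names and tolerance are hypothetical, and the usual setting of this literature (piecewise-linear functions on a triangulated domain) is not reproduced here.

```python
# Minimal sketch: flag grid points where the gradients of f and g are
# (numerically) aligned, i.e. where det[∇f, ∇g] ≈ 0. Illustrative only;
# this is not the simplification algorithm described in the abstract.
import numpy as np

def jacobi_set_mask(f, g, tol=1e-3):
    """Boolean mask of candidate Jacobi set points on a regular grid."""
    fy, fx = np.gradient(f)            # ∂f/∂y, ∂f/∂x (rows = y, cols = x)
    gy, gx = np.gradient(g)
    det = fx * gy - fy * gx            # 2D cross product of the two gradients
    return np.abs(det) < tol

# Example with two smooth analytic fields sampled on a 128 x 128 grid.
xs, ys = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
f = np.sin(np.pi * xs) * np.cos(np.pi * ys)
g = xs**2 + ys**2
print("candidate Jacobi set points:", jacobi_set_mask(f, g).sum())
```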

    Single Model Uncertainty Estimation via Stochastic Data Centering

    We are interested in estimating the uncertainties of deep neural networks, which play an important role in many scientific and engineering problems. In this paper, we present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias, gives rise to slightly inconsistent trained models, where the differences in predictions are a strong indicator of epistemic uncertainties. Using the neural tangent kernel (NTK), we demonstrate that this phenomenon occurs in part because the NTK is not shift-invariant. Since this is achieved via a trivial input transformation, we show that it can be approximated using just a single neural network -- with a technique that we call Δ-UQ -- which estimates the uncertainty around a prediction by marginalizing out the effect of the biases. We show that Δ-UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks -- outlier rejection, calibration under distribution shift, and sequential design optimization of black box functions.
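    A minimal sketch of the anchoring idea described above follows, assuming one concrete way to realize it: a single network receives an anchor c and the centered input x - c, and uncertainty is read off from the spread of predictions over several anchors at test time. The class and function names are illustrative, not the authors' released implementation.

```python
# Sketch of single-model uncertainty via input anchoring (assumed form).
import torch
import torch.nn as nn

class AnchoredNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # The network sees both the anchor c and the centered input (x - c).
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, anchors):
        return self.net(torch.cat([anchors, x - anchors], dim=-1))

def predict_with_uncertainty(model, x, anchor_pool, n_anchors=10):
    """Mean prediction and its spread over randomly drawn anchors."""
    preds = []
    with torch.no_grad():
        for _ in range(n_anchors):
            idx = torch.randint(len(anchor_pool), (x.shape[0],))
            preds.append(model(x, anchor_pool[idx]))
    preds = torch.stack(preds)              # [n_anchors, batch, 1]
    return preds.mean(0), preds.std(0)      # prediction, uncertainty estimate
```

    During training, the anchors would likewise be drawn at random from the training inputs for every batch, so that a single set of weights learns to be consistent across shifts.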

    Accelerating Flow Simulations using Online Dynamic Mode Decomposition

    We develop an on-the-fly reduced-order model (ROM) integrated with a flow simulation, gradually replacing a corresponding full-order model (FOM) of a physics solver. Unlike offline methods requiring a separate FOM-only simulation prior to model reduction, our approach constructs a ROM dynamically during the simulation, replacing the FOM when deemed credible. Dynamic mode decomposition (DMD) is employed for online ROM construction, with a single snapshot vector used for rank-1 updates in each iteration. Demonstrated on a flow over a cylinder with Re = 100, our hybrid FOM/ROM simulation is verified in terms of the Strouhal number, resulting in a 4.4 times speedup compared to the FOM solver. Comment: Presented at Machine Learning and the Physical Sciences Workshop, NeurIPS 202
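    The sketch below shows one standard form of online DMD with rank-1 updates from single snapshot pairs, which matches the mechanism the abstract describes; the criterion for deciding when the ROM is credible enough to replace the FOM, and the coupling to the physics solver, are not reproduced. Class and variable names are placeholders.

```python
# Sketch of online DMD with rank-1 (Sherman-Morrison) updates per snapshot pair.
import numpy as np

class OnlineDMD:
    def __init__(self, n, alpha=1e6):
        self.A = np.zeros((n, n))       # linear operator mapping x_k -> x_{k+1}
        self.P = alpha * np.eye(n)      # running approximation of (X X^T)^{-1}

    def update(self, x, y):
        """Incorporate one snapshot pair (x_k, x_{k+1}) as a rank-1 update."""
        Px = self.P @ x
        gamma = 1.0 / (1.0 + x @ Px)
        self.A += gamma * np.outer(y - self.A @ x, Px)
        self.P -= gamma * np.outer(Px, Px)

    def predict(self, x):
        return self.A @ x               # one-step ROM prediction

# Usage idea: feed consecutive FOM snapshots via update(); once the ROM's
# predictions track the FOM within a tolerance, advance the state with predict().
```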

    Improved Surrogates in Inertial Confinement Fusion with Manifold and Cycle Consistencies

    Neural networks have become very popular in surrogate modeling because of their ability to characterize arbitrary, high-dimensional functions in a data-driven fashion. This paper advocates for training surrogates that are consistent with the physical manifold -- i.e., predictions are always physically meaningful -- and cyclically consistent -- i.e., the predictions of the surrogate, when passed through an independently trained inverse model, give back the original input parameters. We find that these two consistencies lead to surrogates that are superior in terms of predictive performance, more resilient to sampling artifacts, and more data efficient. Using Inertial Confinement Fusion (ICF) as a test bed problem, we model a 1D semi-analytic numerical simulator and demonstrate the effectiveness of our approach. Code and data are available at https://github.com/rushilanirudh/macc/ Comment: 10 pages, 6 figures
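    The cycle-consistency part of this setup can be written down directly from the description above: the surrogate's prediction, pushed through an independently trained inverse model, should recover the input parameters. The sketch below shows that loss term with placeholder dimensions and module names (not the released code at the URL above); the manifold-consistency term, which is enforced separately, is omitted.

```python
# Sketch of a forward surrogate trained with an added cycle-consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

forward_surrogate = mlp(in_dim=5, out_dim=32)   # simulation params -> outputs
inverse_model = mlp(in_dim=32, out_dim=5)       # outputs -> params (trained separately)

def surrogate_loss(params, outputs, lam=0.1):
    pred = forward_surrogate(params)
    fit = F.mse_loss(pred, outputs)                    # match simulator outputs
    cycle = F.mse_loss(inverse_model(pred), params)    # recover the inputs
    return fit + lam * cycle
```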