
    Neural ODEs with stochastic vector field mixtures

    It was recently shown that neural ordinary differential equation models cannot solve fundamental and seemingly straightforward tasks even with high-capacity vector field representations. This paper adds two further fundamental tasks to that set which baseline methods cannot solve, and proposes mixtures of stochastic vector fields as a model class capable of solving these essential problems. Dynamic vector field selection is of critical importance for our model, and our approach propagates component uncertainty over the integration interval with a technique based on forward filtering. We also formalise several loss functions that encourage desirable properties of the trajectory paths; of particular interest are those that directly encourage fewer expected function evaluations. Experimentally, we demonstrate that our model class can capture the natural dynamics of human behaviour, a notoriously volatile application area that baseline approaches cannot adequately model.
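    The abstract sketches rather than specifies the model, so the following is a minimal illustrative sketch, not the authors' implementation: a drift given by a mixture of neural vector fields whose component weights are updated along the integration interval in a forward-filtering-like manner. The names (`MixtureVectorField`, `integrate`), the fixed-step Euler solver, and the omission of the stochastic noise term and the trajectory losses are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class MixtureVectorField(nn.Module):
    """Hypothetical mixture-of-vector-fields drift with filtered component weights."""

    def __init__(self, dim, n_components=3, hidden=64):
        super().__init__()
        # K independent vector-field components f_k(t, x).
        self.components = nn.ModuleList(
            nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))
            for _ in range(n_components)
        )
        # Gating network: per-state evidence for each component.
        self.gate = nn.Linear(dim, n_components)

    def forward(self, t, x, log_w):
        tx = torch.cat([x, t.expand(x.shape[0], 1)], dim=-1)
        fields = torch.stack([f(tx) for f in self.components], dim=1)  # (B, K, D)
        # Filtering-style update: fold the gate's evidence into the running log-weights.
        log_w = torch.log_softmax(log_w + self.gate(x), dim=-1)        # (B, K)
        drift = (log_w.exp().unsqueeze(-1) * fields).sum(dim=1)        # (B, D)
        return drift, log_w

def integrate(model, x0, t0=0.0, t1=1.0, steps=50):
    """Fixed-step Euler solve, threading the component weights along the path."""
    x, dt = x0, (t1 - t0) / steps
    log_w = torch.zeros(x0.shape[0], len(model.components)).log_softmax(dim=-1)
    for i in range(steps):
        dx, log_w = model(torch.tensor(t0 + i * dt), x, log_w)
        x = x + dt * dx
    return x, log_w

x, log_w = integrate(MixtureVectorField(dim=2), torch.randn(8, 2))
```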

    Physics Informed Machine Learning of SPH: Machine Learning Lagrangian Turbulence

    Smoothed particle hydrodynamics (SPH) is a mesh-free Lagrangian method for obtaining approximate numerical solutions of the equations of fluid dynamics, and it has been widely applied to weakly and strongly compressible turbulence in astrophysics and engineering applications. We present a learnable hierarchy of parameterized and "physics-explainable" Lagrangian fluid simulators using both physics-based parameters and neural networks (NNs) as universal function approximators. This hierarchy of parameterized Lagrangian models gradually introduces more SPH-based structure, which we show improves interpretability, generalizability (over larger ranges of time scales and Mach numbers), and preservation of physical symmetries (corresponding to conservation of linear and angular momentum), while requiring less training data. Our learning algorithm takes a mixed-mode approach, combining forward- and reverse-mode automatic differentiation with local sensitivity analyses to perform gradient-based optimization efficiently. We train this hierarchy on both weakly compressible SPH and DNS data, and show that our physics-informed learning method is capable of: (a) solving inverse problems over the physically interpretable parameter space as well as over the space of NN parameters; (b) learning Lagrangian statistics of turbulence (interpolation); (c) combining Lagrangian trajectory-based, probabilistic, and Eulerian field-based loss functions; (d) extrapolating beyond training sets into more complex regimes of interest; and (e) learning new parameterized smoothing kernels better suited to weakly compressible DNS turbulence data.
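    As one concrete rung such a hierarchy might contain, here is a hedged sketch: the standard one-dimensional cubic-spline SPH kernel with a learnable smoothing length h, plus a small NN residual confined to the kernel's compact support. The class name, the residual form, and the initial values are illustrative assumptions, not the paper's parameterization.

```python
import torch
import torch.nn as nn

class LearnableSPHKernel(nn.Module):
    """Cubic-spline SPH kernel with a learnable smoothing length and an NN residual."""

    def __init__(self, h_init=0.1, hidden=32):
        super().__init__()
        # Physically interpretable parameter: the smoothing length h (kept positive).
        self.log_h = nn.Parameter(torch.tensor(h_init).log())
        # Small NN correction on top of the analytic kernel shape.
        self.residual = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, r):
        h = self.log_h.exp()
        q = r / h
        # Standard 1D cubic spline: support q < 2, normalisation 2 / (3h).
        spline = torch.where(
            q < 1.0,
            1.0 - 1.5 * q**2 + 0.75 * q**3,
            torch.where(q < 2.0, 0.25 * (2.0 - q) ** 3, torch.zeros_like(q)),
        ) * (2.0 / (3.0 * h))
        # Learned correction, masked so it vanishes outside the compact support.
        corr = self.residual(q.unsqueeze(-1)).squeeze(-1) * (q < 2.0).float()
        return spline + corr

kernel = LearnableSPHKernel()
w = kernel(torch.linspace(0.0, 0.3, 100))  # kernel values at pair distances r
```

    Gradients with respect to both the physical parameter h and the NN weights flow through this module, which is the property a mixed-mode differentiation scheme would exploit.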

    A General Framework for Uncertainty Quantification via Neural SDE-RNN

    Uncertainty quantification is a critical yet unsolved challenge for deep learning, especially for time series imputation with irregularly sampled measurements. To tackle this problem, we propose a novel framework based on the principles of recurrent neural networks and neural stochastic differential equations for reconciling irregularly sampled measurements. We impute measurements at any arbitrary timescale and quantify the uncertainty in the imputations in a principled manner. Specifically, we derive analytical expressions for quantifying and propagating the epistemic and aleatoric uncertainty across time instants. Our experiments on the IEEE 37-bus test distribution system show that our framework outperforms state-of-the-art uncertainty quantification approaches for time series imputation.
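    The analytical propagation expressions are not reproduced in the abstract; the sketch below shows the linear-Gaussian special case such derivations typically build on: Euler propagation of the mean and covariance of dx = A x dt + L dW between irregular samples. The matrices and step size are invented for illustration.

```python
import numpy as np

def propagate_moments(mu, P, A, L, dt, steps):
    """Euler propagation of the mean and covariance of the linear SDE dx = A x dt + L dW."""
    Q = L @ L.T  # diffusion covariance rate
    for _ in range(steps):
        mu = mu + dt * (A @ mu)             # mean ODE: d(mu)/dt = A mu
        P = P + dt * (A @ P + P @ A.T + Q)  # Lyapunov ODE for the covariance
    return mu, P

# Illustrative 2-state system: carry uncertainty across a gap between two irregular samples.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
L = np.diag([0.0, 0.3])
mu, P = propagate_moments(np.array([1.0, 0.0]), 0.01 * np.eye(2), A, L, dt=0.01, steps=50)
```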

    Principled interpolation of Green's functions learned from data

    We present a data-driven approach to mathematically modelling physical systems whose governing partial differential equations are unknown, by learning their associated Green's function. The subject systems are observed by collecting input-output pairs of system responses under excitations drawn from a Gaussian process. Two methods are proposed to learn the Green's function. In the first, we use the proper orthogonal decomposition (POD) modes of the system as a surrogate for the eigenvectors of the Green's function and subsequently fit the eigenvalues using data. In the second, we employ a generalization of the randomized singular value decomposition (SVD) to operators in order to construct a low-rank approximation to the Green's function. We then propose a manifold interpolation scheme for use in an offline-online setting, where offline excitation-response data, taken at specific model parameter instances, are compressed into empirical eigenmodes. These eigenmodes are subsequently used within the manifold interpolation scheme to recover suitable eigenmodes at unseen model parameters. The approximation and interpolation techniques are demonstrated on several examples in one and two dimensions.
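    To make the second method concrete, a self-contained sketch: a randomized SVD applied to a discretised Green's function, with the analytically known kernel of -u'' on (0, 1) under homogeneous Dirichlet conditions standing in for one learned from data. The operator-level generalization and the manifold interpolation step are not shown.

```python
import numpy as np

def randomized_svd(G, rank, oversample=10, rng=None):
    """Low-rank SVD of G via a Gaussian range sketch (random 'excitations')."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((G.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(G @ Omega)            # orthonormal basis for the sketched range
    Uh, s, Vt = np.linalg.svd(Q.T @ G, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

# Green's function of -u'' = f on (0, 1) with u(0) = u(1) = 0:
# G(x, y) = x (1 - y) for x <= y, and y (1 - x) otherwise.
n = 200
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.where(X <= Y, X * (1 - Y), Y * (1 - X))

U, s, Vt = randomized_svd(G, rank=10)
err = np.linalg.norm(G - (U * s) @ Vt) / np.linalg.norm(G)
print(f"relative error of rank-10 approximation: {err:.2e}")
```

    The rapid eigenvalue decay of such kernels is what makes a low-rank approximation of this kind accurate with only a handful of modes.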