
    Inferring solutions of differential equations using noisy multi-fidelity data

    For more than two centuries, solutions of differential equations have been obtained either analytically or numerically based on typically well-behaved forcing and boundary conditions for well-posed problems. We are changing this paradigm in a fundamental way by establishing an interface between probabilistic machine learning and differential equations. We develop data-driven algorithms for general linear equations using Gaussian process priors tailored to the corresponding integro-differential operators. The only observables are scarce noisy multi-fidelity data for the forcing and solution that are not required to reside on the domain boundary. The resulting predictive posterior distributions quantify uncertainty and naturally lead to adaptive solution refinement via active learning. This general framework circumvents the tyranny of numerical discretization as well as the consistency and stability issues of time integration, and is scalable to high dimensions.
    Comment: 19 pages, 3 figures
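    The core mechanism, placing a Gaussian process prior on the solution and conditioning on scarce noisy observations, can be illustrated in a few lines. The sketch below is ours (single-fidelity, RBF kernel, illustrative names and values), not the paper's multi-fidelity algorithm; it shows how the predictive posterior quantifies uncertainty away from the data.

```python
import numpy as np

def rbf(xa, xb, ell=0.5, var=1.0):
    """Squared-exponential kernel k(x, x') = var * exp(-(x - x')^2 / (2 ell^2))."""
    d = xa[:, None] - xb[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance at x_test given noisy observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.array([0.0, 0.3, 0.6, 1.0])
y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).normal(size=4)
xs = np.array([0.3, 2.0])      # one point near the data, one far away
mean, var = gp_posterior(x, y, xs)
# Posterior variance grows away from the observations, which is exactly the
# signal that active learning uses to decide where to acquire data next.
```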

    Non-invasive Inference of Thrombus Material Properties with Physics-informed Neural Networks

    We employ physics-informed neural networks (PINNs) to infer properties of biological materials using synthetic data. In particular, we successfully apply PINNs to inferring the thrombus permeability and viscoelastic modulus from thrombus deformation data, which can be described by the fourth-order Cahn-Hilliard and Navier-Stokes equations. In PINNs, the partial differential equations are encoded into the loss function, where partial derivatives can be obtained through automatic differentiation (AD). In addition, to tackle the challenge of calculating the fourth-order derivative in the Cahn-Hilliard equation with AD, we introduce an auxiliary network along with the main neural network to approximate the second derivative of the energy potential term. Our model can simultaneously predict the unknown parameters and the velocity, pressure, and deformation gradient fields by training with only partial information among all data, i.e., phase-field and pressure measurements, and is also highly flexible in sampling within the spatio-temporal domain for data acquisition. We validate our model against numerical solutions from the spectral/hp element method (SEM) and demonstrate its robustness by training it with noisy measurements. Our results show that PINNs can accurately infer the material properties with noisy synthetic data, and thus they have great potential for inferring these properties from experimental multi-modality and multi-fidelity data.

    Deep Learning of Vortex Induced Vibrations

    Vortex induced vibrations of bluff bodies occur when the vortex shedding frequency is close to the natural frequency of the structure. Of interest is the prediction of the lift and drag forces on the structure given some limited and scattered information on the velocity field. This is an inverse problem that is not straightforward to solve using standard computational fluid dynamics (CFD) methods, especially since no information is provided for the pressure. An even greater challenge is to infer the lift and drag forces given some dye or smoke visualizations of the flow field. Here we employ deep neural networks that are extended to encode the incompressible Navier-Stokes equations coupled with the structure's dynamic motion equation. In the first case, given scattered data in space-time on the velocity field and the structure's motion, we use four coupled deep neural networks to infer very accurately the structural parameters, the entire time-dependent pressure field (with no prior training data), and reconstruct the velocity vector field and the structure's dynamic motion. In the second case, given scattered data in space-time on a concentration field only, we use five coupled deep neural networks to infer very accurately the vector velocity field and all other quantities of interest as before. This new paradigm of inference in fluid mechanics for coupled multi-physics problems enables velocity and pressure quantification from flow snapshots in small subdomains and can be exploited for flow control applications and also for system identification.
    Comment: arXiv admin note: text overlap with arXiv:1808.0432

    Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time-dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time-dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.
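    The "balance between model complexity and data fitting" that the abstract attributes to Gaussian processes comes from the log marginal likelihood, whose data-fit term and log-determinant (complexity) term pull in opposite directions. A minimal sketch under our own assumptions (zero-mean GP, RBF kernel, grid search over the lengthscale; all names and values are illustrative):

```python
import numpy as np

def neg_log_marginal_likelihood(x, y, ell, noise=1e-2):
    """Negative log evidence -log p(y | X) for a zero-mean GP with RBF kernel."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / ell) ** 2) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # data-fit term + complexity (log-determinant) term + normalizing constant
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(x) * np.log(2 * np.pi)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=20)
grid = [0.01, 0.05, 0.1, 0.2, 0.5, 1.0]
best = min(grid, key=lambda ell: neg_log_marginal_likelihood(x, y, ell))
# Very small lengthscales overfit and very large ones underfit; the evidence
# automatically penalizes both, which is the trade-off the abstract refers to.
```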

    Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data

    We present hidden fluid mechanics (HFM), a physics informed deep learning framework capable of encoding an important class of physical laws governing fluid motions, namely the Navier-Stokes equations. In particular, we seek to leverage the underlying conservation laws (i.e., for mass, momentum, and energy) to infer hidden quantities of interest such as velocity and pressure fields merely from spatio-temporal visualizations of a passive scalar (e.g., dye or smoke), transported in arbitrarily complex domains (e.g., in human arteries or brain aneurysms). Our approach towards solving the aforementioned data assimilation problem is unique as we design an algorithm that is agnostic to the geometry or the initial and boundary conditions. This makes HFM highly flexible in choosing the spatio-temporal domain of interest for data acquisition as well as subsequent training and predictions. Consequently, HFM can make predictions in cases that neither a pure machine learning strategy nor a conventional scientific computing approach can reproduce on its own. The proposed algorithm achieves accurate predictions of the pressure and velocity fields in both two and three dimensional flows for several benchmark problems motivated by real-world applications. Our results demonstrate that this relatively simple methodology can be used in physical and biomedical problems to extract valuable quantitative information (e.g., lift and drag forces or wall shear stresses in arteries) for which direct measurements may not be possible.

    Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations

    We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this second part of our two-part treatise, we focus on the problem of data-driven discovery of partial differential equations. Depending on whether the available data is scattered in space-time or arranged in fixed temporal snapshots, we introduce two main classes of algorithms, namely continuous time and discrete time models. The effectiveness of our approach is demonstrated using a wide range of benchmark problems in mathematical physics, including conservation laws, incompressible fluid flow, and the propagation of nonlinear shallow-water waves.
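    The flavor of data-driven discovery can be imitated with plain least squares standing in for the neural networks: regress a time derivative onto a library of candidate spatial terms and read off the governing PDE. This is a deliberately simplified stand-in for the paper's method, with synthetic data from an exact solution of the heat equation u_t = u_xx; the setup, library, and names are ours.

```python
import numpy as np

# Exact two-mode solution of u_t = u_xx: each Fourier mode k decays as exp(-k^2 t).
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 1, 64)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-T) * np.sin(X) + np.exp(-4 * T) * np.sin(2 * X)

# Numerical derivatives via second-order finite differences.
dx, dt = x[1] - x[0], t[1] - t[0]
u_t = np.gradient(u, dt, axis=1)
u_x = np.gradient(u, dx, axis=0)
u_xx = np.gradient(u_x, dx, axis=0)

# Candidate library: u_t ~ c1*u + c2*u_x + c3*u_xx + c4*u*u_x
library = np.stack([u, u_x, u_xx, u * u_x], axis=-1).reshape(-1, 4)
coef, *_ = np.linalg.lstsq(library, u_t.ravel(), rcond=None)
# Expect coef close to [0, 0, 1, 0], recovering u_t = u_xx up to
# finite-difference error.
```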

    Deep Learning of Turbulent Scalar Mixing

    Based on recent developments in physics-informed deep learning and deep hidden physics models, we put forth a framework for discovering turbulence models from scattered and potentially noisy spatio-temporal measurements of the probability density function (PDF). The models are for the conditional expected diffusion and the conditional expected dissipation of a Fickian scalar described by its transported single-point PDF equation. The discovered models are appraised against the exact solution derived from the amplitude mapping closure (AMC) / Johnson-Edgeworth translation (JET) model of binary scalar mixing in homogeneous turbulence.
    Comment: arXiv admin note: text overlap with arXiv:1808.04327, arXiv:1808.0895

    Neural-net-induced Gaussian process regression for function approximation and PDE solution

    Neural-net-induced Gaussian process (NNGP) regression inherits both the high expressivity of deep neural networks (deep NNs) as well as the uncertainty quantification property of Gaussian processes (GPs). We generalize the current NNGP to first include a larger number of hyperparameters and subsequently train the model by maximum likelihood estimation. Unlike previous works on NNGP that targeted classification, here we apply the generalized NNGP to function approximation and to solving partial differential equations (PDEs). Specifically, we develop an analytical iteration formula to compute the covariance function of the GP induced by a deep NN with an error-function nonlinearity. We compare the performance of the generalized NNGP for function approximations and PDE solutions with those of GPs and fully-connected NNs. We observe that for smooth functions the generalized NNGP can yield the same order of accuracy as the GP, while both the NNGP and the GP outperform the deep NN. For non-smooth functions, the generalized NNGP is superior to the GP and comparable or superior to the deep NN.
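    For the error-function nonlinearity, the covariance induced by an infinitely wide single-hidden-layer network has a known closed form (Williams, 1998), which is the kind of kernel the generalized NNGP builds on; the prior covariance Sigma of the input weights and bias plays the role of the extra hyperparameters. A sketch under those assumptions, with illustrative values of our own choosing:

```python
import numpy as np

def erf_nngp_kernel(xa, xb, sigma):
    """Erf-network kernel k(x, x') = (2/pi) * arcsin( 2 a.Sigma.b /
    sqrt((1 + 2 a.Sigma.a)(1 + 2 b.Sigma.b)) ), with a = (1, x) the
    bias-augmented input (Williams, 1998, single hidden layer)."""
    a = np.concatenate([[1.0], np.atleast_1d(xa)])
    b = np.concatenate([[1.0], np.atleast_1d(xb)])
    num = 2.0 * a @ sigma @ b
    den = np.sqrt((1.0 + 2.0 * a @ sigma @ a) * (1.0 + 2.0 * b @ sigma @ b))
    return (2.0 / np.pi) * np.arcsin(num / den)

sigma = np.diag([1.0, 2.0])   # independent bias and weight variances (ours)
k_xx = erf_nngp_kernel(0.5, 0.5, sigma)     # covariance of a point with itself
k_xy = erf_nngp_kernel(0.5, -0.5, sigma)    # covariance of two distinct points
# The kernel is symmetric, bounded in magnitude by 1, and largest on the diagonal.
```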

    Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations

    We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this two part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In this first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.
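    The idea of encoding the PDE into the training objective can be seen in miniature by swapping the neural network for a polynomial surrogate, which turns the residual-plus-boundary loss into a linear least-squares problem. This is not the paper's network-based method (there, the same residual is minimized by gradient descent with automatic differentiation); the problem -u'' = f on [0,1] with u(0) = u(1) = 0, and all names, are ours.

```python
import numpy as np

deg = 10
xs = np.linspace(0.0, 1.0, 50)               # collocation points
f = np.pi ** 2 * np.sin(np.pi * xs)           # forcing; exact solution is sin(pi x)

# PDE residual rows: -(x^k)'' = -k(k-1) x^(k-2) evaluated at the collocation points.
A = np.zeros((len(xs), deg + 1))
for k in range(2, deg + 1):
    A[:, k] = -k * (k - 1) * xs ** (k - 2)

# Boundary rows u(0) = c_0 and u(1) = sum_k c_k, weighted like a penalty term.
row0 = np.zeros(deg + 1); row0[0] = 1.0
row1 = np.ones(deg + 1)
M = np.vstack([A, 10.0 * row0, 10.0 * row1])
rhs = np.concatenate([f, [0.0, 0.0]])

# Minimizing ||M c - rhs||^2 is the linear analogue of the physics-informed loss.
c, *_ = np.linalg.lstsq(M, rhs, rcond=None)
u_mid = np.polyval(c[::-1], 0.5)              # exact value is sin(pi/2) = 1
```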

    Variational system identification of the partial differential equations governing pattern-forming physics: Inference under varying fidelity and noise

    We present a contribution to the field of system identification of partial differential equations (PDEs), with emphasis on discerning between competing mathematical models of pattern-forming physics. The motivation comes from developmental biology, where pattern formation is central to the development of any multicellular organism, and from materials physics, where phase transitions similarly lead to microstructure. In both these fields there is a collection of nonlinear, parabolic PDEs that, over suitable parameter intervals and regimes of physics, can resolve the patterns or microstructures with comparable fidelity. This observation frames the question of which PDE best describes the data at hand. This question is particularly compelling because identification of the closest representation to the true PDE, while constrained by the functional spaces considered relative to the data at hand, immediately delivers insights to the physics underlying the systems. While building on recent work that uses stepwise regression, we present advances that leverage the variational framework and statistical tests. We also address the influences of variable fidelity and noise in the data.
    Comment: To appear in Computer Methods in Applied Mechanics and Engineering
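    Stepwise regression of the kind this work builds on can be sketched as backward elimination: repeatedly drop the candidate term whose removal raises the residual least, and stop once any removal degrades the fit beyond a tolerance. The threshold below is a crude stand-in for the paper's statistical tests, and the data and names are ours.

```python
import numpy as np

def fit_residual(Theta, y, idx):
    """Sum of squared residuals of the least-squares fit using columns idx."""
    c, *_ = np.linalg.lstsq(Theta[:, idx], y, rcond=None)
    r = y - Theta[:, idx] @ c
    return float(r @ r)

rng = np.random.default_rng(2)
x = rng.normal(size=200)
Theta = np.column_stack([np.ones_like(x), x, x**2, np.sin(x)])   # candidate terms
y = 2.0 * x + 0.5 * x**2 + 0.01 * rng.normal(size=200)           # true model: x, x^2

active = [0, 1, 2, 3]
base = fit_residual(Theta, y, active)
while len(active) > 1:
    # Residual after removing each remaining term in turn; try the cheapest drop.
    best_res, drop = min((fit_residual(Theta, y, [j for j in active if j != i]), i)
                         for i in active)
    if best_res > 1.5 * base:    # removal hurts too much: stop (tolerance is ours)
        break
    active.remove(drop)
    base = best_res
# The surviving indices identify the governing terms x and x^2.
```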