    Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains

    Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems by defining the loss functions of neural networks based on governing equations, boundary conditions, and initial conditions. Recent investigations have shown that, when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both the strong and weak forms can lead to much better accuracy, especially when there is heterogeneity and there are jumps in the variables across the domain. This new approach is called the mixed formulation for PINNs, which borrows ideas from the mixed finite element method. In this method, the PDE is reformulated as a system of equations in which the primary unknowns are the fluxes or gradients of the solution and the secondary unknown is the solution itself. In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations. Additionally, we discuss both sequential and fully coupled unsupervised training and compare their accuracy and computational cost. To improve the accuracy of the network, we incorporate hard boundary constraints that ensure valid predictions. We then investigate how different optimizers and architectures affect accuracy and efficiency. Finally, we introduce a simple approach to parametric learning that is similar to transfer learning; it combines data and physics to address the limitations of PINNs regarding computational cost and improves the network's ability to predict the response of the system for unseen cases. The outcomes of this work will be useful for many other engineering applications where deep learning is employed on multiple coupled systems of equations for fast and reliable computations.
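    The hard boundary constraints mentioned in the abstract can be imposed by construction rather than by a penalty term. A minimal sketch in NumPy, for a 1D Dirichlet problem (the function `N` below is a hypothetical stand-in for the trained network, not the paper's model):

    ```python
    import numpy as np

    # Hypothetical stand-in for a trained network output N(x); any smooth
    # function suffices to illustrate the construction.
    def N(x):
        return np.sin(3.0 * x)

    def u_hard(x, a=0.0, b=1.0):
        """Ansatz satisfying the Dirichlet data u(0)=a, u(1)=b exactly.

        The boundary values are interpolated linearly, and the network
        contribution is multiplied by x*(1-x), which vanishes on the
        boundary, so no boundary-loss term is needed during training.
        """
        return a * (1.0 - x) + b * x + x * (1.0 - x) * N(x)

    x = np.linspace(0.0, 1.0, 5)
    print(u_hard(x)[0], u_hard(x)[-1])  # boundary values are exact: 0.0 1.0
    ```

    Because the constraint holds identically for any network weights, the optimizer only has to fit the PDE residual in the interior.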

    Functional order-reduced Gaussian Processes based machine-learning emulators for probabilistic constitutive modelling

    Machine learning methods have been extensively explored for constitutive relations, which are essential in material and structural analyses. However, most existing approaches rely on neural networks, which lack interpretability and treat stress–strain data as discrete values, disregarding their inherent continuous nature. This paper therefore proposes novel functional order-reduced Gaussian Process emulators, which are more interpretable because they leverage Bayesian theory, and which account for the uncertainty arising from microstructural homogenisation, providing non-parametric, probabilistic, and continuous constitutive modelling of composite microstructures undergoing fracture/failure. Their most salient feature is the capability to predict the continuous and probabilistic stress–strain function using only a limited number of samples (i.e., 400), even when the uncertain data are high-dimensional in large-scale composites (up to 250,000). An illustrative example demonstrates that the emulator accurately captures the probabilistic constitutive relation, providing insights into the maximum stress and strain values. Notably, the results highlight the significant variation in maximum stress due to fibre uncertainty. Moreover, the example shows that as the fibre volume fraction increases from 0.4 to 0.6, the maximum stress tends to increase while the maximum strain decreases; that is, more fibre yields higher strength and stiffness.
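    The building block of such emulators is exact Gaussian Process regression, which returns a mean curve plus a variance, i.e., a probabilistic rather than pointwise prediction. A minimal sketch with an RBF kernel on a toy stress–strain curve (the kernel choice and hyperparameters are illustrative, not the paper's):

    ```python
    import numpy as np

    def rbf_kernel(X1, X2, length=0.5, var=1.0):
        """Squared-exponential kernel k(x, x') = var * exp(-|x-x'|^2 / (2 l^2))."""
        d2 = (X1[:, None] - X2[None, :]) ** 2
        return var * np.exp(-0.5 * d2 / length**2)

    def gp_posterior(X_train, y_train, X_test, noise=1e-6):
        """Exact GP posterior mean and variance under a zero-mean prior."""
        K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
        Ks = rbf_kernel(X_test, X_train)
        Kss = rbf_kernel(X_test, X_test)
        alpha = np.linalg.solve(K, y_train)
        mean = Ks @ alpha
        cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
        return mean, np.diag(cov)

    # Toy stress-strain data: strain on [0, 1], "stress" from a smooth curve.
    strain = np.linspace(0.0, 1.0, 8)
    stress = np.tanh(3.0 * strain)
    mean, var = gp_posterior(strain, stress, strain)
    ```

    The posterior variance shrinks to (near) zero at the training points and grows between them, which is the uncertainty information the emulator exposes.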

    Physics-Informed Computer Vision: A Review and Perspectives

    Incorporating physical information into machine learning frameworks is opening up and transforming many application domains. Here, the learning process is augmented through the induction of fundamental knowledge and governing physical laws. In this work, we explore their utility for computer vision tasks in interpreting and understanding visual data. We present a systematic literature review of formulations of and approaches to computer vision tasks guided by physical laws. We begin by decomposing the popular computer vision pipeline into a taxonomy of stages and investigate approaches for incorporating governing physical equations in each stage. Existing approaches in each task are analyzed with regard to which governing physical processes are modeled, how they are formulated, and how they are incorporated, i.e., by modifying data (observation bias), modifying networks (inductive bias), or modifying losses (learning bias). The taxonomy offers a unified view of the application of physics-informed capabilities, highlighting where physics-informed learning has been conducted and where the gaps and opportunities lie. Finally, we highlight open problems and challenges to inform future research. While still in its early days, the study of physics-informed computer vision promises to yield better computer vision models that improve physical plausibility, accuracy, data efficiency, and generalization in increasingly realistic applications.
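    Of the three biases in the taxonomy, the learning bias is the simplest to sketch: a physics residual is added to the data misfit as a soft penalty. The function below is a generic illustration, not any specific method from the review (the residual array is assumed to come from evaluating a governing equation on the predictions):

    ```python
    import numpy as np

    def physics_informed_loss(pred, target, residual, weight=1.0):
        """Learning bias: data misfit plus a weighted physics-residual penalty.

        `residual` holds the governing-equation residual evaluated at the
        predictions; it is zero when the physics is satisfied exactly.
        """
        data_loss = np.mean((pred - target) ** 2)
        physics_loss = np.mean(residual ** 2)
        return data_loss + weight * physics_loss

    pred = np.array([1.0, 2.0, 3.0])
    target = np.array([1.0, 2.0, 3.0])
    residual = np.zeros(3)
    print(physics_informed_loss(pred, target, residual))  # 0.0
    ```

    Observation bias would instead modify the training data, and inductive bias would hard-code the constraint into the network architecture itself.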

    Physics-Informed Deep Neural Operator Networks

    Standard neural networks can approximate general nonlinear operators, represented either explicitly by a combination of mathematical operators, e.g., in an advection-diffusion-reaction partial differential equation, or simply as a black box, e.g., a system-of-systems. The first neural operator was the Deep Operator Network (DeepONet), proposed in 2019 based on rigorous approximation theory. Since then, a few other, less general operators have been published, e.g., based on graph neural networks or Fourier transforms. For black-box systems, training of neural operators is purely data-driven, but if the governing equations are known, they can be incorporated into the loss function during training to develop physics-informed neural operators. Neural operators can be used as surrogates in design problems, uncertainty quantification, autonomous systems, and almost any application requiring real-time inference. Moreover, independently pre-trained DeepONets can be used as components of a complex multi-physics system by coupling them together with relatively light training. Here, we present a review of DeepONet, the Fourier neural operator, and the graph neural operator, as well as appropriate extensions with feature expansions, and highlight their usefulness in diverse applications in computational mechanics, including porous media, fluid mechanics, and solid mechanics.
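    DeepONet's branch-trunk structure can be sketched in a few lines: a branch network encodes the input function sampled at fixed sensor points, a trunk network encodes the query coordinate, and the output is their inner product. The layer sizes and random (untrained) weights below are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(sizes):
        """Random weights for a small MLP (illustrative, untrained)."""
        return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
                for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(params, x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x

    p = 20                      # latent dimension shared by both sub-networks
    branch = mlp([50, 40, p])   # encodes the input function at 50 sensor points
    trunk = mlp([1, 40, p])     # encodes the query coordinate y

    u_sensors = np.sin(np.linspace(0, np.pi, 50))[None, :]  # one input function
    y = np.linspace(0, 1, 7)[:, None]                       # seven query points

    # DeepONet output G(u)(y): inner product of branch and trunk embeddings.
    G = forward(branch, u_sensors) @ forward(trunk, y).T    # shape (1, 7)
    print(G.shape)
    ```

    Physics-informed training replaces (or augments) paired input-output data with the PDE residual of `G` evaluated at collocation points, exactly as in a PINN loss.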

    Feature Enforcing PINN (FE-PINN): A Framework to Learn the Underlying-Physics Features Before Target Task

    In this work, a new data-free framework called Feature Enforcing Physics-Informed Neural Network (FE-PINN) is introduced. This framework is capable of learning the underlying pattern of a problem at low computational cost before the main training loop. The loss function of a vanilla PINN is imbalanced because it contains two terms: the mean squared PDE residual and the mean squared boundary-condition error. FE-PINN resolves this challenge with roughly one minute of training instead of the hours that loss-function hyperparameter tuning can take. It does so by performing a sequence of sub-tasks: the first sub-task learns useful features about the underlying physics, and the model then trains on the target task to refine the predictions. FE-PINN is applied to three benchmarks: flow over a cylinder, 2D heat conduction, and an inverse problem of calculating the inlet velocity, achieving 15x, 2x, and 5x speed-ups, respectively. Another advantage of FE-PINN is that systematically reaching lower loss values becomes possible; in this study, a loss value near 1e-5 was reached, which is challenging for a vanilla PINN. FE-PINN also exhibits a smooth convergence process, which allows higher learning rates than a vanilla PINN. This framework can be used as a fast, accurate tool for solving a wide range of partial differential equations (PDEs) across various fields.
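    The sequential sub-task idea can be illustrated on a convex toy problem: phase one minimizes only a "feature" term, and phase two fine-tunes on the full loss starting from the pretrained parameters. The two quadratic terms below are stand-ins for the PDE-residual and boundary losses, not the paper's actual setup:

    ```python
    import numpy as np

    theta_star = np.array([1.0, 2.0])                # shared minimizer
    A = np.eye(2); b = A @ theta_star                # stand-in "PDE residual" term
    C = np.array([[1.0, 1.0]]); d = C @ theta_star   # stand-in "boundary" term

    def grad_pde(t):
        return 2.0 * A.T @ (A @ t - b)

    def grad_full(t):
        return grad_pde(t) + 2.0 * C.T @ (C @ t - d)

    def descend(t, grad, steps, lr=0.1):
        for _ in range(steps):
            t = t - lr * grad(t)
        return t

    theta = np.zeros(2)
    theta = descend(theta, grad_pde, 200)    # phase 1: feature sub-task only
    theta = descend(theta, grad_full, 200)   # phase 2: full target loss

    full_loss = np.sum((A @ theta - b) ** 2) + np.sum((C @ theta - d) ** 2)
    print(full_loss)  # near zero
    ```

    The point of the staging is that phase one already places the parameters in a good basin, so phase two converges quickly even though the full loss mixes terms of different scales.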

    Learning Generic Solutions for Multiphase Transport in Porous Media via the Flux Functions Operator

    Traditional numerical schemes for simulating fluid flow and transport in porous media can be computationally expensive. Advances in machine learning for scientific computing have the potential to speed up simulation in many scientific and engineering fields. DeepONet has recently emerged as a powerful tool for accelerating the solution of partial differential equations (PDEs) by learning the operators (mappings between function spaces) of PDEs. In this work, we learn the mapping between the space of flux functions of the Buckley-Leverett PDE and the space of solutions (saturations). We use Physics-Informed DeepONets (PI-DeepONets) to achieve this mapping without any paired input-output observations, except for a set of given initial or boundary conditions, thus eliminating the expensive data-generation process. By leveraging the underlying physical laws via soft penalty constraints during model training, in a manner similar to Physics-Informed Neural Networks (PINNs), and a unique deep neural network architecture, the proposed PI-DeepONet model can accurately predict the solution for any type of flux function (concave, convex, or non-convex) while achieving up to four orders of magnitude improvement in speed over traditional numerical solvers. Moreover, the trained PI-DeepONet model demonstrates excellent generalization, rendering it a promising tool for accelerating the solution of transport problems in porous media.
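    The flux functions referred to here are fractional-flow curves. The standard Buckley-Leverett form, with an assumed mobility ratio M, illustrates the non-convex case:

    ```python
    import numpy as np

    def fractional_flow(s, M=2.0):
        """Buckley-Leverett fractional flow f(s) = s^2 / (s^2 + M (1-s)^2).

        s is the water saturation in [0, 1]; M is the (assumed) mobility
        ratio. The curve is S-shaped (non-convex), which is why the
        Riemann solution combines a shock with a rarefaction wave.
        """
        return s**2 / (s**2 + M * (1.0 - s)**2)

    s = np.linspace(0.0, 1.0, 101)
    f = fractional_flow(s)
    print(f[0], f[-1])  # 0.0 at s=0, 1.0 at s=1
    ```

    Varying M (or replacing the formula with a concave or convex curve) generates the family of flux functions the operator is trained to map to saturation solutions.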

    FE-PINN Optimization

    This research enhances a novel finite element physics-informed neural network (FE-PINN) framework in order to improve efficiency and results. The enhancements include tuning hyperparameters and considering a new methodology for constructing the model architecture. The study achieved near-convergence of model predictions to the actual data and successfully incorporated finite element discretization into a neural network model.