
    LNO: Laplace Neural Operator for Solving Differential Equations

    We introduce the Laplace neural operator (LNO), which leverages the Laplace transform to decompose the input space. Unlike the Fourier neural operator (FNO), LNO can handle non-periodic signals, account for transient responses, and exhibit exponential convergence. LNO incorporates the pole-residue relationship between the input and the output space, enabling greater interpretability and improved generalization ability. Herein, we demonstrate the superior approximation accuracy of a single Laplace layer in LNO over four Fourier modules in FNO for the solutions of three ODEs (Duffing oscillator, driven gravity pendulum, and Lorenz system) and three PDEs (Euler-Bernoulli beam, diffusion equation, and reaction-diffusion system). Notably, LNO outperforms FNO in capturing transient responses in undamped scenarios. For the linear Euler-Bernoulli beam and diffusion equation, LNO's exact representation of the pole-residue formulation yields significantly better results than FNO. For the nonlinear reaction-diffusion system, LNO's errors are smaller than those of FNO, demonstrating the effectiveness of using system poles and residues as network parameters for operator learning. Overall, our results suggest that LNO represents a promising new approach for learning neural operators that map functions between infinite-dimensional spaces.
    Comment: 18 pages, 8 figures, 2 tables
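    The pole-residue relationship at the core of LNO can be illustrated for a linear system. The following is a minimal sketch, not the authors' implementation (the function name and variables are illustrative): for a transfer function in pole-residue form H(s) = Σ_n β_n/(s − μ_n) driven by an input f(t) = Σ_ℓ α_ℓ exp(iω_ℓ t), partial fractions split the response into transient terms at the system poles μ_n and steady-state terms at the input frequencies ω_ℓ:

    ```python
    import numpy as np

    def pole_residue_response(alpha, omega, beta, mu, t):
        """Response of a linear system H(s) = sum_n beta_n / (s - mu_n)
        to the input f(t) = sum_l alpha_l * exp(i * omega_l * t),
        with zero initial conditions."""
        alpha, omega = np.asarray(alpha, complex), np.asarray(omega, float)
        beta, mu = np.asarray(beta, complex), np.asarray(mu, complex)
        y = 0j
        for b, m in zip(beta, mu):
            # coupling coefficient of pole m with each input frequency
            c = b * alpha / (m - 1j * omega)
            y += np.sum(c) * np.exp(m * t)           # transient term at the pole
            y -= np.sum(c * np.exp(1j * omega * t))  # steady-state term at the input frequencies
        return y
    ```

    For H(s) = 1/(s + 1) (a first-order lag) and a unit constant input, this reproduces the textbook response y(t) = 1 − e^(−t), including the transient part that a purely Fourier representation would miss.
    
    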

    A robust monolithic solver for phase-field fracture integrated with fracture energy based arc-length method and under-relaxation

    The phase-field fracture free-energy functional is non-convex with respect to the displacement and the phase field. This results in poor performance of conventional monolithic solvers such as the Newton-Raphson method. To circumvent this issue, researchers opt for alternate minimization (staggered) solvers. Staggered solvers are robust for phase-field fracture simulations because the displacement and phase-field sub-problems are convex in nature. Nevertheless, the staggered solver requires a very large number of iterations (of the order of thousands) to converge. In this work, a robust monolithic solver is presented for the phase-field fracture problem. The solver adopts a fracture energy-based arc-length method and an adaptive under-relaxation scheme. The arc-length method enables the simulation to overcome critical points (snap-back and snap-through instabilities) during the loading of a specimen. The under-relaxation scheme stabilizes the solver by preventing divergence due to an ill-behaved stiffness matrix. The efficiency of the proposed solver is further amplified with an adaptive mesh refinement scheme based on PHT-splines within the framework of isogeometric analysis. The numerical examples presented in the manuscript demonstrate the efficacy of the solver. All codes and datasets accompanying this work will be made available on GitHub (https://github.com/rbharali/IGAFrac).
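    The under-relaxation idea can be sketched in isolation. This is a generic damped Newton iteration (not the authors' arc-length solver; names and the halving strategy are illustrative): whenever the full Newton update would increase the residual norm, the step length is reduced, which is what keeps the iteration from diverging on an ill-behaved tangent stiffness:

    ```python
    import numpy as np

    def underrelaxed_newton(residual, jacobian, u0, tol=1e-10, max_iter=50, eta_min=1e-3):
        """Newton iteration with adaptive under-relaxation: the step length
        eta is halved whenever the full update would worsen the residual."""
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            du = np.linalg.solve(jacobian(u), -r)
            eta = 1.0
            # damp the step while it increases the residual norm
            while eta > eta_min and np.linalg.norm(residual(u + eta * du)) >= np.linalg.norm(r):
                eta *= 0.5
            u = u + eta * du
        return u
    ```

    On a well-behaved problem the guard never triggers and the scheme reduces to plain Newton; near critical points the damping trades speed for robustness.
    
    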

    Physics-Informed Deep Neural Operator Networks

    Standard neural networks can approximate general nonlinear operators, represented either explicitly by a combination of mathematical operators, e.g., in an advection-diffusion-reaction partial differential equation, or simply as a black box, e.g., a system-of-systems. The first neural operator was the deep operator network (DeepONet), proposed in 2019 based on rigorous approximation theory. Since then, a few other less general operators have been published, e.g., based on graph neural networks or Fourier transforms. For black-box systems, training of neural operators is data-driven only, but if the governing equations are known, they can be incorporated into the loss function during training to develop physics-informed neural operators. Neural operators can be used as surrogates in design problems, uncertainty quantification, autonomous systems, and almost any application requiring real-time inference. Moreover, independently pre-trained DeepONets can be used as components of a complex multi-physics system by coupling them together with relatively light training. Here, we present a review of DeepONet, the Fourier neural operator, and the graph neural operator, as well as appropriate extensions with feature expansions, and highlight their usefulness in diverse applications in computational mechanics, including porous media, fluid mechanics, and solid mechanics.
    Comment: 33 pages, 14 figures. arXiv admin note: text overlap with arXiv:2204.00997 by other authors
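    The DeepONet architecture reviewed here reduces to an inner product of two sub-networks. Below is a minimal forward-pass sketch with untrained random weights, under assumed layer sizes (this is an illustration of the branch-trunk structure, not the reference implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_params(sizes):
        # random (untrained) weights and zero biases for each layer
        return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
                for m, n in zip(sizes[:-1], sizes[1:])]

    def mlp(params, x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x

    def deeponet(branch, trunk, u_sensors, y):
        """G(u)(y) ~ <branch(u), trunk(y)>: the branch net encodes the input
        function sampled at fixed sensor points, the trunk net encodes the
        query location, and their dot product is the operator output."""
        return mlp(branch, u_sensors) @ mlp(trunk, y)

    branch = mlp_params([20, 64, 32])  # 20 sensor values -> 32 latent coefficients
    trunk = mlp_params([1, 64, 32])    # 1-D query coordinate -> 32 latent basis values
    out = deeponet(branch, trunk, rng.standard_normal(20), np.array([0.5]))
    ```

    The trunk network plays the role of a learned basis over the output domain; the branch network supplies the coefficients, which is what makes the construction an operator rather than a fixed-grid function approximator.
    
    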

    Deep neural operators can predict the real-time response of floating offshore structures under irregular waves

    The use of neural operators in a digital twin model of an offshore floating structure can provide a paradigm shift in structural response prediction and health monitoring, providing valuable information for real-time control. In this work, the performance of three neural operators is evaluated, namely, the deep operator network (DeepONet), the Fourier neural operator (FNO), and the wavelet neural operator (WNO). We investigate the effectiveness of the operators in accurately capturing the responses of a floating structure under six different sea state codes (3-8) based on the wave characteristics described by the World Meteorological Organization (WMO). The results demonstrate that these high-precision neural operators can deliver structural responses more efficiently, up to two orders of magnitude faster than a dynamic analysis using conventional numerical solvers. Additionally, compared to gated recurrent units (GRUs), a commonly used recurrent neural network for time-series estimation, neural operators are both more accurate and efficient, especially in situations with limited data availability. To further enhance the accuracy, novel extensions, such as wavelet-DeepONet and self-adaptive WNO, are proposed. Taken together, our study shows that FNO outperforms all other operators for approximating the mapping of one input functional space to the output space, as well as for responses that have a small bandwidth of the frequency spectrum, whereas for learning the mapping of multiple functions in the input space to the output space, as well as for capturing responses within a large frequency spectrum, DeepONet with historical states provides the highest accuracy.
    Comment: 27 pages, 24 figures, 11 tables

    Neural Operator Learning for Long-Time Integration in Dynamical Systems with Recurrent Neural Networks

    Deep neural networks are an attractive alternative for simulating complex dynamical systems: in comparison to traditional scientific computing methods, they offer reduced computational costs during inference and can be trained directly from observational data. Existing methods, however, cannot extrapolate accurately and are prone to error accumulation in long-time integration. Herein, we address this issue by combining neural operators with recurrent neural networks to construct a novel and effective architecture, resulting in superior accuracy compared to the state-of-the-art. The new hybrid model is based on operator learning while offering a recurrent structure to capture temporal dependencies. The integrated framework is shown to stabilize the solution and reduce error accumulation for both interpolation and extrapolation of the Korteweg-de Vries equation.
    Comment: 12 pages, 5 figures
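    The autoregressive rollout in which errors accumulate, and the recurrent state threaded through it, can be sketched generically. The one-step operator and RNN cell below are stand-ins for the learned components, assuming only their call signatures:

    ```python
    import numpy as np

    def recurrent_rollout(op_step, cell, u0, h0, n_steps):
        """Roll a one-step operator forward in time, threading a recurrent
        hidden state h through the trajectory so that each step can use
        temporal context rather than only the previous state."""
        u, h = np.asarray(u0, float), np.asarray(h0, float)
        traj = [u]
        for _ in range(n_steps):
            h = cell(u, h)       # update the memory of past states
            u = op_step(u, h)    # operator advances the solution one step
            traj.append(u)
        return np.stack(traj)
    ```

    In a plain operator rollout the hidden state is absent and each prediction is fed back unconditioned, which is exactly where long-horizon error accumulation enters.
    
    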

    Deep transfer learning for partial differential equations under conditional shift with DeepONet

    Traditional machine learning algorithms are designed to learn in isolation, i.e., to address single tasks. The core idea of transfer learning (TL) is that knowledge gained in learning to perform one task (source) can be leveraged to improve learning performance in a related, but different, task (target). TL leverages and transfers previously acquired knowledge to address the expense of data acquisition and labeling, potential computational power limitations, and dataset distribution mismatches. Although significant progress has been made in TL for image processing, speech recognition, and natural language processing (for classification and regression), little work has been done in the field of scientific machine learning for functional regression and uncertainty quantification in partial differential equations. In this work, we propose a novel TL framework for task-specific learning under conditional shift with a deep operator network (DeepONet). Inspired by conditional embedding operator theory, we measure the statistical distance between the source domain and the target feature domain by embedding conditional distributions onto a reproducing kernel Hilbert space. Task-specific operator learning is accomplished by fine-tuning task-specific layers of the target DeepONet using a hybrid loss function that allows for the matching of individual target samples while also preserving the global properties of the conditional distribution of target data. We demonstrate the advantages of our approach for various TL scenarios involving nonlinear PDEs under conditional shift. Our results include geometry domain adaptation and show that the proposed TL framework enables fast and efficient multi-task operator learning, despite significant differences between the source and target domains.
    Comment: 19 pages, 3 figures
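    The RKHS distance between domains can be illustrated with its simpler marginal version, the squared maximum mean discrepancy (MMD) with an RBF kernel. Note this sketch omits the conditional embedding the paper actually uses; names and the kernel bandwidth are illustrative:

    ```python
    import numpy as np

    def rbf_kernel(X, Y, sigma=1.0):
        # pairwise squared distances, then Gaussian kernel values
        d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    def mmd2(X, Y, sigma=1.0):
        """Squared maximum mean discrepancy: the RKHS distance between the
        mean embeddings of two sample sets X and Y (biased V-statistic)."""
        return (rbf_kernel(X, X, sigma).mean() + rbf_kernel(Y, Y, sigma).mean()
                - 2.0 * rbf_kernel(X, Y, sigma).mean())
    ```

    The quantity is zero when the two sample sets coincide and grows as the distributions separate, which is what makes it usable as a domain-discrepancy penalty inside a hybrid fine-tuning loss.
    
    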

    Developing a cost-effective emulator for groundwater flow modeling using deep neural operators

    Current groundwater models face a significant implementation challenge due to their heavy computational burden. To overcome this, our work proposes a cost-effective emulator that efficiently and accurately forecasts the impact of abstraction in an aquifer. Our approach uses a deep operator network (DeepONet) to learn operators that map between infinite-dimensional function spaces via deep neural networks. The goal is to infer the distribution of hydraulic head in a confined aquifer in the presence of a pumping well. We successfully tested the DeepONet on four problems, including two forward problems, an inverse analysis, and a nonlinear system. Additionally, we propose a novel extension of the DeepONet-based architecture to generate accurate predictions for hydraulic conductivity fields and pumping well locations that are unseen during training. Our emulator's predictions match the target data with excellent performance, demonstrating that the proposed model can act as an efficient and fast tool to support a range of tasks that require repetitive forward or inverse numerical simulations of groundwater flow problems. Overall, our work provides a promising avenue for developing cost-effective and accurate groundwater models.

    Neural operator learning of heterogeneous mechanobiological insults contributing to aortic aneurysms

    Thoracic aortic aneurysm (TAA) is a localized dilatation of the aorta that can lead to life-threatening dissection or rupture. In vivo assessments of TAA progression are largely limited to measurements of aneurysm size and growth rate. There is promise, however, that computational modelling of the evolving biomechanics of the aorta could predict future geometry and properties from initiating mechanobiological insults. We present an integrated framework to train a deep operator network (DeepONet)-based surrogate model to identify TAA contributing factors using synthetic finite-element-based datasets. For training, we employ a constrained mixture model of aortic growth and remodelling to generate maps of local aortic dilatation and distensibility for multiple TAA risk factors. We evaluate the performance of the surrogate model for insult distributions varying from fusiform (analytically defined) to complex (randomly generated). We propose two frameworks, one trained on sparse information and one on full-field greyscale images, to gain insight into a preferred neural operator-based approach. We show that this continuous learning approach can predict the patient-specific insult profile associated with any given dilatation and distensibility map with high accuracy, particularly when based on full-field images. Our findings demonstrate the feasibility of applying DeepONet to support transfer learning of patient-specific inputs to predict TAA progression.
    This work was supported by the National Institutes of Health (grant nos. P01 HL134605 and U01 HL142518).
    Goswami, S.; Li, D. S.; Rego, B. V.; Latorre, M.; Humphrey, J. D.; Karniadakis, G. E. (2022). Neural operator learning of heterogeneous mechanobiological insults contributing to aortic aneurysms. Journal of The Royal Society Interface. 19(193):1-16. https://doi.org/10.1098/rsif.2022.0410