30 research outputs found
Stiff-PINN: Physics-Informed Neural Network for Stiff Chemical Kinetics
The recently developed physics-informed neural network (PINN) has achieved
success in many science and engineering disciplines by encoding physical laws
into the loss function of the neural network, so that the network not only
conforms to the measurements and to the initial and boundary conditions but
also satisfies the governing equations. This work first investigates the
performance of PINNs in solving stiff chemical kinetic problems governed by
stiff ordinary differential equations (ODEs). The results elucidate the
challenges of applying PINNs to stiff ODE systems. Consequently, we employ the
Quasi-Steady-State Assumption (QSSA) to reduce the stiffness of the ODE
systems, after which the PINN can be successfully applied to the converted
non-stiff or mildly stiff systems. The results therefore suggest that
stiffness is the major reason for the failure of the regular PINN in the
studied stiff chemical kinetic systems. The developed Stiff-PINN approach,
which uses QSSA to enable PINNs to solve stiff chemical kinetics, opens the
possibility of applying PINNs to various reaction-diffusion systems involving
stiff dynamics.
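As a concrete illustration of the stiffness-reduction step, here is a minimal sketch of the QSSA on a toy A → B → C mechanism (our own example, not one of the paper's kinetic systems): the fast intermediate B is replaced by an algebraic relation, leaving a non-stiff model.

```python
import numpy as np

# Toy illustration of the QSSA stiffness reduction (our own example, not one
# of the paper's kinetic systems): A -> B -> C with k2 >> k1, so the
# intermediate B is fast and the full ODE system is stiff.
k1, k2 = 1.0, 1000.0
A0 = 1.0

def integrate_full(t_end, dt):
    """Explicit Euler on the full stiff system (stable only for dt < 2/k2)."""
    A, B, C = A0, 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        dA = -k1 * A
        dB = k1 * A - k2 * B
        dC = k2 * B
        A, B, C = A + dt * dA, B + dt * dB, C + dt * dC
    return A, B, C

t_end = 0.5
A_full, B_full, _ = integrate_full(t_end, dt=1e-5)

# QSSA: set dB/dt ~ 0, giving the algebraic relation B ~ k1*A/k2. The reduced
# model for A alone is non-stiff (here it even integrates analytically).
A_qssa = A0 * np.exp(-k1 * t_end)
B_qssa = k1 * A_qssa / k2

print(B_full, B_qssa)  # after the fast initial transient the two agree closely
```

The reduced model can be stepped with timesteps limited by k1 rather than k2, which is the property that makes it tractable for a PINN.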
Transfer learning-based physics-informed convolutional neural network for simulating flow in porous media with time-varying controls
A physics-informed convolutional neural network (PICNN) is proposed to
simulate two-phase flow in porous media with time-varying well controls.
While most PICNNs in the existing literature learn a parameter-to-state
mapping, our proposed network parameterizes the solution with time-varying
controls to establish a control-to-state regression. First, a finite-volume
scheme is adopted to discretize the flow equations and to formulate a loss
function that respects mass-conservation laws. Neumann boundary conditions
are seamlessly incorporated into the semi-discretized equations, so no
additional loss term is needed. The network architecture comprises two
parallel U-Net structures, with well controls as inputs and system states as
outputs. To capture the time-dependent relationship between inputs and
outputs, the network is designed to mimic discretized state-space equations.
We train the network progressively for every timestep, enabling it to
simultaneously predict oil pressure and water saturation at each timestep.
After training the network for one timestep, we leverage transfer learning to
expedite training for subsequent timesteps. The proposed model is used to
simulate oil-water porous flow scenarios with varying numbers of reservoir
grid blocks, and its computational efficiency and accuracy are compared
against corresponding numerical approaches. The results underscore the
potential of the PICNN for effectively simulating systems with numerous grid
blocks, since computation time does not scale with model dimensionality.
Using the proposed control-to-state architecture, we assess the temporal
error with 10 testing controls that vary in magnitude and another 10 with
higher alternation frequency. Our observations suggest the need for a more
robust and reliable model when dealing with controls that exhibit significant
variations in magnitude or frequency.
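To illustrate the kind of loss construction described above, here is a minimal 1D sketch (our construction, not the paper's two-phase formulation) of how a finite-volume discretization yields a conservation-respecting residual that can serve as a physics loss:

```python
import numpy as np

# Minimal 1D sketch (our construction, not the paper's two-phase system) of a
# finite-volume residual used as a physics loss. Steady diffusion:
# d/dx(du/dx) + f = 0 on [0, 1] with f = 2, exact solution u(x) = x*(1 - x).
n = 50
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx          # cell centers
f = 2.0 * np.ones(n)

def fv_loss(u):
    """Mean-squared flux imbalance over interior cells (mass conservation)."""
    flux = np.diff(u) / dx                          # fluxes at interior faces
    residual = (flux[1:] - flux[:-1]) / dx + f[1:-1]
    return np.mean(residual ** 2)

u_exact = x * (1.0 - x)
loss_exact = fv_loss(u_exact)                       # ~0: conservation holds
loss_wrong = fv_loss(u_exact + 0.01 * np.sin(np.pi * x))
print(loss_exact, loss_wrong)
```

Because the residual is written as a difference of face fluxes, whatever leaves one cell enters its neighbor, so minimizing this loss enforces discrete mass conservation by construction.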
Predicting nonlinear dynamics of optical solitons in optical fiber via the SCPINN
The strongly-constrained physics-informed neural network (SCPINN) is proposed
by embedding compound-derivative information into the soft constraints of the
physics-informed neural network (PINN). It is used to predict the nonlinear
dynamics and formation processes of bright and dark picosecond optical
solitons and femtosecond soliton molecules in single-mode fiber, and to
reveal the variation of physical quantities, including the energy, amplitude,
spectrum, and phase of pulses, during soliton transmission. An adaptive
weight is introduced to accelerate the convergence of the loss function in
this new neural network. Compared with the PINN, the accuracy of the SCPINN
in predicting soliton dynamics is improved by a factor of 5-11. The SCPINN is
therefore a forward-looking method for the modeling and analysis of soliton
dynamics in fiber.
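For intuition about the PDE residual that such a soft constraint penalizes, the following sketch numerically verifies that the classic bright soliton satisfies the nonlinear Schrödinger equation; this is a generic PINN-style residual check, not the SCPINN compound-derivative term itself:

```python
import numpy as np

# Generic PINN-style residual check for soliton dynamics (not the SCPINN
# compound-derivative term itself): the nonlinear Schroedinger equation
# i*u_t + 0.5*u_xx + |u|^2 * u = 0 admits the bright soliton
# u(x, t) = sech(x) * exp(i*t/2); its residual should vanish.
dx, dt = 1e-3, 1e-3
x = np.arange(-5.0, 5.0, dx)
t0 = 0.3

def u(x, t):
    return np.exp(1j * t / 2.0) / np.cosh(x)

u0 = u(x, t0)
u_t = (u(x, t0 + dt) - u(x, t0 - dt)) / (2 * dt)        # central differences
u_xx = (u(x + dx, t0) - 2 * u0 + u(x - dx, t0)) / dx**2

residual = 1j * u_t + 0.5 * u_xx + np.abs(u0) ** 2 * u0
print(np.max(np.abs(residual)))  # second-order small in dx and dt
```

A PINN-type method replaces the closed-form `u` with a network and drives this residual (evaluated via automatic differentiation) to zero through the loss.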
A nonlocal physics-informed deep learning framework using the peridynamic differential operator
The Physics-Informed Neural Network (PINN) framework introduced recently
incorporates physics into deep learning, and offers a promising avenue for the
solution of partial differential equations (PDEs) as well as identification of
the equation parameters. The performance of existing PINN approaches, however,
may degrade in the presence of sharp gradients, as a result of the inability of
the network to capture the solution behavior globally. We posit that this
shortcoming may be remedied by introducing long-range (nonlocal) interactions
into the network's input, in addition to the short-range (local) space and time
variables. Following this ansatz, here we develop a nonlocal PINN approach
using the Peridynamic Differential Operator (PDDO), a numerical method which
incorporates long-range interactions and removes spatial derivatives in the
governing equations. Because the PDDO functions can be readily incorporated in
the neural network architecture, the nonlocality does not degrade the
performance of modern deep-learning algorithms. We apply nonlocal PDDO-PINN to
the solution and identification of material parameters in solid mechanics and,
specifically, to elastoplastic deformation in a domain subjected to indentation
by a rigid punch, for which the mixed displacement-traction boundary condition
leads to localized deformation and sharp gradients in the solution. We document
the superior behavior of nonlocal PINN with respect to local PINN in both
solution accuracy and parameter inference, illustrating its potential for
simulation and discovery of partial differential equations whose solution
develops sharp gradients.
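A toy 1D version of the nonlocal-derivative idea can be sketched as follows; the uniform weights are our own simple construction for illustration, not the full PDDO machinery:

```python
import numpy as np

# Toy 1D version of the nonlocal-derivative idea: estimate u''(x) from a
# weighted sum over a finite horizon of neighbors instead of a local stencil.
# The uniform weights are our own simple construction (not the full PDDO):
# symmetric bonds cancel odd Taylor terms, and sum(w * xi**2) / 2 = 1.
h = 0.1
xi = np.array([-2 * h, -h, h, 2 * h])      # bonds within the horizon
w = np.full(4, 1.0 / (5.0 * h ** 2))       # makes sum(w * xi**2) / 2 == 1

def nonlocal_d2(u, x0):
    """Nonlocal second-derivative estimate at x0."""
    return np.sum(w * (u(x0 + xi) - u(x0)))

quad = lambda x: x ** 2
print(nonlocal_d2(quad, 0.3))  # exact for quadratics (up to rounding): 2
```

Because the estimate draws on a whole neighborhood rather than an infinitesimal one, such operators remain well defined across sharp gradients, which is the property the nonlocal PINN exploits.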
Robust Learning of Physics Informed Neural Networks
Physics-informed Neural Networks (PINNs) have been shown to be effective in
solving partial differential equations by capturing physics-induced
constraints as part of the training loss function. This paper shows that a
PINN can be sensitive to errors in training data and can overfit,
dynamically propagating these errors over the solution domain of the PDE. It
also shows how physical regularizations based on continuity criteria and
conservation laws fail to address this issue and instead introduce problems
of their own, causing the deep network to converge to a physics-obeying local
minimum instead of the global minimum. We introduce Gaussian-process (GP)
based smoothing that recovers the performance of a PINN and promises a robust
architecture against noise and errors in measurements. Additionally, we
illustrate an inexpensive method of quantifying the evolution of uncertainty
based on the variance estimates of GPs on boundary data. Robust PINN
performance is also shown to be achievable through sparse sets of inducing
points based on sparse GPs. We demonstrate the performance of our proposed
methods and compare the results against existing benchmark models in the
literature for the time-dependent Schrödinger and Burgers' equations.
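The GP-smoothing step can be sketched with a plain RBF-kernel GP posterior mean; the length scale and noise level below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of GP-based smoothing of noisy measurements before they reach a PINN
# loss. Plain RBF-kernel GP posterior mean; the length scale and noise level
# below are illustrative choices, not values from the paper.
rng = np.random.default_rng(0)

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x = np.linspace(0.0, 2.0 * np.pi, 40)
f_true = np.sin(x)
sigma = 0.3
y_noisy = f_true + sigma * rng.normal(size=x.size)

# Posterior mean: K(X, X) @ alpha with alpha = (K + sigma^2 I)^{-1} y.
K = rbf(x, x) + sigma ** 2 * np.eye(x.size)
alpha = np.linalg.solve(K, y_noisy)
y_smooth = rbf(x, x) @ alpha

rmse_noisy = np.sqrt(np.mean((y_noisy - f_true) ** 2))
rmse_smooth = np.sqrt(np.mean((y_smooth - f_true) ** 2))
print(rmse_noisy, rmse_smooth)   # smoothing should reduce the error
```

The same posterior also yields a variance at each point, which is the quantity the paper uses to track the evolution of uncertainty from the boundary data.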
Physics-Informed Polynomial Chaos Expansions
Surrogate modeling of costly mathematical models representing physical
systems is challenging since it is typically not possible to create a large
experimental design. Thus, it is beneficial to constrain the approximation to
adhere to the known physics of the model. This paper presents a novel
methodology for the construction of physics-informed polynomial chaos
expansions (PCE) that combines the conventional experimental design with
additional constraints from the physics of the model. Physical constraints
investigated in this paper are represented by a set of differential equations
and specified boundary conditions. A computationally efficient means for
construction of physically constrained PCE is proposed and compared to standard
sparse PCE. It is shown that the proposed algorithms lead to superior accuracy
of the approximation and do not add significant computational burden.
Although the main purpose of the proposed method lies in combining data and
physical constraints, we show that physically constrained PCEs can be
constructed from differential equations and boundary conditions alone without
requiring evaluations of the original model. We further show that the
constrained PCEs can be easily applied for uncertainty quantification through
analytical post-processing of a reduced PCE filtering out the influence of all
deterministic space-time variables. Several deterministic examples of
increasing complexity are provided and the proposed method is applied for
uncertainty quantification.
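The idea of constructing a surrogate from physics alone can be sketched with a plain polynomial least-squares stand-in for a PCE (our simplification): fit coefficients to an ODE residual plus a boundary condition, with no evaluations of the original model:

```python
import numpy as np

# Simplified stand-in for a physics-constrained expansion: fit polynomial
# coefficients from the physics alone -- the ODE u' + u = 0 on [0, 1] with
# u(0) = 1 (solution exp(-x)) -- without evaluating the original model.
deg = 8
x = np.linspace(0.0, 1.0, 50)
P = np.vander(x, deg + 1, increasing=True)             # basis x^0 .. x^deg
dP = np.hstack([np.zeros((x.size, 1)),
                P[:, :-1] * np.arange(1, deg + 1)])    # derivative of basis

A = np.vstack([dP + P,                                 # ODE residual rows
               100.0 * P[:1]])                         # weighted BC row u(0)=1
b = np.concatenate([np.zeros(x.size), [100.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u_hat = P @ c
err = np.max(np.abs(u_hat - np.exp(-x)))
print(err)   # small, even though no model evaluations were used
```

In a full PCE the basis would be orthogonal polynomials in the random inputs, and data-fit rows would be stacked alongside the physics rows; the least-squares structure is the same.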
Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
Multifidelity simulation methodologies are often used in an attempt to
judiciously combine low-fidelity and high-fidelity simulation results in an
accuracy-increasing, cost-saving way. Candidates for this approach are
simulation methodologies for which there are fidelity differences connected
with significant computational cost differences. Physics-informed Neural
Networks (PINNs) are candidates for these types of approaches due to the
significant difference in training times required when different fidelities
(expressed in terms of architecture width and depth as well as optimization
criteria) are employed. In this paper, we propose a particular multifidelity
approach applied to PINNs that exploits low-rank structure. We demonstrate that
width, depth, and optimization criteria can be used as parameters related to
model fidelity, and show numerical justification of cost differences in
training due to fidelity parameter choices. We test our multifidelity scheme on
various canonical forward PDE models that have been presented in the emerging
PINNs literature.
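A generic multifidelity combination (a simple linear-correction scheme chosen for illustration, not the paper's low-rank approach) can be sketched as:

```python
import numpy as np

# Generic multifidelity sketch (a simple linear correction y_hi ~ rho*y_lo +
# delta, our illustrative choice, not the paper's low-rank scheme): a cheap
# biased low-fidelity model plus a correction fitted from a few HF samples.
f_hi = lambda x: np.sin(2.0 * np.pi * x)               # "expensive" model
f_lo = lambda x: 0.8 * np.sin(2.0 * np.pi * x) + 0.1   # cheap, biased model

x_hf = np.linspace(0.0, 1.0, 6)                        # only 6 HF evaluations
A = np.column_stack([f_lo(x_hf), np.ones(x_hf.size)])
coef, *_ = np.linalg.lstsq(A, f_hi(x_hf), rcond=None)
rho, delta = coef

x_test = np.linspace(0.0, 1.0, 101)
err_lo = np.max(np.abs(f_lo(x_test) - f_hi(x_test)))
err_mf = np.max(np.abs(rho * f_lo(x_test) + delta - f_hi(x_test)))
print(err_lo, err_mf)   # the corrected model is far more accurate
```

For PINNs, the "low-fidelity model" would be a cheaply trained network (smaller width/depth or looser optimization) and the correction would be fitted from a few expensive high-fidelity runs.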
Adversarial Training for Physics-Informed Neural Networks
Physics-informed neural networks have shown great promise in solving partial
differential equations. However, due to insufficient robustness, vanilla
PINNs often face challenges when solving complex PDEs, especially those
involving multi-scale behaviors or solutions with sharp or oscillatory
characteristics. To address these issues, we propose an adversarial training
strategy for PINNs, termed AT-PINNs, based on the projected-gradient-descent
adversarial attack. AT-PINNs enhance the robustness of PINNs by fine-tuning
the model with adversarial samples, which can accurately identify model
failure locations and drive the model to focus on those regions during
training. AT-PINNs can also perform inference with temporal causality by
selecting the initial collocation points around temporal initial values. We
apply AT-PINNs to an elliptic equation with multi-scale coefficients, the
Poisson equation with multi-peak solutions, the Burgers equation with sharp
solutions, and the Allen-Cahn equation. The results demonstrate that AT-PINNs
can effectively locate and reduce failure regions. Moreover, AT-PINNs are
suitable for solving complex PDEs, since locating failure regions through
adversarial attacks is independent of the size of the failure regions or the
complexity of their distribution.
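The adversarial search for failure regions can be sketched with projected sign-gradient ascent on a stand-in residual (the sharp "failure spot" below is our own construction, not one of the paper's PDEs):

```python
import numpy as np

# Toy version of locating failure regions via a PGD-style attack: move
# collocation points uphill on the PDE residual. The "residual" below is a
# stand-in with one sharp failure spot at x = 0.7 (our own construction).
def residual(x):
    return np.exp(-((x - 0.7) / 0.05) ** 2)

def pgd_points(x, steps=20, alpha=0.005, eps=1e-4):
    """Projected sign-gradient ascent on the residual, clipped to [0, 1]."""
    for _ in range(steps):
        grad = (residual(x + eps) - residual(x - eps)) / (2.0 * eps)
        x = np.clip(x + alpha * np.sign(grad), 0.0, 1.0)
    return x

x0 = np.linspace(0.0, 1.0, 50)       # initial collocation points
x_adv = pgd_points(x0)
print(residual(x0).mean(), residual(x_adv).mean())
```

Because the ascent follows the residual's gradient, it concentrates samples on the failure spot regardless of how narrow that spot is, which mirrors the size-independence claim above.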