3,051 research outputs found
A Lagrangian Dual-based Theory-guided Deep Neural Network
The theory-guided neural network (TgNN) is a method that improves
the effectiveness and efficiency of neural network architectures by
incorporating scientific knowledge or physical information. Despite its great
success, the theory-guided (deep) neural network has difficulty
maintaining a tradeoff between training data and domain knowledge during the
training process. In this paper, the Lagrangian dual-based TgNN (TgNN-LD) is
proposed to improve the effectiveness of TgNN. We convert the original loss
function into a constrained form with fewer items, in which partial
differential equations (PDEs), engineering controls (ECs), and expert knowledge
(EK) are regarded as constraints, with one Lagrangian variable per constraint.
These Lagrangian variables are incorporated to achieve an equitable tradeoff
between the observation data and the corresponding constraints, in order to
improve prediction accuracy and to save the time and computational resources
otherwise spent on ad-hoc weight adjustment. To investigate the performance of
the proposed method, it is compared with the original TgNN model, whose loss
weights are tuned by an ad-hoc procedure, on a subsurface flow problem; the L2
error, R-squared (R2), and computational time of both models are analyzed.
Experimental results demonstrate the superiority of the Lagrangian dual-based TgNN.
Comment: 12 pages, 10 figures
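The core idea of the abstract above — treating PDE, EC, and EK losses as constraints whose Lagrangian multipliers are learned by dual ascent instead of hand-tuned weights — can be illustrated with a minimal toy sketch. This is not the paper's TgNN-LD implementation: the 1-D objective, the single "PDE" residual, and the names `theta`, `lam`, and `lr_dual` are all illustrative assumptions.

```python
# Toy Lagrangian-dual tradeoff (illustrative, not the paper's model):
# the data term pulls theta toward 2, while a stand-in "PDE" residual
# (theta - 1)^2 acts as a constraint whose multiplier `lam` is updated
# by dual ascent rather than fixed by an ad-hoc procedure.
theta, lam = 0.0, 0.0
lr, lr_dual = 0.1, 0.2

for _ in range(300):
    residual = (theta - 1.0) ** 2                        # constraint violation
    grad = 2 * (theta - 2.0) + lam * 2 * (theta - 1.0)   # primal gradient
    theta -= lr * grad                                   # primal descent step
    lam += lr_dual * residual                            # dual ascent: multiplier
                                                         # grows while the
                                                         # constraint is violated

print(theta, lam)
```

As the multiplier grows, the solution is pushed from the purely data-fitting optimum toward constraint satisfaction, which is the balancing behavior the hand-tuned weights of the original TgNN would otherwise have to achieve.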
Transfer learning-based physics-informed convolutional neural network for simulating flow in porous media with time-varying controls
A physics-informed convolutional neural network (PICNN) is proposed to simulate
two-phase flow in porous media with time-varying well controls. While most
PICNNs in the existing literature learn a parameter-to-state mapping, our
proposed network parameterizes the solution with time-varying controls to
establish a control-to-state regression. First, a finite-volume scheme is
adopted to discretize the flow equations and formulate a loss function that
respects mass-conservation laws. Neumann boundary conditions are seamlessly
incorporated into the semi-discretized equations, so no additional loss term is needed.
The network architecture comprises two parallel U-Net structures, with well
controls as the network inputs and the system states as the outputs. To capture
the time-dependent relationship between inputs and outputs, the network is
designed to mimic the discretized state-space equations. We train the network
progressively for every timestep, enabling it to simultaneously predict oil
pressure and water saturation at each timestep. After training the network for
one timestep, we leverage transfer learning techniques to expedite the training
process for subsequent timesteps. The proposed model is used to simulate
oil-water porous-flow scenarios with varying numbers of reservoir gridblocks,
and its computational efficiency and accuracy are compared against those of
corresponding numerical approaches. The results underscore the potential of
PICNNs to simulate systems with numerous grid blocks effectively, as
computation time does not scale with model dimensionality. With the proposed
control-to-state architecture, we assess the temporal error using 10 testing
controls that vary in magnitude and another 10 with higher alternation
frequency. Our observations suggest the need for a more robust and reliable
model when dealing with controls that exhibit significant variations in
magnitude or frequency.
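The progressive, per-timestep training with warm starts described in this abstract can be sketched with a toy linear surrogate standing in for the U-Net. Everything here is an illustrative assumption — the scalar state-space system, the linear model, and the function names are not from the paper; only the pattern (train one timestep, then initialize the next timestep's model from the previous weights) mirrors the described transfer-learning scheme.

```python
import numpy as np

# Toy stand-in for the discretized state-space system: s_{t+1} = A s_t + B u_t.
# A, B and the linear surrogate below are hypothetical, not the paper's model.
rng = np.random.default_rng(0)
A, B = 0.9, 0.5

def train_step_model(W, controls, states_prev, states_next, lr=0.05, iters=200):
    """Fit s_next ~ W[0]*s_prev + W[1]*u by gradient descent (a stand-in for
    one U-Net training pass); W arrives warm-started from the prior timestep."""
    W = W.copy()
    for _ in range(iters):
        pred = W[0] * states_prev + W[1] * controls
        err = pred - states_next
        W[0] -= lr * np.mean(2 * err * states_prev)
        W[1] -= lr * np.mean(2 * err * controls)
    return W

# Progressive training over timesteps with transfer of weights:
W = np.zeros(2)                      # cold start only for the first timestep
states = rng.normal(size=100)
for t in range(3):
    controls = rng.normal(size=100)  # time-varying well controls (toy)
    next_states = A * states + B * controls
    W = train_step_model(W, controls, states, next_states)  # warm start
    states = next_states

print(W)  # converges toward the true coefficients [A, B]
```

Because each timestep's model is initialized from the previous one rather than from scratch, later timesteps start near a good solution, which is the mechanism the abstract credits for expediting training.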
- …