Bi-level Physics-Informed Neural Networks for PDE Constrained Optimization using Broyden's Hypergradients
Deep learning based approaches such as physics-informed neural networks (PINNs)
and DeepONets have shown promise for solving PDE constrained optimization
(PDECO) problems. However, existing methods are insufficient to handle PDE
constraints that have a complicated or nonlinear dependency on the optimization
targets. In this paper, we present a novel bi-level optimization framework to
resolve the challenge by decoupling the optimization of the targets and
constraints. For the inner loop optimization, we adopt PINNs to solve the PDE
constraints only. For the outer loop, we design a novel method by using
Broyden's method based on the Implicit Function Theorem (IFT), which is
efficient and accurate for approximating hypergradients. We further present
theoretical explanations and an error analysis of the hypergradient computation.
Extensive experiments on multiple large-scale and nonlinear PDE constrained
optimization problems demonstrate that our method achieves state-of-the-art
results compared with strong baselines.
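The hypergradient machinery the abstract describes can be illustrated on a toy quadratic inner problem. The sketch below is an illustrative assumption, not the paper's code: the inner PDE solve is replaced by a 2x2 least-squares problem, the outer gradient follows from the Implicit Function Theorem, and the inner Hessian system is solved with Broyden's method rather than an explicit inverse.

```python
# Minimal IFT hypergradient sketch for a bi-level problem (A, w_target, and
# theta are illustrative assumptions standing in for the PDE setting).
# Inner:  w*(theta) = argmin_w 0.5 * ||A w - theta||^2   (the "PDE solve")
# Outer:  J(theta)  = 0.5 * ||w*(theta) - w_target||^2
import numpy as np
from scipy.optimize import broyden1

A = np.array([[2.0, 0.5], [0.5, 1.0]])
w_target = np.array([1.0, -1.0])
theta = np.array([0.5, 0.5])

H = A.T @ A                                # inner Hessian d^2 L_in / dw^2
w_star = np.linalg.solve(H, A.T @ theta)   # exact inner solve (a PINN in the paper)
g = w_star - w_target                      # dJ/dw evaluated at w*

# IFT gives dw*/dtheta = H^{-1} A^T here, so dJ/dtheta = A (H^{-1} g).
# Solve H v = g with Broyden's method instead of forming H^{-1}.
v = broyden1(lambda v: H @ v - g, np.zeros_like(g), f_tol=1e-10)
hypergrad = A @ v
```

Broyden's method only touches residuals of the linear system, which is what makes this style of hypergradient computation attractive when the inner problem is a network with many parameters.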
Learning differentiable solvers for systems with hard constraints
We introduce a practical method to enforce partial differential equation
(PDE) constraints for functions defined by neural networks (NNs), with a high
degree of accuracy and up to a desired tolerance. We develop a differentiable
PDE-constrained layer that can be incorporated into any NN architecture. Our
method leverages differentiable optimization and the implicit function theorem
to effectively enforce physical constraints. Inspired by dictionary learning,
our model learns a family of functions, each of which defines a mapping from
PDE parameters to PDE solutions. At inference time, the model finds an optimal
linear combination of the functions in the learned family by solving a
PDE-constrained optimization problem. Our method provides continuous solutions
over the domain of interest that accurately satisfy desired physical
constraints. Our results show that incorporating hard constraints directly into
the NN architecture achieves much lower test error when compared to training on
an unconstrained objective.Comment: Paper accepted to the 11th International Conference on Learning
Representations (ICLR 2023). 9 pages + references + appendix. 5 figures in
main tex
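The inference-time step, finding the optimal linear combination of learned basis functions, can be sketched for a linear PDE, where the constrained solve reduces to least squares. Everything below (the 1D Poisson problem, the sine "basis", the grid) is an illustrative assumption, not the paper's implementation:

```python
# Pick coefficients c so that u = sum_i c_i * phi_i satisfies a linear PDE
# residual in the least-squares sense; here u''(x) = f(x) on [0, 1].
import numpy as np

n, k = 50, 5
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)                      # right-hand side (an assumption)

# stand-in for a learned family of functions: smooth sine modes
Phi = np.stack([np.sin((i + 1) * np.pi * x) for i in range(k)], axis=1)

# second-derivative operator via finite differences, evaluated at interior points
h = x[1] - x[0]
D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)) / h**2
A = (D2 @ Phi)[1:-1]                       # PDE residual operator on each basis fn
b = f[1:-1]

# the constrained "layer": a linear least-squares solve for the coefficients
c, *_ = np.linalg.lstsq(A, b, rcond=None)
u = Phi @ c                                # solution on the grid
```

For a linear residual the optimal coefficients have this closed least-squares form; the paper's differentiable-optimization layer is what lets the same solve sit inside a network and be trained through.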
A Stochastic Gradient Method with Mesh Refinement for PDE Constrained Optimization under Uncertainty
Models incorporating uncertain inputs, such as random forces or material
parameters, have been of increasing interest in PDE-constrained optimization.
In this paper, we focus on the efficient numerical minimization of a convex and
smooth tracking-type functional subject to a linear partial differential
equation with random coefficients and box constraints. The approach we take is
based on stochastic approximation where, in place of a true gradient, a
stochastic gradient is chosen using one sample from a known probability
distribution. Feasibility is maintained by performing a projection at each
iteration. In the application of this method to PDE-constrained optimization
under uncertainty, new challenges arise. We examine the discretization error
incurred by approximating the stochastic gradient using finite elements. Analyzing
the interplay between PDE discretization and stochastic error, we develop a
mesh refinement strategy coupled with decreasing step sizes. Additionally, we
develop a mesh refinement strategy for the modified algorithm using iterate
averaging and larger step sizes. The effectiveness of the approach is
demonstrated numerically for different random field choices.
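The basic iteration, sample one realization, take a stochastic gradient step, project back onto the feasible box, can be sketched on a scalar model problem. All numbers below are illustrative assumptions, with the PDE solve replaced by a random linear map:

```python
# Projected stochastic gradient sketch:
#   minimize_u  E_xi [ 0.5 * (a(xi) * u - y)^2 ]   s.t.  lo <= u <= hi,
# with random coefficient a(xi) ~ U(1, 3) and step sizes t_k = c / k.
import numpy as np

rng = np.random.default_rng(0)
y, lo, hi, c = 2.0, 0.0, 1.5, 1.0
u = 0.0
for k in range(1, 20001):
    a = rng.uniform(1.0, 3.0)          # one sample of the random coefficient
    grad = a * (a * u - y)             # stochastic gradient from that sample
    u = u - (c / k) * grad             # decreasing step size t_k = c / k
    u = np.clip(u, lo, hi)             # projection keeps the iterate feasible
```

For this model problem the minimizer is u* = y E[a] / E[a^2] = 12/13; the abstract's contribution is coordinating the step-size decrease with finite-element mesh refinement, which a fixed-discretization sketch like this cannot show.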
The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach
We study the combination of the alternating direction method of multipliers
(ADMM) with physics-informed neural networks (PINNs) for a general class of
nonsmooth partial differential equation (PDE)-constrained optimization
problems, where additional regularization can be employed for constraints on
the control or design variables. The resulting ADMM-PINNs algorithmic framework
substantially enlarges the applicable range of PINNs to nonsmooth cases of
PDE-constrained optimization problems. The application of ADMM makes it
possible to decouple the PDE constraints from the nonsmooth regularization
terms across iterations. Accordingly, at each iteration, one of the resulting
subproblems is a smooth PDE-constrained optimization problem that can be
efficiently solved by PINNs, while the other is a simple nonsmooth optimization
problem that usually has a closed-form solution or can be efficiently solved by
standard optimization algorithms or pre-trained neural networks. The ADMM-PINNs
algorithmic framework does not require solving PDEs repeatedly, and it is
mesh-free, easy to implement, and scalable to different PDE settings. We
validate the efficiency of the ADMM-PINNs algorithmic framework by different
prototype applications, including inverse potential problems, source
identification in elliptic equations, control constrained optimal control of
the Burgers equation, and sparse optimal control of parabolic equations.
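The splitting can be sketched on a small nonsmooth model problem: ADMM alternates between a smooth subproblem (played by a PINN solve in the framework; here a linear solve) and a closed-form soft-thresholding step for an l1 regularizer. The matrix, penalty parameter, and dimensions below are illustrative assumptions:

```python
# ADMM sketch for  minimize_u  0.5 * ||A u - b||^2 + lam * ||u||_1,
# split as f(u) + g(v) with the consensus constraint u = v.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam, rho = 0.1, 1.0

u = v = w = np.zeros(5)                    # w is the scaled dual variable
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    # smooth subproblem (a PINN solve in the framework; a linear solve here)
    u = np.linalg.solve(AtA + rho * np.eye(5), Atb + rho * (v - w))
    # nonsmooth subproblem: closed-form soft-thresholding prox of the l1 term
    z = u + w
    v = np.sign(z) * np.maximum(np.abs(z) - lam / rho, 0.0)
    w = w + u - v                          # dual (multiplier) update
```

The point of the splitting is visible even at this scale: the nonsmooth term never enters the smooth subproblem, and its own subproblem is a one-line closed form.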
Physics-informed neural networks with hard constraints for inverse design
Inverse design arises in a variety of areas in engineering such as acoustic,
mechanics, thermal/electronic transport, electromagnetism, and optics. Topology
optimization is a major form of inverse design, where we optimize a designed
geometry to achieve targeted properties and the geometry is parameterized by a
density function. This optimization is challenging, because it has a very high
dimensionality and is usually constrained by partial differential equations
(PDEs) and additional inequalities. Here, we propose a new deep learning method
-- physics-informed neural networks with hard constraints (hPINNs) -- for
solving topology optimization. hPINN leverages the recent development of PINNs
for solving PDEs, and thus does not rely on any numerical PDE solver. However,
all the constraints in PINNs are soft constraints, and hence we impose hard
constraints by using the penalty method and the augmented Lagrangian method. We
demonstrate the effectiveness of hPINN for a holography problem in optics and a
fluid problem of Stokes flow. We achieve the same objective as conventional
PDE-constrained optimization methods based on adjoint methods and numerical PDE
solvers, but find that the design obtained from hPINN is often simpler and
smoother for problems whose solution is not unique. Moreover, the
implementation of inverse design with hPINN can be easier than that of
conventional methods.
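The augmented Lagrangian mechanism hPINN uses to harden a soft constraint can be sketched on a two-variable problem, with the PDE residual replaced by a scalar equality constraint (the problem data and step sizes below are illustrative assumptions):

```python
# Augmented Lagrangian sketch:
#   minimize  J(x) = (x0 - 2)^2 + (x1 - 2)^2   s.t.  c(x) = x0 + x1 - 1 = 0.
# L_A(x; lam, mu) = J(x) + lam * c(x) + 0.5 * mu * c(x)^2
import numpy as np

x = np.zeros(2)
lam, mu = 0.0, 10.0
for _ in range(20):                        # outer multiplier updates
    for _ in range(500):                   # inner gradient descent on L_A
        c = x[0] + x[1] - 1.0
        grad_J = 2.0 * (x - 2.0)
        grad_c = np.array([1.0, 1.0])
        grad = grad_J + (lam + mu * c) * grad_c
        x -= 0.01 * grad
    lam += mu * (x[0] + x[1] - 1.0)        # multiplier update drives c(x) -> 0
```

Each outer step raises the multiplier by mu times the remaining violation, which drives the constraint to zero without sending the penalty weight to infinity; that is the usual advantage over a pure penalty method, and the inner minimization is where a PINN loss would sit in hPINN.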
Interior-point methods for PDE-constrained optimization
In the applied sciences, PDEs model an extensive variety of phenomena. Typically, the final goal of simulations is a system that is optimal in a certain sense. For instance, optimal control problems identify a control to steer a system towards a desired state. Inverse problems seek PDE parameters that are most consistent with measurements. In these optimization problems, PDEs appear as equality constraints. PDE-constrained optimization problems are large-scale and often nonconvex. Their numerical solution leads to large, ill-conditioned linear systems. In many practical problems, inequality constraints implement technical limitations or prior knowledge.
In this thesis interior-point (IP) methods are considered to solve nonconvex large-scale PDE-constrained optimization problems with inequality constraints. To cope with enormous fill-in of direct linear solvers, inexact search directions are allowed in an inexact interior-point (IIP) method. This thesis builds upon the IIP method proposed in [Curtis, Schenk, Wächter, SIAM Journal on Scientific Computing, 2010]. SMART tests cope with the lack of inertia information to control Hessian modification and also specify termination tests for the iterative linear solver.
The original IIP method needs to solve two sparse large-scale linear systems in each optimization step. This is improved to only a single linear system solution in most optimization steps. Within this improved IIP framework, two iterative linear solvers are evaluated: a general-purpose algebraic multilevel incomplete LDL^T preconditioned SQMR method is applied to PDE-constrained optimization problems for optimal server room cooling in three space dimensions and to compute an ambient temperature for optimal cooling. The results show the robustness and efficiency of the IIP method when compared with the exact IP method.
These advantages are even more evident for a reduced-space preconditioned (RSP) GMRES solver which takes advantage of the linear system's structure. This RSP-IIP method is studied on the basis of distributed and boundary control problems originating from superconductivity and from two-dimensional and three-dimensional parameter estimation problems in groundwater modeling. The numerical results exhibit the improved efficiency especially for multiple PDE constraints.
An inverse medium problem for the Helmholtz equation with pointwise box constraints is solved by IP methods. The ill-posedness of the problem is explored numerically and different regularization strategies are compared. The impact of box constraints and the importance of Hessian modification for the optimization algorithm are demonstrated. A real-world seismic imaging problem is solved successfully by the RSP-IIP method.
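The log-barrier idea behind interior-point methods can be sketched on a scalar box-constrained problem (far from the thesis's PDE-scale, inexact-solver setting; the objective and continuation schedule below are illustrative assumptions):

```python
# Log-barrier interior-point sketch:
#   minimize  f(x) = (x - 2)^2   s.t.  0 <= x <= 1,
# via  f(x) - mu * (log(x) + log(1 - x))  with mu driven toward zero.
import numpy as np

x, mu = 0.5, 1.0
for _ in range(12):                        # barrier continuation: mu /= 2
    for _ in range(200):                   # Newton's method on the barrier problem
        g = 2.0 * (x - 2.0) - mu / x + mu / (1.0 - x)
        h = 2.0 + mu / x**2 + mu / (1.0 - x)**2
        x = np.clip(x - g / h, 1e-9, 1.0 - 1e-9)
    mu *= 0.5
```

The iterates stay strictly inside the box throughout, and shrinking mu pushes the barrier minimizer toward the true constrained solution at the active bound x = 1; the thesis's contribution is making the inner (Newton-system) solves inexact and iterative at PDE scale.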