Operator preconditioning for a class of inequality constrained optimal control problems
Abstract: We propose and analyze two strategies for preconditioning the linear operator equations that arise in PDE-constrained optimal control within the framework of conjugate gradient methods. Our particular focus is on control- or state-constrained problems, for which we study robustness with respect to critical parameters. We construct a preconditioner that yields favorable robustness properties with respect to these parameters.
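The abstract above centers on conjugate gradient methods with preconditioning. As a minimal, self-contained illustration of the idea (not the paper's operator equations or its constructed preconditioner), the sketch below applies preconditioned conjugate gradients with a simple Jacobi (diagonal) preconditioner to a small symmetric positive definite system; the matrix `A`, right-hand side `b`, and preconditioner are illustrative assumptions.

```python
# Preconditioned conjugate gradient (PCG) sketch for a symmetric positive
# definite system A x = b, with M_inv the (diagonal) inverse preconditioner.
# All problem data here are illustrative, not taken from the paper.

def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                       # residual r = b - A x (x = 0)
    z = [M_inv[i] * r[i] for i in range(n)]        # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:  # stop on small residual norm
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

# Tiny SPD example with a Jacobi (diagonal) preconditioner.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
M_inv = [1.0 / A[i][i] for i in range(len(b))]
x = pcg(A, b, M_inv)
```

The point of the preconditioner is to cluster the spectrum of the preconditioned operator so that the iteration count stays bounded as the critical parameters vary; the Jacobi choice above is only the simplest stand-in for such a construction.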
Accelerated primal-dual methods with enlarged step sizes and operator learning for nonsmooth optimal control problems
We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to their nonsmooth objective functionals and the high-dimensional, ill-conditioned systems that result after discretization. We focus on the application of a primal-dual method, in which the different types of variables can be treated individually, so that the main computation at each iteration reduces to solving two PDEs. Our goal is to accelerate the primal-dual method with either enlarged step sizes or operator learning techniques. For the accelerated primal-dual method with enlarged step sizes, convergence can still be proved rigorously, while numerically it accelerates the original primal-dual method in a simple and universal way. For the operator learning acceleration, we construct deep neural network surrogate models for the PDEs involved. Once a neural operator is learned, solving a PDE requires only a forward pass of the neural network, so the computational cost is substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The effectiveness of both acceleration techniques is validated by preliminary numerical results.
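As a minimal sketch of the kind of first-order primal-dual iteration described above, the code below runs a Chambolle-Pock-style scheme on a tiny nonsmooth model problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1, with A the 2x2 identity so that the two "PDE solves" per iteration degenerate to matrix-vector products. This is not the paper's PDE-constrained setting or step-size rule; all problem data, step sizes, and helper names are illustrative assumptions.

```python
# Primal-dual (Chambolle-Pock) sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1.
# The dual variable handles the smooth data-fit term, the primal variable the
# nonsmooth l1 term, so each variable type is treated individually, as in the
# abstract. All data below are illustrative.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied componentwise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def chambolle_pock(A, b, lam, tau, sigma, theta=1.0, iters=1000):
    n = len(b)
    x = [0.0] * n          # primal iterate
    x_bar = x[:]           # extrapolated primal iterate
    y = [0.0] * n          # dual iterate
    for _ in range(iters):
        # Dual step: prox of sigma*f* for f(y) = 0.5*||y - b||^2,
        # which is v -> (v - sigma*b) / (1 + sigma).
        Ax_bar = matvec(A, x_bar)
        y = [(y[i] + sigma * (Ax_bar[i] - b[i])) / (1.0 + sigma) for i in range(n)]
        # Primal step: prox of tau*lam*||.||_1, i.e. soft-thresholding.
        At_y = matvec(A, y)  # A is symmetric here, so A^T = A
        x_new = soft_threshold([x[i] - tau * At_y[i] for i in range(n)], tau * lam)
        # Extrapolation step.
        x_bar = [x_new[i] + theta * (x_new[i] - x[i]) for i in range(n)]
        x = x_new
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.2]
lam = 0.5
# Step sizes must satisfy tau * sigma * ||A||^2 <= 1; here ||A|| = 1.
x = chambolle_pock(A, b, lam, tau=0.9, sigma=0.9)
```

With A the identity, the minimizer is the soft-thresholded data, soft(b, lam) = [0.5, 0.0], which the iteration approaches geometrically. In the PDE-constrained setting the two matrix-vector products `matvec(A, x_bar)` and `matvec(A, y)` become the two PDE solves per iteration mentioned in the abstract, and the operator learning acceleration would replace each with a forward pass of a trained neural surrogate.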