Many inverse and parameter estimation problems can be written as
PDE-constrained optimization problems. The goal, then, is to infer the
parameters, typically coefficients of the PDE, from partial measurements of the
solutions of the PDE for several right-hand sides. Such PDE-constrained
problems can be solved by finding a stationary point of the Lagrangian, which
entails simultaneously updating the parameters and the (adjoint) state
variables. For large-scale problems, such an all-at-once approach is not
feasible as it requires storing all the state variables. In this case one
usually resorts to a reduced approach where the constraints are explicitly
eliminated (at each iteration) by solving the PDEs. These two approaches, and
variations thereof, are the main workhorses for solving PDE-constrained
optimization problems arising from inverse problems. In this paper, we present
an alternative method that aims to combine the advantages of both approaches.
Our method is based on a quadratic penalty formulation of the constrained
optimization problem. By eliminating the state variable, we develop an
efficient algorithm that has roughly the same computational complexity as the
conventional reduced approach while exploiting a larger search space. Numerical
results show that this method indeed reduces some of the non-linearity of the
problem and is less sensitive to the initial iterate.
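To make this concrete, a minimal sketch of the two formulations, in notation assumed here rather than taken from the text ($A(m)u = q$ denotes the discretized PDE with parameters $m$, $P$ the measurement operator, $d$ the data, and $\lambda > 0$ the penalty parameter), reads
\[
\min_{m,u}\ \tfrac{1}{2}\|Pu - d\|_2^2 \quad \text{s.t.}\quad A(m)\,u = q
\qquad\longrightarrow\qquad
\min_{m,u}\ \tfrac{1}{2}\|Pu - d\|_2^2 + \tfrac{\lambda}{2}\|A(m)\,u - q\|_2^2 .
\]
For fixed $m$, the penalized objective is quadratic in $u$, so the state can be eliminated by solving a linear least-squares system; this is consistent with the claim that the per-iteration cost remains roughly that of the conventional reduced approach.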