
    Interior Point Methods and Preconditioning for PDE-Constrained Optimization Problems Involving Sparsity Terms

    PDE-constrained optimization problems with control or state constraints are challenging from an analytical as well as a numerical perspective. The combination of these constraints with a sparsity-promoting L^1 term within the objective function requires sophisticated optimization methods. We propose the use of an Interior Point scheme applied to a smoothed reformulation of the discretized problem, and illustrate that such a scheme exhibits robust performance with respect to parameter changes. To increase the efficiency of this method we introduce fast and effective preconditioners which enable us to solve problems from a number of PDE applications with low iteration counts and CPU times, even when the parameters involved are altered dramatically.
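
    The abstract does not spell out the smoothing; a minimal sketch of one standard choice is below, replacing |u_i| with sqrt(u_i^2 + eps^2) and treating a box constraint with a log barrier, as an interior point method would. The quadratic model, the operator K, and all parameter names are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch, assuming a toy quadratic model in place of the PDE:
# smooth the sparsity term |u_i| ~ sqrt(u_i**2 + eps**2) and treat the box
# constraint a <= u <= b with a log barrier. K, y_d, alpha, beta, eps, mu,
# a, b are all illustrative names, not the paper's notation.
import numpy as np

def smoothed_objective(u, K, y_d, alpha, beta, eps):
    """J(u) = 0.5||Ku - y_d||^2 + 0.5*alpha*||u||^2 + beta*sum_i sqrt(u_i^2 + eps^2)."""
    r = K @ u - y_d
    return 0.5 * r @ r + 0.5 * alpha * u @ u + beta * np.sum(np.sqrt(u**2 + eps**2))

def ip_newton_step(u, K, y_d, alpha, beta, eps, mu, a, b):
    """One Newton step on the barrier subproblem J(u) - mu*sum(log(u-a) + log(b-u))."""
    s = np.sqrt(u**2 + eps**2)
    grad = (K.T @ (K @ u - y_d) + alpha * u + beta * u / s
            - mu / (u - a) + mu / (b - u))
    # Hessian: Gauss-Newton block plus diagonal smoothing and barrier curvature.
    H = K.T @ K + np.diag(alpha + beta * eps**2 / s**3
                          + mu / (u - a)**2 + mu / (b - u)**2)
    return u - np.linalg.solve(H, grad)
```

    Driving mu toward zero over successive barrier subproblems recovers the constrained minimizer; the diagonal barrier curvature in H is what the preconditioners of such schemes must remain robust to as mu shrinks.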

    Preconditioning of Active-Set Newton Methods for PDE-constrained Optimal Control Problems

    We address the problem of preconditioning a sequence of saddle point linear systems arising in the solution of PDE-constrained optimal control problems via active-set Newton methods, with control and (regularized) state constraints. We present two new preconditioners based on a full block matrix factorization of the Schur complement of the Jacobian matrices, where the active-set blocks are merged into the constraint blocks. We discuss the robustness of the new preconditioners with respect to the parameters of the continuous and discrete problems. Numerical experiments on 3D problems are presented, including comparisons with existing approaches based on preconditioned conjugate gradients in a nonstandard inner product.
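
    A hedged sketch of the general idea, not the paper's merged active-set factorization: a block-diagonal preconditioner blkdiag(A^{-1}, S^{-1}) for a symmetric saddle-point matrix [[A, B^T], [B, 0]], applied inside MINRES. All matrices and sizes below are toy assumptions.

```python
# Block-diagonal saddle-point preconditioning sketch (toy data, not the
# paper's active-set construction).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_diag_preconditioner(n, m, A_solve, S_solve):
    """LinearOperator applying blkdiag(A^{-1}, S^{-1}) to a length-(n+m) vector."""
    def apply(v):
        return np.concatenate([A_solve(v[:n]), S_solve(v[n:])])
    return spla.LinearOperator((n + m, n + m), matvec=apply)

n, m = 200, 50
A = (2.0 * sp.eye(n)).tocsc()                       # stands in for the (1,1) block
B = sp.random(m, n, density=0.05, random_state=0).tocsr()
K = sp.bmat([[A, B.T], [B, None]]).tocsc()          # symmetric indefinite system
S = (B @ spla.spsolve(A, B.T.tocsc())).toarray()    # exact Schur complement (toy size only)
P = block_diag_preconditioner(
    n, m, spla.splu(A).solve,
    lambda y: np.linalg.solve(S + 1e-10 * np.eye(m), y))
x, info = spla.minres(K, np.ones(n + m), M=P)       # info == 0 on convergence
```

    In practice the exact Schur complement is never formed; the point of papers like this one is to build cheap approximations of it that stay robust as the active set changes between Newton iterations.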

    A multigrid method for PDE-constrained optimization with uncertain inputs

    We present a multigrid algorithm to solve efficiently the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh, and solves a reduced saddle-point system whose size depends on the number N of samples used to discretize the probability space. We show that this reduced system can be solved with optimal O(N) complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve directly the linear optimality system; a nonsmooth problem with box constraints and L^1-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
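
    One way to make the O(N) claim concrete: if the N sample blocks in a reduced node system decouple except through a single shared control unknown, the matrix is an arrowhead [[diag(d), c], [c^T, gamma]] and one scalar Schur-complement step solves it in linear time. The sketch below assumes that structure; it is illustrative, not necessarily the paper's exact collective smoother.

```python
# O(N) solve of an arrowhead system [[diag(d), c], [c^T, gamma]] z = rhs,
# as arises when N decoupled sample unknowns share one control unknown.
import numpy as np

def solve_arrowhead(d, c, gamma, rhs):
    """d, c: (N,) arrays; gamma: scalar; rhs: (N+1,). Costs O(N) flops."""
    f, g = rhs[:-1], rhs[-1]
    schur = gamma - np.sum(c * c / d)        # scalar Schur complement
    u = (g - np.sum(c * f / d)) / schur      # shared control unknown
    y = (f - c * u) / d                      # per-sample unknowns, elementwise
    return np.concatenate([y, [u]])

# Check against a dense solve on a small instance:
N = 5
d, c, gamma = np.linspace(1.0, 2.0, N), np.ones(N), 10.0
M = np.block([[np.diag(d), c[:, None]], [c[None, :], np.array([[gamma]])]])
rhs = np.arange(N + 1, dtype=float)
assert np.allclose(solve_arrowhead(d, c, gamma, rhs), np.linalg.solve(M, rhs))
```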

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as a solution of an optimization problem. In this light, the sheer non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge between the two and encourage cross-fertilization of ideas.
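
    As a concrete instance of inversion being "fully cast as a solution of an optimization problem", the sketch below sets up the canonical Tikhonov-regularized least-squares formulation for a toy linear forward operator; every name in it is an illustrative assumption.

```python
# Tikhonov-regularized least squares: min_x 0.5*||F x - d||^2 + 0.5*lam*||x||^2,
# solved via the normal equations for a toy linear forward operator F.
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((100, 40))                  # toy forward operator
x_true = rng.standard_normal(40)
d = F @ x_true + 0.01 * rng.standard_normal(100)    # noisy observations
lam = 1e-2                                          # regularization weight
x_hat = np.linalg.solve(F.T @ F + lam * np.eye(40), F.T @ d)
```

    For non-linear forward maps the normal equations give way to the iterative methods the survey compares: Gauss-Newton-type schemes from the inverse problem side, and stochastic first-order methods from the machine learning side.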

    General-purpose preconditioning for regularized interior point methods

    In this paper we present general-purpose preconditioners for regularized augmented systems, and their corresponding normal equations, arising from optimization problems. We discuss positive definite preconditioners, suitable for CG and MINRES. We consider "sparsifications" which avoid situations in which eigenvalues of the preconditioned matrix may become complex. Special attention is given to systems arising from the application of regularized interior point methods to linear or nonlinear convex programming problems.
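
    One common construction consistent with this description, though not necessarily the authors' exact preconditioner: sparsify the (1,1) block of the regularized augmented system to its diagonal, so the approximate Schur complement stays sparse and the resulting positive definite block pair can precondition MINRES. All names below are assumptions.

```python
# Sketch: for the regularized IPM augmented system
#     K = [[-(Q + Theta + rho*I), A^T], [A, delta*I]],
# replace the (1,1) block by its diagonal D and precondition with
# blkdiag(D^{-1}, S^{-1}), where S = delta*I + A D^{-1} A^T.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def regularized_ipm_preconditioner(Q, A, theta, rho, delta):
    """Q: sparse (n,n) Hessian; A: sparse (m,n) constraints; theta: (n,) IPM
    scaling; rho, delta: primal/dual regularization parameters."""
    n, m = Q.shape[0], A.shape[0]
    d = Q.diagonal() + theta + rho                 # diagonal of sparsified block
    S = (delta * sp.eye(m) + A @ sp.diags(1.0 / d) @ A.T).tocsc()
    S_lu = spla.splu(S)
    def apply(v):
        return np.concatenate([v[:n] / d, S_lu.solve(v[n:])])
    return spla.LinearOperator((n + m, n + m), matvec=apply)
```

    Keeping only the diagonal of Q is what keeps S cheap to factorize; richer approximations of the (1,1) block trade factorization cost against Krylov iteration counts.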