
    Preconditioners for state constrained optimal control problems with Moreau-Yosida penalty function

    Optimal control problems with partial differential equations as constraints play an important role in many applications. The inclusion of bound constraints for the state variable poses a significant challenge for optimization methods. Our focus here is on the incorporation of the constraints via the Moreau-Yosida regularization technique. This method has been studied recently and has proven to be advantageous compared to other approaches. In this paper we develop robust preconditioners for the efficient solution of the Newton steps associated with solving the Moreau-Yosida regularized problem. Numerical results illustrate the efficiency of our approach.
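
    For orientation, one standard form of the Moreau-Yosida regularization for a pointwise upper state bound y ≤ y_b (the symbols y_d, y_b, β and ε below are assumed notation, not quoted from the abstract) replaces the bound by a quadratic penalty:

        \min_{y,u} \; \tfrac{1}{2}\|y - y_d\|_{L^2(\Omega)}^2
        + \tfrac{\beta}{2}\|u\|_{L^2(\Omega)}^2
        + \tfrac{1}{2\varepsilon}\|\max(0,\, y - y_b)\|_{L^2(\Omega)}^2
        \quad \text{subject to} \quad -\Delta y = u .

    The semismooth Newton steps for this penalized problem lead to the saddle-point systems that the preconditioners target, and those systems become increasingly ill-conditioned as the penalty parameter ε is driven to zero, which is why robustness with respect to ε matters.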

    Preconditioners for state constrained optimal control problems with Moreau-Yosida penalty function

    Optimal control problems with partial differential equations play an important role in many applications. The inclusion of bound constraints for the state poses a significant challenge for optimization methods. Our focus here is on the incorporation of the constraints via the Moreau-Yosida regularization technique. This method has been studied recently and has proven to be advantageous compared to other approaches. In this paper we develop preconditioners for the efficient solution of the Newton steps associated with solving the Moreau-Yosida regularized problem. Numerical results illustrate the competitiveness of this approach.

    Robust preconditioners for PDE-constrained optimization with limited observations

    Regularization robust preconditioners for PDE-constrained optimization problems have been successfully developed. These methods, however, typically assume that observation data is available throughout the entire domain of the state equation. For many inverse problems, this is an unrealistic assumption. In this paper we propose and analyze preconditioners for PDE-constrained optimization problems with limited observation data, e.g. when observations are only available at the boundary of the solution domain. Our methods are robust with respect to both the regularization parameter and the mesh size. That is, the condition number of the preconditioned optimality system is uniformly bounded, independently of the size of these two parameters. We first consider a prototypical elliptic control problem and thereafter more general PDE-constrained optimization problems. Our theoretical findings are illuminated by several numerical results.
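
    As a rough illustration of the setting (the notation below is assumed, not quoted from the paper), discretizing a problem of the form min ½‖Ty − d‖² + ½α‖u‖² subject to Ay = Bu gives the optimality system

        \begin{pmatrix} T^{\top}T & 0 & A^{\top} \\ 0 & \alpha I & -B^{\top} \\ A & -B & 0 \end{pmatrix}
        \begin{pmatrix} y \\ u \\ p \end{pmatrix}
        =
        \begin{pmatrix} T^{\top}d \\ 0 \\ 0 \end{pmatrix}.

    With limited observations the block T^{\top}T is singular, since the observation operator T only sees part of the domain (e.g. its boundary), which is one way to see why preconditioners that assume observations everywhere do not carry over directly.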

    A New Approximation of the Schur Complement in Preconditioners for PDE Constrained Optimization

    Saddle point systems arise widely in optimization problems with constraints. The utility of Schur complement approximation is now broadly appreciated in the context of solving such saddle point systems by iteration. In this short manuscript, we present a new Schur complement approximation for PDE constrained optimization, an important class of these problems. Block diagonal and block triangular preconditioners have previously been designed for solving such problems with MINRES and with a non-standard Conjugate Gradient method, respectively; with appropriate approximation blocks these can be optimal in the sense that the time required for solution scales linearly with the problem size, however small the mesh size we use. In this paper, we extend this work by designing such preconditioners for which this optimality property holds independently of both the mesh size and the Tikhonov regularization parameter β that is used. This also leads to an effective symmetric indefinite preconditioner that exhibits mesh- and β-independence. We motivate the choice of these preconditioners based on observations about approximating the Schur complement obtained from the matrix system, derive eigenvalue bounds which verify the effectiveness of the approximation, and present numerical results which show that these new preconditioners work well in practice.
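
    For the distributed Poisson control problem, with M the mass matrix, K the stiffness matrix and β the regularization parameter (notation assumed here, not quoted from the abstract), the Schur complement and a matching-type approximation of the kind described take the form

        S = K M^{-1} K + \tfrac{1}{\beta} M, \qquad
        \widehat{S} = \Big(K + \tfrac{1}{\sqrt{\beta}}\,M\Big) M^{-1} \Big(K + \tfrac{1}{\sqrt{\beta}}\,M\Big).

    Expanding shows \widehat{S} = S + \tfrac{2}{\sqrt{\beta}} K, and for symmetric positive definite K and M an elementary argument bounds the eigenvalues of \widehat{S}^{-1} S within [1/2, 1], independently of both the mesh size and β; this is the kind of eigenvalue bound the abstract refers to.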

    Optimal solvers for PDE-Constrained Optimization

    Optimization problems with constraints which require the solution of a partial differential equation arise widely in many areas of the sciences and engineering, in particular in problems of design. The solution of such PDE-constrained optimization problems is usually a major computational task. Here we consider simple problems of this type: distributed control problems in which the 2- and 3-dimensional Poisson problem is the PDE. The large dimensional linear systems which result from discretization and which need to be solved are of saddle-point type. We introduce two optimal preconditioners for these systems which lead to convergence of symmetric Krylov subspace iterative methods in a number of iterations which does not increase with the dimension of the discrete problem. These preconditioners are block structured and involve standard multigrid cycles. The optimality of the preconditioned iterative solver is proved theoretically and verified computationally in several test cases. The theoretical proof indicates that these approaches may have much broader applicability for other partial differential equations.
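
    A minimal sketch of this approach in Python/SciPy, assuming a 1D Poisson distributed control problem with a lumped mass matrix, a block-diagonal preconditioner blkdiag(M, βM, K M⁻¹ K), and direct sparse factorizations standing in for the multigrid cycles used in the paper (the problem size, parameter values and Schur-complement approximation are illustrative assumptions, not taken from the abstract):

        # Sketch: MINRES with a block-diagonal preconditioner for the KKT system
        #   [ M     0      K ] [y]   [M*yd]
        #   [ 0   beta*M  -M ] [u] = [ 0  ]
        #   [ K    -M      0 ] [p]   [ 0  ]
        # arising from  min 1/2||y - yd||^2 + beta/2||u||^2  s.t.  -y'' = u  in 1D.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200                                   # interior grid points
        h = 1.0 / (n + 1)
        beta = 1e-4                               # Tikhonov regularization parameter

        K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h
        M = sp.identity(n, format="csc") * h      # lumped mass matrix

        A = sp.bmat([[M, None, K],
                     [None, beta * M, -M],
                     [K, -M, None]], format="csc")
        yd = np.ones(n)                           # desired state
        b = np.concatenate([M @ yd, np.zeros(n), np.zeros(n)])

        # Block-diagonal preconditioner blkdiag(M, beta*M, S_hat) with S_hat = K M^{-1} K;
        # exact factorizations here, multigrid V-cycles in a scalable implementation.
        M_solve = spla.factorized(M)
        K_solve = spla.factorized(K)

        def apply_prec(r):
            r1, r2, r3 = r[:n], r[n:2 * n], r[2 * n:]
            z1 = M_solve(r1)
            z2 = M_solve(r2) / beta
            z3 = K_solve(M @ K_solve(r3))         # (K M^{-1} K)^{-1} r3
            return np.concatenate([z1, z2, z3])

        P = spla.LinearOperator(A.shape, matvec=apply_prec)

        iters = [0]
        x, info = spla.minres(A, b, M=P,
                              callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
        print("MINRES converged:", info == 0, "after", iters[0], "iterations")

    With a preconditioner of this block-diagonal form the MINRES iteration count stays essentially flat as the grid is refined, which is the mesh-independence the abstract describes; replacing the exact solves with multigrid cycles keeps the cost of each application linear in the problem size.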

    Preconditioning of Active-Set Newton Methods for PDE-constrained Optimal Control Problems

    We address the problem of preconditioning a sequence of saddle point linear systems arising in the solution of PDE-constrained optimal control problems via active-set Newton methods, with control and (regularized) state constraints. We present two new preconditioners based on a full block matrix factorization of the Schur complement of the Jacobian matrices, where the active-set blocks are merged into the constraint blocks. We discuss the robustness of the new preconditioners with respect to the parameters of the continuous and discrete problems. Numerical experiments on 3D problems are presented, including comparisons with existing approaches based on preconditioned conjugate gradients in a nonstandard inner product.
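
    As generic background (a sketch of the standard factorization underlying such preconditioners, not the paper's specific construction with the active-set blocks merged into the constraint blocks), a saddle-point Jacobian admits the block factorization

        \mathcal{K} = \begin{pmatrix} A & B^{\top} \\ B & -C \end{pmatrix}
        = \begin{pmatrix} I & 0 \\ B A^{-1} & I \end{pmatrix}
          \begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}
          \begin{pmatrix} I & A^{-1} B^{\top} \\ 0 & I \end{pmatrix},
        \qquad S = -\,(C + B A^{-1} B^{\top}),

    so replacing A and S by inexpensive approximations yields block-diagonal or block-triangular preconditioners; the practical question in the active-set setting is how well those approximations hold up as the active sets and the problem parameters change from one Newton iteration to the next.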