    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large-scale computation. In particular we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners. The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
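
    As a pointer to the structures involved (notation assumed here, not quoted from the paper), the systems in question and the two preconditioner families named in the abstract typically take the following form:

```latex
% Assumed generic notation: A symmetric positive definite, C symmetric positive
% semidefinite (often zero), G a cheap symmetric approximation to A.
\[
\underbrace{\begin{bmatrix} A & B^{T} \\ B & -C \end{bmatrix}}_{\text{saddle point matrix}}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} f \\ g \end{bmatrix},
\qquad
\underbrace{\begin{bmatrix} A & 0 \\ 0 & S \end{bmatrix}}_{\text{block diagonal}},
\qquad
\underbrace{\begin{bmatrix} G & B^{T} \\ B & -C \end{bmatrix}}_{\text{constraint}},
\qquad
S = C + B A^{-1} B^{T},
\]
% with the exact blocks A and S replaced in practice by cheaper spectrally
% equivalent approximations inside a Krylov subspace iteration.
```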

    Preconditioning iterative methods for the optimal control of the Stokes equation

    Solving problems regarding the optimal control of partial differential equations (PDEs) – also known as PDE-constrained optimization – is a frontier area of numerical analysis. Of particular interest is the problem of flow control, where one would like to effect some desired flow by exerting, for example, an external force. The bottleneck in many current algorithms is the solution of the optimality system – a system of equations in saddle point form that is usually very large and ill-conditioned. In this paper we describe two preconditioners – a block-diagonal preconditioner for the minimal residual method and a block lower-triangular preconditioner for a non-standard conjugate gradient method – which can be effective when applied to such problems where the PDEs are the Stokes equations. We consider only distributed control here, although other problems – for example boundary control – could be treated in the same way. We give numerical results and compare these with those obtained by solving the equivalent forward problem using similar techniques.
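
    As a rough, self-contained sketch of the block-diagonal/MINRES idea only (applied to a generic synthetic saddle point system, not the Stokes control optimality system of the paper; all matrix names below are hypothetical):

```python
# Minimal sketch: block-diagonal preconditioned MINRES on a synthetic
# saddle point system K = [[A, B^T], [B, 0]].  Not the authors' code.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 80
rng = np.random.default_rng(0)

A = sp.random(n, n, density=0.02, random_state=0, format="csr")
A = A @ A.T + sp.identity(n)                      # make A symmetric positive definite
B = sp.random(m, n, density=0.05, random_state=1, format="csr") + sp.eye(m, n)  # full row rank
K = sp.bmat([[A, B.T], [B, None]], format="csc")  # indefinite saddle point matrix
b = rng.standard_normal(n + m)

# "Ideal" block-diagonal preconditioner diag(A, S) with S = B A^{-1} B^T.
# In practice both solves would be replaced by cheap spectrally equivalent
# approximations (e.g. a multigrid cycle for A, a mass matrix for S).
A_lu = spla.splu(A.tocsc())
S = B @ A_lu.solve(B.T.toarray())                 # small dense Schur complement
S_inv = np.linalg.inv(S)

def apply_prec(r):
    # Apply diag(A^{-1}, S^{-1}) to a residual vector r of length n + m.
    return np.concatenate([A_lu.solve(r[:n]), S_inv @ r[n:]])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, b, M=M)
print("converged" if info == 0 else f"minres info = {info}")
```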

    Natural preconditioners for saddle point systems

    The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems such as those describing electromagnetic problems or incompressible flow lead to equations with this structure, as does, for example, the widely used sequential quadratic programming approach to nonlinear optimization. This article concerns iterative solution methods for these problems and in particular shows how the problem formulation leads to natural preconditioners which guarantee rapid convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness -- in terms of rapidity of convergence -- is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
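
    A classical instance of the kind of eigenvalue information such preconditioners can give is the idealised block-diagonal result of Murphy, Golub and Wathen (2000); it is quoted here only as an illustration of the flavour of the bounds proved in the article, with assumed notation:

```latex
% Assumed notation: K = [A B^T; B 0] with A symmetric positive definite and B of
% full row rank, preconditioned by the "natural" block-diagonal matrix built
% from the Schur complement.
\[
\mathcal{P} = \begin{bmatrix} A & 0 \\ 0 & B A^{-1} B^{T} \end{bmatrix}
\quad\Longrightarrow\quad
\sigma\!\left(\mathcal{P}^{-1}\mathcal{K}\right) \subseteq
\left\{\, 1,\ \tfrac{1}{2}\bigl(1+\sqrt{5}\bigr),\ \tfrac{1}{2}\bigl(1-\sqrt{5}\bigr) \,\right\},
\]
% so MINRES terminates in at most three iterations; practical preconditioners
% replace these blocks by spectrally equivalent approximations, and the
% eigenvalues then lie in intervals governed by the equivalence constants.
```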

    A Bramble-Pasciak-like method with applications in optimization

    Saddle-point systems arise in many application areas, in fact in any situation where an extremum principle arises with constraints. The Stokes problem describing slow viscous flow of an incompressible fluid is a classic example coming from partial differential equations, and in the area of Optimization such problems are ubiquitous. In this manuscript we show how new approaches for the solution of saddle-point systems arising in Optimization can be derived from the Bramble-Pasciak Conjugate Gradient approach widely used in PDEs and more recent generalizations thereof. In particular we derive a class of new solution methods based on the use of Preconditioned Conjugate Gradients in non-standard inner products and demonstrate how these can be understood through more standard machinery. We show connections to Constraint Preconditioning and give the results of numerical computations on a number of standard Optimization test examples.
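
    For background, a sketch of the classical Bramble-Pasciak construction that such methods build on (assumed notation for the zero-(2,2)-block case; the optimization-specific variants derived in the manuscript differ in detail):

```latex
% K = [A B^T; B 0] with A symmetric positive definite; choose a symmetric
% positive definite approximation A_0 scaled so that A - A_0 is positive definite.
\[
\mathcal{P} = \begin{bmatrix} A_{0} & 0 \\ B & -I \end{bmatrix},
\qquad
\mathcal{P}^{-1}\mathcal{K} =
\begin{bmatrix} A_{0}^{-1}A & A_{0}^{-1}B^{T} \\
                B\bigl(A_{0}^{-1}A - I\bigr) & B A_{0}^{-1}B^{T} \end{bmatrix},
\qquad
\mathcal{H} = \begin{bmatrix} A - A_{0} & 0 \\ 0 & I \end{bmatrix}.
\]
% The product H (P^{-1} K) is symmetric, so the preconditioned operator is
% self-adjoint (and, for a nonsingular system, positive definite) in the
% non-standard inner product <x, y>_H, and the conjugate gradient method can be
% run in that inner product.
```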

    All-at-once solution of time-dependent PDE-constrained optimization problems

    Time-dependent partial differential equations (PDEs) play an important role in applied mathematics and many other areas of science. One-shot methods try to compute the solution to these problems in a single iteration that solves for all time-steps at the same time. In this paper, we look at one-shot approaches for the optimal control of time-dependent PDEs and focus on the fast solution of these problems. The use of Krylov subspace solvers together with an efficient preconditioner allows for minimal storage requirements. We solve only approximate time-evolutions for both the forward and the adjoint problem and compute accurate solutions of a given control problem only at convergence of the overall Krylov subspace iteration. We show that our approach can give competitive results for a variety of problem formulations.
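
    A generic discretise-then-optimise structure of the kind such one-shot methods target (notation assumed here, not quoted from the paper): a tracking-type objective with regularisation parameter beta, with all time steps collected into single block vectors for the state y, control u and adjoint lambda, giving one large saddle point system.

```latex
\[
\min_{y,\,u}\ \tfrac{1}{2}\,\|y - y_{d}\|_{\mathcal{M}}^{2}
            + \tfrac{\beta}{2}\,\|u\|_{\mathcal{M}}^{2}
\quad\text{s.t.}\quad \mathcal{K}\,y - \mathcal{N}\,u = d
\qquad\Longrightarrow\qquad
\begin{bmatrix}
 \mathcal{M} & 0 & \mathcal{K}^{T}\\
 0 & \beta\,\mathcal{M} & -\mathcal{N}^{T}\\
 \mathcal{K} & -\mathcal{N} & 0
\end{bmatrix}
\begin{bmatrix} y \\ u \\ \lambda \end{bmatrix}
=
\begin{bmatrix} \mathcal{M}\,y_{d} \\ 0 \\ d \end{bmatrix},
\]
% where the block K couples every time step of the discretised forward problem,
% so the preconditioned Krylov iteration solves for all time steps at once.
```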

    Extending Constraint Preconditioners for Saddle Point Problems

    The problem of finding good preconditioners for the numerical solution of a certain important class of indefinite linear systems is considered. These systems have a 2-by-2 block saddle point structure. In "Constraint preconditioning for indefinite linear systems" (SIAM J. Matrix Anal. Appl., 21 (2000)), Keller, Gould and Wathen introduced the idea of using constraint preconditioners that have a specific 2-by-2 block structure for the case of the (2,2) matrix block being zero. We shall extend this idea by allowing the (2,2) block to be non-zero. Results concerning the spectrum and form of the eigenvectors are presented, as are numerical results to validate our conclusions.
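
    For orientation, the classical zero-(2,2)-block result of Keller, Gould and Wathen that is being extended, in assumed notation (A, G symmetric n-by-n, B an m-by-n full-rank constraint block, Z a basis for the nullspace of B):

```latex
\[
\mathcal{K} = \begin{bmatrix} A & B^{T} \\ B & 0 \end{bmatrix},
\qquad
\mathcal{P} = \begin{bmatrix} G & B^{T} \\ B & 0 \end{bmatrix},
\]
% Then P^{-1} K has the eigenvalue 1 with multiplicity 2m, and its remaining
% n - m eigenvalues are those of the generalised problem
%     Z^T A Z w = lambda Z^T G Z w.
% The extension discussed in this abstract allows the zero (2,2) block to be
% replaced by a non-zero symmetric block.
```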