
    Preconditioning iterative methods for the optimal control of the Stokes equation

    Solving problems regarding the optimal control of partial differential equations (PDEs) – also known as PDE-constrained optimization – is a frontier area of numerical analysis. Of particular interest is the problem of flow control, where one would like to effect some desired flow by exerting, for example, an external force. The bottleneck in many current algorithms is the solution of the optimality system – a system of equations in saddle point form that is usually very large and ill-conditioned. In this paper we describe two preconditioners – a block-diagonal preconditioner for the minimal residual method and a block lower-triangular preconditioner for a non-standard conjugate gradient method – which can be effective when applied to such problems where the PDEs are the Stokes equations. We consider only distributed control here, although other problems – for example, boundary control – could be treated in the same way. We give numerical results, and compare these with those obtained by solving the equivalent forward problem using similar techniques.
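
    As a rough illustration of the idea (a minimal sketch only, not the paper's Stokes implementation: the matrices, sizes, and exact solves below are illustrative stand-ins for the cheap approximations one would use in practice), block-diagonal preconditioning of MINRES for a generic saddle point system K = [[A, B^T], [B, 0]] can be written as follows in Python/SciPy.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Toy saddle point system K = [[A, B^T], [B, 0]] (illustrative only).
        n, m = 200, 50
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # SPD block
        rng = np.random.default_rng(0)
        B = sp.csc_matrix(rng.standard_normal((m, n)) / np.sqrt(n))
        K = sp.bmat([[A, B.T], [B, None]], format="csc")
        rhs = rng.standard_normal(n + m)

        # Block-diagonal preconditioner P = blkdiag(A_hat, S_hat).  Here exact solves
        # stand in for the cheap approximations used in practice; S_hat approximates
        # the Schur complement B A^{-1} B^T.
        A_solve = spla.factorized(A)
        X = spla.spsolve(A, sp.csc_matrix(B.T))      # A^{-1} B^T, kept sparse
        S = (B @ X).toarray()                        # dense m-by-m Schur complement
        L = np.linalg.cholesky(S)

        def apply_P_inv(v):
            y1 = A_solve(v[:n])
            y2 = np.linalg.solve(L.T, np.linalg.solve(L, v[n:]))
            return np.concatenate([y1, y2])

        P_inv = spla.LinearOperator((n + m, n + m), matvec=apply_P_inv)

        x, info = spla.minres(K, rhs, M=P_inv)       # K symmetric, P symmetric positive definite
        print(info, np.linalg.norm(K @ x - rhs))

    The same template carries over to the flow-control setting, where the diagonal blocks are themselves block matrices and the exact factorizations above are replaced by cheap approximate solves such as multigrid cycles and simple iterations for mass matrices.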

    Chebyshev semi-iteration in preconditioning

    It is widely believed that Krylov subspace iterative methods are better than Chebyshev semi-iterative methods. When the solution of a linear system with a symmetric and positive definite coefficient matrix is required, the Conjugate Gradient method will compute the optimal approximate solution from the appropriate Krylov subspace; that is, it will implicitly compute the optimal polynomial. Hence a semi-iterative method, which requires eigenvalue bounds and computes an explicit polynomial, must, for only slightly less computational work per iteration, give an inferior result. In this manuscript we identify a specific situation in the context of preconditioning in which the Chebyshev semi-iterative method is the method of choice, since it has properties which make it superior to the Conjugate Gradient method.
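
    The property at stake is, loosely, linearity: run for a fixed number of steps with fixed eigenvalue bounds, the Chebyshev semi-iteration is a fixed polynomial in the matrix, i.e. a fixed linear operator, whereas a fixed number of CG steps depends nonlinearly on the right-hand side. A minimal sketch follows (assuming, purely for illustration, a relaxed Jacobi splitting and a user-supplied bound rho on the eigenvalues of its iteration matrix; the parameter values and the mass-matrix application from the manuscript are not reproduced here).

        import numpy as np

        def chebyshev_semi_iteration(A, b, x0, rho, omega, n_steps):
            """Chebyshev acceleration of relaxed Jacobi for a SPD matrix A.

            Assumes the eigenvalues of the relaxed Jacobi iteration matrix
            G = I - omega * D^{-1} A lie in [-rho, rho]; rho and omega are
            user-supplied estimates (illustrative, not taken from the paper).
            """
            d_inv = omega / np.diag(A)                 # relaxed Jacobi scaling
            def jacobi_step(x):                        # x -> G x + k
                return x + d_inv * (b - A @ x)

            y_prev, y = x0, jacobi_step(x0)            # first (unaccelerated) step
            w = 2.0 / (2.0 - rho**2)                   # omega_2 of the 3-term recurrence
            for _ in range(n_steps - 1):
                y_prev, y = y, w * (jacobi_step(y) - y_prev) + y_prev
                w = 1.0 / (1.0 - 0.25 * rho**2 * w)    # omega_{k+1}
            return y

        # Tiny illustrative run on a diagonally dominant SPD matrix, for which
        # rho = 0.5 is a valid bound on the Jacobi iteration matrix.
        n = 100
        A = (np.diag(np.full(n, 2.0))
             + np.diag(np.full(n - 1, -0.5), 1)
             + np.diag(np.full(n - 1, -0.5), -1))
        b = np.ones(n)
        x = chebyshev_semi_iteration(A, b, np.zeros(n), rho=0.5, omega=1.0, n_steps=20)
        print(np.linalg.norm(b - A @ x))

    For finite element mass matrices, tight eigenvalue bounds for the (relaxed) Jacobi iteration matrix are available a priori, which is what makes this kind of fixed inner iteration practical inside a preconditioner.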

    Optimal solvers for PDE-constrained optimization

    Optimization problems with constraints which require the solution of a partial differential equation arise widely in many areas of the sciences and engineering, in particular in problems of design. The solution of such PDE-constrained optimization problems is usually a major computational task. Here we consider simple problems of this type: distributed control problems in which the 2- and 3-dimensional Poisson problem is the PDE. The large linear systems which result from discretization and which need to be solved are of saddle-point type. We introduce two optimal preconditioners for these systems which lead to convergence of symmetric Krylov subspace iterative methods in a number of iterations which does not increase with the dimension of the discrete problem. These preconditioners are block structured and involve standard multigrid cycles. The optimality of the preconditioned iterative solver is proved theoretically and verified computationally in several test cases. The theoretical proof indicates that these approaches may have much broader applicability for other partial differential equations.
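
    Schematically (hedging the exact ordering, signs, and beta-scaling, which vary between formulations and may not match the paper precisely), the discrete optimality system for distributed Poisson control and a block-diagonal preconditioner of the kind described have the following shape, with M a mass matrix, K the (symmetric) stiffness matrix, and beta the regularization parameter:

        \[
          \begin{pmatrix} 2\beta M & 0 & -M \\ 0 & M & K \\ -M & K & 0 \end{pmatrix}
          \begin{pmatrix} f \\ u \\ \lambda \end{pmatrix}
          =
          \begin{pmatrix} 0 \\ b \\ d \end{pmatrix},
          \qquad
          \mathcal{P} =
          \begin{pmatrix} 2\beta \widehat{M} & 0 & 0 \\ 0 & \widehat{M} & 0 \\ 0 & 0 & \widehat{S} \end{pmatrix},
          \quad \widehat{S} \approx K M^{-1} K .
        \]

    Here a hat denotes a cheap, fixed approximation: a few steps of a simple iteration for the mass-matrix solves and a multigrid cycle for the action of K inside the Schur complement approximation. The point is that each diagonal block can be made spectrally equivalent to the corresponding exact block uniformly in the mesh size, which is what yields iteration counts that do not grow with the problem dimension.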

    Null-space preconditioners for saddle point systems

    The null-space method is a technique that has been used for many years to reduce a saddle point system to a smaller, easier-to-solve, symmetric positive-definite system. This method can be understood as a block factorization of the system. Here we explore the use of preconditioners based on incomplete versions of a particular null-space factorization, and compare their performance with the equivalent Schur-complement based preconditioners. We also describe how to apply the proposed non-symmetric preconditioners within the conjugate gradient method (CG) with a non-standard inner product. This requires an exact solve with the (1,1) block, and the resulting algorithm is applicable in other cases where Bramble-Pasciak CG is used. We verify the efficiency of the newly proposed preconditioners on a number of test cases from a range of applications.
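
    For orientation, the reduction underlying these preconditioners is the null-space method itself: for K = [[A, B^T], [B, 0]], a basis Z for the null space of B turns the problem into a smaller system with the symmetric positive definite matrix Z^T A Z. A dense toy sketch follows (illustrative matrices only; the paper's incomplete factorizations and the resulting preconditioners are not reproduced here).

        import numpy as np
        from scipy.linalg import null_space, lstsq

        # Null-space method for [[A, B^T], [B, 0]] [x; y] = [f; g]  (dense toy sketch).
        rng = np.random.default_rng(1)
        n, m = 60, 20
        Q = rng.standard_normal((n, n))
        A = Q @ Q.T + n * np.eye(n)                  # SPD (1,1) block
        B = rng.standard_normal((m, n))              # full row rank constraint block
        f, g = rng.standard_normal(n), rng.standard_normal(m)

        Z = null_space(B)                            # columns span null(B), n x (n - m)
        x_hat, *_ = lstsq(B, g)                      # any particular solution of B x = g

        # Reduced symmetric positive definite system: Z^T A Z v = Z^T (f - A x_hat).
        v = np.linalg.solve(Z.T @ A @ Z, Z.T @ (f - A @ x_hat))
        x = x_hat + Z @ v
        y, *_ = lstsq(B.T, f - A @ x)                # recover the multiplier

        K = np.block([[A, B.T], [B, np.zeros((m, m))]])
        print(np.linalg.norm(K @ np.concatenate([x, y]) - np.concatenate([f, g])))

    Loosely, the preconditioners proposed in the paper come from carrying out a factorization of this kind only approximately and using the result to accelerate an iterative method on the full saddle point system.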

    Two-level Nyström-Schur preconditioner for sparse symmetric positive definite matrices

    Randomized methods are becoming increasingly popular in numerical linear algebra. However, few attempts have been made to use them in developing preconditioners. Our interest lies in solving large-scale sparse symmetric positive definite linear systems of equations where the system matrix is preordered to doubly bordered block diagonal form (for example, using a nested dissection ordering). We investigate the use of randomized methods to construct high quality preconditioners. In particular, we propose a new and efficient approach that employs Nyström's method for computing low rank approximations to develop robust algebraic two-level preconditioners. Construction of the new preconditioners involves iteratively solving a smaller but denser symmetric positive definite Schur complement system with multiple right-hand sides. Numerical experiments on problems coming from a range of application areas demonstrate that this inner system can be solved cheaply using block conjugate gradients and that using a large convergence tolerance to limit the cost does not adversely affect the quality of the resulting Nyström-Schur two-level preconditioner.
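
    The low-rank building block can be sketched in a few lines: below is a standard stabilized randomized Nyström approximation of a symmetric positive semidefinite matrix (the two-level construction on the Schur complement, and all names and sizes in the sketch, are illustrative and not taken from the paper).

        import numpy as np

        def nystrom_low_rank(S, k, rng):
            """Randomized Nystrom approximation S ~ U diag(lam) U^T of an SPD matrix.

            Basic sketch-and-factor form with a small stabilising shift; the norm
            choice and shift size are illustrative simplifications.
            """
            n = S.shape[0]
            Omega = rng.standard_normal((n, k))              # Gaussian test matrix
            Y = S @ Omega                                    # sketch of S
            nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)   # stabilising shift
            Y_nu = Y + nu * Omega
            G = Omega.T @ Y_nu
            G = 0.5 * (G + G.T)                              # symmetrise before factoring
            C = np.linalg.cholesky(G)                        # G = C C^T
            Bmat = np.linalg.solve(C, Y_nu.T).T              # Y_nu C^{-T}
            U, sig, _ = np.linalg.svd(Bmat, full_matrices=False)
            lam = np.maximum(sig**2 - nu, 0.0)               # remove the shift
            return U, lam

        # Tiny demo on an SPD matrix with a decaying spectrum.
        rng = np.random.default_rng(2)
        n, k = 300, 30
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        S = Q @ np.diag(1.0 / np.arange(1, n + 1) ** 2) @ Q.T
        U, lam = nystrom_low_rank(S, k, rng)
        print(np.linalg.norm(S - (U * lam) @ U.T, 2))        # should be ~ the (k+1)st eigenvalue

    In the paper, an approximation of this kind enters through a Schur complement system that is itself solved only approximately, by block conjugate gradients with a relatively large convergence tolerance.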

    Block-triangular preconditioners for PDE-constrained optimization

    In this paper we investigate the possibility of using a block triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular we focus on a conjugate gradient-type method introduced by Bramble and Pasciak which uses the self-adjointness of the preconditioned system in a non-standard inner product. We show that, when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix, the main drawback of the Bramble-Pasciak method – the appropriate scaling of the preconditioners – is easily overcome. We present an eigenvalue analysis for the block triangular preconditioners which gives convergence bounds in the non-standard inner product and illustrate their competitiveness on a number of computed examples.
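
    The self-adjointness referred to can be checked directly: for K = [[A, B^T], [B, 0]] and a block lower-triangular preconditioner P = [[A0, 0], [B, -S0]], the operator inv(P) K is self-adjoint in the inner product defined by H = blkdiag(A - A0, S0), provided A - A0 is positive definite so that H indeed defines an inner product (this is the scaling requirement on A0 mentioned above). Below is a toy numerical check (illustrative matrices only; A0 is scaled here via an explicit eigenvalue computation purely for demonstration, which is exactly the expensive step one avoids in practice by using, for example, appropriately scaled Chebyshev semi-iteration).

        import numpy as np

        # Toy check that the Bramble-Pasciak-preconditioned operator is self-adjoint
        # in a non-standard inner product.  All matrices are illustrative.
        rng = np.random.default_rng(3)
        n, m = 40, 15
        Q = rng.standard_normal((n, n))
        A = Q @ Q.T + n * np.eye(n)                        # SPD (1,1) block
        B = rng.standard_normal((m, n))                    # full row rank
        K = np.block([[A, B.T], [B, np.zeros((m, m))]])

        A0 = 0.9 * np.min(np.linalg.eigvalsh(A)) * np.eye(n)   # scaled so A - A0 is SPD
        S0 = B @ np.linalg.solve(A, B.T)                        # Schur complement approximation
        S0 = 0.5 * (S0 + S0.T)
        P = np.block([[A0, np.zeros((n, m))], [B, -S0]])        # block lower-triangular preconditioner
        H = np.block([[A - A0, np.zeros((n, m))], [np.zeros((m, n)), S0]])

        T = np.linalg.solve(P, K)                          # inv(P) K
        M = H @ T                                          # should be symmetric
        print(np.linalg.norm(M - M.T) / np.linalg.norm(M))        # ~ machine precision
        print(np.min(np.linalg.eigvalsh(A - A0)) > 0)             # H defines an inner product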
