
    Combination preconditioning of saddle point systems for positive definiteness

    Amongst recent contributions to preconditioning methods for saddle point systems, standard iterative methods in nonstandard inner products have been usefully employed. Krzyzanowski (Numer. Linear Algebra Appl. 2011; 18:123–140) identified a two-parameter family of preconditioners in this context and Stoll and Wathen (SIAM J. Matrix Anal. Appl. 2008; 30:582–608) introduced combination preconditioning, where two preconditioners, self-adjoint with respect to different inner products, can lead to further preconditioners and associated bilinear forms or inner products. Preconditioners that render the preconditioned saddle point matrix nonsymmetric but self-adjoint with respect to a nonstandard inner product always allow a MINRES-type method (W-PMINRES) to be applied in the relevant inner product. If the preconditioned matrix is also positive definite with respect to the inner product, a more efficient CG-like method (W-PCG) can be reliably used. We establish eigenvalue expressions for Krzyzanowski preconditioners and show that for a specific choice of parameters, although the Krzyzanowski preconditioned saddle point matrix is self-adjoint with respect to an inner product, it is never positive definite. We provide explicit expressions for the combination of certain preconditioners and prove the rather counterintuitive result that the combination of two specific preconditioners for which only W-PMINRES can be reliably used leads to a preconditioner for which, for certain parameter choices, W-PCG is reliably applicable. That is, combining two indefinite preconditioners can lead to a positive definite preconditioner. This combination preconditioner outperforms either of the two preconditioners from which it is formed for a number of test problems.
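
    The following is a quick numerical illustration of the notion at work here, a sketch rather than the paper's Krzyzanowski or combination preconditioners, with made-up matrices and sizes: for any symmetric positive definite preconditioner $P$, the preconditioned saddle point matrix $P^{-1}K$ is self-adjoint in the $P$-inner product, yet it inherits the indefiniteness of $K$, so only a MINRES-type method is reliably applicable.

```python
# Sketch: self-adjointness vs. positive definiteness of P^{-1} K in the
# inner product <u, v>_P = u' P v, for an SPD block diagonal preconditioner P.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # SPD (1,1) block
B = rng.standard_normal((m, n))                                # constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])                # saddle point matrix

S = B @ np.linalg.solve(A, B.T)                                # Schur complement
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])   # SPD preconditioner

T = np.linalg.solve(P, K)      # P^{-1} K: nonsymmetric in general
PT = P @ T                     # equals K, hence symmetric: T is P-self-adjoint
print("self-adjoint in <.,.>_P:", np.allclose(PT, PT.T))       # True
eigs = np.linalg.eigvalsh((PT + PT.T) / 2)
print("positive definite in <.,.>_P:", np.all(eigs > 0))       # False: K is indefinite
```

    With this block diagonal choice the eigenvalues of $P^{-1}K$ are real but of both signs, which is precisely the W-PMINRES-but-not-W-PCG situation described above; the paper's result is that combining two such indefinite preconditioners can nevertheless produce a positive definite one.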

    On choice of preconditioner for minimum residual methods for nonsymmetric matrices

    Existing convergence bounds for Krylov subspace methods such as GMRES for nonsymmetric linear systems give little mathematical guidance for the choice of preconditioner. Here, we establish a desirable mathematical property of a preconditioner which guarantees that convergence of a minimum residual method will essentially depend only on the eigenvalues of the preconditioned system, as is true in the symmetric case. Our theory covers only a subset of nonsymmetric coefficient matrices, but computations indicate that it might be more generally applicable.
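
    As a small sketch of the structure involved (an illustration of self-adjointness in a nonstandard inner product, not the paper's precise sufficient condition; the matrices below are arbitrary stand-ins): a nonsymmetric matrix $A$ with $HA$ symmetric for some symmetric positive definite $H$ has real eigenvalues, and this is the kind of setting in which eigenvalue information alone can describe minimum residual convergence.

```python
# Sketch: a nonsymmetric matrix A that is self-adjoint w.r.t. <u, v>_H = u' H v
# (equivalently, H A is symmetric) has a real spectrum, the situation in which
# minimum residual convergence can be governed by eigenvalues alone.
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n = 50
H = rng.standard_normal((n, n)); H = H @ H.T + n * np.eye(n)  # SPD inner product
S = rng.standard_normal((n, n)); S = S @ S.T + np.eye(n)      # symmetric positive definite
A = np.linalg.solve(H, S)               # nonsymmetric, but H @ A = S is symmetric

print("A symmetric:", np.allclose(A, A.T))                    # False
print("H A symmetric:", np.allclose(H @ A, (H @ A).T))        # True
print("spectrum real:", np.allclose(np.linalg.eigvals(A).imag, 0.0))  # True

b = rng.standard_normal(n)
x, info = gmres(A, b, rtol=1e-10, restart=n)   # keyword is tol in older SciPy
print("GMRES converged:", info == 0)
```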

    Chebyshev semi-iteration in Preconditioning

    It is widely believed that Krylov subspace iterative methods are better than Chebyshev semi-iterative methods. When the solution of a linear system with a symmetric and positive definite coefficient matrix is required, the Conjugate Gradient method will compute the optimal approximate solution from the appropriate Krylov subspace; that is, it will implicitly compute the optimal polynomial. Hence a semi-iterative method, which requires eigenvalue bounds and computes an explicit polynomial, must, for just a little less computational work, give an inferior result. In this manuscript we identify a specific situation in the context of preconditioning where the Chebyshev semi-iterative method is the method of choice, since it has properties which make it superior to the Conjugate Gradient method.
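
    Since the argument turns on what distinguishes the two methods, a minimal sketch of Chebyshev semi-iteration may help (the standard three-term recurrence, assuming eigenvalue bounds are available; the tridiagonal test matrix and iteration count are illustrative). The key property is visible in the code: no inner products of iterates are taken, so a fixed number of steps is a fixed polynomial in $A$ and hence a linear operator on the right-hand side, which CG is not.

```python
# Sketch: Chebyshev semi-iteration for an SPD system A x = b, given bounds
# lmin <= lambda(A) <= lmax (three-term recurrence as in standard references).
import numpy as np

def chebyshev_semi_iteration(A, b, lmin, lmax, iters):
    theta = 0.5 * (lmax + lmin)      # centre of the eigenvalue interval
    delta = 0.5 * (lmax - lmin)      # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    x = np.zeros_like(b)
    r = b.copy()                     # residual for the zero initial guess
    d = r / theta
    for _ in range(iters):           # note: no inner products anywhere
        x = x + d
        r = r - A @ d
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        rho = rho_next
    return x

# 1-D Laplacian with known spectrum lambda_k = 2 - 2 cos(k pi / (n + 1))
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lmin = 2 - 2 * np.cos(np.pi / (n + 1))
lmax = 2 - 2 * np.cos(n * np.pi / (n + 1))
b = np.ones(n)
x = chebyshev_semi_iteration(A, b, lmin, lmax, iters=300)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

    Because the map from $b$ to $x$ is linear for a fixed iteration count, such a sweep can itself serve as a preconditioner inside an outer Krylov method; a fixed number of CG steps defines a nonlinear map and cannot be used this way, which is the situation the manuscript identifies.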

    Approximation of the Scattering Amplitude using Nonsymmetric Saddle Point Matrices

    In this thesis we look at iterative methods for solving the primal ($Ax = b$) and dual ($A^T y = g$) systems of linear equations to approximate the scattering amplitude defined by $g^T x = y^T b$. We use a conjugate gradient-like iteration for an unsymmetric saddle point matrix that is constructed so as to have a real positive spectrum. We find that this method is more consistent than known methods for computing the scattering amplitude, such as GLSQR or QMR. Then we use techniques from matrices, moments, and quadrature to compute the scattering amplitude without solving the system directly.
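
    A direct-solve sketch of the identity being approximated (toy data; the thesis computes this iteratively rather than by the dense factorizations used here):

```python
# Sketch: the scattering amplitude couples the primal system A x = b with the
# dual system A' y = g; both g' x and y' b equal g' A^{-1} b.
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsymmetric, well conditioned
b = rng.standard_normal(n)                        # right-hand side of the primal
g = rng.standard_normal(n)                        # right-hand side of the dual

x = np.linalg.solve(A, b)        # primal solve
y = np.linalg.solve(A.T, g)      # dual solve
print(g @ x, y @ b)              # the two values agree: the scattering amplitude
```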

    A Bramble-Pasciak-like method with applications in optimization

    Saddle-point systems arise in many application areas, in fact in any situation where an extremum principle arises with constraints. The Stokes problem describing slow viscous flow of an incompressible fluid is a classic example coming from partial differential equations, and in the area of Optimization such problems are ubiquitous. In this manuscript we show how new approaches for the solution of saddle-point systems arising in Optimization can be derived from the Bramble-Pasciak Conjugate Gradient approach widely used in PDEs and from more recent generalizations thereof. In particular, we derive a class of new solution methods based on the use of Preconditioned Conjugate Gradients in non-standard inner products and demonstrate how these can be understood through more standard machinery. We show connections to Constraint Preconditioning and give the results of numerical computations on a number of standard Optimization test examples.
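
    A minimal sketch of the classical Bramble-Pasciak construction that this builds on may be useful (the standard transformation, not the manuscript's new methods; the scaled Jacobi choice of $A_0$ below is an assumption for the demo). The point is that the transformed saddle point matrix is both self-adjoint and positive definite in the inner product induced by $H = \mathrm{blkdiag}(A - A_0, I)$, so a CG iteration in that inner product is legitimate.

```python
# Sketch: Bramble-Pasciak transformation of K = [[A, B'], [B, 0]]. With an SPD
# approximation A0 satisfying A0 < A, the matrix P^{-1} K is self-adjoint and
# positive definite w.r.t. H = blkdiag(A - A0, I).
import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

# Scaled Jacobi A0 = gamma * diag(A), with gamma chosen so that A - A0 is SPD
D = np.diag(np.diag(A))
gmin = np.min(np.real(np.linalg.eigvals(np.linalg.solve(D, A))))
A0 = 0.9 * gmin * D

P = np.block([[A0, np.zeros((n, m))], [B, -np.eye(m)]])
H = np.block([[A - A0, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])

T = np.linalg.solve(P, K)        # the Bramble-Pasciak transformed matrix
HT = H @ T
print("self-adjoint in <.,.>_H:", np.allclose(HT, HT.T))                      # True
print("positive definite:", np.all(np.linalg.eigvalsh((HT + HT.T) / 2) > 0)) # True
```

    In practice the scaling of $A_0$ is estimated cheaply rather than computed from an eigenvalue solve as in this demo; getting it wrong, so that $A - A_0$ is indefinite, is exactly what breaks the existence requirements of the CG iteration.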

    A Bramble-Pasciak conjugate gradient method for discrete Stokes equations with random viscosity

    We study the iterative solution of linear systems of equations arising from stochastic Galerkin finite element discretizations of saddle point problems. We focus on the Stokes model with random data parametrized by uniformly distributed random variables and discuss well-posedness of the variational formulations. We introduce a Bramble-Pasciak conjugate gradient method as a linear solver. It builds on a non-standard inner product associated with a block triangular preconditioner. The block triangular structure enables more sophisticated preconditioners than the block diagonal structure usually applied in MINRES methods. We show how the existence requirements of a conjugate gradient method can be met in our setting. We analyze the performance of the solvers depending on relevant physical and numerical parameters by means of eigenvalue estimates. For this purpose, we derive bounds for the eigenvalues of the relevant preconditioned sub-matrices. We illustrate our findings using the flow in a driven cavity as a numerical test case, where the viscosity is given by a truncated Karhunen-Loève expansion of a random field. In this example, a Bramble-Pasciak conjugate gradient method with block triangular preconditioner outperforms a MINRES method with block diagonal preconditioner in terms of iteration numbers.
    Comment: 19 pages, 1 figure, submitted to SIAM JU
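
    As an aside on one ingredient, the following is a sketch of sampling a viscosity from a truncated Karhunen-Loève expansion (illustrative only: the exponential covariance, correlation length, mean, and grid below are assumptions, not the paper's data):

```python
# Sketch: one realization of a random viscosity field from a truncated
# Karhunen-Loeve expansion, with uniformly distributed coefficients.
import numpy as np

npts, M = 200, 10                       # grid points, KL modes retained
x = np.linspace(0.0, 1.0, npts)
sigma2, ell = 0.04, 0.5                 # variance and correlation length (assumed)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # covariance matrix

lam, phi = np.linalg.eigh(C)            # discrete KL: eigenpairs of C
lam, phi = lam[::-1][:M], phi[:, ::-1][:, :M]   # keep the M largest modes

rng = np.random.default_rng(4)
xi = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=M)  # uniform, unit variance
mu = 1.0                                               # mean viscosity (assumed)
nu = mu + phi @ (np.sqrt(lam) * xi)                    # truncated KL realization
print("viscosity range:", nu.min(), nu.max())          # stays positive here
```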

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure and an efficient preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
    Comment: 22 page
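
    A sketch of the kind of algebra that makes such updates cheap (illustrative, not the paper's updating procedure): if the (1,1) block changes from a seed $G_0$ to $G = G_0 + VV^T$ with $V$ of low rank, the Schur complement $AG^{-1}A^T$ can be corrected by a matrix of the same low rank via the Sherman-Morrison-Woodbury identity, instead of being recomputed from scratch.

```python
# Sketch: low-rank update of a Schur complement S = A G^{-1} A' when the
# (1,1) block changes by a rank-k term, G = G0 + V V' (Sherman-Morrison-Woodbury).
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 40, 15, 3
G0 = rng.standard_normal((n, n)); G0 = G0 @ G0.T + n * np.eye(n)  # seed (1,1) block
A = rng.standard_normal((m, n))                                   # constraint matrix
V = rng.standard_normal((n, k))                                   # rank-k change

S0 = A @ np.linalg.solve(G0, A.T)        # seed Schur complement

# Woodbury: G^{-1} = G0^{-1} - G0^{-1} V (I + V' G0^{-1} V)^{-1} V' G0^{-1}
Y = np.linalg.solve(G0, V)               # G0^{-1} V, k solves with the seed matrix
W = A @ Y
S = S0 - W @ np.linalg.solve(np.eye(k) + V.T @ Y, W.T)   # rank-k correction

S_direct = A @ np.linalg.solve(G0 + V @ V.T, A.T)
print("update matches direct recomputation:", np.allclose(S, S_direct))
```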

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$, instead of $O(1/k)$, where $k$ is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT)
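
    The mechanism is easy to demonstrate in miniature (a generic contractive linear iteration below stands in for the ADMM map; the sizes and spectral radius are made up): because a quadratic objective makes each ADMM step an affine map $z \mapsto Mz + c$, the fixed point solves $(I - M)z = c$, and GMRES applied to that system, with one ADMM-style update per matrix-vector product, converges at least as fast as the plain iteration.

```python
# Sketch: accelerating a linear fixed-point iteration z <- M z + c by applying
# GMRES to the equivalent linear system (I - M) z = c.
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(6)
n = 100
Q = rng.standard_normal((n, n))
M = 0.95 * Q / np.linalg.norm(Q, 2)          # stand-in iteration matrix, ||M|| < 1
c = rng.standard_normal(n)
z_star = np.linalg.solve(np.eye(n) - M, c)   # exact fixed point

z = np.zeros(n)
for _ in range(50):                          # plain fixed-point (ADMM-like) iteration
    z = M @ z + c
print("fixed-point error:", np.linalg.norm(z - z_star))

op = LinearOperator((n, n), matvec=lambda v: v - M @ v, dtype=float)  # v -> (I - M) v
z_g, info = gmres(op, c, rtol=1e-10, restart=50, maxiter=50)  # rtol is tol in older SciPy
print("GMRES error:", np.linalg.norm(z_g - z_star), "info:", info)
```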