    Preconditioning and convergence in the right norm

    The convergence of numerical approximations to the solutions of differential equations is a key aspect of Numerical Analysis and Scientific Computing. Iterative solution methods for the systems of linear(ised) equations which often result are also underpinned by analyses of convergence. In the function space setting, it is widely appreciated that there are appropriate ways in which to assess convergence, and it is well known that different norms are not equivalent. In the finite dimensional linear algebra setting, however, all norms are equivalent, and little attention is often paid to the norms used. In this paper, we highlight this consideration in the context of preconditioning for minimum residual methods (MINRES and GMRES/GCR/ORTHOMIN) and argue that even in the linear algebra setting there is a ‘right’ norm in which to consider convergence: stopping an iteration which is rapidly converging in an irrelevant or highly scaled norm at some tolerance level may still give a poor answer.
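
    As a concrete illustration of the stopping issue described above, the following sketch (my own construction, not code from the paper) runs SciPy's MINRES with a Jacobi preconditioner on a deliberately badly scaled symmetric positive definite system and records both the Euclidean residual norm and the preconditioner-weighted norm that preconditioned MINRES actually minimises; the matrix, the 1e6 scaling factor, and the iteration budget are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
# SPD model problem: a 1D Laplacian made badly scaled by a diagonal scaling D A D.
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
D = sp.diags(np.r_[np.ones(n // 2), 1e6 * np.ones(n - n // 2)])
A = (D @ T @ D).tocsr()
b = np.ones(n)

# Jacobi preconditioner, supplied (as SciPy expects) as an approximate inverse.
Minv = sp.diags(1.0 / A.diagonal())

true_res, prec_res = [], []
def monitor(xk):
    r = b - A @ xk
    true_res.append(np.linalg.norm(r))          # Euclidean norm of the residual
    prec_res.append(np.sqrt(r @ (Minv @ r)))    # the norm preconditioned MINRES minimises

x, info = spla.minres(A, b, M=Minv, callback=monitor, maxiter=300)
print(f"||r||_2 = {true_res[-1]:.2e}   ||r||_(P^-1) = {prec_res[-1]:.2e}")
```

    On an example like this the two norms can disagree by orders of magnitude, which is exactly the scenario in which a tolerance-based stopping test in the "wrong" norm can return a poor answer.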

    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large scale computation. In particular we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners. The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
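
    The block preconditioners surveyed here are straightforward to prototype. The sketch below (my own stand-in problem, not from the paper) assembles a symmetric saddle point matrix [[A, B^T], [B, 0]], forms the ideal block diagonal preconditioner diag(A, S) with the exact Schur complement S = B A^{-1} B^T, and applies it inside MINRES; the random sparse B, the dense Schur complement, and the tiny regularising shift are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 80
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # SPD (1,1) block
B = sp.random(m, n, density=0.05, format="csc", random_state=0)
K = sp.bmat([[A, B.T], [B, None]], format="csc")   # symmetric indefinite saddle point matrix
rhs = np.ones(n + m)

solve_A = spla.factorized(A)                       # sparse LU of the (1,1) block
X = spla.spsolve(A, B.T.tocsc())                   # A^{-1} B^T
S = (B @ X).toarray() + 1e-12 * np.eye(m)          # exact Schur complement
# (the tiny shift guards against rank deficiency of the random B)

def apply_prec(v):
    """Action of diag(A, S)^{-1}, the ideal block diagonal preconditioner."""
    y = np.empty_like(v)
    y[:n] = solve_A(v[:n])
    y[n:] = np.linalg.solve(S, v[n:])
    return y

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, rhs, M=M)
print("MINRES info:", info, " residual:", np.linalg.norm(rhs - K @ x))
```

    With the exact Schur complement the preconditioned matrix has only three distinct eigenvalues, so MINRES converges in at most three iterations in exact arithmetic; practical variants replace both block solves with cheap spectrally equivalent approximations.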

    On choice of preconditioner for minimum residual methods for nonsymmetric matrices

    Existing convergence bounds for Krylov subspace methods such as GMRES for nonsymmetric linear systems give little mathematical guidance for the choice of preconditioner. Here, we establish a desirable mathematical property of a preconditioner which guarantees that convergence of a minimum residual method will essentially depend only on the eigenvalues of the preconditioned system, as is true in the symmetric case. Our theory covers only a subset of nonsymmetric coefficient matrices, but computations indicate that it might be more generally applicable.
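
    As a small mechanical illustration (my own example, not the construction analysed in the paper), the sketch below applies GMRES to a nonsymmetric tridiagonal matrix with and without a symmetric preconditioner supplied as a LinearOperator, and inspects the eigenvalues of the preconditioned matrix; the specific stencils are assumptions chosen only to show how a candidate preconditioner is wired in and how one examines the spectrum that such a theory is about.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 300
# Nonsymmetric tridiagonal model problem (convection-diffusion flavour).
A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Candidate preconditioner: the symmetric part of the stencil, (A + A^T)/2.
P = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = spla.LinearOperator((n, n), matvec=spla.factorized(P))  # applies P^{-1}

def gmres_iters(**kwargs):
    its = 0
    def cb(pr_norm):            # called once per inner iteration
        nonlocal its
        its += 1
    spla.gmres(A, b, callback=cb, callback_type="pr_norm", **kwargs)
    return its

print("GMRES iterations, unpreconditioned:", gmres_iters())
print("GMRES iterations, preconditioned:  ", gmres_iters(M=M))

# The eigenvalues of the preconditioned matrix, which the theory concerns.
evals = np.linalg.eigvals(np.linalg.solve(P.toarray(), A.toarray()))
print(f"preconditioned eigenvalues: Re in [{evals.real.min():.3f}, "
      f"{evals.real.max():.3f}], max |Im| = {np.abs(evals.imag).max():.3f}")
```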

    A Bramble-Pasciak conjugate gradient method for discrete Stokes equations with random viscosity

    We study the iterative solution of linear systems of equations arising from stochastic Galerkin finite element discretizations of saddle point problems. We focus on the Stokes model with random data parametrized by uniformly distributed random variables and discuss well-posedness of the variational formulations. We introduce a Bramble-Pasciak conjugate gradient method as a linear solver. It builds on a non-standard inner product associated with a block triangular preconditioner. The block triangular structure enables more sophisticated preconditioners than the block diagonal structure usually applied in MINRES methods. We show how the existence requirements of a conjugate gradient method can be met in our setting. We analyze the performance of the solvers depending on relevant physical and numerical parameters by means of eigenvalue estimates. For this purpose, we derive bounds for the eigenvalues of the relevant preconditioned sub-matrices. We illustrate our findings using the flow in a driven cavity as a numerical test case, where the viscosity is given by a truncated Karhunen-Loève expansion of a random field. In this example, a Bramble-Pasciak conjugate gradient method with block triangular preconditioner outperforms a MINRES method with block diagonal preconditioner in terms of iteration numbers.
    Comment: 19 pages, 1 figure, submitted to SIAM JU
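
    The enabling fact behind a Bramble-Pasciak solver is that the conjugate gradient recurrences remain valid in any inner product in which the operator is self-adjoint and positive definite. The sketch below is a generic illustration of that fact (my own, not the paper's solver): CG written against a user-supplied inner product <x, y>_H = x^T H y, sanity-checked with H = I on a random SPD matrix. Bramble and Pasciak choose the block triangular preconditioner and the matrix H so that the preconditioned saddle point operator has exactly this self-adjointness property.

```python
import numpy as np

def cg_in_H(apply_op, b, apply_H, maxiter=500, tol=1e-10):
    """CG for apply_op(x) = b, valid when the operator is self-adjoint and
    positive definite in the inner product <x, y>_H = x^T H y."""
    x = np.zeros_like(b)
    r = b - apply_op(x)
    p = r.copy()
    rho = r @ apply_H(r)
    for _ in range(maxiter):
        q = apply_op(p)
        alpha = rho / (p @ apply_H(q))      # step length in the H-inner product
        x += alpha * p
        r -= alpha * q
        rho_new = r @ apply_H(r)
        if np.sqrt(abs(rho_new)) < tol:
            break
        p = r + (rho_new / rho) * p         # H-conjugate search direction update
        rho = rho_new
    return x

# Sanity check with H = I (plain CG) on a random well-conditioned SPD matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = cg_in_H(lambda v: A @ v, b, lambda v: v)
print("residual:", np.linalg.norm(b - A @ x))
```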

    A framework for deflated and augmented Krylov subspace methods

    We consider deflation and augmentation techniques for accelerating the convergence of Krylov subspace methods for the solution of nonsingular linear algebraic systems. Despite some formal similarity, the two techniques are conceptually different from preconditioning. Deflation (in the sense the term is used here) "removes" certain parts from the operator making it singular, while augmentation adds a subspace to the Krylov subspace (often the one that is generated by the singular operator); in contrast, preconditioning changes the spectrum of the operator without making it singular. Deflation and augmentation have been used in a variety of methods and settings. Typically, deflation is combined with augmentation to compensate for the singularity of the operator, but both techniques can be applied separately. We introduce a framework of Krylov subspace methods that satisfy a Galerkin condition. It includes the families of orthogonal residual (OR) and minimal residual (MR) methods. We show that in this framework augmentation can be achieved either explicitly or, equivalently, implicitly by projecting the residuals appropriately and correcting the approximate solutions in a final step. We study conditions for a breakdown of the deflated methods, and we show several possibilities to avoid such breakdowns for the deflated MINRES method. Numerical experiments illustrate properties of different variants of deflated MINRES analyzed in this paper.
    Comment: 24 pages, 3 figures
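
    For a symmetric system A x = b, one common deflation construction of the kind formalised above can be written in a few lines: with deflation space Z and coarse matrix E = Z^T A Z, form the projector P = I - A Z E^{-1} Z^T, solve the singular but consistent deflated system P A x̂ = P b, and recover x = Z E^{-1} Z^T b + (I - Z E^{-1} Z^T A) x̂. The sketch below is my own minimal version of that variant; the 1D Laplacian and the choice of Z as eigenvectors of the smallest eigenvalues are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 400, 5
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Deflation space: eigenvectors of the k smallest eigenvalues (a common choice).
_, Z = spla.eigsh(A, k=k, sigma=0)

E = Z.T @ (A @ Z)                          # coarse (k x k) matrix Z^T A Z
Einv = np.linalg.inv(E)
Qv = lambda v: Z @ (Einv @ (Z.T @ v))      # Q = Z E^{-1} Z^T
Pv = lambda v: v - A @ Qv(v)               # projector P = I - A Q

# Deflated operator P A = A - A Q A is symmetric but singular; the system
# P A xhat = P b is consistent, so MINRES still applies.
PA = spla.LinearOperator((n, n), matvec=lambda v: Pv(A @ v))
xhat, info = spla.minres(PA, Pv(b))
x = Qv(b) + (xhat - Qv(A @ xhat))          # x = Q b + P^T xhat, with P^T = I - Q A
print("MINRES info:", info, " residual:", np.linalg.norm(b - A @ x))
```

    Note that P A is singular by construction, which is precisely the setting in which the breakdown questions mentioned in the abstract arise.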

    A domain decomposing parallel sparse linear system solver

    The solution of large sparse linear systems is often the most time-consuming part of many science and engineering applications. Computational fluid dynamics, circuit simulation, power network analysis, and material science are just a few examples of the application areas in which large sparse linear systems need to be solved effectively. In this paper we introduce a new parallel hybrid sparse linear system solver for distributed memory architectures that contains both direct and iterative components. We show that by using our solver one can alleviate the drawbacks of direct and iterative solvers, achieving better scalability than with direct solvers and more robustness than with classical preconditioned iterative solvers. Comparisons to well-known direct and iterative solvers on a parallel architecture are provided.
    Comment: To appear in Journal of Computational and Applied Mathematics
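
    The direct/iterative split described above can be illustrated in a few lines on a single machine: eliminate subdomain interiors with a sparse direct factorization and solve the much smaller interface (Schur complement) system iteratively. In the sketch below (my own toy, not the authors' solver) a 1D Laplacian is split into three subdomains by two separator nodes; a real solver of this kind performs the interior factorizations concurrently on distributed memory.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 401
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

sep = np.array([n // 3, 2 * n // 3])          # interface (separator) nodes
interior = np.setdiff1d(np.arange(n), sep)

Aii = A[np.ix_(interior, interior)].tocsc()   # block diagonal over the 3 subdomains
Ais = A[np.ix_(interior, sep)].toarray()
Asi = A[np.ix_(sep, interior)].toarray()
Ass = A[np.ix_(sep, sep)].toarray()

solve_ii = spla.factorized(Aii)               # direct component: sparse LU of interiors

def schur_mv(y):
    """Matrix-free action of S = Ass - Asi Aii^{-1} Ais on the interface."""
    return Ass @ y - Asi @ solve_ii(Ais @ y)

S = spla.LinearOperator((len(sep), len(sep)), matvec=schur_mv)
rhs_s = b[sep] - Asi @ solve_ii(b[interior])
xs, info = spla.gmres(S, rhs_s)               # iterative component on the interface
xi = solve_ii(b[interior] - Ais @ xs)         # back-substitute for the interiors

x = np.empty(n); x[interior] = xi; x[sep] = xs
print("GMRES info:", info, " residual:", np.linalg.norm(b - A @ x))
```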