
    Convergence on Gauss-Seidel iterative methods for linear systems with general H-matrices

    It is well known that, as a famous type of iterative method in numerical linear algebra, Gauss-Seidel iterative methods are convergent for linear systems with strictly or irreducibly diagonally dominant matrices, invertible H-matrices (generalized strictly diagonally dominant matrices) and Hermitian positive definite matrices. However, the same is not necessarily true for linear systems with nonstrictly diagonally dominant matrices and general H-matrices. This paper first proposes some necessary and sufficient conditions for convergence of Gauss-Seidel iterative methods, establishing several new theoretical results on linear systems with nonstrictly diagonally dominant matrices and general H-matrices. Then, convergence results on preconditioned Gauss-Seidel (PGS) iterative methods for general H-matrices are presented. Finally, some numerical examples are given to demonstrate the results obtained in this paper.
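    The convergence question above hinges on the Gauss-Seidel splitting A = D - L - U and on whether the spectral radius of the iteration matrix (D - L)^{-1}U is below one. As a minimal NumPy sketch (not the paper's necessary-and-sufficient conditions, and with an illustrative strictly diagonally dominant test matrix chosen here), the following runs plain Gauss-Seidel and evaluates that spectral-radius criterion:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Plain Gauss-Seidel iteration (D - L) x_{k+1} = U x_k + b
    for the splitting A = D - L - U (D diagonal, -L strictly lower, -U strictly upper)."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

def gs_spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix (D - L)^{-1} U;
    the iteration converges for every right-hand side iff this is < 1."""
    M = np.tril(A)          # D - L
    N = M - A               # U
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))

# Strictly diagonally dominant example: convergence is guaranteed.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = gauss_seidel(A, b)
print("rho =", gs_spectral_radius(A), "iterations =", iters,
      "residual =", np.linalg.norm(A @ x - b))
```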

    Hierarchical Schur complement preconditioner for the stochastic Galerkin finite element methods

    Use of the stochastic Galerkin finite element methods leads to large systems of linear equations obtained by the discretization of tensor product solution spaces along their spatial and stochastic dimensions. These systems are typically solved iteratively by a Krylov subspace method. We propose a preconditioner which takes advantage of the recursive hierarchy in the structure of the global matrices. In particular, the matrices possess a recursive hierarchical two-by-two structure, with one of the submatrices block diagonal. Each of the diagonal blocks in this submatrix is closely related to the deterministic mean-value problem, and the action of its inverse is approximated in the implementation by inner loops of Krylov iterations. Thus our hierarchical Schur complement preconditioner combines, on each level in the approximation of the hierarchical structure of the global matrix, the idea of the Schur complement with loops for a number of mutually independent inner Krylov iterations, and several matrix-vector multiplications for the off-diagonal blocks. Neither the global matrix nor the matrix of the preconditioner needs to be formed explicitly. The ingredients include only a number of stiffness matrices from the truncated Karhunen-Loève expansion and a good preconditioner for the mean-value deterministic problem. We provide a condition number bound for a model elliptic problem, and the performance of the method is illustrated by numerical experiments.
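    As a rough one-level sketch of the Schur-complement idea (not the paper's recursive hierarchy or its stochastic Galerkin matrices; the block names A, B, C, the random test system and all tolerances are illustrative assumptions), the SciPy snippet below wraps a block-triangular sweep, with inner CG loops approximating the diagonal-block and Schur-complement solves, as a matrix-free preconditioner for an outer GMRES iteration:

```python
import numpy as np
from scipy.sparse import random as sprandom, identity, bmat
from scipy.sparse.linalg import LinearOperator, gmres, cg

# Small 2x2 block system [[A, B], [B^T, C]] standing in for one level of the
# hierarchy (symmetric, diagonally dominant, hence safe for inner CG solves).
rng = np.random.default_rng(0)
n = 60
A = (sprandom(n, n, density=0.05, random_state=0) + 10 * identity(n)).tocsr()
A = (A + A.T) * 0.5
C = (sprandom(n, n, density=0.05, random_state=1) + 10 * identity(n)).tocsr()
C = (C + C.T) * 0.5
B = sprandom(n, n, density=0.02, random_state=2).tocsr()
K = bmat([[A, B], [B.T, C]]).tocsr()
b = rng.standard_normal(2 * n)

def apply_preconditioner(r):
    """Block lower-triangular Schur-complement sweep: solve with A via a short
    inner CG loop, correct the second block, then solve with a cheap
    approximation of the Schur complement (here simply C)."""
    r1, r2 = r[:n], r[n:]
    y1, _ = cg(A, r1, maxiter=20)              # inner Krylov loop, diagonal block
    z2, _ = cg(C, r2 - B.T @ y1, maxiter=20)   # approximate Schur-complement solve
    return np.concatenate([y1, z2])

M = LinearOperator((2 * n, 2 * n), matvec=apply_preconditioner)
x, info = gmres(K, b, M=M, restart=50)
print("GMRES info:", info, "(0 = converged); residual =", np.linalg.norm(K @ x - b))
```

    Neither K nor the preconditioner matrix is formed explicitly inside the preconditioner application; only block matrix-vector products and inner Krylov solves are needed, which mirrors the matrix-free spirit described in the abstract.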

    Preconditioned conjugate-gradient methods for low-speed flow calculations

    An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two-dimensional, compressible Navier-Stokes equations is integrated in time using discrete time steps. The Navier-Stokes equations are cast in an implicit, upwind, finite-volume, flux-split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time integration. Preconditioning techniques are used with the new solver to enhance its stability and convergence rate and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the Lower-Upper Successive Symmetric Over-Relaxation iterative scheme is more efficient than a preconditioner based on incomplete LU factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional Line Gauss-Seidel Relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver when compared with the LGSR solver.
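    For a flavor of the preconditioned Krylov pattern described above (not the paper's flow solver or its LU-SSOR preconditioner; the convection-diffusion test matrix and all solver settings are stand-in assumptions), the sketch below wraps an incomplete LU factorization, one of the preconditioner classes the study compares, around SciPy's GMRES for a nonsymmetric sparse system:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Stand-in nonsymmetric sparse system: a 2-D convection-diffusion stencil on an
# m x m grid, a rough proxy for one linearized implicit step (not the
# Navier-Stokes Jacobian from the paper).
m = 40
h = 1.0 / (m + 1)
diff = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2   # 1-D diffusion
conv = diags([-1.0, 1.0], [-1, 1], shape=(m, m)) / (2 * h)         # 1-D central-difference convection
I = identity(m)
A = (kron(I, diff) + kron(diff, I) + 5.0 * kron(I, conv)).tocsc()
b = np.ones(m * m)

# Incomplete LU factorization wrapped as a LinearOperator so GMRES can apply
# the preconditioner through triangular solves.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = gmres(A, b, M=M, restart=30, maxiter=500)
print("GMRES info:", info, "(0 = converged); residual =", np.linalg.norm(A @ x - b))
```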