    The effect of non-optimal bases on the convergence of Krylov subspace methods

    There are many examples where non-orthogonality of a basis for Krylov subspace methods arises naturally. These methods usually require less storage or computational effort per iteration than methods using an orthonormal basis (optimal methods), but their convergence may be delayed. Truncated Krylov subspace methods and other examples of non-optimal methods have been shown to converge in many situations, often with small delay, but not in others. We explore the question of what effect a non-optimal basis has. We prove certain identities for the relative residual gap, i.e., the relative difference between the residuals of the optimal and non-optimal methods. These identities and related bounds provide insight into when the delay is small and convergence is achieved. Further understanding is gained by using a recently developed general theory of superlinear convergence. Our analysis confirms the observed fact that in exact arithmetic the orthogonality of the basis is not important; what matters is maintaining linear independence. Numerical examples illustrate our theoretical results.
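
    As a minimal illustrative sketch (not the paper's analysis), the snippet below records the residual history of full GMRES next to that of restarted GMRES, which serves here as a simple stand-in for a non-optimal Krylov method whose convergence may be delayed. The test matrix, its size, and the restart length are arbitrary choices.

        import numpy as np
        from scipy.sparse import identity, random as sprandom
        from scipy.sparse.linalg import gmres

        rng = np.random.default_rng(0)
        n = 500
        # Mildly nonsymmetric, well-conditioned test matrix (an arbitrary choice).
        A = identity(n) + 0.3 * sprandom(n, n, density=0.01, random_state=0)
        b = rng.standard_normal(n)

        def residual_history(restart, maxiter):
            # Record the (preconditioned) residual norm at every inner iteration.
            hist = []
            gmres(A, b, restart=restart, maxiter=maxiter,
                  callback=hist.append, callback_type='pr_norm')
            return hist

        full = residual_history(restart=n, maxiter=1)      # optimal method: full GMRES
        trunc = residual_history(restart=10, maxiter=50)   # non-optimal stand-in: GMRES(10)
        print("full GMRES iterations:", len(full))
        print("GMRES(10) iterations: ", len(trunc), "  delay:", len(trunc) - len(full))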

    Numerically Stable Recurrence Relations for the Communication Hiding Pipelined Conjugate Gradient Method

    Pipelined Krylov subspace methods (also referred to as communication-hiding methods) have been proposed in the literature as a scalable alternative to classic Krylov subspace algorithms for iteratively computing the solution to a large linear system in parallel. For symmetric and positive definite system matrices the pipelined Conjugate Gradient method outperforms its classic Conjugate Gradient counterpart on large-scale distributed-memory hardware by overlapping global communication with essential computations like the matrix-vector product, thus hiding global communication. A well-known drawback of the pipelining technique is the (possibly significant) loss of numerical stability. In this work a numerically stable variant of the pipelined Conjugate Gradient algorithm is presented that avoids the propagation of local rounding errors in the finite-precision recurrence relations that construct the Krylov subspace basis. The multi-term recurrence relation for the basis vectors is replaced by two-term recurrences, improving stability without increasing the overall computational cost of the algorithm. The proposed modification ensures that the pipelined Conjugate Gradient method is able to attain a highly accurate solution independently of the pipeline length. Numerical experiments demonstrate a combination of excellent parallel performance and improved maximal attainable accuracy for the new pipelined Conjugate Gradient algorithm. This work thus resolves one of the major practical restrictions for the usability of pipelined Krylov subspace methods.
    Comment: 15 pages, 5 figures, 1 table, 2 algorithms
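
    For context, the snippet below is a serial NumPy sketch of the standard (unpreconditioned) pipelined Conjugate Gradient recurrences in the style of Ghysels and Vanroose; it is not the stabilized variant proposed here, and the comments only indicate where, in a parallel code, the single global reduction would be overlapped with the matrix-vector product. The SPD test matrix is an arbitrary example.

        import numpy as np

        def pipelined_cg(A, b, maxiter=200, rtol=1e-8):
            # Serial sketch of unpreconditioned pipelined CG; A is assumed SPD.
            n = b.shape[0]
            x = np.zeros(n)
            r = b - A @ x
            w = A @ r
            z = np.zeros(n); s = np.zeros(n); p = np.zeros(n)
            gamma_old = alpha_old = 1.0
            bnorm = np.linalg.norm(b)
            for i in range(maxiter):
                # In parallel, these two dot products form ONE global reduction,
                # overlapped with the matrix-vector product q = A @ w below.
                gamma = r @ r
                delta = w @ r
                if np.sqrt(gamma) <= rtol * bnorm:
                    break
                q = A @ w
                if i > 0:
                    beta = gamma / gamma_old
                    alpha = gamma / (delta - beta * gamma / alpha_old)
                else:
                    beta, alpha = 0.0, gamma / delta
                # All remaining updates are simple two-term (AXPY-type) recurrences.
                z = q + beta * z
                s = w + beta * s
                p = r + beta * p
                x = x + alpha * p
                r = r - alpha * s
                w = w - alpha * z
                gamma_old, alpha_old = gamma, alpha
            return x

        # Hypothetical usage on a small SPD test matrix.
        rng = np.random.default_rng(1)
        M = rng.standard_normal((100, 100))
        A = M @ M.T + 100 * np.eye(100)          # SPD by construction
        b = rng.standard_normal(100)
        x = pipelined_cg(A, b)
        print("true residual norm:", np.linalg.norm(b - A @ x))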

    A nested Krylov subspace method to compute the sign function of large complex matrices

    We present an acceleration of the well-established Krylov-Ritz methods to compute the sign function of large complex matrices, as needed in lattice QCD simulations involving the overlap Dirac operator at both zero and nonzero baryon density. Krylov-Ritz methods approximate the sign function using a projection on a Krylov subspace. To achieve high accuracy this subspace must be taken quite large, which makes the method too costly. The new idea is to make a further projection on an even smaller, nested Krylov subspace. If, in addition, an intermediate preconditioning step is applied, this projection can be performed without affecting the accuracy of the approximation, and a substantial gain in efficiency is achieved for both Hermitian and non-Hermitian matrices. The numerical efficiency of the method is demonstrated on lattice configurations of sizes ranging from 4^4 to 10^4, and the new results are compared with those obtained with rational approximation methods.
    Comment: 17 pages, 12 figures, minor corrections, extended analysis of the preconditioning step
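
    As background, the following NumPy/SciPy snippet sketches the plain (non-nested) Krylov-Ritz approximation of sign(A) b through an Arnoldi projection; the nested projection and the intermediate preconditioning step introduced in the paper are not shown, and the small test matrix is an arbitrary choice with eigenvalues well away from the imaginary axis.

        import numpy as np
        from scipy.linalg import signm

        def arnoldi(A, b, m):
            # Arnoldi process: V has orthonormal columns, H is the projected matrix.
            n = b.shape[0]
            V = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            V[:, 0] = b / np.linalg.norm(b)
            for j in range(m):
                w = A @ V[:, j]
                for i in range(j + 1):
                    H[i, j] = V[:, i] @ w
                    w = w - H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-14:          # lucky breakdown: subspace is invariant
                    return V[:, :j + 1], H[:j + 1, :j + 1]
                V[:, j + 1] = w / H[j + 1, j]
            return V[:, :m], H[:m, :m]

        def krylov_ritz_sign(A, b, m):
            # Approximate sign(A) @ b from an m-dimensional Krylov subspace.
            V, H = arnoldi(A, b, m)
            e1 = np.zeros(H.shape[0]); e1[0] = 1.0
            return np.linalg.norm(b) * (V @ (signm(H) @ e1))

        # Hypothetical test: eigenvalues cluster near +/-2, away from the imaginary axis.
        rng = np.random.default_rng(2)
        n = 200
        A = np.diag(2.0 * np.sign(rng.standard_normal(n))) \
            + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
        b = rng.standard_normal(n)
        exact = signm(A) @ b
        approx = krylov_ritz_sign(A, b, m=60)
        print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))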

    Linear Algebraic Calculation of the Green's Function for Large-Scale Electronic Structure Theory

    A linear algebraic method named the shifted conjugate-orthogonal conjugate-gradient method is introduced for large-scale electronic structure calculations. The method gives an iterative algorithm for the Green's function and the density matrix without calculating eigenstates. The problem is reduced to independent linear equations at many energy points, while the calculation is actually carried out only for a single energy point. The method is robust against round-off error and the calculation can reach machine accuracy. By monitoring residual vectors, the accuracy can be controlled microscopically (independently for each element of the Green's function) and dynamically (at each step of a dynamical simulation). The method is applied to both semiconductors and metals.
    Comment: 10 pages, 9 figures. To appear in Phys. Rev. B. A PDF file with better graphics is available at http://fujimac.t.u-tokyo.ac.jp/lses
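
    The snippet below only illustrates the shift-invariance that such shifted solvers exploit: one Krylov (here Lanczos) basis built from H and the chosen starting vector serves every energy point, so only a small tridiagonal system depends on the energy. It is not the shifted conjugate-orthogonal conjugate-gradient algorithm itself, and the tight-binding chain used as a Hamiltonian is an arbitrary test case.

        import numpy as np

        def lanczos(H, v, m):
            # Lanczos for a real symmetric H; no breakdown handling (sketch only).
            alphas, betas = [], []
            q_prev = np.zeros_like(v)
            q = v / np.linalg.norm(v)
            beta = 0.0
            for _ in range(m):
                w = H @ q - beta * q_prev
                alpha = q @ w
                w = w - alpha * q
                beta = np.linalg.norm(w)
                alphas.append(alpha); betas.append(beta)
                q_prev, q = q, w / beta
            return np.array(alphas), np.array(betas[:-1])

        def green_diagonal(H, j, energies, eta, m=120):
            # Approximate G_jj(E + i*eta) = [(E + i*eta - H)^{-1}]_{jj} for many
            # energies from ONE Lanczos basis; only the small (m x m) tridiagonal
            # system is solved per energy point.
            e_j = np.zeros(H.shape[0]); e_j[j] = 1.0
            a, b = lanczos(H, e_j, m)
            T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
            e1 = np.zeros(m); e1[0] = 1.0
            return np.array([np.linalg.solve((E + 1j * eta) * np.eye(m) - T, e1)[0]
                             for E in energies])

        # Hypothetical test: 1D tight-binding chain, checked against a dense solve.
        n, eta = 400, 0.1
        H = -(np.eye(n, k=1) + np.eye(n, k=-1))
        energies = np.linspace(-3.0, 3.0, 7)
        approx = green_diagonal(H, j=0, energies=energies, eta=eta)
        exact = np.array([np.linalg.solve((E + 1j * eta) * np.eye(n) - H,
                                          np.eye(n)[:, 0])[0] for E in energies])
        print("max abs error:", np.max(np.abs(approx - exact)))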