
    Numerically Stable Recurrence Relations for the Communication Hiding Pipelined Conjugate Gradient Method

    Pipelined Krylov subspace methods (also referred to as communication-hiding methods) have been proposed in the literature as a scalable alternative to classic Krylov subspace algorithms for iteratively computing the solution to a large linear system in parallel. For symmetric and positive definite system matrices the pipelined Conjugate Gradient method outperforms its classic Conjugate Gradient counterpart on large-scale distributed-memory hardware by overlapping global communication with essential computations such as the matrix-vector product, thus hiding global communication. A well-known drawback of the pipelining technique is the (possibly significant) loss of numerical stability. In this work a numerically stable variant of the pipelined Conjugate Gradient algorithm is presented that avoids the propagation of local rounding errors in the finite precision recurrence relations that construct the Krylov subspace basis. The multi-term recurrence relation for the basis vector is replaced by two-term recurrences, improving stability without increasing the overall computational cost of the algorithm. The proposed modification ensures that the pipelined Conjugate Gradient method is able to attain a highly accurate solution independently of the pipeline length. Numerical experiments demonstrate a combination of excellent parallel performance and improved maximal attainable accuracy for the new pipelined Conjugate Gradient algorithm. This work thus resolves one of the major practical restrictions for the usability of pipelined Krylov subspace methods. (Comment: 15 pages, 5 figures, 1 table, 2 algorithms)
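
    As a rough illustration of the recurrence structure that pipelined CG relies on, the sketch below shows unpreconditioned pipelined CG with Ghysels/Vanroose-style recurrences in Python. It is a minimal serial sketch: the dot products and the matrix-vector product are not actually overlapped here, and it does not include the stabilized two-term basis recurrences proposed in the paper; the function name pipelined_cg and its default parameters are illustrative only.

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-8, maxiter=500):
    """Unpreconditioned pipelined CG (sketch; serial, no communication hiding)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                    # residual
    w = A @ r                        # A times the residual
    z = np.zeros_like(x)             # recurrence for A @ s
    s = np.zeros_like(x)             # recurrence for A @ p
    p = np.zeros_like(x)             # search direction
    gamma_old = alpha_old = 1.0
    bnorm = np.linalg.norm(b)
    for i in range(maxiter):
        gamma = r @ r                # global reduction
        if np.sqrt(gamma) <= tol * bnorm:
            break
        delta = w @ r                # second reduction (same message in practice)
        v = A @ w                    # matrix-vector product; this is the work that
                                     # overlaps the reductions on distributed hardware
        if i == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha_old)
        z = v + beta * z             # z = A @ s
        s = w + beta * s             # s = A @ p
        p = r + beta * p             # search direction update
        x = x + alpha * p
        r = r - alpha * s            # recursive residual
        w = w - alpha * z            # recursive A @ r
        gamma_old, alpha_old = gamma, alpha
    return x
```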

    Analyzing the effect of local rounding error propagation on the maximal attainable accuracy of the pipelined Conjugate Gradient method

    Pipelined Krylov subspace methods typically offer improved strong scaling on parallel HPC hardware compared to standard Krylov subspace methods for large and sparse linear systems. In pipelined methods the traditional synchronization bottleneck is mitigated by overlapping time-consuming global communications with useful computations. However, to achieve this communication-hiding strategy, pipelined methods introduce additional recurrence relations for a number of auxiliary variables that are required to update the approximate solution. This paper studies the influence of the local rounding errors that are introduced by these additional recurrences in the pipelined Conjugate Gradient method. Specifically, we analyze the impact of local round-off effects on the attainable accuracy of the pipelined CG algorithm and compare it to the traditional CG method. Furthermore, we estimate the gap between the true residual and the recursively computed residual used in the algorithm. Based on this estimate we suggest an automated residual replacement strategy to reduce the loss of attainable accuracy of the final iterative solution. The resulting pipelined CG method with residual replacement improves the maximal attainable accuracy of pipelined CG while maintaining the efficient parallel performance of the pipelined method. This conclusion is substantiated by numerical results for a variety of benchmark problems. (Comment: 26 pages, 6 figures, 2 tables, 4 algorithms)
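
    To illustrate what a residual replacement step looks like, the sketch below applies it to classic (non-pipelined) CG in Python, assuming an SPD matrix A and a NumPy vector b. The paper derives an automated criterion, based on an estimate of the residual gap, for deciding when to replace; this hypothetical version simply replaces every k iterations.

```python
import numpy as np

def cg_with_replacement(A, b, tol=1e-12, maxiter=1000, k=50):
    """Classic CG with a periodic residual replacement step (sketch only)."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float).copy()       # recursively updated residual (x0 = 0)
    p = r.copy()
    rho = r @ r
    bnorm = np.linalg.norm(b)
    for i in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap              # recurrence: accumulates local rounding errors
        if i % k == 0:
            r = b - A @ x            # replacement: recompute the true residual
        rho_new = r @ r
        if np.sqrt(rho_new) <= tol * bnorm:
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```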

    A Lanczos Method for Approximating Composite Functions

    We seek to approximate a composite function h(x) = g(f(x)) with a global polynomial. The standard approach chooses points x in the domain of f and computes h(x) at each point, which requires an evaluation of f and an evaluation of g. We present a Lanczos-based procedure that implicitly approximates g with a polynomial in f. By constructing a quadrature rule for the density function of f, we can approximate h(x) using many fewer evaluations of g. The savings are particularly dramatic when g is much more expensive than f or the dimension of x is large. We demonstrate this procedure with two numerical examples: (i) an exponential function composed with a rational function and (ii) a Navier-Stokes model of fluid flow with a scalar input parameter that depends on multiple physical quantities.
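
    A rough sketch of the procedure, assuming f can be sampled cheaply over the input domain: run Lanczos on the diagonal matrix of f-samples to obtain Gauss-quadrature-type nodes for the density of f, evaluate the expensive g only at those few nodes, and use the resulting interpolating polynomial in place of g. The functions f and g below loosely mirror example (i), an exponential composed with a rational function, but are hypothetical stand-ins rather than the paper's actual test case.

```python
import numpy as np

def lanczos_nodes(f_samples, k):
    """k-step Lanczos on diag(f_samples) with a uniform start vector;
    the Ritz values serve as quadrature nodes for the density of f."""
    n = f_samples.size
    q = np.ones(n) / np.sqrt(n)
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(k):
        v = f_samples * q - beta * q_prev      # diag(f) @ q
        alpha = q @ v
        v -= alpha * q
        beta = np.linalg.norm(v)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, v / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

f = lambda x: 1.0 / (1.0 + x**2)               # cheap inner function (placeholder)
g = np.exp                                     # "expensive" outer function (placeholder)
xs = np.random.uniform(-3.0, 3.0, 10_000)      # samples of the input x
fs = f(xs)

nodes = lanczos_nodes(fs, k=8)                 # only 8 evaluations of g are needed
coeffs = np.polyfit(nodes, g(nodes), len(nodes) - 1)
h_approx = np.polyval(coeffs, fs)              # approximates g(f(x)) at every sample
print(np.max(np.abs(h_approx - g(fs))))        # check against the exact composition
```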

    Smoothed Analysis for the Conjugate Gradient Algorithm

    The purpose of this paper is to establish bounds on the rate of convergence of the conjugate gradient algorithm when the underlying matrix is a random positive definite perturbation of a deterministic positive definite matrix. We estimate all finite moments of a natural halting time when the random perturbation is drawn from the Laguerre unitary ensemble in a critical scaling regime explored in Deift et al. (2016). These estimates are used to analyze the expected iteration count in the framework of smoothed analysis, introduced by Spielman and Teng (2001). The rigorous results are compared with numerical calculations in several cases of interest.
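
    For context, the classical worst-case convergence bound that such analyses refine is, in the A-norm of the error and with condition number kappa(A) (a standard textbook estimate, not a result of this paper):

```latex
\| x_k - x_* \|_A \;\le\; 2 \left( \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^{k} \| x_0 - x_* \|_A,
\qquad \kappa(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}.
```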

    70 years of Krylov subspace methods: The journey continues

    Using computed examples for the Conjugate Gradient method and GMRES, we recall important building blocks in the understanding of Krylov subspace methods over the last 70 years. Each example consists of a description of the setup and the numerical observations, followed by an explanation of the observed phenomena, keeping technical details to a minimum. Our goal is to show the mathematical beauty and hidden intricacies of the methods, and to point out some persistent misunderstandings as well as important open problems. We hope that this work initiates further investigations of Krylov subspace methods, which are efficient computational tools and exciting mathematical objects that are far from being fully understood. (Comment: 38 pages)

    The Lanczos and conjugate gradient algorithms in finite precision arithmetic
