
    Deflated BiCGStab for linear equations in QCD problems

    The large systems of complex linear equations that are generated in QCD problems often have multiple right-hand sides (for multiple sources) and multiple shifts (for multiple masses). Deflated GMRES methods have previously been developed for solving multiple right-hand sides. Eigenvectors are generated during solution of the first right-hand side and used to speed up convergence for the other right-hand sides. Here we discuss deflating non-restarted methods such as BiCGStab. For effective deflation, both left and right eigenvectors are needed. Fortunately, with the Wilson matrix, left eigenvectors can be derived from the right eigenvectors. We demonstrate for difficult problems with kappa near kappa_c that deflating eigenvalues can significantly improve BiCGStab. We will also look at improving the solution of twisted mass problems with multiple shifts. Projecting over previous solutions is an easy way to reduce the work needed.
    Comment: 7 pages, 4 figures, presented at the XXV International Symposium on Lattice Field Theory, 30 July - 4 August 2007, Regensburg, Germany
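
    A minimal sketch of the "projecting over previous solutions" idea for multiple right-hand sides, using SciPy's generic BiCGStab and a random sparse matrix as a stand-in for the Wilson matrix. The eigenvector deflation inside BiCGStab itself is not shown, and all names here are illustrative, not the authors' code:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab

        def projected_guess(A, b, prev_solutions):
            # Galerkin projection onto the span of previous solutions:
            # solve the small system (X^H A X) c = X^H b and start from x0 = X c.
            if not prev_solutions:
                return None
            X = np.column_stack(prev_solutions)   # n x k, columns are earlier solutions
            AX = A @ X                            # k extra matrix-vector products
            c = np.linalg.solve(X.conj().T @ AX, X.conj().T @ b)
            return X @ c

        # Stand-in sparse system (not an actual Wilson-Dirac operator).
        n = 500
        A = sp.random(n, n, density=0.02, format='csr', random_state=0) + 10.0 * sp.identity(n)
        rng = np.random.default_rng(0)
        solutions = []
        for b in (rng.standard_normal(n) for _ in range(4)):
            x0 = projected_guess(A, b, solutions)   # cheap improved starting guess
            x, info = bicgstab(A, b, x0=x0)
            solutions.append(x)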

    Computational linear algebra over finite fields

    We present algorithms for the efficient computation of linear algebra problems over finite fields.
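
    As an illustration of the kind of problem involved, here is a deliberately naive dense Gauss-Jordan solver over a prime field GF(p); the paper's contribution is efficient algorithms, which this sketch does not attempt to reproduce, and the function name is made up for the example:

        import numpy as np

        def solve_mod_p(A, b, p):
            # Naive Gauss-Jordan elimination over GF(p) for a square system A x = b (mod p).
            A = np.asarray(A, dtype=np.int64) % p
            b = np.asarray(b, dtype=np.int64) % p
            n = len(b)
            M = np.hstack([A, b.reshape(-1, 1)])
            for col in range(n):
                # Pick a row with a nonzero pivot (fails if A is singular mod p).
                pivot = next(r for r in range(col, n) if M[r, col] != 0)
                M[[col, pivot]] = M[[pivot, col]]
                M[col] = (M[col] * pow(int(M[col, col]), -1, p)) % p   # scale by the modular inverse
                for r in range(n):
                    if r != col:
                        M[r] = (M[r] - M[r, col] * M[col]) % p         # eliminate the column elsewhere
            return M[:, -1]

        x = solve_mod_p([[2, 3], [1, 4]], [1, 2], 7)   # x == [1, 2], since A x = b (mod 7)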

    An Efficient Parallel Solver for SDD Linear Systems

    We present the first parallel algorithm for solving systems of linear equations in symmetric, diagonally dominant (SDD) matrices that runs in polylogarithmic time and nearly-linear work. The heart of our algorithm is a construction of a sparse approximate inverse chain for the input matrix: a sequence of sparse matrices whose product approximates its inverse. Whereas other fast algorithms for solving systems of equations in SDD matrices exploit low-stretch spanning trees, our algorithm requires only spectral graph sparsifiers.
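
    To make the notion of an approximate inverse chain concrete, here is a hedged sketch of how such a chain Z_1, ..., Z_k could be applied once constructed, used as a preconditioner for CG. The one-term Jacobi "chain" below is only a stand-in to make the sketch runnable; the paper's actual construction of the chain is the hard part and is not reproduced here:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, cg

        def apply_chain(chain, b):
            # Apply (Z_1 Z_2 ... Z_k) b by successive sparse matrix-vector products.
            x = b
            for Z in reversed(chain):
                x = Z @ x
            return x

        # SDD model problem: the 1-D Laplacian (tridiagonal -1, 2, -1).
        n = 200
        M = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
        chain = [sp.diags(1.0 / M.diagonal())]   # stand-in one-term chain (Jacobi), not the real thing
        precond = LinearOperator((n, n), matvec=lambda v: apply_chain(chain, v))
        x, info = cg(M, np.ones(n), M=precond)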

    Improving Performance of Iterative Methods by Lossy Checkpointing

    Iterative methods are a common approach to solving large, sparse linear systems, which are fundamental operations in many modern scientific simulations. When large-scale iterative methods run with a large number of ranks in parallel, they have to checkpoint the dynamic variables periodically in case of unavoidable fail-stop errors, requiring fast I/O systems and large storage space. To this end, significantly reducing the checkpointing overhead is critical to improving the overall performance of iterative methods. Our contribution is fourfold. (1) We propose a novel lossy checkpointing scheme that can significantly improve the checkpointing performance of iterative methods by leveraging lossy compressors. (2) We formulate a lossy checkpointing performance model and theoretically derive an upper bound on the extra number of iterations caused by the distortion of data in lossy checkpoints, in order to guarantee the performance improvement under the lossy checkpointing scheme. (3) We analyze the impact of lossy checkpointing (i.e., the extra iterations caused by lossy checkpoint files) for multiple types of iterative methods. (4) We evaluate the lossy checkpointing scheme with optimal checkpointing intervals on a high-performance computing environment with 2,048 cores, using the well-known scientific computation package PETSc and a state-of-the-art checkpoint/restart toolkit. Experiments show that our optimized lossy checkpointing scheme can significantly reduce the fault tolerance overhead for iterative methods by 23%~70% compared with traditional checkpointing and 20%~58% compared with lossless-compressed checkpointing, in the presence of system failures.
    Comment: 14 pages, 10 figures, HPDC'18
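
    A toy sketch of the idea (not the paper's PETSc-based implementation): a Jacobi iteration that periodically stores a lossily compressed copy of its iterate. Downcasting to float32 stands in for an error-bounded compressor such as SZ or ZFP, and all names are illustrative:

        import numpy as np

        def lossy_compress(x):
            # Stand-in for an error-bounded lossy compressor: downcasting to
            # float32 mimics a small, bounded distortion of the checkpoint.
            return x.astype(np.float32)

        def lossy_decompress(cx):
            return cx.astype(np.float64)

        def jacobi_with_lossy_checkpoints(A, b, iters=500, ckpt_every=50):
            # Checkpoint only the dynamic variable (the current iterate), in compressed form.
            D = np.diag(A)
            R = A - np.diag(D)
            x = np.zeros_like(b)
            checkpoint = lossy_compress(x)
            for k in range(iters):
                x = (b - R @ x) / D
                if (k + 1) % ckpt_every == 0:
                    checkpoint = lossy_compress(x)   # far smaller than a lossless checkpoint
            return x, checkpoint

        # After a fail-stop error, resume from x = lossy_decompress(checkpoint);
        # the compression error only costs a bounded number of extra iterations.
        rng = np.random.default_rng(1)
        A = 0.1 * rng.standard_normal((100, 100))
        np.fill_diagonal(A, 10.0)                    # diagonally dominant toy system
        b = rng.standard_normal(100)
        x, ckpt = jacobi_with_lossy_checkpoints(A, b)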

    Hardness Results for Structured Linear Systems

    We show that if the nearly-linear time solvers for Laplacian matrices and their generalizations can be extended to solve just slightly larger families of linear systems, then they can be used to quickly solve all systems of linear equations over the reals. This result can be viewed either positively or negatively: either we will develop nearly-linear time algorithms for solving all systems of linear equations over the reals, or progress on the families we can solve in nearly-linear time will soon halt.