Computing and deflating eigenvalues while solving multiple right hand side linear systems in Quantum Chromodynamics
We present a new algorithm that computes eigenvalues and eigenvectors of a
Hermitian positive definite matrix while solving a linear system of equations
with Conjugate Gradient (CG). Traditionally, all the CG iteration vectors could
be saved and recombined through the eigenvectors of the tridiagonal projection
matrix, which is equivalent theoretically to unrestarted Lanczos. Our algorithm
capitalizes on the iteration vectors produced by CG to update only a small
window of vectors that approximate the eigenvectors. While this window is
restarted in a locally optimal way, the CG algorithm for the linear system is
unaffected. Yet, in all our experiments, this small window converges to the
required eigenvectors at a rate identical to unrestarted Lanczos. After the
solution of the linear system, eigenvectors that have not accurately converged
can be improved in an incremental fashion by solving additional linear systems.
In this case, eigenvectors identified in earlier systems can be used to
deflate, and thus accelerate, the convergence of subsequent systems. We have
used this algorithm with excellent results in lattice QCD applications, where
hundreds of right hand sides may be needed. Specifically, about 70 eigenvectors
are obtained to full accuracy after solving 24 right hand sides. Deflating
these from the large number of subsequent right hand sides removes the dreaded
critical slowdown, where the conditioning of the matrix increases as the quark
mass reaches a critical value. Our experiments show almost a constant number of
iterations for our method, regardless of quark mass, and speedups of 8 over
original CG for light quark masses.
Comment: 22 pages, 26 eps figures
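The deflation step described in the abstract can be sketched in a few lines: given (approximate) eigenvectors of the small eigenvalues, a Galerkin correction removes their contribution from the initial guess before CG starts. The dense-numpy sketch below is illustrative only, with a hypothetical `deflated_cg` helper; it is not eigCG itself, which computes the eigenvectors on the fly during the solve.

```python
import numpy as np

def deflated_cg(A, b, V, tol=1e-10, maxit=500):
    # Galerkin (deflated) initial guess: x0 = V (V^T A V)^{-1} V^T b.
    # With exact eigenvectors in V, the residual starts orthogonal to
    # span(V), so CG never sees the small eigenvalues again.
    x = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    it = 0
    while np.sqrt(rs) > tol * np.linalg.norm(b) and it < maxit:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs, rs_old = r @ r, rs
        p = r + (rs / rs_old) * p
        it += 1
    return x, it

# Toy Hermitian positive definite system with two tiny eigenvalues.
rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([[1e-4, 1e-3], np.linspace(1.0, 2.0, n - 2)])
A = Q @ np.diag(evals) @ Q.T
b = rng.standard_normal(n)
V = Q[:, :2]              # eigenvectors of the two small eigenvalues
x, it = deflated_cg(A, b, V)
# With the small eigenvalues deflated, CG behaves as if the condition
# number were about 2 and converges in a few dozen iterations.
```

Without the deflated guess, the same system has condition number 2e4 and CG would need on the order of a thousand iterations to reach the same tolerance.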
Deflated Iterative Methods for Linear Equations with Multiple Right-Hand Sides
A new approach is discussed for solving large nonsymmetric systems of linear
equations with multiple right-hand sides. The first system is solved with a
deflated GMRES method that generates eigenvector information at the same time
that the linear equations are solved. Subsequent systems are solved by
combining restarted GMRES with a projection over the previously determined
eigenvectors. This approach offers an alternative to block methods, and it can
also be combined with a block method. It is useful when there are a limited
number of small eigenvalues that slow the convergence. An example is given
showing significant improvement for a problem from quantum chromodynamics. The
second and subsequent right-hand sides are solved much more quickly than
without deflation. This new approach is relatively simple to implement and is
very efficient compared to other deflation methods.
Comment: 13 pages, 5 figures
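The projection over previously determined eigenvectors can be illustrated with a small numpy sketch. The helper name `eig_projection` is hypothetical, and this is the generic minimum-residual projection idea rather than the paper's exact implementation; in practice it is applied to the residual before (and between) restarted GMRES cycles.

```python
import numpy as np

def eig_projection(A, x, b, V):
    # Minimum-residual correction over span(V): choose y minimizing
    # ||b - A(x + V y)||_2, i.e. a least-squares solve with W = A V.
    r = b - A @ x
    W = A @ V
    y, *_ = np.linalg.lstsq(W, r, rcond=None)
    return x + V @ y

# Toy nonsymmetric matrix with two small eigenvalues; the columns of S
# are its eigenvectors by construction.
rng = np.random.default_rng(1)
n = 100
D = np.diag(np.concatenate([[1e-3, 5e-3], np.linspace(1.0, 2.0, n - 2)]))
S = np.eye(n) + 0.1 * np.triu(rng.standard_normal((n, n)), 1)
A = S @ D @ np.linalg.inv(S)
b = rng.standard_normal(n)
V = S[:, :2]              # eigenvectors of the small eigenvalues

x0 = np.zeros(n)
x1 = eig_projection(A, x0, b, V)
# The corrected residual is orthogonal to A V, so the slow
# eigencomponents no longer hold back the next GMRES cycle.
```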
Restarted Hessenberg method for solving shifted nonsymmetric linear systems
It is known that the restarted full orthogonalization method (FOM)
outperforms the restarted generalized minimum residual (GMRES) method in
several circumstances for solving shifted linear systems when the shifts are
handled simultaneously. Many variants of these methods have been proposed to
enhance their performance. We show that another restarted method, the
restarted Hessenberg method [M. Heyouni, Méthode de Hessenberg Généralisée et
Applications, Ph.D. Thesis, Université des Sciences et Technologies de Lille,
France, 1996], based on the Hessenberg procedure, can be employed effectively
and can accelerate the convergence with respect to the number of restarts.
Theoretical analysis shows that the residuals of the shifted systems produced
by the restarted Hessenberg method remain collinear after each restart.
Extensive numerical experiments, including recent applications to
time-fractional differential equations, show that the proposed algorithm often
requires considerably less CPU time to converge than the earlier established
restarted shifted FOM, the weighted restarted shifted FOM, and other popular
shifted iterative solvers based on short-term vector recurrences.
Comment: 19 pages, 7 tables. Some corrections updating the references
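The property all of these shifted solvers rely on is shift invariance of the Krylov subspace, K_m(A, b) = K_m(A - σI, b), which is what lets one basis serve every shift. This is easy to check numerically; `krylov_basis` below is a hypothetical Arnoldi-style helper for the illustration.

```python
import numpy as np

def krylov_basis(A, b, m):
    # Orthonormal basis of K_m(A, b) via modified Gram-Schmidt Arnoldi.
    n = b.size
    Q = np.zeros((n, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ Q[:, j]
        for i in range(j + 1):
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j + 1] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(2)
n, m, sigma = 50, 8, 0.7
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
Q1 = krylov_basis(A, b, m)
Q2 = krylov_basis(A - sigma * np.eye(n), b, m)
# Same subspace: projecting Q2 onto span(Q1) loses nothing.
gap = np.linalg.norm(Q2 - Q1 @ (Q1.T @ Q2))
```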
Deflated GMRES for Systems with Multiple Shifts and Multiple Right-Hand Sides
We consider solution of multiply shifted systems of nonsymmetric linear
equations, possibly also with multiple right-hand sides. First, for a single
right-hand side, the matrix is shifted by several multiples of the identity.
Such problems arise in a number of applications, including lattice quantum
chromodynamics where the matrices are complex and non-Hermitian. Some Krylov
iterative methods such as GMRES and BiCGStab have been used to solve multiply
shifted systems for about the cost of solving just one system. Restarted GMRES
can be improved by deflating eigenvalues for matrices that have a few small
eigenvalues. We show that a particular deflated method, GMRES-DR, can be
applied to multiply shifted systems. In quantum chromodynamics, it is common to
have multiple right-hand sides with multiple shifts for each right-hand side.
We develop a method that efficiently solves the multiple right-hand sides by
using a deflated version of GMRES and yet keeps costs for all of the multiply
shifted systems close to those for one shift. An example is given showing this
can be extremely effective with a quantum chromodynamics matrix.
Comment: 19 pages, 9 figures
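The mechanics of solving all shifts for roughly the cost of one can be sketched as follows: a single Arnoldi factorization A Q_m = Q_{m+1} H gives (A - sI) Q_m = Q_{m+1} (H - sE), where E is the m-by-m identity padded with a zero row, so each shift costs only a small least-squares solve. The helper names are hypothetical, and this shows one plain GMRES cycle per shift; GMRES-DR adds deflated restarting on top, which is not shown.

```python
import numpy as np

def arnoldi(A, b, m):
    # A Q[:, :m] = Q[:, :m+1] @ H  (Arnoldi relation, MGS variant).
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H, beta

def multishift_gmres_cycle(A, b, shifts, m):
    # One Krylov basis serves every shift; each shifted minimization
    # is an (m+1) x m least-squares problem with H - s E.
    Q, H, beta = arnoldi(A, b, m)
    E = np.vstack([np.eye(m), np.zeros((1, m))])
    e1 = np.zeros(m + 1)
    e1[0] = beta
    sols = []
    for s in shifts:
        y, *_ = np.linalg.lstsq(H - s * E, e1, rcond=None)
        sols.append(Q[:, :m] @ y)
    return sols

rng = np.random.default_rng(5)
n, m = 60, 40
A = rng.standard_normal((n, n)) + 20 * np.eye(n)  # comfortably nonsingular
b = rng.standard_normal(n)
shifts = [0.0, 0.5, 1.0]
xs = multishift_gmres_cycle(A, b, shifts, m)
```

Only the matrix-vector products inside `arnoldi` touch the large matrix; the per-shift work is independent of n up to the final basis combination.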
A flexible and adaptive Simpler GMRES with deflated restarting for shifted linear systems
In this paper, two efficient iterative algorithms based on the simpler GMRES
method are proposed for solving shifted linear systems. To make full use of the
shifted structure, the proposed algorithms utilize the deflated restarting
strategy and flexible preconditioning, significantly reducing the number of
matrix-vector products and the elapsed CPU time. Numerical experiments are
reported to illustrate the performance and effectiveness of the proposed
algorithms.
Comment: 17 pages, 9 tables, 1 figure; updated with new numerical results and corrected typos and syntax errors
Galerkin Projection Methods for Solving Multiple Linear Systems
In this paper, we consider using conjugate gradient (CG) methods for solving multiple linear systems A(i) x(i) = b(i), for 1 ≤ i ≤ s, where the coefficient matrices A(i) and the right-hand sides b(i) are different in general. In particular, we focus on the seed projection method, which generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems onto the generated Krylov subspace to obtain approximate solutions. The whole process is repeated until all the systems are solved. Most papers in the literature [T. F. Chan and W. L. Wan, SIAM J. Sci. Comput., 18 (1997), pp. 1698-1721; B. Parlett, Linear Algebra Appl., 29 (1980), pp. 323-346; Y. Saad, Math. Comp., 48 (1987), pp. 651-662; V. Simoncini and E. Gallopoulos, SIAM J. Sci. Comput., 16 (1995), pp. 917-933; C. Smith, A. Peterson, and R. Mittra, IEEE Trans. Antennas and Propagation, 37 (1989), pp. 1490-1493] considered only the case where the coefficient matrices A(i) are the same but the right-hand sides are different. We extend and analyze the method to solve multiple linear systems with varying coefficient matrices and right-hand sides. A theoretical error bound is given for the approximation obtained from a projection process onto a Krylov subspace generated from solving a previous linear system. Finally, numerical results for multiple linear systems arising from image restorations and recursive least squares computations are reported to illustrate the effectiveness of the method.
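A minimal sketch of the seed projection idea for the classical case of a fixed matrix (the paper's method also handles varying matrices, which is not shown): store the A-conjugate direction vectors from the seed solve, then project later right-hand sides onto their span. The helper names are hypothetical; because the directions are A-conjugate, the projected system P^T A P y = P^T b is diagonal and the projection is cheap.

```python
import numpy as np

def cg_with_directions(A, b, tol=1e-10, maxit=300):
    # Plain CG on the seed system that also records its search
    # directions p_k and the scalars p_k^T A p_k.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    P, pAp = [], []
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        P.append(p.copy())
        pAp.append(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs, rs_old = r @ r, rs
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            break
        p = r + (rs / rs_old) * p
    return x, np.array(P).T, np.array(pAp)

def seed_project(b, P, pAp):
    # Galerkin projection onto span(P): A-conjugacy makes P^T A P
    # diagonal, so y is a cheap componentwise division.
    y = (P.T @ b) / pAp
    return P @ y

rng = np.random.default_rng(3)
n = 120
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # SPD test matrix
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)
x1, P, pAp = cg_with_directions(A, b1)      # seed solve
x2_guess = seed_project(b2, P, pAp)         # initial guess for system 2
```

The projection minimizes the A-norm of the error of the second system over the recorded subspace, so `x2_guess` is a strictly better starting point than zero whenever b2 has any component in that subspace.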
Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections
This work focuses on the iterative solution of sequences of KKT linear
systems arising in interior point methods applied to large convex quadratic
programming problems. This task is the computational core of the interior point
procedure and an efficient preconditioning strategy is crucial for the
efficiency of the overall method. Constraint preconditioners are very effective
in this context; nevertheless, their computation may be very expensive for
large-scale problems, and resorting to approximations of them may be
convenient. Here we propose a procedure for building inexact constraint
preconditioners by updating a "seed" constraint preconditioner computed for a
KKT matrix at a previous interior point iteration. These updates are obtained
through low-rank corrections of the Schur complement of the (1,1) block of the
seed preconditioner. The updated preconditioners are analyzed both
theoretically and computationally. The results obtained show that our updating
procedure, coupled with an adaptive strategy for determining whether to
reinitialize or update the preconditioner, can enhance the performance of
interior point methods on large problems.Comment: 22 page
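One standard way to apply such a low-rank-corrected operator cheaply is the Sherman-Morrison-Woodbury identity: the updated inverse costs only k extra solves with the seed matrix for a rank-k change. The numpy sketch below is illustrative only, assuming a symmetric rank-k update U U^T; the paper's corrections act on the Schur complement of the (1,1) block and need not take exactly this form.

```python
import numpy as np

def woodbury_apply(S_inv_apply, U, r):
    # (S + U U^T)^{-1} r
    #   = S^{-1} r - S^{-1} U (I + U^T S^{-1} U)^{-1} U^T S^{-1} r,
    # reusing whatever factorization backs S_inv_apply.
    Sr = S_inv_apply(r)
    SU = S_inv_apply(U)
    k = U.shape[1]
    small = np.eye(k) + U.T @ SU          # k x k, cheap to factor
    return Sr - SU @ np.linalg.solve(small, U.T @ Sr)

rng = np.random.default_rng(4)
n, k = 80, 3
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)               # "seed" matrix, SPD
U = rng.standard_normal((n, k))           # low-rank change at the new iterate
S_inv = np.linalg.inv(S)                  # stands in for a reused factorization
r = rng.standard_normal(n)
z = woodbury_apply(lambda v: S_inv @ v, U, r)
```

In an interior point context the factorization of the seed matrix is computed once and reused across several iterations, which is exactly when this identity pays off.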
Deflation for inversion with multiple right-hand sides in QCD
Most calculations in lattice Quantum Chromodynamics (QCD) involve the solution of a series of linear systems of equations with exceedingly large matrices and a large number of right hand sides. Iterative methods for these problems can be sped up significantly if we deflate approximations of appropriate invariant spaces from the initial guesses. Recently we have developed eigCG, a modification of the Conjugate Gradient (CG) method, which, while solving a linear system, can reuse a window of the CG vectors to compute eigenvectors almost as accurately as the Lanczos method. The number of approximate eigenvectors can increase as more systems are solved. In this paper we review some of the characteristics of eigCG and show how it helps remove the critical slowdown in QCD calculations. Moreover, we study scaling with lattice volume and an extension of the technique to nonsymmetric problems.