Error estimators and their analysis for CG, Bi-CG and GMRES
We present an analysis of the uncertainty in the convergence of iterative
linear solvers when the relative residual is used as a stopping criterion, and
of the resulting over- or under-computation for a given error tolerance. This
shows that error estimation is indispensable for the efficient and accurate
solution of moderately to highly conditioned linear systems, where the
condition number of the matrix is large. An error estimator for the iterations
of the CG (Conjugate Gradient) algorithm was proposed more than two decades
ago. Recently, an error estimator was described for the k-th iteration of the
GMRES (Generalized Minimal Residual) algorithm, which allows for non-symmetric
linear systems as well, where k is the iteration number. We suggest a minor
modification of this GMRES error estimation for increased stability. In this
work, we also propose an error estimator for the A-norm and the l2-norm of the
error vector in the Bi-CG (Bi-Conjugate Gradient) algorithm. The robust
performance of these estimates as a stopping criterion yields increased
savings and accuracy in computation as the condition number and size of the
problems increase.
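The classical CG error estimate referenced above rests on the Hestenes–Stiefel identity, ||x* − x_k||²_A − ||x* − x_{k+d}||²_A = Σ_{i=k}^{k+d−1} α_i ||r_i||², so summing d consecutive quadrature terms with a small delay d estimates the A-norm of the error. A minimal sketch of this idea inside a plain CG loop follows; the test matrix, delay, and tolerances are illustrative choices, not values from the paper:

```python
import numpy as np

def cg_with_error_estimate(A, b, d=4, maxit=200, tol=1e-12):
    """Plain CG that records the quadrature terms alpha_k * ||r_k||^2;
    delayed partial sums of these terms estimate the squared A-norm error."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    terms = []                      # alpha_k * ||r_k||^2 at each step
    for k in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        terms.append(alpha * rr)    # quadrature term for the error estimate
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    # Delayed estimate: ||x* - x_k||_A^2 ~ sum of terms[k : k + d]
    estimates = [sum(terms[k:k + d]) for k in range(len(terms) - d)]
    return x, estimates

# Illustrative SPD test problem: 1D Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, est = cg_with_error_estimate(A, b)
```

Because the neglected tail ||x* − x_{k+d}||²_A shrinks as CG converges, the estimate tightens with the iteration count, which is why a modest delay d suffices in practice.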
QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach, called the quasi-minimal residual (QMR) method, is presented, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
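QMR is available in standard libraries, so the method can be tried directly; a minimal usage sketch with SciPy's implementation (the nonsymmetric tridiagonal test matrix below is an arbitrary illustrative choice, not from the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# Illustrative nonsymmetric tridiagonal system
n = 100
A = diags([-1.2, 2.0, -0.8], offsets=[-1, 0, 1],
          shape=(n, n), format="csr")
b = np.ones(n)

# info == 0 signals convergence to the default residual tolerance
x, info = qmr(A, b)
```

SciPy's `qmr` also accepts left and right preconditioners (`M1`, `M2`), which matter for harder non-Hermitian problems than this well-conditioned example.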
A biconjugate gradient type algorithm on massively parallel architectures
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG-type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation is presented of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products except one can be computed in parallel at the end of each block, unlike the standard Lanczos process, where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
Multilevel Solvers for Unstructured Surface Meshes
Parameterization of unstructured surface meshes is of fundamental importance in many applications of digital geometry processing. Such parameterization approaches give rise to large and exceedingly ill-conditioned systems which are difficult or impossible to solve without the use of sophisticated multilevel preconditioning strategies. Since the underlying meshes are very fine to begin with, such multilevel preconditioners require mesh coarsening to build an appropriate hierarchy. In this paper we consider several strategies for the construction of hierarchies using ideas from mesh simplification algorithms used in the computer graphics literature. We introduce two novel hierarchy construction schemes and demonstrate their superior performance when used in conjunction with a multigrid preconditioner.
On choice of preconditioner for minimum residual methods for nonsymmetric matrices
Existing convergence bounds for Krylov subspace methods such as GMRES for nonsymmetric linear systems give little mathematical guidance for the choice of preconditioner. Here, we establish a desirable mathematical property of a preconditioner which guarantees that convergence of a minimum residual method will essentially depend only on the eigenvalues of the preconditioned system, as is true in the symmetric case. Our theory covers only a subset of nonsymmetric coefficient matrices, but computations indicate that it might be more generally applicable.
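As a concrete illustration of pairing a minimum residual method with a preconditioner, the sketch below runs SciPy's GMRES with an incomplete-LU preconditioner; the ILU choice and test matrix are illustrative assumptions, not the preconditioner class analyzed in the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Illustrative nonsymmetric tridiagonal system (CSC format for spilu)
n = 200
A = diags([-1.3, 2.0, -0.7], offsets=[-1, 0, 1],
          shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization wrapped as a preconditioning operator
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

# info == 0 signals convergence to the default residual tolerance
x, info = gmres(A, b, M=M)
```

For this narrow-banded matrix the ILU factors are nearly exact, so preconditioned GMRES converges in very few iterations; the paper's point is precisely that, in general, such behavior cannot be read off the eigenvalues alone for nonsymmetric problems.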
70 years of Krylov subspace methods: The journey continues
Using computed examples for the Conjugate Gradient method and GMRES, we
recall important building blocks in the understanding of Krylov subspace
methods over the last 70 years. Each example consists of a description of the
setup and the numerical observations, followed by an explanation of the
observed phenomena, where we keep technical details as small as possible. Our
goal is to show the mathematical beauty and hidden intricacies of the methods,
and to point out some persistent misunderstandings as well as important open
problems. We hope that this work initiates further investigations of Krylov
subspace methods, which are efficient computational tools and exciting
mathematical objects that are far from being fully understood.
Analyzing the effect of local rounding error propagation on the maximal attainable accuracy of the pipelined Conjugate Gradient method
Pipelined Krylov subspace methods typically offer improved strong scaling on
parallel HPC hardware compared to standard Krylov subspace methods for large
and sparse linear systems. In pipelined methods the traditional synchronization
bottleneck is mitigated by overlapping time-consuming global communications
with useful computations. However, to achieve this communication hiding
strategy, pipelined methods introduce additional recurrence relations for a
number of auxiliary variables that are required to update the approximate
solution. This paper aims at studying the influence of local rounding errors
that are introduced by the additional recurrences in the pipelined Conjugate
Gradient method. Specifically, we analyze the impact of local round-off effects
on the attainable accuracy of the pipelined CG algorithm and compare to the
traditional CG method. Furthermore, we estimate the gap between the true
residual and the recursively computed residual used in the algorithm. Based on
this estimate we suggest an automated residual replacement strategy to reduce
the loss of attainable accuracy on the final iterative solution. The resulting
pipelined CG method with residual replacement improves the maximal attainable
accuracy of pipelined CG, while maintaining the efficient parallel performance
of the pipelined method. This conclusion is substantiated by numerical results
for a variety of benchmark problems.
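The replacement idea can be sketched in a few lines: monitor the gap between the recursively updated residual and the explicitly computed true residual b − A x_k, and reset the former when the gap grows. For clarity this sketch computes the true residual every iteration, which a practical pipelined implementation would avoid by estimating the gap instead; the threshold and test problem are illustrative choices, not the paper's automated strategy:

```python
import numpy as np

def cg_residual_replacement(A, b, tol=1e-10, maxit=500, rr_threshold=1e-8):
    """CG with a simple residual replacement safeguard: when the gap
    between the recursive residual and b - A x exceeds
    rr_threshold * ||b||, reset the recursive residual to the true one."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    bnorm = np.linalg.norm(b)
    replacements = 0
    for k in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap                 # recursively updated residual
        true_r = b - A @ x              # explicit (true) residual
        if np.linalg.norm(r - true_r) > rr_threshold * bnorm:
            r = true_r                  # residual replacement step
            replacements += 1
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * bnorm:
            return x, k + 1, replacements
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit, replacements

# Illustrative SPD test problem: 1D Laplacian
n = 60
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters, nrep = cg_residual_replacement(A, b)
```

On a well-conditioned problem like this the gap stays near machine precision and few or no replacements occur; the mechanism matters when the extra recurrences of the pipelined variant let local rounding errors accumulate.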
Recent advances in Lanczos-based iterative methods for nonsymmetric linear systems
In recent years, there has been a true revival of the nonsymmetric Lanczos method. On the one hand, the possible breakdowns in the classical algorithm are now better understood, and so-called look-ahead variants of the Lanczos process have been developed, which remedy this problem. On the other hand, various new Lanczos-based iterative schemes for solving nonsymmetric linear systems have been proposed. This paper gives a survey of some of these recent developments.