The Faber–Manteuffel theorem for linear operators
A short recurrence for orthogonalizing Krylov subspace bases for a matrix A exists if and only if the adjoint of A is a low-degree polynomial in A (i.e., A is normal of low degree). In the area of iterative methods, this result is known as the Faber–Manteuffel theorem [V. Faber and T. Manteuffel, SIAM J. Numer. Anal., 21 (1984), pp. 352–362]. Motivated by the description given by J. Liesen and Z. Strakoš, we formulate this theorem here in terms of linear operators on finite-dimensional Hilbert spaces and give two new proofs of the necessity part. We have chosen the linear operator rather than the matrix formulation because we found that a matrix-free proof is less technical. Of course, the linear operator result contains the Faber–Manteuffel theorem for matrices as a special case.
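A minimal numerical illustration of the phenomenon the theorem characterizes (the matrix sizes and data below are illustrative, not taken from the paper): for a real symmetric matrix the adjoint equals A itself, a degree-one polynomial in A, and the Arnoldi process then yields a tridiagonal projected matrix, i.e. a three-term (short) recurrence; for a generic nonsymmetric matrix the projected matrix is fully upper Hessenberg.

```python
import numpy as np

def arnoldi(A, v, m):
    """Arnoldi with modified Gram-Schmidt: orthonormal Krylov basis V, Hessenberg H."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, m = 50, 10
v = rng.standard_normal(n)

S = rng.standard_normal((n, n))
A_sym = S + S.T          # A^T = A: adjoint is a degree-1 polynomial in A
A_gen = S                # generic nonsymmetric matrix

_, H_sym = arnoldi(A_sym, v, m)
_, H_gen = arnoldi(A_gen, v, m)

# Symmetric case: entries above the first superdiagonal vanish (up to rounding),
# so only a three-term recurrence is needed; generic case: they are O(1).
print(np.max(np.abs(np.triu(H_sym, 2))))  # close to machine precision
print(np.max(np.abs(np.triu(H_gen, 2))))
```

This is only the sufficiency direction in its simplest form; the substance of the theorem is the converse.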
On the theory of equivalent operators and application to the numerical solution of uniformly elliptic partial differential equations
This work is motivated by the preconditioned iterative solution of linear systems that arise from the discretization of uniformly elliptic partial differential equations. Iterative methods with convergence bounds independent of the discretization are possible only if the preconditioning strategy is based upon equivalent operators. The operators A, B: W → V are said to be V-norm equivalent if the ratio ‖Au‖_V/‖Bu‖_V is bounded above and below by positive constants for u ∈ D, where D is "sufficiently dense." If A is V-norm equivalent to B, then for certain discretization strategies one can use B to construct a preconditioned iterative scheme for the approximate solution of the problem Au = F. The iteration will require an amount of work that is at most a constant times the work required to approximately solve the problem Bû = f̂ to reduce the V-norm of the error by a fixed factor. This paper develops the theory of equivalent operators on Hilbert spaces. The theory is then applied to uniformly elliptic operators; both the strong and weak forms are considered. Finally, finite element and finite difference discretizations are examined.
On choice of preconditioner for minimum residual methods for nonsymmetric matrices
Existing convergence bounds for Krylov subspace methods such as GMRES for nonsymmetric linear systems give little mathematical guidance for the choice of preconditioner. Here, we establish a desirable mathematical property of a preconditioner which guarantees that convergence of a minimum residual method will essentially depend only on the eigenvalues of the preconditioned system, as is true in the symmetric case. Our theory covers only a subset of nonsymmetric coefficient matrices but computations indicate that it might be more generally applicable
On optimal short recurrences for generating orthogonal Krylov subspace bases
We analyze necessary and sufficient conditions on a nonsingular matrix A such that, for any initial vector , an orthogonal basis of the Krylov subspaces is generated by a short recurrence. Orthogonality here is meant with respect to some unspecified positive definite inner product. This question is closely related to the question of existence of optimal Krylov subspace solvers for linear algebraic systems, where optimal means the smallest possible error in the norm induced by the given inner product. The conditions on A we deal with were first derived and characterized more than 20 years ago by Faber and Manteuffel (SIAM J. Numer. Anal., 21 (1984), pp. 352–362). Their main theorem is often quoted and appears to be widely known. Its details and underlying concepts, however, are quite intricate, with some subtleties not covered in the literature we are aware of. Our paper aims to present and clarify the existing important results in the context of the Faber–Manteuffel theorem. Furthermore, we review attempts to find an easier proof of the theorem and explain what remains to be done in order to complete that task
A comparison of Krylov methods for Shifted Skew-Symmetric Systems
It is well known that for general linear systems, only optimal Krylov methods
with long recurrences exist. For special classes of linear systems it is
possible to find optimal Krylov methods with short recurrences. In this paper
we consider the important class of linear systems with a shifted skew-symmetric
coefficient matrix. We present the MRS3 solver, a minimal residual method that
solves these problems using short vector recurrences. We give an overview of
existing Krylov solvers that can be used to solve these problems, and compare
them with the MRS3 method, both theoretically and by numerical experiments.
From this comparison we argue that the MRS3 solver is the fastest and most
robust of these Krylov method for systems with a shifted skew-symmetric
coefficient matrix.Comment: 23 pages, 3 figure
QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG like approach is presented called the quasi-minimal residual (QMR) method, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported
- …