A framework for deflated and augmented Krylov subspace methods
We consider deflation and augmentation techniques for accelerating the
convergence of Krylov subspace methods for the solution of nonsingular linear
algebraic systems. Despite some formal similarity, the two techniques are
conceptually different from preconditioning. Deflation (in the sense the term
is used here) "removes" certain parts from the operator, making it singular,
while augmentation adds a subspace to the Krylov subspace (often the one that
is generated by the singular operator); in contrast, preconditioning changes
the spectrum of the operator without making it singular. Deflation and
augmentation have been used in a variety of methods and settings. Typically,
deflation is combined with augmentation to compensate for the singularity of
the operator, but both techniques can be applied separately.
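The distinction drawn above can be sketched numerically. The following illustration is ours, not taken from the paper: we project a single isolated eigendirection out of a symmetric matrix and observe that the deflated operator, while singular, is far better conditioned on the complement of the removed direction.

```python
import numpy as np

# Illustrative sketch (not from the paper): deflation projects an isolated
# eigendirection out of the operator. The deflated operator is singular, but
# on the complement of the removed direction it is much better conditioned;
# preconditioning, by contrast, reshapes the spectrum without introducing
# a null space.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigvals = np.linspace(1.0, 10.0, 50)
eigvals[0] = 1e-4                        # one isolated small eigenvalue
A = Q @ np.diag(eigvals) @ Q.T           # symmetric positive definite test matrix

u = Q[:, :1]                             # eigenvector of the small eigenvalue
P = np.eye(50) - u @ u.T                 # orthogonal projector
A_defl = P @ A @ P                       # singular by construction: A_defl @ u ~ 0

nonzero = np.sort(np.linalg.eigvalsh(A_defl))[1:]   # drop the zero eigenvalue
print(np.linalg.cond(A))                 # ~1e5
print(nonzero[-1] / nonzero[0])          # ~8.4 on the complement
```

Here deflation trades nonsingularity for a tight spectrum on the complement, which is exactly why it is typically paired with an augmentation or correction step that recovers the removed component of the solution.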
We introduce a framework of Krylov subspace methods that satisfy a Galerkin
condition. It includes the families of orthogonal residual (OR) and minimal
residual (MR) methods. We show that in this framework augmentation can be
achieved either explicitly or, equivalently, implicitly by projecting the
residuals appropriately and correcting the approximate solutions in a final
step. We study conditions for a breakdown of the deflated methods, and we show
several possibilities to avoid such breakdowns for the deflated MINRES method.
Numerical experiments illustrate properties of different variants of deflated
MINRES analyzed in this paper.
Comment: 24 pages, 3 figures
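A projected solve followed by a final correction of the approximate solution, as described in the abstract, can be sketched with one standard deflation construction. This sketch is ours; the matrices `Z` and `E` below are illustrative names for the deflation space and the coarse Galerkin matrix, not the paper's notation, and MINRES is applied to the (consistent, singular) projected system.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

# Hedged sketch of a standard deflated-MINRES construction:
# solve the projected system P A y = P b with P = I - A Z E^{-1} Z^T,
# then correct the approximate solution in a final step.
rng = np.random.default_rng(1)
n = 40
B = rng.standard_normal((n, n))
A = B + B.T + n * np.eye(n)              # symmetric positive definite test matrix
b = rng.standard_normal(n)

Z = np.linalg.eigh(A)[1][:, :3]          # deflate three extreme eigenvectors
E = Z.T @ A @ Z                          # coarse (Galerkin) matrix
Einv = np.linalg.inv(E)

def PA(v):                               # v -> P A v; P A is symmetric and singular
    w = A @ v
    return w - A @ (Z @ (Einv @ (Z.T @ w)))

Pb = b - A @ (Z @ (Einv @ (Z.T @ b)))    # projected right-hand side
y, info = minres(LinearOperator((n, n), matvec=PA), Pb)

# Final correction step: x = Z E^{-1} Z^T b + (I - Z E^{-1} Z^T A) y
x = Z @ (Einv @ (Z.T @ b)) + y - Z @ (Einv @ (Z.T @ (A @ y)))
print(np.linalg.norm(A @ x - b))         # residual at the solver tolerance
```

The point of the correction step is visible in the algebra: since A(I - Z E^{-1} Z^T A) = PA, any y solving the projected system yields an x solving the original one.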
Deflation for the off-diagonal block in symmetric saddle point systems
Deflation techniques are typically used to shift isolated clusters of small
eigenvalues in order to obtain a tighter distribution and a smaller condition
number. Such changes induce a positive effect in the convergence behavior of
Krylov subspace methods, which are among the most popular iterative solvers for
large sparse linear systems. We develop a deflation strategy for symmetric
saddle point matrices by taking advantage of their underlying block structure.
The vectors used for deflation come from an elliptic singular value
decomposition relying on the generalized Golub-Kahan bidiagonalization process.
The block targeted by deflation is the off-diagonal one since it features a
problematic singular value distribution for certain applications. One example
is the Stokes flow in elongated channels, where the off-diagonal block has
several small, isolated singular values, depending on the length of the
channel. Applying deflation to specific parts of the saddle point system is
important when using solvers such as CRAIG, which operates on individual blocks
rather than the whole system. The theory is developed by extending the existing
framework for deflating square matrices before applying a Krylov subspace
method like MINRES. Numerical experiments confirm the merits of our strategy
and lead to interesting questions about using approximate vectors for
deflation.
Comment: 26 pages, 12 figures
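The effect of deflating an off-diagonal block with a problematic singular value distribution can be sketched as follows. This is an illustrative construction of ours using a plain SVD, not the paper's elliptic SVD via generalized Golub-Kahan bidiagonalization: projecting out the few small, isolated singular triplets tightens the singular value distribution that a block solver would see.

```python
import numpy as np

# Illustrative sketch (ours, not the paper's algorithm): the off-diagonal
# block B of a saddle point matrix carries a few small, isolated singular
# values; removing their singular directions by projection tightens the
# singular value distribution.
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((60, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
s = np.linspace(1.0, 5.0, 20)
s[:2] = [1e-5, 1e-4]                     # two isolated small singular values
B = U @ np.diag(s) @ V.T                 # synthetic off-diagonal block

k = 2                                    # deflate the k smallest triplets
Us, ss, Vts = np.linalg.svd(B, full_matrices=False)   # ss is descending
Pl = np.eye(60) - Us[:, -k:] @ Us[:, -k:].T           # left projector
B_defl = Pl @ B                          # small singular directions removed

kept = np.linalg.svd(B_defl, compute_uv=False)[:-k]   # nonzero part only
print(ss[0] / ss[-1])                    # original ratio ~5e5
print(kept[0] / kept[-1])                # after deflation ~3.5
```

In the synthetic example the ratio of extreme singular values drops by five orders of magnitude, which mirrors the intended effect on solvers, such as CRAIG, that work with the off-diagonal block directly.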
Kronecker Product Approximation Preconditioners for Convection-diffusion Model Problems
We consider the iterative solution of the linear systems arising from four convection-diffusion model problems: the scalar convection-diffusion problem, the Stokes problem, the Oseen problem, and the Navier-Stokes problem. We give the explicit Kronecker product structure of the coefficient matrices, in particular the Kronecker product structure of the convection term. For the latter three model problems, the coefficient matrices have a block structure, where each block is a Kronecker product or a sum of several Kronecker products. We use the Kronecker products and block structures to design a diagonal block preconditioner, a tridiagonal block preconditioner, and a constraint preconditioner. The constraint preconditioner can be regarded as a modification of the tridiagonal and diagonal block preconditioners based on the cell Reynolds number, which explains why it usually performs better. Numerical examples show the efficiency of this kind of Kronecker product approximation preconditioner.
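The Kronecker product structure described in the abstract can be sketched for the scalar case. This is our own illustration with standard centered finite differences, not the paper's matrices: the 2D convection-diffusion coefficient matrix on an n x n grid is a sum of Kronecker products of 1D difference matrices, so matrix-vector products reduce to small dense products via the identity (B ⊗ A) vec(X) = vec(A X Bᵀ) with column-stacked vec.

```python
import numpy as np

# Hedged sketch (ours): 2D convection-diffusion matrix as a sum of
# Kronecker products of 1D difference matrices, verified against the
# reshape identity (B ⊗ A) vec(X) = vec(A X B^T).
n = 16
h = 1.0 / (n + 1)
I = np.eye(n)
e = np.ones(n - 1)
T = (2 * np.eye(n) - np.diag(e, 1) - np.diag(e, -1)) / h**2       # 1D diffusion
C = (np.diag(e, 1) - np.diag(e, -1)) / (2 * h)                    # 1D convection
w = 1.0                                  # wind strength (illustrative value)

# Full coefficient matrix: diffusion in both directions plus convection.
A = np.kron(I, T) + np.kron(T, I) + w * (np.kron(I, C) + np.kron(C, I))

rng = np.random.default_rng(3)
X = rng.standard_normal((n, n))
x = X.flatten(order='F')                 # column-stacked vec(X)
y_full = A @ x                           # matvec with the assembled matrix
y_fast = (T @ X + X @ T.T + w * (C @ X + X @ C.T)).flatten(order='F')
print(np.allclose(y_full, y_fast))       # True
```

This structure is what Kronecker-product-based preconditioners exploit: applying or approximately inverting such a sum touches only the small 1D factors rather than the assembled n² x n² matrix.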
Spectral estimates and preconditioning for saddle point systems arising from optimization problems
In this thesis, we consider the problem of solving large, sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods; new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.