Least squares residuals and minimal residual methods
We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods.
Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571--581], the best orthogonalization technique used for computing the basis does not compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than the classical GMRES implementation due to Saad and Schultz [SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869].
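To make the least squares formulation of MR methods concrete, here is a minimal textbook-style sketch in Python/NumPy (not one of the implementations analyzed in the abstract above): Arnoldi with modified Gram-Schmidt builds an orthonormal Krylov basis, and the MR iterate is obtained by solving a small Hessenberg least squares problem of growing dimension. The name `gmres_ls` and the breakdown tolerance are our choices.

```python
import numpy as np

def gmres_ls(A, b, m):
    """Minimal residual (MR) iteration in its least squares form:
    build an orthonormal Krylov basis with Arnoldi (modified Gram-Schmidt),
    then minimize ||b - A x|| over the Krylov subspace by solving a small
    (m+1) x m Hessenberg least squares problem."""
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14 * beta:         # lucky breakdown: subspace is invariant
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    # Small LS problem: min || beta*e1 - H y || over y
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    x = Q[:, :m] @ y
    return x, np.linalg.norm(b - A @ x)
```

With the full number of steps (m equal to the system size), the method reduces to a direct solve in exact arithmetic; in practice one monitors the LS residual and stops early.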
Scalability Analysis of Parallel GMRES Implementations
Applications involving large sparse nonsymmetric linear systems encourage parallel implementations of robust iterative solution methods, such as GMRES(k). Two parallel versions of GMRES(k) based on different data distributions and using Householder reflections in the orthogonalization phase, and variations of these which adapt the restart value k, are analyzed with respect to scalability (their ability to maintain fixed efficiency with an increase in problem size and number of processors). A theoretical algorithm-machine model for scalability is derived and validated by experiments on three parallel computers, each with different machine characteristics.
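The Householder orthogonalization mentioned in this abstract can be sketched sequentially (the parallel data distributions and the scalability model are the paper's contribution and are not reproduced here). `householder_qr` is our name for this toy routine; it orthonormalizes a set of basis vectors by accumulated Householder reflections, the backward-stable alternative to Gram-Schmidt.

```python
import numpy as np

def householder_qr(V):
    """Orthonormalize the columns of V by Householder reflections.
    Householder QR is backward stable, which is why it is attractive
    for the orthogonalization phase of GMRES-type methods."""
    m, n = V.shape
    R = V.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        x = R[j:, j]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv < 1e-300:                      # column already zeroed out
            continue
        v /= nv
        # Apply the reflector I - 2 v v^T to the trailing block of R,
        # and accumulate it into Q from the right.
        R[j:, j:] -= 2.0 * np.outer(v, v @ R[j:, j:])
        Q[:, j:] -= 2.0 * np.outer(Q[:, j:] @ v, v)
    return Q[:, :n], R[:n, :]
```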
An assessment of two classes of variational multiscale methods for the simulation of incompressible turbulent flows
A numerical assessment of two classes of variational multiscale (VMS) methods for the simulation of incompressible flows is presented. Two types of residual-based VMS methods and two types of projection-based VMS methods are included in this assessment. The numerical simulations are performed for turbulent channel flow problems with various friction Reynolds numbers. It turns out that the residual-based VMS methods, in particular when used with a pair of inf-sup stable finite elements, usually give the most accurate results for second order statistics. For this pair of finite element spaces, a flexible GMRES method with a Least Squares Commutator (LSC) preconditioner proved to be an efficient solver.
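The solver pattern mentioned here, GMRES with a preconditioner applied at every iteration, can be illustrated in SciPy. The sketch below is a hedged stand-in: it uses a five-point Laplacian stencil and an incomplete LU preconditioner, not the Navier-Stokes saddle-point system or the LSC preconditioner from the paper, and the `drop_tol`/`fill_factor` values are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system (2D five-point Laplacian stencil) -- an
# illustration of plugging a preconditioner into GMRES, not the
# flow problem from the abstract.
n = 40
A = sp.diags([-1.0, -1.0, 4.0, -1.0, -1.0], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

# Incomplete LU factorization wrapped as a LinearOperator so GMRES
# can apply an approximate A^{-1} at each iteration.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
```

Flexible GMRES additionally allows the preconditioner to change between iterations (e.g. an inner iterative solve), which is why it pairs naturally with block preconditioners like LSC.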
Can coercive formulations lead to fast and accurate solution of the Helmholtz equation?
A new, coercive formulation of the Helmholtz equation was introduced in [Moiola, Spence, SIAM Rev. 2014]. In this paper we investigate $h$-version Galerkin discretisations of this formulation, and the iterative solution of the resulting linear systems. We find that the coercive formulation behaves similarly to the standard formulation in terms of the pollution effect (i.e. to maintain accuracy as $k \to \infty$, $h$ must decrease with $k$ at the same rate as for the standard formulation). We prove $k$-explicit bounds on the number of GMRES iterations required to solve the linear system of the new formulation when it is preconditioned with a prescribed symmetric positive-definite matrix. Even though the number of iterations grows with $k$, these are the first such rigorous bounds on the number of GMRES iterations for a preconditioned formulation of the Helmholtz equation, where the preconditioner is a symmetric positive-definite matrix. Comment: 27 pages, 7 figures
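The phenomenon in this abstract, GMRES iteration counts for a preconditioned Helmholtz system growing with the wavenumber $k$, can be observed on a toy 1D model problem. The sketch below is our own analogue, not the paper's formulation: a finite difference discretisation of $-u'' - k^2 u = f$ preconditioned by the (symmetric positive-definite) discrete Laplacian.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_gmres_iters(k, ppw=10):
    """Count GMRES iterations for a 1D finite difference Helmholtz
    model problem -u'' - k^2 u = f on (0,1) with Dirichlet BCs,
    preconditioned by the SPD discrete Laplacian. A toy analogue of
    the setting in the abstract; the paper's coercive formulation,
    discretisation, and preconditioner differ."""
    n = max(60, int(ppw * k / (2 * np.pi)))   # keep ~ppw points per wavelength
    h = 1.0 / (n + 1)
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2
    A = (L - k**2 * sp.identity(n, format="csc")).tocsc()
    M = spla.LinearOperator(A.shape, matvec=spla.splu(L).solve)  # exact Laplacian solve
    b = np.ones(n)
    iters = []
    x, info = spla.gmres(A, b, M=M, restart=n, maxiter=n,
                         callback=lambda pr_norm: iters.append(pr_norm),
                         callback_type="pr_norm")
    return len(iters), info
```

Running this for increasing $k$ shows the iteration count climbing, consistent with (but much cruder than) the $k$-explicit bounds the paper proves.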
A fast semi-direct least squares algorithm for hierarchically block separable matrices
We present a fast algorithm for linear least squares problems governed by
hierarchically block separable (HBS) matrices. Such matrices are generally
dense but data-sparse and can describe many important operators including those
derived from asymptotically smooth radial kernels that are not too oscillatory.
The algorithm is based on a recursive skeletonization procedure that exposes
this sparsity and solves the dense least squares problem as a larger,
equality-constrained, sparse one. It relies on a sparse QR factorization
coupled with iterative weighted least squares methods. In essence, our scheme
consists of a direct component, comprised of matrix compression and
factorization, followed by an iterative component to enforce certain equality
constraints. At most two iterations are typically required for problems that
are not too ill-conditioned. For an $M \times N$ HBS matrix with $M \geq N$
having bounded off-diagonal block rank, the algorithm has optimal $O(M + N)$
complexity. If the rank increases with the spatial dimension as is
common for operators that are singular at the origin, then this becomes
$O(M + N)$ in 1D, $O(M + N^{3/2})$ in 2D, and
$O(M + N^{2})$ in 3D. We illustrate the performance of the method on
both over- and underdetermined systems in a variety of settings, with an
emphasis on radial basis function approximation and efficient updating and
downdating. Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
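The "iterative weighted least squares" component of the scheme can be sketched on a dense toy problem: an equality-constrained LS problem is solved by heavily weighting the constraint rows and then refining. This is our own hedged illustration of that one ingredient (the name `weighted_equality_ls`, the weight `tau`, and the sweep count are ours); the recursive skeletonization, compression, and sparse QR machinery that make the algorithm fast on HBS matrices are not shown.

```python
import numpy as np

def weighted_equality_ls(C, d, B, f, tau=1e6, sweeps=2):
    """Approximately solve  min ||C x - d||  subject to  B x = f  via the
    weighted least squares trick: solve min ||[tau*B; C] x - [tau*f; d]||
    and apply a couple of refinement sweeps to tighten the constraint.
    A dense toy; not the HBS algorithm itself."""
    x = np.zeros(C.shape[1])
    M = np.vstack([tau * B, C])                 # constraint rows heavily weighted
    for _ in range(sweeps):                     # iterative refinement sweeps
        rhs = np.concatenate([tau * (f - B @ x), d - C @ x])
        dx, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        x += dx
    return x
```

For well-conditioned problems a sweep or two suffices, mirroring the abstract's remark that at most two iterations are typically required.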