A fast semi-direct least squares algorithm for hierarchically block separable matrices
We present a fast algorithm for linear least squares problems governed by
hierarchically block separable (HBS) matrices. Such matrices are generally
dense but data-sparse and can describe many important operators including those
derived from asymptotically smooth radial kernels that are not too oscillatory.
The algorithm is based on a recursive skeletonization procedure that exposes
this sparsity and solves the dense least squares problem as a larger,
equality-constrained, sparse one. It relies on a sparse QR factorization
coupled with iterative weighted least squares methods. In essence, our scheme
consists of a direct component, comprised of matrix compression and
factorization, followed by an iterative component to enforce certain equality
constraints. At most two iterations are typically required for problems that
are not too ill-conditioned. For an m-by-n HBS matrix having bounded
off-diagonal block rank, the algorithm has optimal O(m + n) complexity. If the
rank increases with the spatial dimension as is common for operators that are
singular at the origin, then this becomes O(m + n) in 1D, O(m + n^{3/2}) in 2D,
and O(m + n^2) in 3D. We illustrate the performance of the method on
both over- and underdetermined systems in a variety of settings, with an
emphasis on radial basis function approximation and efficient updating and
downdating.

Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
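The equality-constrained reformulation can be illustrated with a toy version of the weighting-plus-refinement idea. The sketch below (plain NumPy on small dense matrices, not the authors' sparse skeletonized system; all names are illustrative) solves min ||Ax - b|| subject to Cx = d by stacking a heavily weighted copy of the constraints on top of A and then polishing with a couple of refinement sweeps, mirroring the "at most two iterations" behaviour described above:

```python
import numpy as np

def weighted_equality_ls(A, b, C, d, tau=1e8, sweeps=2):
    """Toy method-of-weighting solver for min ||A x - b|| s.t. C x = d:
    stack a heavily weighted copy of the constraints on top of A,
    solve one ordinary least squares problem, then polish with a
    couple of refinement sweeps on the stacked residuals."""
    M = np.vstack([tau * C, A])
    x = np.linalg.lstsq(M, np.concatenate([tau * d, b]), rcond=None)[0]
    for _ in range(sweeps):
        # re-solve for a correction driven by the current residuals
        rhs = np.concatenate([tau * (d - C @ x), b - A @ x])
        x = x + np.linalg.lstsq(M, rhs, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
C = rng.standard_normal((2, 5))
d = rng.standard_normal(2)
x = weighted_equality_ls(A, b, C, d)
```

The refinement sweeps are what drive the constraint residual down to machine precision even though the weight tau is finite.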
The Anderson model of localization: a challenge for modern eigenvalue methods
We present a comparative study of the application of modern eigenvalue
algorithms to an eigenvalue problem arising in quantum physics, namely, the
computation of a few interior eigenvalues and their associated eigenvectors for
the large, sparse, real, symmetric, and indefinite matrices of the Anderson
model of localization. We compare the Lanczos algorithm in the 1987
implementation of Cullum and Willoughby with the implicitly restarted Arnoldi
method coupled with polynomial and several shift-and-invert convergence
accelerators as well as with a sparse hybrid tridiagonalization method. We
demonstrate that for our problem the Lanczos implementation is faster and more
memory efficient than the other approaches. This seemingly innocuous problem
presents a major challenge for all modern eigenvalue algorithms.

Comment: 16 LaTeX pages with 3 figures included
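The shift-and-invert acceleration mentioned above can be sketched in a few lines for a 1D Anderson Hamiltonian. This is not the paper's benchmark setup (which concerns the much harder 3D case) but shows the mechanism: factor H - sigma*I once and run a Krylov method on its inverse so that interior eigenvalues near sigma become extremal. The disorder strength W and sizes here are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# 1D Anderson model: nearest-neighbour hopping plus a random on-site
# potential drawn uniformly from [-W/2, W/2]
rng = np.random.default_rng(1)
n, W = 2000, 2.0
onsite = rng.uniform(-W / 2, W / 2, n)
H = sp.diags([np.ones(n - 1), onsite, np.ones(n - 1)], [-1, 0, 1], format="csc")

# shift-and-invert targets the interior of the spectrum: eigsh with
# sigma=0 returns the eigenvalues closest to the band centre
vals, vecs = eigsh(H, k=5, sigma=0.0, which="LM")
```

The cost of the sparse factorization inside the shift-and-invert step is exactly what makes the 3D Anderson matrices of the study so demanding, since fill-in grows rapidly with dimension.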
Globally convergent techniques in nonlinear Newton-Krylov
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods in which the approximate Newton direction is taken from a subspace of small dimension. The main focus is on analyzing these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
Global Range Restricted GMRES for Linear Systems with Multiple Right Hand Sides
This work concerns the solution of non-symmetric, sparse linear systems with multiple right-hand sides by iterative methods. A global version of the range restricted generalized minimal residual method (RRGMRES) is proposed for solving this sort of problem. Numerical results confirm that the new algorithm is applicable.
GMRES implementations and residual smoothing techniques for solving ill-posed linear systems
There are a variety of useful Krylov subspace methods for solving nonsymmetric linear systems of equations. GMRES is one of the best Krylov solvers, with several variants for large sparse linear systems, each with its own advantages. Since the solution of ill-posed problems is important, some GMRES variants are discussed in this paper and applied to these kinds of problems. Residual smoothing techniques are efficient ways to accelerate the convergence of some iterative methods, such as CG variants. At the end of the paper, some residual smoothing techniques are applied to different GMRES methods to test their influence on the GMRES implementations.
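A standard residual smoothing scheme of the kind discussed above is minimal residual smoothing (Schönauer/Weiss): post-process the iterates of any inner solver so that the residual norms of the smoothed sequence never increase. The sketch below is a generic illustration, not necessarily the exact variants tested in the paper; the Richardson demo solver and all names are illustrative.

```python
import numpy as np

def minimal_residual_smoothing(xs, A, b):
    """Minimal residual smoothing: given the iterates xs of any method,
    build a sequence s_k on the segment between s_{k-1} and x_k whose
    residual norm is minimized, hence monotonically non-increasing."""
    s = xs[0]
    rs = b - A @ s
    smoothed = [s]
    for x in xs[1:]:
        r = b - A @ x
        dr = r - rs
        denom = dr @ dr
        eta = 0.0 if denom == 0.0 else -(rs @ dr) / denom  # minimizes ||rs + eta*dr||
        s = s + eta * (x - s)
        rs = rs + eta * dr                                  # residual updated consistently
        smoothed.append(s)
    return smoothed

# demo: Richardson iteration on a 1D Laplacian, then smooth its iterates
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
xs = [np.zeros(n)]
for _ in range(50):
    xs.append(xs[-1] + 0.25 * (b - A @ xs[-1]))
smoothed = minimal_residual_smoothing(xs, A, b)
```

Because the smoothing touches only the iterates and residuals, it can be wrapped around any GMRES variant without modifying the solver itself.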