A nested Krylov subspace method to compute the sign function of large complex matrices
We present an acceleration of the well-established Krylov-Ritz methods to
compute the sign function of large complex matrices, as needed in lattice QCD
simulations involving the overlap Dirac operator at both zero and nonzero
baryon density. Krylov-Ritz methods approximate the sign function using a
projection on a Krylov subspace. To achieve a high accuracy this subspace must
be taken quite large, which makes the method too costly. The new idea is to
make a further projection on an even smaller, nested Krylov subspace. If
additionally an intermediate preconditioning step is applied, this projection
can be performed without affecting the accuracy of the approximation, and a
substantial gain in efficiency is achieved for both Hermitian and non-Hermitian
matrices. The numerical efficiency of the method is demonstrated on lattice
configurations of sizes ranging from 4^4 to 10^4, and the new results are
compared with those obtained with rational approximation methods.
Comment: 17 pages, 12 figures, minor corrections, extended analysis of the
preconditioning step
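As a minimal illustration of the underlying Krylov-Ritz idea (the plain single-subspace projection, not the nested, preconditioned variant the paper introduces), the following sketch approximates sign(A)b by projecting sign onto one Krylov subspace; all names and sizes are illustrative:

```python
import numpy as np
from scipy.linalg import signm

def sign_times_vector(A, b, k):
    """Krylov-Ritz approximation of sign(A) @ b.

    Build an order-k Arnoldi decomposition with basis V and Hessenberg
    matrix H = V* A V, then project: sign(A) b ~ ||b|| V sign(H) e_1.
    """
    n = len(b)
    V = np.zeros((n, k), dtype=complex)
    H = np.zeros((k, k), dtype=complex)
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        if j + 1 < k:
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta * (V @ (signm(H) @ e1))   # sign of the small matrix only
```

The cost driver the abstract refers to is visible here: high accuracy forces k up, and the k x k sign evaluation plus storage of V grow with it, which is what the nested second projection is designed to curb.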
Numerical Methods for the QCD Overlap Operator: I. Sign-Function and Error Bounds
The numerical and computational aspects of the overlap formalism in lattice
quantum chromodynamics are extremely demanding due to a matrix-vector product
that involves the sign function of the Hermitian Wilson matrix. In this paper
we investigate several methods to compute the product of the matrix
sign function with a vector, in particular Lanczos-based methods and partial
fraction expansion methods. Our goal is twofold: we give realistic comparisons
between known methods and novel approaches, and we present error
bounds that allow one to guarantee a given accuracy when terminating the Lanczos
method and the multishift-CG solver applied within the partial fraction
expansion methods.
Comment: 30 pages, 2 figures
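The partial fraction route can be sketched with Neuberger's rational approximation of the sign function; the sketch below solves each shifted system directly, whereas the multishift-CG solver the abstract refers to would share a single Krylov basis across all shifts (the direct solves are an illustrative simplification):

```python
import numpy as np

def sign_pfe(A, b, p):
    """Partial fraction approximation of sign(A) @ b (Neuberger form).

    sign(x) ~ (x/p) * sum_s 1 / (x^2 cos^2(t_s) + sin^2(t_s)),
    with t_s = pi (s - 1/2) / (2p); for p = 1 this reduces to 2x/(1+x^2).
    Each term requires a solve with a shifted A^2; here we solve directly.
    """
    n = len(b)
    A2 = A @ A
    y = np.zeros_like(b, dtype=float)
    for s in range(1, p + 1):
        t = np.pi * (s - 0.5) / (2 * p)
        c2, s2 = np.cos(t) ** 2, np.sin(t) ** 2
        y += np.linalg.solve(c2 * A2 + s2 * np.eye(n), b)
    return (A @ y) / p
```

The error bounds discussed in the abstract address exactly the question this sketch glosses over: how early the inner iterative solves for the shifted systems may be stopped while still guaranteeing a target accuracy in sign(A)b.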
A black-box rational Arnoldi variant for Cauchy-Stieltjes matrix functions
Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for automated parameter selection when the function to be approximated is of Cauchy-Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
Rational Krylov approximation of matrix functions: Numerical methods and optimal pole selection
Matrix functions are a central topic of linear algebra, and problems of their numerical approximation appear increasingly often in scientific computing. We review various rational Krylov methods for the computation of large-scale matrix functions. Emphasis is put on the rational Arnoldi method and variants thereof, namely, the extended Krylov subspace method and the shift-and-invert Arnoldi method, but we also discuss the nonorthogonal generalized Leja point (or PAIN) method. The issue of optimal pole selection for rational Krylov methods applied for approximating the resolvent and exponential function, and functions of Markov type, is treated in some detail.
Application of vector-valued rational approximations to the matrix eigenvalue problem and connections with Krylov subspace methods
Let F(z) be a vector-valued function F: C → C^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods that is observed to possess computational advantages over their common mode of usage.
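The Krylov side of the equivalence can be illustrated concretely: a k-step Arnoldi process generalizes the power method in that its dominant Ritz values approximate several of the largest-magnitude eigenvalues simultaneously, rather than only the single dominant one. A minimal sketch (illustrative, not the paper's construction):

```python
import numpy as np

def arnoldi_ritz(A, v0, k):
    """Ritz values from a k-step Arnoldi process started at v0.

    The eigenvalues of the k x k Hessenberg matrix H = V* A V are the
    Ritz values; the dominant ones converge to the largest-magnitude
    eigenvalues of A, generalizing the power method.
    """
    n = len(v0)
    V = np.zeros((n, k))
    H = np.zeros((k, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.dot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        if j + 1 < k:
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
    return np.sort(np.linalg.eigvals(H))
```

For symmetric A this reduces to the Lanczos process (H becomes tridiagonal), the other Krylov method the abstract names.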
Spectral discretization errors in filtered subspace iteration
We consider filtered subspace iteration for approximating a cluster of
eigenvalues (and its associated eigenspace) of a (possibly unbounded)
selfadjoint operator in a Hilbert space. The algorithm is motivated by a
quadrature approximation of an operator-valued contour integral of the
resolvent. Resolvents on infinite dimensional spaces are discretized in
computable finite-dimensional spaces before the algorithm is applied. This
study focuses on how such discretizations result in errors in the eigenspace
approximations computed by the algorithm. The computed eigenspace is then used
to obtain approximations of the eigenvalue cluster. Bounds for the Hausdorff
distance between the computed and exact eigenvalue clusters are obtained in
terms of the discretization parameters within an abstract framework. A
realization of the proposed approach for a model second-order elliptic operator
using a standard finite element discretization of the resolvent is described.
Some numerical experiments are conducted to gauge the sharpness of the
theoretical estimates.
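The algorithm described above can be sketched compactly in a FEAST-like finite-dimensional form: a rational filter obtained by quadrature of the resolvent contour integral is applied to a random block, and the eigenvalue cluster is recovered by Rayleigh-Ritz. All parameters below are illustrative, and a small dense matrix stands in for the paper's finite element discretization of an unbounded operator:

```python
import numpy as np

def filtered_subspace_iteration(A, center, radius, m, q, iters=3, seed=0):
    """Filtered subspace iteration sketch for a symmetric matrix A.

    The spectral projector onto eigenvalues inside |z - center| = radius
    is approximated by a q-node trapezoidal quadrature of the contour
    integral (1/2 pi i) \oint (zI - A)^{-1} dz; iterating this filter on
    an n x m random block isolates the target eigenspace.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, m))
    for _ in range(iters):
        Z = np.zeros((n, m))
        for j in range(q):                   # midpoint nodes on the circle
            t = 2 * np.pi * (j + 0.5) / q
            z = center + radius * np.exp(1j * t)
            w = radius * np.exp(1j * t) / q  # quadrature weight, dz/(2 pi i)
            Z += (w * np.linalg.solve(z * np.eye(n) - A, Y)).real
        Y, _ = np.linalg.qr(Z)               # re-orthonormalize the block
    Am = Y.T @ A @ Y                         # Rayleigh-Ritz on filtered basis
    return np.linalg.eigvalsh(Am)
```

The discretization errors the paper analyzes enter where this sketch uses exact solves: on an infinite-dimensional space each resolvent application must itself be approximated in a computable finite-dimensional space, and those approximations perturb the computed eigenspace and eigenvalue cluster.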