
    Residual, restarting and Richardson iteration for the matrix exponential

    A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted notion of residual. An important matrix function for which this is the case is the matrix exponential. Assume that the matrix exponential of a given matrix times a given vector has to be computed. We interpret the sought-after vector as the value of a vector function satisfying the linear system of ordinary differential equations (ODEs) whose coefficient matrix is the given matrix. The residual is then defined with respect to the initial-value problem for this ODE system. The residual introduced in this way can be seen as a backward error. We show how the residual can be computed efficiently within several iterative methods for the matrix exponential. This completely resolves the question of reliable stopping criteria for these methods. Furthermore, we show that the residual concept can be used to construct new residual-based iterative methods. In particular, a variant of the Richardson method for the new residual appears to provide an efficient way to restart Krylov subspace methods for evaluating the matrix exponential.
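    To make the residual concrete: if y(t) solves y'(t) = A y(t), y(0) = v, then y(1) = exp(A) v, and the residual of an approximation y_k(t) is r_k(t) = A y_k(t) - y_k'(t). The sketch below (a minimal illustration, not the authors' code) evaluates this residual for the standard Arnoldi approximation, where it reduces to a cheap rank-one term via the relation A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T; the test matrix, the value of t and the function names are illustrative.

        import numpy as np
        from scipy.linalg import expm

        def arnoldi(A, v, k):
            # k steps of Arnoldi; assumes no breakdown (H[j+1, j] stays nonzero)
            n = v.size
            V = np.zeros((n, k + 1))
            H = np.zeros((k + 1, k))
            V[:, 0] = v / np.linalg.norm(v)
            for j in range(k):
                w = A @ V[:, j]
                for i in range(j + 1):            # modified Gram-Schmidt
                    H[i, j] = V[:, i] @ w
                    w = w - H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                V[:, j + 1] = w / H[j + 1, j]
            return V, H

        def expm_krylov_residual(A, v, k, t=1.0):
            # Arnoldi approximation y ~ exp(tA) v and the norm of the ODE residual
            # r(t) = A y(t) - y'(t) = beta * h_{k+1,k} * [expm(t H_k)]_{k,1} * v_{k+1}
            beta = np.linalg.norm(v)
            V, H = arnoldi(A, v, k)
            u = expm(t * H[:k, :k])[:, 0]         # first column of expm(t H_k)
            y = beta * (V[:, :k] @ u)
            res_norm = beta * abs(H[k, k - 1] * u[k - 1])
            return y, res_norm

        rng = np.random.default_rng(0)
        n = 100
        A = -np.diag(np.arange(1.0, n + 1.0)) + 0.01 * rng.standard_normal((n, n))
        v = rng.standard_normal(n)
        exact = expm(A) @ v
        for k in (5, 10, 20, 30):
            y, r = expm_krylov_residual(A, v, k)
            print(f"k = {k:2d}:  residual = {r:.2e}   true error = {np.linalg.norm(exact - y):.2e}")

    The printed residual norm tracks the true error without ever forming exp(A), which is what makes it usable as a stopping criterion.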

    A numerical method to compute derivatives of functions of large complex matrices and its application to the overlap Dirac operator at finite chemical potential

    We present a method for the numerical calculation of derivatives of functions of general complex matrices. The method can be used in combination with any algorithm that evaluates or approximates the desired matrix function, in particular with implicit Krylov-Ritz-type approximations. An important use case for the method is the evaluation of the overlap Dirac operator in lattice Quantum Chromodynamics (QCD) at finite chemical potential, which requires the application of the sign function of a non-Hermitian matrix to some source vector. While the sign function of non-Hermitian matrices in practice cannot be efficiently approximated with source-independent polynomials or rational functions, sufficiently good approximating polynomials can still be constructed for each particular source vector. Our method allows for an efficient calculation of the derivatives of such implicit approximations with respect to the gauge field or other external parameters, which is necessary for the calculation of conserved lattice currents or the fermionic force in Hybrid Monte Carlo or Langevin simulations. We also give an explicit deflation prescription for the case when one knows several eigenvalues and eigenvectors of the matrix that is the argument of the differentiated function. We test the method for the two-sided Lanczos approximation of the finite-density overlap Dirac operator on realistic SU(3) gauge field configurations on lattices with sizes as large as 14×14^3 and 6×18^3.
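    The paper differentiates implicit, source-dependent approximations; purely as a generic illustration of what a directional derivative of a matrix function looks like, the sketch below uses the classical block-triangular identity sign([[A, E], [0, A]]) = [[sign(A), L(A, E)], [0, sign(A)]], where L(A, E) is the Frechet derivative of the sign function at A in direction E, together with a plain Newton iteration for the sign. This is not the paper's method, and the matrix sizes, tolerances and names are illustrative.

        import numpy as np

        def matrix_sign(A, tol=1e-13, maxiter=100):
            # Newton iteration S <- (S + S^{-1}) / 2; converges to sign(A)
            # provided A has no eigenvalues on the imaginary axis
            S = np.array(A, dtype=complex)
            for _ in range(maxiter):
                S_new = 0.5 * (S + np.linalg.inv(S))
                if np.linalg.norm(S_new - S, 1) <= tol * np.linalg.norm(S_new, 1):
                    return S_new
                S = S_new
            return S

        def sign_frechet(A, E):
            # sign([[A, E], [0, A]]) = [[sign(A), L(A, E)], [0, sign(A)]]
            n = A.shape[0]
            Z = np.zeros_like(A)
            S = matrix_sign(np.block([[A, E], [Z, A]]))
            return S[:n, :n], S[:n, n:]

        rng = np.random.default_rng(1)
        n = 6
        A = rng.standard_normal((n, n)) + 0.3j * rng.standard_normal((n, n))
        E = rng.standard_normal((n, n))   # direction, e.g. a variation of an external parameter
        sgnA, L = sign_frechet(A, E)

        # finite-difference check of the directional derivative
        h = 1e-6
        fd = (matrix_sign(A + h * E) - matrix_sign(A - h * E)) / (2 * h)
        print("relative deviation from finite differences:",
              np.linalg.norm(fd - L) / np.linalg.norm(L))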

    Inexact Arnoldi residual estimates and decay properties for functions of non-Hermitian matrices

    We derive a priori residual-type bounds for the Arnoldi approximation of a matrix function, together with a strategy for setting the iteration accuracies in the inexact Arnoldi approximation of matrix functions. These results are based on the decay behavior of the entries of functions of banded matrices. Specifically, we derive a priori decay bounds for the entries of functions of banded non-Hermitian matrices by means of Faber polynomial series. Numerical experiments illustrate the quality of the results.
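    The decay phenomenon behind such bounds is easy to observe numerically. The sketch below (an illustration only, not the paper's bounds) builds a small banded non-Hermitian matrix and prints how the largest entry magnitude of exp(A) falls off with distance from the diagonal; the matrix, bandwidth and function are illustrative choices.

        import numpy as np
        from scipy.linalg import expm

        n, bw = 200, 2                                  # size and (half-)bandwidth
        rng = np.random.default_rng(2)
        A = np.zeros((n, n))
        for d in range(-bw, bw + 1):                    # fill a few off-diagonals
            A += np.diag(rng.uniform(-1, 1, n - abs(d)), d)
        A -= 2.0 * np.eye(n)                            # shift to keep exp(A) modest

        F = expm(A)
        for dist in (0, 5, 10, 20, 40, 80):
            # largest entry magnitude at distance `dist` above the main diagonal
            mags = [abs(F[i, i + dist]) for i in range(n - dist)]
            print(f"|i-j| = {dist:3d}:  max |exp(A)_ij| = {max(mags):.3e}")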

    A black-box rational Arnoldi variant for Cauchy-Stieltjes matrix functions

    Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for automated parameter selection when the function to be approximated is of Cauchy-Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
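    The Cauchy-Stieltjes structure is what makes such functions well suited to rational Krylov methods: for example z^{-1/2} = (2/pi) * integral_0^inf dx / (z + x^2), so A^{-1/2} b is a weighted combination of resolvents (A + x^2 I)^{-1} b, i.e. of shifted linear solves with poles on the negative real axis. The sketch below checks this representation numerically; it is not the paper's black-box pole-selection strategy, and the test matrix, quadrature rule and node count are illustrative.

        import numpy as np

        n = 100
        # symmetric positive definite test matrix: 1D Laplacian plus a shift
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.5 * np.eye(n)
        b = np.ones(n)

        # reference value of A^{-1/2} b via an eigendecomposition (A is symmetric)
        w, Q = np.linalg.eigh(A)
        ref = Q @ ((Q.T @ b) / np.sqrt(w))

        # midpoint rule on x = tan(theta), theta in (0, pi/2); the substituted
        # integrand (2/pi) * (1 + x^2) * (A + x^2 I)^{-1} b is bounded and smooth
        m = 400
        theta = (np.arange(m) + 0.5) * (np.pi / 2) / m
        approx = np.zeros(n)
        for th in theta:
            x2 = np.tan(th) ** 2
            approx += (1.0 + x2) * np.linalg.solve(A + x2 * np.eye(n), b)
        approx *= (2.0 / np.pi) * (np.pi / 2) / m

        print("relative error:", np.linalg.norm(approx - ref) / np.linalg.norm(ref))

    Each quadrature node contributes one shifted solve, which is exactly the kind of resolvent information a rational Krylov space with well-chosen poles compresses.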