
    Convergence and round-off errors in a two-dimensional eigenvalue problem using spectral methods and Arnoldi-Chebyshev algorithm

    An efficient way of solving 2D stability problems in fluid mechanics is to use, after discretization of the equations that cast the problem in the form of a generalized eigenvalue problem, the incomplete Arnoldi-Chebyshev method. This method preserves the banded sparsity structure of the matrices of the algebraic eigenvalue problem and thus decreases memory use and CPU-time consumption. The errors that affect computed eigenvalues and eigenvectors are due to the truncation in the discretization and to finite precision in the computation of the discretized problem. In this paper we analyze these two errors and the interplay between them. We use as a test case the two-dimensional eigenvalue problem yielded by the computation of inertial modes in a spherical shell. This problem contains many difficulties that make it a very good test case. It turns out that single modes (especially the most-damped modes, i.e. those with high spatial frequency) can be very sensitive to round-off errors, even when apparently good spectral convergence is achieved. The influence of round-off errors is analyzed using the spectral portrait technique and by comparison of double precision and extended precision computations. Through the analysis we give practical recipes to control the truncation and round-off errors on eigenvalues and eigenvectors. Comment: 15 pages, 9 figures
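
    The abstract's solver is an incomplete Arnoldi-Chebyshev code, which is not reproduced here. As a rough sketch of the same shift-and-invert Arnoldi setting, the snippet below uses SciPy's implicitly restarted Arnoldi on a hypothetical banded generalized problem A x = lambda B x, and probes eigenvalue sensitivity by re-solving a problem perturbed at the level of machine precision, loosely in the spirit of the round-off analysis described above. The matrices A and B, the shift sigma, and the perturbation E are all illustrative, not taken from the paper.

        # Illustrative sketch only: SciPy's shift-and-invert Arnoldi stands in for the
        # incomplete Arnoldi-Chebyshev solver discussed in the abstract.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 500
        # Hypothetical banded test matrices (the paper uses the inertial-mode operator).
        A = sp.diags([np.full(n - 1, -1.0), np.linspace(1.0, 3.0, n), np.full(n - 1, -1.0)],
                     offsets=[-1, 0, 1], format="csc")
        B = sp.identity(n, format="csc")

        sigma = 2.0  # eigenvalues of A x = lambda B x nearest this shift are targeted
        vals, vecs = spla.eigs(A, k=6, M=B, sigma=sigma, which="LM")

        # Crude sensitivity check: perturb A at roughly machine-precision level and
        # compare the recomputed eigenvalues with the originals.
        E = sp.random(n, n, density=1e-3, format="csc") * np.finfo(float).eps * spla.norm(A)
        vals_pert, _ = spla.eigs(A + E, k=6, M=B, sigma=sigma, which="LM")
        print(np.sort_complex(vals))
        print(np.abs(np.sort_complex(vals) - np.sort_complex(vals_pert)))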

    Complete Iterative Method for Computing Pseudospectra

    Efficient codes for computing pseudospectra of large sparse matrices usually use a Lanczos-type method with the shift-and-invert technique and a shift equal to zero. These codes are therefore very efficient for computing pseudospectra on regions where the matrix is nonnormal (because $\|(A - zI)^{-1}\|_2$ is large), but they lose their efficiency when they compute pseudospectra on regions where the spectrum of A is not sensitive ($\|(A - zI)^{-1}\|_2$ is small). A way to overcome this loss of efficiency using only iterative methods associated with an adaptive shift is proposed. 1 Introduction. The $\varepsilon$-pseudoeigenvalue and $\varepsilon$-pseudospectrum are defined as follows: $\lambda$ is an $\varepsilon$-pseudoeigenvalue of $A$ if $\lambda$ is an eigenvalue of $A + E$ with $\|E\|_2 \le \varepsilon \|A\|_2$; the $\varepsilon$-pseudospectrum of $A$ is defined by $\Lambda_\varepsilon(A) = \{z \in \mathbb{C} : z \text{ is an } \varepsilon\text{-pseudoeigenvalue of } A\}$. For a fixed $\varepsilon$, the contour of $\Lambda_\varepsilon(A)$ can be defined as $\{z \in \mathbb{C} : \|A\|_2 \,\|(A - zI)^{-1}\|_2 = \varepsilon^{-1}\}$. The graphical representati..
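
    A minimal sketch of the definition above: since $\|(A - zI)^{-1}\|_2 = 1/\sigma_{\min}(A - zI)$, the contour condition is equivalent to $\sigma_{\min}(A - zI) = \varepsilon \|A\|_2$, so contouring $\sigma_{\min}(A - zI)/\|A\|_2$ on a grid at level $\varepsilon$ traces the $\varepsilon$-pseudospectrum boundary. This dense SVD-per-grid-point evaluation is the brute-force baseline that the iterative methods in the abstract are meant to replace; the test matrix and grid are illustrative.

        # Brute-force grid evaluation of the pseudospectrum level function
        # sigma_min(A - zI) / ||A||_2; contour at level eps for the eps-pseudospectrum.
        import numpy as np

        def pseudospectrum_levels(A, re, im):
            """Return sigma_min(A - zI) / ||A||_2 on the grid re x im."""
            normA = np.linalg.norm(A, 2)
            n = A.shape[0]
            levels = np.empty((len(im), len(re)))
            for i, y in enumerate(im):
                for j, x in enumerate(re):
                    z = x + 1j * y
                    # smallest singular value of the shifted matrix
                    levels[i, j] = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1] / normA
            return levels

        # Example on a small, strongly nonnormal matrix
        A = np.array([[1.0, 1e3], [0.0, 2.0]])
        re = np.linspace(-1.0, 4.0, 60)
        im = np.linspace(-2.0, 2.0, 60)
        L = pseudospectrum_levels(A, re, im)   # contour L at level eps to plot the boundary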

    Eigenvalue bounds from the Schur form

    Computing the partial Schur form of a matrix is a common kernel in widely used software for solving eigenvalue problems. Partial Schur forms and Schur vectors also arise naturally in deflation techniques. In this paper, error bounds are proposed which are based on the Schur form of a matrix. We show how the bounds derived for the general case simplify in special situations such as Hermitian, partially normal, or nearly normal matrices. The derived bounds are similar to well-known bounds such as the Kato-Temple and the Bauer-Fike inequalities.
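
    The paper's Schur-form bounds are not reproduced here. As a hedged illustration of the kind of inequality the abstract refers to, the sketch below computes a full Schur form for the exact eigenvalues and evaluates the standard residual-based Bauer-Fike bound for an approximate eigenpair: if (mu, x) with unit x has residual r = Ax - mu x, then (mu, x) is exact for the perturbed matrix A - r x^*, so min_i |mu - lambda_i| <= kappa_2(V) ||r||_2 for a diagonalizable A = V diag(lambda) V^{-1}. The test matrix and the perturbed eigenvector are illustrative.

        # Bauer-Fike style residual bound (not the Schur-form bounds derived in the paper).
        import numpy as np
        from scipy.linalg import schur, eig

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 50))

        # Full Schur form A = Q T Q^*; the diagonal of T carries the eigenvalues.
        T, Q = schur(A, output="complex")
        eigvals = np.diag(T)

        # Approximate eigenpair (mu, x): here simply a perturbed exact pair, for illustration.
        w, V = eig(A)
        x = V[:, 0] + 1e-6 * rng.standard_normal(50)
        x /= np.linalg.norm(x)
        mu = x.conj() @ A @ x                 # Rayleigh quotient
        r = A @ x - mu * x                    # residual; (mu, x) is exact for A - r x^*

        kappaV = np.linalg.cond(V)            # eigenvector condition number (A diagonalizable)
        bound = kappaV * np.linalg.norm(r)    # Bauer-Fike: min_i |mu - lambda_i| <= kappa(V) ||r||
        print(np.min(np.abs(eigvals - mu)), "<=", bound)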

    Comparative Behaviour of Eigensolvers on Highly Nonnormal Matrices

    The bad numerical behaviour of iterative solvers for linear systems when applied to highly nonnormal matrices is known and has already been studied. In this paper, we study the influence of the departure from normality on three iterative eigensolvers: the QR algorithm, the Tchebycheff subspace iteration algorithm and the Arnoldi-Tchebycheff algorithm. We compare these algorithms with respect to two different backward stabilities: the backward stability of the eigenproblem $Ax = \lambda x$ associated with a computed eigenpair, and the backward stability associated with the projection on an approximate invariant subspace. We compute the backward error formulae associated with the Arnoldi factorization implemented with the Householder algorithm, the Givens algorithm and the iterative modified Gram-Schmidt algorithm. 1 Introduction. The departure from normality of a matrix plays an essential role in numerical matrix computations. The bad numerical behaviour of highly nonnormal matrices has been kn..
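
    The two backward stabilities compared in the abstract can be illustrated with the usual normwise formulae: the backward error of an eigenpair, ||Ax - lambda x|| / (||A|| ||x||), and the backward error of an approximate invariant subspace span(Q), ||AQ - Q(Q^*AQ)|| / ||A||. These are standard definitions; the paper's exact formulae and scalings may differ, and the nonnormal test matrix below is illustrative only.

        # Normwise backward errors for an eigenpair and for an invariant subspace.
        import numpy as np

        def eigpair_backward_error(A, lam, x):
            """Normwise backward error of the approximate eigenpair (lam, x)."""
            return np.linalg.norm(A @ x - lam * x) / (np.linalg.norm(A, 2) * np.linalg.norm(x))

        def invariant_subspace_backward_error(A, Q):
            """Backward error of span(Q) as an approximate invariant subspace (Q orthonormal)."""
            H = Q.conj().T @ A @ Q            # projected (Rayleigh quotient) matrix
            R = A @ Q - Q @ H                 # block residual
            return np.linalg.norm(R, 2) / np.linalg.norm(A, 2)

        # Usage on a deliberately nonnormal test matrix
        A = np.triu(np.random.default_rng(1).standard_normal((40, 40))) + 50 * np.eye(40, k=1)
        w, V = np.linalg.eig(A)
        print(eigpair_backward_error(A, w[0], V[:, 0]))
        Q, _ = np.linalg.qr(V[:, :5])         # orthonormal basis of an approximate invariant subspace
        print(invariant_subspace_backward_error(A, Q))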

    Stopping Criteria for Eigensolvers

    In most iterative methods for solving linear systems, the stopping criterion is based on the backward error. In this paper, after recalling the situation for linear systems and showing that backward analysis can also be used for eigenproblems, we present similar stopping criteria for eigensolvers. We also present the link between the backward error and the forward error using the condition number. Numerical experiments with symmetric eigensolvers (Jacobi and Lanczos methods) and with nonsymmetric ones (QR algorithm, subspace iteration and the Arnoldi-Tchebycheff method) illustrate the choice of the backward error. 1 Introduction. Physical problems that scientists want to model on a computer often lead to a numerical problem which corresponds either to (1) solving a linear system, i.e., given a matrix A and a vector b, find x such that Ax = b, or (2) solving an eigenproblem, i.e., given a matrix A, find an eigenpair $(\lambda, x)$ such that $Ax = \lambda x$ and $x \neq 0$. For the first class of problems, one can use..
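
    A minimal sketch, under standard assumptions, of the backward-error stopping criterion and of its first-order link to the forward error: for a simple eigenvalue with right eigenvector x and left eigenvector y, the condition number is ||x|| ||y|| / |y^* x|, and the error on lambda is at most about (condition number) x (backward error) x ||A||_2. These are the usual normwise formulae, not necessarily those of the paper; the test matrix is illustrative.

        # Backward error of an eigenpair and forward-error estimate via the condition number.
        import numpy as np
        from scipy.linalg import eig

        def eig_backward_error(A, lam, x):
            return np.linalg.norm(A @ x - lam * x) / (np.linalg.norm(A, 2) * np.linalg.norm(x))

        def eig_condition_number(x, y):
            """Condition number of a simple eigenvalue: ||x|| ||y|| / |y^* x| (y: left eigenvector)."""
            return np.linalg.norm(x) * np.linalg.norm(y) / abs(y.conj() @ x)

        A = np.random.default_rng(2).standard_normal((30, 30))
        w, Vl, Vr = eig(A, left=True, right=True)
        lam, x, y = w[0], Vr[:, 0], Vl[:, 0]

        eta = eig_backward_error(A, lam, x)
        cond = eig_condition_number(x, y)
        # Rule of thumb: forward error on lambda is at most about cond * eta * ||A||_2.
        print(eta, cond, cond * eta * np.linalg.norm(A, 2))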

    Computing the field of values and pseudospectra using the Lanczos method with continuation

    The field of values and pseudospectra are useful tools for understanding the behaviour of various matrix processes. To compute these subsets of the complex plane it is necessary to estimate one or two eigenvalues of a large number of parametrized Hermitian matrices; these computations are prohibitively expensive for large, possibly sparse, matrices, if done by use of the QR algorithm. We describe an approach based on the Lanczos method with selective reorthogonalization and Chebyshev acceleration that, when combined with continuation and a shift and invert technique, enables efficient and reliable computation of the field of values and pseudospectra for large matrices. The idea of using the Lanczos method with continuation to compute pseudospectra is not new, but in experiments reported here our algorithm is faster and more accurate than existing algorithms of this type
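
    The parametrized Hermitian matrices mentioned in the abstract are, for the field of values, H(theta) = (e^{i theta} A + e^{-i theta} A^*) / 2: for each angle theta, the eigenvector of the largest eigenvalue of H(theta) yields a boundary point x^* A x of the field of values (Johnson's algorithm). The sketch below uses a dense Hermitian eigensolver in place of the Lanczos-with-continuation approach the paper develops for large sparse matrices; the function name and test matrix are illustrative.

        # Boundary points of the field of values via extreme eigenvectors of H(theta).
        import numpy as np

        def field_of_values_boundary(A, n_angles=64):
            pts = []
            for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
                H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
                w, V = np.linalg.eigh(H)      # Hermitian eigenproblem, eigenvalues ascending
                x = V[:, -1]                  # eigenvector of the largest eigenvalue of H(theta)
                pts.append(x.conj() @ A @ x)  # boundary point of the field of values
            return np.array(pts)

        A = np.array([[1.0, 5.0], [0.0, 2.0]])
        print(field_of_values_boundary(A, 8))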

    A Parallelizable Preconditioner for The Iterative Solution of Implicit Runge-Kutta Type Methods

    The main difficulty in the implementation of most standard implicit Runge-Kutta (IRK) methods applied to (stiff) ordinary differential equations (ODEs) is to efficiently solve the nonlinear system of equations. In this article we propose the use of a preconditioner whose decomposition cost for a parallel implementation is equivalent to the cost of the implicit Euler method. The preconditioner is based on the W-transformation of the RK coefficient matrices discovered by Hairer and Wanner. For stiff ODEs the preconditioner is by construction asymptotically exact for methods with an invertible RK coefficient matrix. The methodology is particularly useful when applied to super partitioned additive Runge-Kutta (SPARK) methods. The nonlinear system can be solved by inexact simplified Newton iterations: at each simplified Newton step the linear system can be approximately solved by an iterative method applied to the preconditioned linear system. Key words: implicit Runge-Kutta methods, ine..
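
    To make the setting concrete: for a linear ODE y' = J y, the simplified-Newton stage system of an s-stage IRK method with coefficient matrix A_rk is (I - h * kron(A_rk, J)). The sketch below solves this system with preconditioned GMRES using a hypothetical block-diagonal preconditioner whose per-stage factorization of (I - h a_ii J) costs the same as one implicit Euler step; it only illustrates the preconditioned inexact-Newton idea and is not the W-transformation preconditioner the paper constructs. The Radau IIA coefficients, the stiff Laplacian J, and the step size are illustrative.

        # Preconditioned iterative solve of the IRK stage system for a linear stiff ODE.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # 3-stage Radau IIA coefficient matrix (order 5)
        A_rk = np.array([[0.19681547722366, -0.06553542585020,  0.02377097434822],
                         [0.39442431473909,  0.29207341166523, -0.04154875212600],
                         [0.37640306270047,  0.51248582618842,  0.11111111111111]])
        s = A_rk.shape[0]

        n, h = 200, 0.05
        J = sp.diags([np.full(n - 1, 1.0), -2.0 * np.ones(n), np.full(n - 1, 1.0)],
                     [-1, 0, 1], format="csc") * (n + 1) ** 2   # stiff 1D Laplacian
        I_n = sp.identity(n, format="csc")

        K = sp.identity(s * n, format="csc") - h * sp.kron(A_rk, J, format="csc")
        rhs = np.ones(s * n)

        # Hypothetical block-diagonal preconditioner: one factorization of (I - h a_ii J) per stage.
        blocks = [spla.splu((I_n - h * A_rk[i, i] * J).tocsc()) for i in range(s)]
        def apply_prec(v):
            return np.concatenate([blocks[i].solve(v[i * n:(i + 1) * n]) for i in range(s)])
        P = spla.LinearOperator((s * n, s * n), matvec=apply_prec)

        z, info = spla.gmres(K, rhs, M=P)
        print("GMRES converged" if info == 0 else f"info={info}")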

    The Influence of Large Nonnormality on the Quality of Convergence of Iterative Methods in Linear Algebra

    The departure from normality of a matrix plays an essential role in numerical matrix computations. The bad numerical behaviour of highly nonnormal matrices has been known for a long time ([14], [25], [5]). But this first effect of high nonnormality, i.e. the increase of spectral instability, was considered by practitioners as a mathematical oddity, since such matrices were not often encountered in practice. Even the most recent textbooks for engineers on eigenvalue computations, such as [19], do not warn the reader against such a possible difficulty. However, present-day computers make large-scale problems tractable and allow engineers to elaborate more and more complex and realistic models of physical phenomena. It seems that now more and more matrices that model physical problems at the edge of instability arise ([16], [18], [9]); these have a possibly unbounded departure from normality, and they challenge many robust numerical codes because of a second - and newly ana..
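
    One common way to quantify the "departure from normality" discussed above is Henrici's measure, computed from the Schur form A = Q (D + N) Q^* as ||N||_F = sqrt(||A||_F^2 - sum_i |lambda_i|^2), which is zero exactly when A is normal. The sketch below is only an illustration of this standard quantity; the paper may use a different measure, and the test matrices are illustrative.

        # Henrici's departure from normality via the (complex) Schur form.
        import numpy as np
        from scipy.linalg import schur

        def henrici_departure(A):
            """sqrt(||A||_F^2 - sum |lambda_i|^2): zero iff A is normal."""
            T, _ = schur(np.asarray(A, dtype=complex), output="complex")
            off = np.linalg.norm(A, "fro") ** 2 - np.sum(np.abs(np.diag(T)) ** 2)
            return np.sqrt(max(off, 0.0))     # guard against tiny negative round-off

        # A normal (diagonal) matrix gives 0; a matrix with a huge off-diagonal entry does not.
        print(henrici_departure(np.diag([1.0, 2.0, 3.0])))
        print(henrici_departure(np.array([[1.0, 1e6], [0.0, 1.0]])))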