
    Non-strong uniqueness in real and complex Chebyshev approximation


    An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
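
    The abstract describes the method only in words; below is a minimal NumPy sketch of the standard two-sided Lanczos process that the paper's look-ahead version improves on. The test for a tiny inner product w_k^T v_k flags exactly the (near-)breakdown that look-ahead steps over; the look-ahead machinery itself (the paper's actual contribution) is not reproduced here, and the tolerance is an illustrative choice.

```python
import numpy as np

def two_sided_lanczos(A, v0, w0, m, tol=1e-12):
    """Standard nonsymmetric (two-sided) Lanczos without look-ahead.
    Builds biorthogonal bases V of K_m(A, v0) and W of K_m(A^T, w0);
    stops at a (near-)breakdown, i.e. when w_k^T v_k ~ 0 although
    neither vector vanishes -- the event look-ahead is designed to skip."""
    n = v0.size
    V = np.zeros((n, m + 1)); W = np.zeros((n, m + 1))
    V[:, 0] = v0 / np.linalg.norm(v0)
    W[:, 0] = w0 / np.linalg.norm(w0)
    delta = np.zeros(m + 1)
    for k in range(m):
        delta[k] = W[:, k] @ V[:, k]
        if abs(delta[k]) < tol:            # (near-)breakdown of the standard process
            return V[:, :k], W[:, :k], k
        r = A @ V[:, k]                    # one matrix-vector product with A ...
        s = A.T @ W[:, k]                  # ... and one with A^T, as in the standard process
        # by biorthogonality, only the last two basis vectors contribute
        for j in (k - 1, k):
            if j >= 0:
                r -= ((W[:, j] @ r) / delta[j]) * V[:, j]
                s -= ((V[:, j] @ s) / delta[j]) * W[:, j]
        if np.linalg.norm(r) < tol or np.linalg.norm(s) < tol:
            return V[:, :k + 1], W[:, :k + 1], k + 1   # benign termination
        V[:, k + 1] = r / np.linalg.norm(r)
        W[:, k + 1] = s / np.linalg.norm(s)
    return V[:, :m + 1], W[:, :m + 1], m
```

    Eigenvalue approximations can then be read off the small pencil (W^T A V, W^T V), which is tridiagonal up to diagonal scaling.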

    A framework for deflated and augmented Krylov subspace methods

    We consider deflation and augmentation techniques for accelerating the convergence of Krylov subspace methods for the solution of nonsingular linear algebraic systems. Despite some formal similarity, the two techniques are conceptually different from preconditioning. Deflation (in the sense the term is used here) "removes" certain parts from the operator, making it singular, while augmentation adds a subspace to the Krylov subspace (often the one that is generated by the singular operator); in contrast, preconditioning changes the spectrum of the operator without making it singular. Deflation and augmentation have been used in a variety of methods and settings. Typically, deflation is combined with augmentation to compensate for the singularity of the operator, but both techniques can be applied separately. We introduce a framework of Krylov subspace methods that satisfy a Galerkin condition. It includes the families of orthogonal residual (OR) and minimal residual (MR) methods. We show that in this framework augmentation can be achieved either explicitly or, equivalently, implicitly by projecting the residuals appropriately and correcting the approximate solutions in a final step. We study conditions for a breakdown of the deflated methods, and we show several possibilities to avoid such breakdowns for the deflated MINRES method. Numerical experiments illustrate properties of different variants of deflated MINRES analyzed in this paper.
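
    A minimal sketch of the deflate-then-correct idea, for a symmetric positive definite A (an assumption made here for simplicity; the matrix, the random deflation space Z, and the sizes are all illustrative). With Q = Z (Z^T A Z)^{-1} Z^T and the projection P = I - A Q, one solves the singular but consistent system P A x~ = P b and recovers x = Q b + (I - Q A) x~, so that the final residual A x - b equals the inner solver's residual P A x~ - P b.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n, k = 200, 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                # SPD test matrix (assumption)
b = rng.standard_normal(n)
Z = rng.standard_normal((n, k))            # deflation space (random, for illustration only)

E = Z.T @ A @ Z                            # small k x k coarse matrix
Q = Z @ np.linalg.solve(E, Z.T)            # Q = Z E^{-1} Z^T
def PA(y):                                 # the deflated (singular!) operator P A
    z = A @ y
    return z - A @ (Q @ z)

xt, info = gmres(LinearOperator((n, n), matvec=PA), b - A @ (Q @ b))
x = Q @ b + xt - Q @ (A @ xt)              # final correction step
print(np.linalg.norm(A @ x - b))           # small: equals the inner GMRES residual
```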

    From qd to LR, or, how were the qd and LR algorithms discovered?

    Perhaps the most astonishing idea in eigenvalue computation is Rutishauser's idea of applying the LR transform to a matrix for generating a sequence of similar matrices that become more and more triangular. The same idea is the foundation of the ubiquitous QR algorithm. It is well known that this idea originated in Rutishauser's qd algorithm, which precedes the LR algorithm and can be understood as applying LR to a tridiagonal matrix. But how did Rutishauser discover qd, and when did he find the qd-LR connection? We checked some of the early sources and have come up with an explanation.
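
    The LR transform itself is easy to state; here is a small, hedged demonstration (assuming every iterate admits an LU factorization without pivoting, as a symmetric positive definite matrix does) of the diagonal converging to the eigenvalues. The test matrix is an arbitrary illustrative choice.

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization without pivoting (assumed to exist)."""
    n = A.shape[0]
    L = np.eye(n); U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

def lr_iteration(A, steps=50):
    """Rutishauser's LR transform: A_{k+1} = U_k L_k where A_k = L_k U_k.
    Each step is a similarity transform (A_{k+1} = L_k^{-1} A_k L_k); under
    suitable assumptions A_k tends to upper triangular form with the
    eigenvalues appearing on the diagonal.  Applied to a tridiagonal
    matrix, this is essentially the qd algorithm."""
    for _ in range(steps):
        L, U = lu_nopivot(A)
        A = U @ L
    return A

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(np.diag(lr_iteration(A))))   # compare with:
print(np.sort(np.linalg.eigvalsh(A)))
```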

    Lanczos-type solvers for nonsymmetric linear systems of equations

    Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirements. This review article introduces the reader not only to the basic forms of the Lanczos process and some of the related theory, but also describes in detail a number of solvers that are based on it, including those that are considered to be the most efficient ones. Possible breakdowns of the algorithms and ways to cure them by look-ahead are also discussed.
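
    As a concrete instance of the "short recurrences, low memory" point, here is textbook BiCG, the simplest Lanczos-type solver, as a generic NumPy sketch rather than any particular variant from the article. It stores only a handful of vectors and uses two matrix-vector products per step; the two raise statements mark the breakdowns that the look-ahead techniques mentioned above are designed to cure.

```python
import numpy as np

def bicg(A, b, tol=1e-10, maxit=None):
    """Textbook BiCG: two coupled short recurrences, O(n) storage,
    one product with A and one with A^T per step."""
    n = b.size
    maxit = maxit or 2 * n
    x = np.zeros(n)
    r = b.astype(float).copy()
    rt = r.copy()                        # shadow residual (common choice: rt = r0)
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x
        q, qt = A @ p, A.T @ pt
        sigma = pt @ q
        if abs(sigma) < 1e-14:
            raise RuntimeError("pivot breakdown (pt^T A p = 0)")
        alpha = rho / sigma
        x += alpha * p
        r -= alpha * q
        rt -= alpha * qt
        rho_new = rt @ r
        if abs(rho_new) < 1e-14:
            raise RuntimeError("Lanczos breakdown (rt^T r = 0)")
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x
```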

    IDR explained

    The Induced Dimension Reduction (IDR) method is a Krylov space method for solving linear systems that was developed by Peter Sonneveld around 1979. It was noticed by only a few people, and mainly as the forerunner of Bi-CGSTAB, which was introduced a decade later. In 2007 Sonneveld and van Gijzen reconsidered IDR and generalized it to IDR(s), claiming that IDR(1) ≈ IDR is equally fast but preferable to the closely related Bi-CGSTAB, and that IDR(s) with s > 1 may be much faster than Bi-CGSTAB. It also turned out that when s > 1, IDR(s) is related to ML(s)BiCGSTAB of Yeung and Chan, and that there is quite some flexibility in the IDR approach. This approach differs completely from traditional approaches to Krylov space methods, so an extra effort is required to become familiar with it and to understand the connections to, as well as the differences from, better known Krylov space methods. This expository paper aims to provide some help here and to make the method understandable even to non-experts. After presenting the history of IDR and related methods, we summarize some basic facts about Krylov space methods. Then we present the original IDR(s) in detail and put it into perspective with other methods. Specifically, we analyze the differences between the IDR method published in 1980, IDR(1), and Bi-CGSTAB. At the end, we discuss a recently proposed ingenious variant of IDR(s) whose residuals fulfill extra orthogonality conditions. There we dwell on details that have been left out of the publications of van Gijzen and Sonneveld.
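
    To make the description concrete, here is a bare NumPy transcription of the published IDR(s) prototype, without any of the refinements or safeguards discussed in the paper: the shadow matrix P is chosen at random, the small s-by-s system is assumed nonsingular, and breakdowns are not handled. Residuals are forced into a shrinking sequence of subspaces G_j; every s+1 steps the dimension of the space containing the residual drops by s.

```python
import numpy as np

def idrs(A, b, s=4, tol=1e-8, maxit=10000, seed=0):
    """Bare-bones IDR(s) prototype (no safeguards)."""
    n = b.size
    rng = np.random.default_rng(seed)
    P = np.linalg.qr(rng.standard_normal((n, s)))[0]   # random shadow space
    x, r = np.zeros(n), b.astype(float).copy()
    dX, dR = np.zeros((n, s)), np.zeros((n, s))
    for k in range(s):                     # s startup steps (local minimal residual)
        v = A @ r
        om = (v @ r) / (v @ v)
        dX[:, k], dR[:, k] = om * r, -om * v
        x, r = x + dX[:, k], r + dR[:, k]
    it, oldest, om = s, 0, 1.0
    nb = np.linalg.norm(b)
    while np.linalg.norm(r) > tol * nb and it < maxit:
        for k in range(s + 1):             # s+1 steps inside each space G_j
            c = np.linalg.solve(P.T @ dR, P.T @ r)   # assumed nonsingular
            v = r - dR @ c                 # v in G_j, orthogonal to the shadow space
            if k == 0:                     # first step: pick a new omega for G_{j+1}
                t = A @ v
                om = (t @ v) / (t @ t)
                dR_new = -dR @ c - om * t
                dX_new = -dX @ c + om * v
            else:                          # later steps reuse omega; keep dR = -A dX
                dX_new = -dX @ c + om * v
                dR_new = -(A @ dX_new)
            dX[:, oldest], dR[:, oldest] = dX_new, dR_new
            x, r = x + dX_new, r + dR_new
            oldest = (oldest + 1) % s
            it += 1
    return x, it

# usage on a well-conditioned random nonsymmetric system (illustrative)
rng = np.random.default_rng(2)
n = 300
A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x, it = idrs(A, b, s=4)
print(it, np.linalg.norm(A @ x - b))
```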

    Solving Theodorsen's Integral Equation for Conformal Maps with the Fast Fourier Transform and Various Nonlinear Iterative Methods

    We investigate several iterative methods for the numerical solution of Theodorsen's integral equation, the discretization of which is either based on trigonometric polynomials or on function families with known attenuation factors. All our methods require simultaneous evaluations of a conjugate periodic function at each step and allow us to apply the fast Fourier transform for this. In particular, we discuss the nonlinear JOR iteration, the nonlinear SOR iteration, a nonlinear second-order Euler iteration, the nonlinear Chebyshev semi-iterative method, and its cyclic variant. Under special symmetry conditions for the region to be mapped onto, we establish local convergence in the case of discretization by trigonometric interpolation and give simple formulas for the optimal parameters (e.g., the underrelaxation factor) and the asymptotic convergence factor. Weaker related results for the general non-symmetric case are presented too. In practice, our methods extend the range of application of Theodorsen's method and strikingly improve its effectiveness.
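
    The common building block of all these iterations is the conjugate-function evaluation, which on a uniform grid is a diagonal operation on Fourier coefficients (e^{ikt} is mapped to -i sign(k) e^{ikt}) and hence costs O(N log N) via the FFT. A minimal sketch for a real 2π-periodic function sampled at N equidistant points; the full Theodorsen iterations built on top of it are not reproduced here.

```python
import numpy as np

def conjugate_periodic(u):
    """Trigonometric conjugation of N equidistant samples of a real
    2*pi-periodic function, via FFT: multiply the k-th Fourier
    coefficient by -i*sign(k) (the mean is mapped to zero)."""
    N = u.size
    k = np.fft.fftfreq(N, d=1.0 / N)      # integer wave numbers 0, 1, ..., -1
    U = np.fft.fft(u)
    return np.real(np.fft.ifft(-1j * np.sign(k) * U))

# sanity check: the conjugate of cos(t) is sin(t)
t = 2 * np.pi * np.arange(64) / 64
print(np.max(np.abs(conjugate_periodic(np.cos(t)) - np.sin(t))))   # ~1e-16
```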

    Attenuation Factors in Multivariate Fourier Analysis

    W. Gautschi's theory of attenuation factors for families of periodic functions in one variable is extended to families of functions in several variables. Again, the linearity and the translation invariance of the operator that maps the data space onto the family are crucial. Special results are obtained for tensor product families and for interpolation by translates of one generating function. Interesting examples are provided by box splines, which include certain finite elements.
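
    As a concrete illustration of what an attenuation factor is, here is a numerical check of a classical result from the univariate theory (not from the multivariate extension above): for piecewise-linear periodic interpolation, the k-th Fourier coefficient of the interpolant equals sinc(k/N)^2 times the discrete (trapezoidal) coefficient, independently of the data. The sample function is an arbitrary illustrative choice.

```python
import numpy as np

N = 16
t = 2 * np.pi * np.arange(N) / N
f = np.exp(np.cos(t)) * np.sin(2 * t)        # arbitrary smooth periodic data
c_hat = np.fft.fft(f) / N                    # discrete Fourier coefficients

# Fourier coefficients of the piecewise-linear interpolant, via a fine grid
M = N * 256
tt = 2 * np.pi * np.arange(M) / M
ff = np.interp(tt, np.append(t, 2 * np.pi), np.append(f, f[0]))
c_lin = np.fft.fft(ff) / M

k = 3
tau = np.sinc(k / N) ** 2                    # attenuation factor for this k
print(c_lin[k] / c_hat[k], tau)              # agree up to fine-grid quadrature error
```

    The factor depends only on k and the interpolation scheme, not on the data, which is exactly what the linearity and translation invariance of the interpolation operator buy.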

    Updating the QR decomposition of block tridiagonal and block Hessenberg matrices

    We present an efficient block-wise update scheme for the QR decomposition of block tridiagonal and block Hessenberg matrices. For example, such matrices come up in generalizations of the Krylov space solvers MinRes, SymmLQ, GMRes, and QMR to block methods for linear systems of equations with multiple right-hand sides. In the non-block case it is very efficient (and, in fact, standard) to use Givens rotations for these QR decompositions. Normally, the same approach is also used with column-wise updates in the block case. However, we show that, even for small block sizes, block-wise updates using (in general, complex) Householder reflections instead of Givens rotations are far more efficient in this case, in particular if the unitary transformations that incorporate the reflections determined by a whole block are computed explicitly. Naturally, the bigger the block size, the bigger the savings. We discuss the somewhat complicated algorithmic details of this block-wise update, and present numerical experiments on accuracy and timing for the various options (Givens vs. Householder, block-wise vs. column-wise update, explicit vs. implicit computation of unitary transformations). Our treatment allows variable block sizes and can be adapted to block Hessenberg matrices that do not have the special structure encountered in the above-mentioned block Krylov space solvers.
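
    A minimal illustration of the block-wise alternative (not the paper's full algorithm, which handles variable block sizes and the accumulation of the transformations): one elimination step of the block-tridiagonal QR update, done by a Householder-based QR of a stacked block column and applied to the trailing columns with a single matrix-matrix product. Block sizes and matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
b = 4                                      # block size
D = rng.standard_normal((b, b))            # diagonal block
B = rng.standard_normal((b, b))            # subdiagonal block to be zeroed
C = rng.standard_normal((2 * b, 3 * b))    # trailing columns carried along

S = np.vstack([D, B])                      # stacked 2b x b block column
Q, R = np.linalg.qr(S, mode='complete')    # Householder-based QR (LAPACK)
print(np.linalg.norm((Q.T @ S)[b:]))       # ~1e-15: subdiagonal block eliminated
C_updated = Q.T @ C                        # one dense multiply per block column
```

    Forming Q explicitly and updating trailing block columns with matrix-matrix products (BLAS-3) rather than with a sequence of individual Givens rotations is, roughly, what makes the block-wise variant faster even for small b.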

    Look-ahead Levinson and Schur algorithms for non-Hermitian Toeplitz systems
