
    Vector continued fraction algorithms.

    We consider the construction of rational approximations to given power series whose coefficients are vectors. The approximants take the form of vector-valued continued fractions, which may be used to obtain vector Padé approximants by means of recurrence relations. Algorithms for determining the vector elements of these fractions have been established using Clifford algebras. We devise new algorithms, based on these, that involve operations on vectors and scalars only, a desirable characteristic for computations involving vectors of large dimension. As a consequence, we are able to form new expressions for the numerator and denominator polynomials of these approximants as products of vectors, thus retaining their Clifford nature.
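    The "vectors and scalars only" arithmetic can be illustrated with the Samelson inverse, v⁻¹ = v/⟨v, v⟩, which is how a real vector is inverted in the Clifford-algebra setting. The sketch below is a minimal illustration, not the paper's algorithm; it assumes a simplified continued-fraction form with scalar partial numerators and vector partial denominators.

```python
import numpy as np

def samelson_inverse(v):
    """Samelson (Clifford-algebra) inverse of a real vector: v / <v, v>."""
    return v / np.dot(v, v)

def eval_vector_cfrac(b, a):
    """Evaluate b[0] + a[0]*(b[1] + a[1]*(b[2] + ...)^(-1))^(-1) bottom-up,
    where the b[k] are vectors, the a[k] are scalars, and '^(-1)' is the
    Samelson inverse, so only vector and scalar operations are needed."""
    tail = b[-1]
    for bk, ak in zip(reversed(b[:-1]), reversed(a)):
        tail = bk + ak * samelson_inverse(tail)
    return tail

# Tiny usage example with 3-dimensional vector elements (made-up data).
rng = np.random.default_rng(0)
b = [rng.standard_normal(3) for _ in range(4)]  # b0 .. b3 (vectors)
a = [1.0, 0.5, 0.25]                            # a1 .. a3 (scalars)
print(eval_vector_cfrac(b, a))
```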

    A Comparison of Acceleration Techniques Applied to the SOR Method

    In this paper we investigate the performance of four different SOR acceleration techniques on a variety of linear systems: Dancis's accelerations, Wynn's epsilon algorithm, and Graves-Morris's generalisation of Aitken's delta-squared algorithm. The experimental results show that these accelerations can reduce the amount of work required to obtain a solution, and that their rates of convergence are generally less sensitive to the value of the relaxation parameter than the straightforward SOR method. Necessary conditions for a reduction in the computational work required for convergence are given for each of the accelerations, based on the number of floating-point operations. It is shown experimentally that the reduction in the number of iterations is related to the separation between the two largest eigenvalues of the SOR iteration matrix for a given omega; this separation influences the convergence of all the acceleration techniques above. Another important characteristic of these accelerations is that, even when the number of iterations is not reduced significantly compared to the SOR method, they remain competitive in the number of floating-point operations used, and thus reduce the overall computational workload.
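    As a rough illustration of the kind of comparison reported, the sketch below runs plain SOR on a 1-D Poisson system and extrapolates the last three iterates; the test matrix, the value of omega, and the simple componentwise Aitken Δ² variant are our assumptions, not the paper's setup. Graves-Morris's vector generalisation and Wynn's epsilon algorithm are more sophisticated than this componentwise form.

```python
import numpy as np

def sor_iterates(A, b, omega, x0, n_iter):
    """Yield successive SOR iterates for A x = b with relaxation parameter omega."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        yield x.copy()

def aitken_delta2(s0, s1, s2, eps=1e-30):
    """Componentwise Aitken delta-squared extrapolation of three iterates."""
    d2 = s2 - 2 * s1 + s0
    num = (s2 - s1) ** 2
    corr = np.divide(num, d2, out=np.zeros_like(num), where=np.abs(d2) > eps)
    return s2 - corr

# Hypothetical test problem: 1-D Poisson matrix with a known solution.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_true = np.linspace(0.0, 1.0, n)
b = A @ x_true

hist = list(sor_iterates(A, b, omega=1.5, x0=np.zeros(n), n_iter=60))
acc = aitken_delta2(*hist[-3:])
# The extrapolated error is often smaller when convergence is dominated by a
# single eigenvalue of the SOR iteration matrix, as the abstract discusses.
print("SOR error    :", np.linalg.norm(hist[-1] - x_true))
print("Aitken error :", np.linalg.norm(acc - x_true))
```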

    Matrix iterative analysis and biorthogonality - Preface


    The genesis and early developments of Aitken's process, Shanks' transformation, the ε–algorithm, and related fixed point methods

    In this paper, we trace back the genesis of Aitken's Δ² process and Shanks' sequence transformation. These methods, which are extrapolation methods, are used for accelerating the convergence of sequences of scalars, vectors, matrices, and tensors. They had, and still have, many important applications in numerical analysis and in applied mathematics. They are related to continued fractions and Padé approximants. We go back to the roots of these methods and analyze the original contributions. New and detailed explanations of the construction and properties of Shanks' transformation and its kernel are provided. We then review their historical algebraic and algorithmic developments. We also analyze how they were involved in the solution of systems of linear and nonlinear equations, in particular in the methods of Steffensen, Pulay, and Anderson. Testimonies by various actors in the field are given. The paper can also serve as an introduction to this domain of numerical analysis.
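    For readers new to these methods, a minimal sketch follows: Aitken's Δ² process, t_n = s_n − (Δs_n)²/Δ²s_n, and Wynn's ε-algorithm, ε_{k+1}^{(n)} = ε_{k−1}^{(n+1)} + 1/(ε_k^{(n+1)} − ε_k^{(n)}), whose even columns ε_{2k}^{(n)} give Shanks' transforms e_k(s_n). The test sequence (partial sums of the alternating harmonic series, converging to ln 2) is our choice of example, not taken from the paper.

```python
import math

def aitken(s):
    """One sweep of Aitken's Δ² process: t_n = s_n - (Δs_n)² / Δ²s_n."""
    return [s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

def shanks_by_epsilon(s, k):
    """Shanks transform e_k(s_n) computed with Wynn's ε-algorithm.

    Columns: ε_{-1}^{(n)} = 0, ε_0^{(n)} = s_n, and
    ε_{k+1}^{(n)} = ε_{k-1}^{(n+1)} + 1 / (ε_k^{(n+1)} - ε_k^{(n)});
    the even columns ε_{2k}^{(n)} are the Shanks transforms."""
    eps_prev = [0.0] * (len(s) + 1)  # the ε_{-1} column is identically zero
    eps_curr = list(s)               # the ε_0 column holds the sequence itself
    for _ in range(2 * k):
        eps_next = [eps_prev[n + 1] + 1.0 / (eps_curr[n + 1] - eps_curr[n])
                    for n in range(len(eps_curr) - 1)]
        eps_prev, eps_curr = eps_curr, eps_next
    return eps_curr

# Partial sums of 1 - 1/2 + 1/3 - ... , which converge slowly to ln 2.
s, total = [], 0.0
for j in range(1, 11):
    total += (-1) ** (j + 1) / j
    s.append(total)

print("last partial sum:", s[-1])          # ≈ 0.6456, still far from ln 2
print("Aitken Δ² once  :", aitken(s)[-1])  # noticeably closer
print("Shanks e_3 (ε)  :", shanks_by_epsilon(s, 3)[-1])
print("ln 2            :", math.log(2.0))
```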
