
    Deterministic Versus Randomized Kaczmarz Iterative Projection

    Kaczmarz's alternating projection method has been widely used for solving consistent (mostly over-determined) linear systems of equations Ax=b. Because of its simple iterative nature and light per-iteration cost, the method has been applied successfully in computerized tomography. Since tomography generates a matrix A with highly coherent rows, the randomized Kaczmarz algorithm is expected to converge faster, as it picks a row for each iteration at random according to a certain probability distribution. It was recently shown that picking a row at random, with probability proportional to its norm, makes the iteration converge exponentially in expectation, with a decay constant that depends on the scaled condition number of A and not on the number of equations. Since Kaczmarz's method is a subspace projection method, the convergence rate of the simple Kaczmarz algorithm was developed in terms of subspace angles. This paper provides analyses of the simple and randomized Kaczmarz algorithms and explains the link between them. It also proposes new versions of randomization that may speed up convergence.
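
    As a rough illustration of the sampling rule described above, here is a minimal NumPy sketch of a randomized Kaczmarz iteration; the test system, iteration count, and the choice of sampling proportionally to the squared row norm (as in the Strohmer-Vershynin analysis the abstract alludes to) are assumptions made for illustration, not the paper's implementation.

        import numpy as np

        def randomized_kaczmarz(A, b, iters=5000, seed=0):
            # Pick row i with probability proportional to ||a_i||^2, then
            # project the iterate onto the hyperplane {x : a_i^T x = b_i}.
            rng = np.random.default_rng(seed)
            m, n = A.shape
            row_norms_sq = np.einsum('ij,ij->i', A, A)
            probs = row_norms_sq / row_norms_sq.sum()
            x = np.zeros(n)
            for _ in range(iters):
                i = rng.choice(m, p=probs)
                a_i = A[i]
                x = x + (b[i] - a_i @ x) / row_norms_sq[i] * a_i
            return x

        # Consistent over-determined test system, made up for illustration.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 20))
        x_true = rng.standard_normal(20)
        b = A @ x_true
        print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))  # shrinks as iters grows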

    Scalability Analysis of Parallel GMRES Implementations

    Applications involving large sparse nonsymmetric linear systems encourage parallel implementations of robust iterative solution methods, such as GMRES(k). Two parallel versions of GMRES(k), based on different data distributions and using Householder reflections in the orthogonalization phase, together with variations of these that adapt the restart value k, are analyzed with respect to scalability (their ability to maintain fixed efficiency as problem size and number of processors increase). A theoretical algorithm-machine model for scalability is derived and validated by experiments on three parallel computers, each with different machine characteristics.
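
    For reference, the restart cycle of GMRES(k) can be sketched in a few lines of serial NumPy; the sketch below uses modified Gram-Schmidt in the Arnoldi step instead of the Householder orthogonalization studied in the paper, and it ignores the data distribution and communication costs that the scalability analysis is actually about. All parameter values are illustrative assumptions.

        import numpy as np

        def gmres_restarted(A, b, k=20, restarts=50, tol=1e-8):
            # Restarted GMRES(k): build a k-step Arnoldi basis, solve the
            # small least-squares problem, update x, and restart.
            n = b.size
            x = np.zeros(n)
            for _ in range(restarts):
                r = b - A @ x
                beta = np.linalg.norm(r)
                if beta < tol:
                    break
                V = np.zeros((n, k + 1))
                H = np.zeros((k + 1, k))
                V[:, 0] = r / beta
                j_used = k
                for j in range(k):
                    w = A @ V[:, j]
                    for i in range(j + 1):        # modified Gram-Schmidt
                        H[i, j] = V[:, i] @ w
                        w -= H[i, j] * V[:, i]
                    H[j + 1, j] = np.linalg.norm(w)
                    if H[j + 1, j] < 1e-14:       # lucky breakdown
                        j_used = j + 1
                        break
                    V[:, j + 1] = w / H[j + 1, j]
                # Minimize || beta*e1 - H y || over the Krylov subspace
                e1 = np.zeros(j_used + 1)
                e1[0] = beta
                y, *_ = np.linalg.lstsq(H[:j_used + 1, :j_used], e1, rcond=None)
                x = x + V[:, :j_used] @ y
            return x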

    Computing a partial Schur factorization of nonlinear eigenvalue problems using the infinite Arnoldi method

    The partial Schur factorization can be used to represent several eigenpairs of a matrix in a numerically robust way. Different adaptations of the Arnoldi method are often used to compute partial Schur factorizations. We propose here a technique to compute a partial Schur factorization of a nonlinear eigenvalue problem (NEP). The technique is inspired by the algorithm in [8], now called the infinite Arnoldi method. The infinite Arnoldi method is designed for NEPs and can be interpreted as Arnoldi's method applied to a linear infinite-dimensional operator whose reciprocal eigenvalues are the solutions of the NEP. As a first result, we show that the invariant pairs of the operator are equivalent to invariant pairs of the NEP. We characterize the structure of the invariant pairs of the operator and show how the infinite Arnoldi method can be modified to respect this structure; this also allows us to add the feature known as locking in a natural way. We nest this algorithm within an outer iteration in which the infinite Arnoldi method, applied to a particular type of structured functions, is appropriately restarted. The restart exploits the structure and is inspired by the well-known implicitly restarted Arnoldi method for standard eigenvalue problems. The final algorithm is applied to examples from a benchmark collection, showing that both processing time and memory consumption can be reduced considerably by the restarting technique.
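
    For orientation, the object being computed, a partial Schur factorization, is equivalent to an invariant pair (V, S) satisfying AV = VS with orthonormal V and upper triangular S. The snippet below shows this relation for an ordinary (linear) eigenvalue problem by truncating a full Schur form computed with SciPy; it is not the infinite Arnoldi method, and the random matrix and block size are illustrative assumptions.

        import numpy as np
        from scipy.linalg import schur

        # Partial Schur factorization of an ordinary matrix: A V = V S with
        # orthonormal V and upper triangular S, i.e. (V, S) is an invariant
        # pair representing k eigenvalues of A.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 50))
        T, Q = schur(A, output='complex')       # full Schur form: A = Q T Q^H
        k = 5
        V, S = Q[:, :k], T[:k, :k]              # leading k columns and block
        print(np.linalg.norm(A @ V - V @ S))    # small: invariant-pair residual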

    A numerical comparison of solvers for large-scale, continuous-time algebraic Riccati equations and LQR problems

    In this paper, we discuss numerical methods for solving large-scale continuous-time algebraic Riccati equations. These methods have been the focus of intensive research in recent years, and significant progress has been made in both the theoretical understanding and the efficient implementation of various competing algorithms. This manuscript has several goals: first, to gather in one place an overview of different approaches for solving large-scale Riccati equations and to point to recent advances in each of them; second, to analyze and compare the main computational ingredients of these algorithms and to detect their strong points and potential bottlenecks; and finally, to compare effective implementations of all methods on a set of relevant benchmark examples, giving an indication of their relative performance.
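
    For orientation, the equation in question is the continuous-time algebraic Riccati equation A^T X + X A - X B R^{-1} B^T X + Q = 0, whose stabilizing solution yields the LQR gain K = R^{-1} B^T X. The small dense example below, using SciPy's solver on a made-up system, illustrates the equation itself; dense solvers of this kind are precisely what the large-scale methods surveyed here are meant to replace.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Continuous-time algebraic Riccati equation
        #   A^T X + X A - X B R^{-1} B^T X + Q = 0
        # solved densely for a small random system.
        n, m = 6, 2
        rng = np.random.default_rng(0)
        A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # shifted toward stability
        B = rng.standard_normal((n, m))
        Q = np.eye(n)
        R = np.eye(m)
        X = solve_continuous_are(A, B, Q, R)
        residual = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q
        print(np.linalg.norm(residual))          # should be tiny
        K = np.linalg.solve(R, B.T @ X)          # LQR state-feedback gain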

    Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP

    Better models, more accurate and faster algorithms, and large-scale computing improve cost effectiveness and permit more representative dynamic analyses. The band Lanczos eigensolution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis program to solve efficiently for structural vibration modes, including those of large, complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were reorthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method, sweeping out rigid-body modes as well as previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate, and converged faster, than the single-vector Lanczos method. It was comparable to the subspace iteration method, a block version of the inverse power method; however, the subspace matrix tended to be fully populated, unlike the sparse band matrix produced by the band Lanczos method.
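
    For orientation, the sketch below shows a plain single-vector symmetric Lanczos iteration with full reorthogonalization, the variant the band Lanczos method is compared against; the band method is a block generalization of this idea, and the paper combines local reorthogonalization with a global modified Gram-Schmidt sweep rather than the simple scheme shown here. The matrix and parameters would come from the application and are assumptions here.

        import numpy as np

        def lanczos(A, k=30, seed=0):
            # Single-vector symmetric Lanczos with full reorthogonalization;
            # returns Ritz values (eigenvalue estimates) and the Lanczos basis.
            n = A.shape[0]
            rng = np.random.default_rng(seed)
            Q = np.zeros((n, k))
            alpha = np.zeros(k)
            beta = np.zeros(k)
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            for j in range(k):
                Q[:, j] = q
                w = A @ q
                alpha[j] = q @ w
                w -= alpha[j] * q
                if j > 0:
                    w -= beta[j - 1] * Q[:, j - 1]
                for i in range(j + 1):            # reorthogonalize against all previous vectors
                    w -= (Q[:, i] @ w) * Q[:, i]
                beta[j] = np.linalg.norm(w)
                if beta[j] < 1e-12:               # invariant subspace found
                    Q, alpha, beta = Q[:, :j + 1], alpha[:j + 1], beta[:j + 1]
                    break
                q = w / beta[j]
            # Ritz values come from the small tridiagonal matrix T
            T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
            return np.linalg.eigvalsh(T), Q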