
    Polynomial eigenvalue solver based on tropically scaled Lagrange linearization

    We propose an algorithm to solve polynomial eigenvalue problems via linearization, combining several ingredients: a specific choice of linearization, constructed using input from tropical algebra and the notion of well-separated tropical roots; an appropriate scaling applied to the linearization; and a modified stopping criterion for the QZ iterations that takes advantage of the properties of our scaled linearization. Numerical experiments suggest that our polynomial eigensolver computes all the finite and well-conditioned eigenvalues to high relative accuracy, even when they differ greatly in magnitude.
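    The tropical roots that drive the scaling can be read off a Newton polygon. A minimal sketch of that computation (the function `tropical_roots` and its interface are my own illustration, not the paper's code), assuming nonzero coefficient norms ||A_i||:

```python
import numpy as np

def tropical_roots(coeff_norms):
    """Tropical roots of tp(x) = max_i ||A_i|| x^i, computed from the
    Newton polygon: the upper convex hull of the points (i, log ||A_i||).
    Assumes all coefficient norms are nonzero. Returns (root, multiplicity)
    pairs, smallest root first."""
    logs = np.log(np.asarray(coeff_norms, dtype=float))
    hull = []  # upper hull, built by a monotone-chain sweep
    for i, y in enumerate(logs):
        while len(hull) >= 2:
            (i1, y1), (i2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or below the chord.
            if (y2 - y1) * (i - i1) <= (y - y1) * (i2 - i1):
                hull.pop()
            else:
                break
        hull.append((i, y))
    # Each hull edge from (i, yi) to (j, yj) contributes one tropical
    # root of multiplicity j - i.
    return [(np.exp((yi - yj) / (j - i)), j - i)
            for (i, yi), (j, yj) in zip(hull, hull[1:])]
```

    For the scalar polynomial λ² + 100.01λ + 1, whose roots have magnitudes near 0.01 and 100, the tropical roots 1/100.01 and 100.01 already capture the two scales.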

    Deflating quadratic matrix polynomials with structure preserving transformations

    Given a pair of distinct eigenvalues (λ1, λ2) of an n×n quadratic matrix polynomial Q(λ) with nonsingular leading coefficient and their corresponding eigenvectors, we show how to transform Q(λ) into a quadratic of the block diagonal form diag(Qd(λ), q(λ)) having the same eigenvalues as Q(λ), with Qd(λ) an (n-1)×(n-1) quadratic matrix polynomial and q(λ) a scalar quadratic polynomial with roots λ1 and λ2. This block diagonalization cannot be achieved by a similarity transformation applied directly to Q(λ) unless the eigenvectors corresponding to λ1 and λ2 are parallel. We identify conditions under which we can construct a family of 2n×2n elementary similarity transformations that (a) are rank-two modifications of the identity matrix, (b) act on linearizations of Q(λ), (c) preserve the block structure of a large class of block symmetric linearizations of Q(λ), thereby defining new quadratic matrix polynomials Q1(λ) that have the same eigenvalues as Q(λ), and (d) yield quadratics Q1(λ) with the property that their eigenvectors associated with λ1 and λ2 are parallel and hence can subsequently be deflated by a similarity applied directly to Q1(λ). This is the first attempt at building elementary transformations that preserve the block structure of widely used linearizations and have a specific action.
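    For context, the transformations act on linearizations: eigenvalues of a quadratic Q(λ) = λ²M + λC + K are typically computed from a 2n×2n linear pencil. A minimal sketch using the standard first companion form (the random data and use of SciPy are my own, not from the paper):

```python
import numpy as np
from scipy.linalg import eig, svdvals

rng = np.random.default_rng(0)
n = 3
# Random quadratic Q(lam) = lam^2 M + lam C + K; a Gaussian M is
# nonsingular with probability one, so all 2n eigenvalues are finite.
M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

# First companion linearization: the pencil A - lam B has the same
# eigenvalues as Q.
I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])
lam = eig(A, B, right=False)

# Each computed eigenvalue renders Q(lam) numerically singular.
for l in lam:
    s = svdvals(l**2 * M + l * C + K)
    assert s[-1] < 1e-8 * s[0]
```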

    The role of topology and mechanics in uniaxially growing cell networks

    In biological systems, the growth of cells, tissues, and organs is influenced by mechanical cues. Locally, cell growth leads to a mechanically heterogeneous environment as cells pull and push their neighbors in a cell network. Despite this local heterogeneity, at the tissue level, the cell network is remarkably robust, as it is not easily perturbed by changes in the mechanical environment or the network connectivity. Through a network model, we relate global tissue structure (i.e. the cell network topology) and local growth mechanisms (growth laws) to the overall tissue response. Within this framework, we investigate the two main mechanical growth laws that have been proposed: stress-driven or strain-driven growth. We show that in order to create a robust and stable tissue environment, networks with predominantly series connections are naturally driven by stress-driven growth, whereas networks with predominantly parallel connections are associated with strain-driven growth.
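    As a single-element caricature of such a growth law (my own illustration, not the paper's network model): under stress-driven growth, an element held at fixed length grows its rest length until the elastic stress relaxes away.

```python
# Toy stress-driven growth of one elastic element held at fixed total
# length L (illustrative only; parameter names are my own).
k, alpha = 1.0, 0.5        # stiffness and growth rate
L, g, dt = 2.0, 1.0, 0.01  # imposed length, rest length, time step
for _ in range(5000):
    strain = (L - g) / g          # elastic strain of the element
    stress = k * strain           # linear constitutive law
    g += dt * alpha * g * stress  # stress-driven growth: dg/dt ~ g * stress
# The rest length approaches the imposed length, so the stress decays
# to zero: the element accommodates the deformation by growing.
```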

    Triangularizing Quadratic Matrix Polynomials


    Newton's Method in Floating Point Arithmetic and Iterative Refinement of Generalized Eigenvalue Problems

    We examine the behavior of Newton's method in floating point arithmetic, allowing for extended precision in computation of the residual, inaccurate evaluation of the Jacobian and unstable solution of the linear systems. We bound the limiting accuracy and the smallest norm of the residual. The application that motivates this work is iterative refinement for the generalized eigenvalue problem. We show that iterative refinement by Newton's method can be used to improve the forward and backward errors of computed eigenpairs.
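    A generic sketch of the Newton iteration behind such refinement (the bordered-system formulation and names are my own; the paper's analysis additionally allows an extra-precise residual and inexact solves):

```python
import numpy as np

def refine_eigpair(A, B, x, lam, steps=5):
    """Newton refinement of an approximate eigenpair of the pencil
    (A, B): solve F(x, lam) = [(A - lam B) x; x[k] - 1] = 0, where the
    normalization fixes the largest component of the initial guess.
    A generic sketch in the spirit of the abstract, not the authors'
    exact algorithm."""
    n = A.shape[0]
    k = int(np.argmax(np.abs(x)))
    x = x / x[k]
    for _ in range(steps):
        r = (A - lam * B) @ x
        # Bordered Jacobian: [[A - lam B, -B x], [e_k^T, 0]].
        J = np.block([[A - lam * B, (-B @ x)[:, None]],
                      [np.eye(1, n + 1, k)]])
        d = np.linalg.solve(J, np.concatenate([-r, [0.0]]))
        x, lam = x + d[:n], lam + d[n]
    return x, lam
```

    Each step costs one bordered linear solve; computing the residual r in higher precision is what lets the iteration improve the forward error beyond working precision.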

    Stability of Structured Hamiltonian Eigensolvers

    Various applications give rise to eigenvalue problems for which the matrices are Hamiltonian or skew-Hamiltonian and also symmetric or skew-symmetric. We define structured backward errors that are useful for testing the stability of numerical methods for the solution of these four classes of structured eigenproblems. We introduce the symplectic quasi-QR factorization and show that for three of the classes it enables the structured backward error to be efficiently computed. We also give a detailed rounding error analysis of some recently developed Jacobi-like algorithms of Faßbender, Mackey and Mackey for these eigenproblems. Based on the direct solution of 4×4, and in one case 8×8, structured subproblems, these algorithms produce a complete basis of symplectic orthogonal eigenvectors for the two symmetric cases and a symplectic orthogonal basis for all the real invariant subspaces for the two skew-symmetric cases. We prove that, when the rotations are implemented using suitable formulae, the algorithms are strongly backward stable, and we show that the QR algorithm does not have this desirable property.
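    For reference, the Hamiltonian structure can be verified directly: H is Hamiltonian exactly when JH is symmetric, where J is the canonical skew form, and its spectrum is then symmetric under λ → -λ. A small sketch with random data (my own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# A real Hamiltonian matrix: H = [[F, G], [Q, -F^T]] with G, Q symmetric;
# equivalently, J H is symmetric.
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G += G.T
Q = rng.standard_normal((n, n)); Q += Q.T
H = np.block([[F, G], [Q, -F.T]])
assert np.allclose((J @ H).T, J @ H)

# Hamiltonian spectra pair up as (lam, -lam); a structure-preserving
# eigensolver keeps this pairing exact rather than approximate.
lam = np.linalg.eigvals(H)
for l in lam:
    assert np.min(np.abs(lam + l)) < 1e-8
```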

    Tridiagonal-diagonal reduction of symmetric indefinite pairs

    We consider the reduction of a symmetric indefinite matrix pair (A, B), with B nonsingular, to tridiagonal-diagonal form by congruence transformations. This is an important reduction in solving polynomial eigenvalue problems with symmetric coefficient matrices and in frequency response computations. The pair is first reduced to symmetric-diagonal form. We describe three methods for reducing the symmetric-diagonal pair to tridiagonal-diagonal form. Two of them employ more stable versions of Brebner and Grad’s pseudosymmetric Givens and pseudosymmetric Householder reductions, while the third is new and based on a combination of Householder reflectors and hyperbolic rotations. We prove an optimality condition for the transformations used in the third reduction. We present numerical experiments that compare the different approaches and show improvements over Brebner and Grad’s reductions.
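    In the special case B = I the congruences can be chosen orthogonal and the reduction is classical Householder tridiagonalization; the hyperbolic rotations are needed only when the diagonal B carries signs. A baseline sketch of that plain case (my choice of SciPy's `hessenberg`, not the paper's pseudosymmetric reductions):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)); A = A + A.T  # symmetric, possibly indefinite

# The Hessenberg form of a symmetric matrix is tridiagonal, so the
# orthogonal reduction A = Q T Q^T delivers the tridiagonal T directly.
T, Q = hessenberg(A, calc_q=True)
assert np.allclose(Q @ T @ Q.T, A)
# Everything outside the three central diagonals vanishes (to roundoff).
assert np.allclose(T - np.triu(np.tril(T, 1), -1), 0.0)
```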

    Parallel Implementation of the Yau and Lu Method for Eigenvalue Computation

    In this paper, parallel extensions of a complete symmetric eigensolver, proposed by Yau and Lu in 1993, are presented. First, an overview of this invariant subspace decomposition method for dense symmetric matrices is given, followed by numerical results. Then, work in progress on a distributed-memory implementation is described. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with the Fast Fourier Transform (FFT), should yield a highly parallelizable algorithm. Finally, performance results for the dominant computation kernel on the Intel Paragon are presented.