
    Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations

    Several recent methods used to analyze asymptotic stability of delay-differential equations (DDEs) involve determining the eigenvalues of a matrix, a matrix pencil or a matrix polynomial constructed by Kronecker products. Despite some similarities between the different types of these so-called matrix pencil methods, the general ideas used as well as the proofs differ considerably. Moreover, the available theory hardly reveals the relations between the different methods. In this work, a different derivation of various matrix pencil methods is presented using a unifying framework of a new type of eigenvalue problem: the polynomial two-parameter eigenvalue problem, of which the quadratic two-parameter eigenvalue problem is a special case. This framework makes it possible to establish relations between various seemingly different methods and provides further insight into the theory of matrix pencil methods. We also identify a few new matrix pencil variants to determine DDE stability. Finally, the recognition of these new types of eigenvalue problems opens the door to efficient computation of DDE stability.
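
    To make the Kronecker-product construction concrete, here is a minimal sketch (in Python, with illustrative random data) of the simplest case covered by this framework, a linear two-parameter eigenvalue problem solved via the classical operator determinants; it is not the paper's algorithm for the polynomial case.

        # Linear two-parameter eigenvalue problem
        #   (A1 + lam*B1 + mu*C1) x1 = 0,   (A2 + lam*B2 + mu*C2) x2 = 0
        # solved via the Kronecker-product operator determinants.
        import numpy as np
        from scipy.linalg import eig

        rng = np.random.default_rng(0)
        n1, n2 = 3, 4
        A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
        A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))

        # Operator determinants (matrices of size n1*n2)
        D0 = np.kron(B1, C2) - np.kron(C1, B2)
        D1 = np.kron(C1, A2) - np.kron(A1, C2)
        D2 = np.kron(A1, B2) - np.kron(B1, A2)

        # Eigenvalue pairs (lam, mu) satisfy D1 z = lam D0 z and D2 z = mu D0 z,
        # with decomposable eigenvectors z = kron(x1, x2) in the generic case.
        lam, Z = eig(D1, D0)
        mu = np.array([(z.conj() @ D2 @ z) / (z.conj() @ D0 @ z) for z in Z.T])

        # Sanity check: both pencil matrices should be (nearly) singular
        for l, m in zip(lam[:3], mu[:3]):
            s1 = np.linalg.svd(A1 + l * B1 + m * C1, compute_uv=False)[-1]
            s2 = np.linalg.svd(A2 + l * B2 + m * C2, compute_uv=False)[-1]
            print(f"lambda = {l:.3f}, mu = {m:.3f}, residuals {s1:.1e}, {s2:.1e}")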

    Roots of bivariate polynomial systems via determinantal representations

    We give two determinantal representations for a bivariate polynomial. They may be used to compute the zeros of a system of two of these polynomials via the eigenvalues of a two-parameter eigenvalue problem. The first determinantal representation is suitable for polynomials with scalar or matrix coefficients, and consists of matrices of asymptotic order n^2/4, where n is the degree of the polynomial. The second representation is useful for scalar polynomials and has asymptotic order n^2/6. The resulting method to compute the roots of a system of two bivariate polynomials is competitive with some existing methods for polynomials up to degree 10, as well as for polynomials with a small number of terms.
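
    As a toy illustration of what a determinantal representation is, the bivariate polynomial p(x, y) = x*y - 1 can be written as det(A0 + x*A1 + y*A2) with 2-by-2 matrices; this hand-picked example is only meant to fix the idea and does not follow the paper's constructions of order n^2/4 or n^2/6.

        # Toy determinantal representation of p(x, y) = x*y - 1
        import sympy as sp

        x, y = sp.symbols('x y')
        A0 = sp.Matrix([[0, 1], [1, 0]])
        A1 = sp.Matrix([[1, 0], [0, 0]])
        A2 = sp.Matrix([[0, 0], [0, 1]])
        print(sp.expand((A0 + x * A1 + y * A2).det()))   # -> x*y - 1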

    Fractional regularization matrices for linear discrete ill-posed problems

    The numerical solution of linear discrete ill-posed problems typically requires regularization. Two of the most popular regularization methods are due to Tikhonov and Lavrentiev. These methods require the choice of a regularization matrix. Common choices include the identity matrix and finite difference approximations of a derivative operator. It is the purpose of the present paper to explore the use of fractional powers of the matrices A^T A (for Tikhonov regularization) and A (for Lavrentiev regularization) as regularization matrices, where A is the matrix that defines the linear discrete ill-posed problem. Both small- and large-scale problems are considered.
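
    A small-scale sketch of the Tikhonov variant, under the reading above that the regularization matrix is a fractional power of A^T A; the filter-factor formula follows from the SVD of A, and the matrix, noise level and parameter values below are illustrative assumptions rather than the paper's experiments.

        # Tikhonov regularization  min_x ||A x - b||^2 + mu^2 ||L x||^2
        # with the fractional regularization matrix  L = (A^T A)^(alpha/2),
        # evaluated via the SVD of A (small-scale sketch only).
        import numpy as np

        def fractional_tikhonov(A, b, mu, alpha):
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            # In the SVD basis the penalty contributes mu^2 * s^(2*alpha), so the
            # regularized solution has filter factors s / (s^2 + mu^2 s^(2*alpha)).
            filt = s / (s**2 + mu**2 * s**(2 * alpha))
            return Vt.T @ (filt * (U.T @ b))

        # Illustrative use on a classic ill-conditioned matrix (Hilbert matrix)
        n = 12
        A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
        x_true = np.ones(n)
        b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)
        x_reg = fractional_tikhonov(A, b, mu=1e-3, alpha=0.25)
        print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))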

    A homogeneous Rayleigh quotient with applications in gradient methods

    Given an approximate eigenvector, its (standard) Rayleigh quotient and harmonic Rayleigh quotient are two well-known approximations of the corresponding eigenvalue. We propose a new type of Rayleigh quotient, the homogeneous Rayleigh quotient, and analyze its sensitivity with respect to perturbations in the eigenvector. Furthermore, we study the inverse of this homogeneous Rayleigh quotient as stepsize for the gradient method for unconstrained optimization. The notion and basic properties are also extended to the generalized eigenvalue problem.
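
    The homogeneous Rayleigh quotient itself is the paper's contribution and is not reproduced here; the short check below only illustrates the background fact the abstract builds on, namely that for a quadratic objective the classical Barzilai-Borwein stepsizes are the inverses of the standard and harmonic Rayleigh quotients of the Hessian evaluated at the step vector. All data are illustrative.

        # For f(x) = 0.5 x'Ax - b'x the gradient difference is y = A s, so the
        # Barzilai-Borwein stepsizes are inverse Rayleigh quotients of A at s.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        M = rng.standard_normal((n, n))
        A = M.T @ M + n * np.eye(n)          # SPD Hessian (illustrative)
        s = rng.standard_normal(n)           # difference of iterates
        y = A @ s                            # difference of gradients

        rq = (s @ A @ s) / (s @ s)           # standard Rayleigh quotient
        hrq = (s @ A @ A @ s) / (s @ A @ s)  # harmonic Rayleigh quotient

        alpha_bb1 = (s @ s) / (s @ y)        # BB stepsizes, computed from s, y only
        alpha_bb2 = (s @ y) / (y @ y)
        print(np.isclose(alpha_bb1, 1 / rq), np.isclose(alpha_bb2, 1 / hrq))  # True True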

    Limited memory gradient methods for unconstrained optimization

    The limited memory steepest descent (LMSD) method (Fletcher, 2012) for unconstrained optimization problems stores a few past gradients to compute multiple stepsizes at once. We review this method and propose new variants. For strictly convex quadratic objective functions, we study the numerical behavior of different techniques to compute new stepsizes. In particular, we introduce a method to improve the use of harmonic Ritz values. We also show the existence of a secant condition associated with LMSD, where the approximating Hessian is projected onto a low-dimensional space. In the general nonlinear case, we propose two new alternatives to Fletcher's method: first, the addition of symmetry constraints to the secant condition valid for the quadratic case; second, a perturbation of the last differences between consecutive gradients, to satisfy multiple secant equations simultaneously. We show that Fletcher's method can also be interpreted from this viewpoint.
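
    A simplified sketch of the LMSD sweep structure for a strictly convex quadratic: it keeps a rolling window of recent gradients and uses the inverses of the Ritz values of the Hessian on their span as the next stepsizes. For clarity the projected Hessian Q^T A Q is formed explicitly here, whereas Fletcher's method (and the variants studied in the paper) recover the same Ritz values from the stored gradients alone; all test data below are illustrative.

        import numpy as np
        from collections import deque

        def lmsd(A, b, x0, m=5, max_iter=300, tol=1e-8):
            """LMSD sketch for f(x) = 0.5 x'Ax - b'x with A symmetric positive definite."""
            x = x0.copy()
            g = A @ x - b
            grads = deque([g], maxlen=m)             # rolling window of recent gradients
            it = 0
            while it < max_iter and np.linalg.norm(g) > tol:
                # Ritz values of A on the span of the stored gradients
                Q, _ = np.linalg.qr(np.column_stack(list(grads)))
                theta = np.linalg.eigvalsh(Q.T @ A @ Q)
                for stepsize in 1.0 / theta[::-1]:   # largest Ritz value first
                    x = x - stepsize * g
                    g = A @ x - b
                    grads.append(g)
                    it += 1
                    if it >= max_iter or np.linalg.norm(g) <= tol:
                        break
            return x

        # Example on a random SPD quadratic
        rng = np.random.default_rng(0)
        Q0, _ = np.linalg.qr(rng.standard_normal((100, 100)))
        A = Q0 @ np.diag(np.linspace(1, 100, 100)) @ Q0.T
        b = rng.standard_normal(100)
        x = lmsd(A, b, np.zeros(100))
        print(np.linalg.norm(A @ x - b))             # residual after the LMSD sweeps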

    Block Discrete Empirical Interpolation Methods

    We present two block variants of the discrete empirical interpolation method (DEIM); as a particular application, we consider a CUR factorization. The block DEIM algorithms are based on the rank-revealing QR factorization and the concept of the maximum volume of submatrices. We also present a version of the block DEIM procedures that allows for an adaptive choice of the block size. Experiments demonstrate that the block DEIM algorithms may provide a better low-rank approximation, and may also be computationally more efficient than the standard DEIM procedure.
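
    For reference, the standard (non-block) DEIM index selection that the block variants extend, together with one common way to build a CUR approximation from it, can be sketched as follows; the data and the choice of the middle factor are illustrative and not necessarily the paper's setup.

        # Standard greedy DEIM index selection (the non-block baseline).
        import numpy as np

        def deim_indices(U):
            """Select k row indices from an n-by-k basis U (e.g. singular vectors)."""
            n, k = U.shape
            p = [int(np.argmax(np.abs(U[:, 0])))]
            for j in range(1, k):
                # Interpolate the j-th basis vector on the rows selected so far
                c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
                r = U[:, j] - U[:, :j] @ c           # interpolation residual
                p.append(int(np.argmax(np.abs(r))))
            return np.array(p)

        # CUR sketch: row indices from the left singular vectors,
        # column indices from the right singular vectors.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 150))
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 10
        rows, cols = deim_indices(U[:, :k]), deim_indices(Vt.T[:, :k])
        C, R = A[:, cols], A[rows, :]
        M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # one common middle factor
        print(np.linalg.norm(A - C @ M @ R) / np.linalg.norm(A))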

    RSVD-CUR Decomposition for Matrix Triplets

    We propose a restricted SVD based CUR (RSVD-CUR) decomposition for matrix triplets (A, B, G). Given matrices A, B, and G of compatible dimensions, such a decomposition provides a coordinated low-rank approximation of the three matrices using a subset of their rows and columns. We pick the subset of rows and columns of the original matrices by applying either the discrete empirical interpolation method (DEIM) or the L-DEIM scheme to the orthogonal and nonsingular matrices from the restricted singular value decomposition of the matrix triplet. We investigate the connections between a DEIM type RSVD-CUR approximation and both a DEIM type CUR factorization and a DEIM type generalized CUR decomposition. We provide an error analysis showing that the accuracy of the proposed RSVD-CUR decomposition is within a factor of the approximation error of the restricted singular value decomposition of the given matrices. An RSVD-CUR factorization may be suitable for applications where we are interested in approximating one data matrix relative to two other given matrices. Two applications that we discuss are multi-view/label dimension reduction and data perturbation problems of the form A_E = A + BFG, where BFG is a nonwhite noise matrix. In numerical experiments, we show the advantages of the new method over the standard CUR approximation for these applications.

    A Jacobi-Davidson type method for the product eigenvalue problem

    We propose a Jacobi-Davidson type method to compute selected eigenpairs of the product eigenvalue problem A_m · · · A_1 x = λx, where the matrices may be large and sparse. To avoid difficulties caused by a high condition number of the product matrix, we split up the action of the product matrix and work with several search spaces. We generalize the Jacobi-Davidson correction equation and the harmonic and refined extraction for the product eigenvalue problem. Numerical experiments indicate that the method can be used to compute eigenvalues of product matrices with extremely high condition numbers.
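
    The Jacobi-Davidson method with several search spaces described here is not reproduced below; the sketch only shows the more basic idea of keeping the product A_m · · · A_1 implicit, applying one factor at a time inside a standard Arnoldi iteration (scipy's eigs) instead of forming the possibly very ill-conditioned product matrix. Sizes and data are illustrative.

        # Baseline sketch: eigenvalues of A_m ... A_1 without forming the product.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, eigs

        def product_eigs(factors, k=4):
            """Largest-magnitude eigenvalues of factors[-1] @ ... @ factors[0]."""
            n = factors[0].shape[1]
            def matvec(x):
                for A in factors:              # apply A_1 first, then A_2, ..., A_m
                    x = A @ x
                return x
            op = LinearOperator((n, n), matvec=matvec, dtype=float)
            return eigs(op, k=k, return_eigenvectors=False)

        # Example with three random square factors
        rng = np.random.default_rng(0)
        factors = [rng.standard_normal((50, 50)) for _ in range(3)]
        print(np.sort_complex(product_eigs(factors)))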