
    When is a matrix unitary or Hermitian plus low rank?

    Hermitian and unitary matrices are two representatives of the class of normal matrices, whose full eigenvalue decomposition can be stably computed with quadratic computational complexity once the matrix has been reduced, for instance, to tridiagonal or Hessenberg form. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices have been proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, for example, the monomial or Chebyshev basis. Often, however, it is not known beforehand whether a matrix can be written as the sum of a Hermitian or unitary matrix and a low-rank perturbation. In this paper, we give necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of the perturbation needed to bring a matrix to unitary form. A similar condition holds for Hermitian matrices: the number of eigenvalues of the skew-Hermitian part differing from 0 dictates the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Then, based on these conditions, we identify the closest Hermitian or unitary plus rank-k matrix to a given matrix A, in the Frobenius and spectral norms, and give a formula for their distance from A. Finally, we present a practical iteration to detect the low-rank perturbation. Numerical tests show that this straightforward algorithm is effective.
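The singular-value criterion is easy to illustrate numerically. The sketch below (NumPy, not the paper's detection algorithm) relies only on the facts that a unitary matrix has all singular values equal to 1 and that a rank-k perturbation moves at most 2k of them away from 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# random unitary matrix from the QR factorization of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# perturb it by a generic rank-1 matrix
A = Q + np.outer(rng.standard_normal(n), rng.standard_normal(n))

# all singular values of Q itself equal 1; since A^* A = I + (rank <= 2),
# at most 2 singular values of A can deviate from 1
s = np.linalg.svd(A, compute_uv=False)
deviating = int(np.sum(np.abs(s - 1) > 1e-8))
```

Counting the deviating singular values thus reveals (twice) the minimal rank of a perturbation bringing A back to unitary form, which is the detection idea the abstract describes.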

    Chopping a Chebyshev series

    Chebfun and related software projects for numerical computing with functions are based on the idea that at each step of a computation, a function f(x) defined on an interval [a,b] is "rounded" to a prescribed precision by constructing a Chebyshev series and chopping it at an appropriate point. Designing a chopping algorithm with the right properties proves to be a surprisingly complex and interesting problem. We describe the chopping algorithm introduced in Chebfun Version 5.3 in 2015 after many years of discussion, as well as the considerations that led to this design.

    Block operators and spectral discretizations

    Every student of numerical linear algebra is familiar with block matrices and vectors. The same ideas can be applied to the continuous analogues of operators, functions, and functionals. It is shown here how the explicit consideration of block structures at the continuous level can be a useful tool. In particular, block operator diagrams lead to templates for spectral discretization of differential and integral equation boundary-value problems in one space dimension by the rectangular differentiation, identity, and integration matrices introduced recently by Driscoll and Hale. The templates are so simple that we are able to present them as executable Matlab codes just a few lines long, developing ideas through a sequence of 12 increasingly advanced examples. The notion of the rectangular shape of a linear operator is made mathematically precise by the theory of Fredholm operators and their indices, and the block operator formulations apply to nonlinear problems too. We propose the convention of representing nonlinear blocks as shaded. At each step of a Newton iteration for a nonlinear problem, the structure is linearized and the blocks become unshaded, representing Fréchet derivative operators, square or rectangular.
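The rectangular block pattern can be demonstrated with a simple finite-difference analogue (NumPy; the paper's actual templates use Driscoll–Hale rectangular Chebyshev spectral matrices in Matlab). A second-difference operator acting on all n+1 grid values naturally produces residuals only at the n-1 interior points, so it is rectangular; stacking two boundary-condition rows underneath squares the system up:

```python
import numpy as np

# Solve u'' = f on [-1, 1] with u(-1) = u(1) = 0 via the block system
#   [ D2 ] u = [ f ]
#   [ B  ]     [ 0 ]
n = 200
x = np.linspace(-1.0, 1.0, n + 1)
h = x[1] - x[0]

# rectangular (n-1) x (n+1) second-difference operator
D2 = np.zeros((n - 1, n + 1))
for i in range(n - 1):
    D2[i, i : i + 3] = [1.0, -2.0, 1.0]
D2 /= h**2

# 2 x (n+1) boundary block: evaluation at the two endpoints
B = np.zeros((2, n + 1))
B[0, 0] = 1.0      # u(-1) = 0
B[1, -1] = 1.0     # u(+1) = 0

A = np.vstack([D2, B])   # square (n+1) x (n+1) block operator
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * x[1:-1]), [0.0, 0.0]])
u = np.linalg.solve(A, rhs)

# exact solution is sin(pi x); second-order accuracy expected here
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The count of boundary rows needed to square the system up is exactly the Fredholm index bookkeeping the abstract alludes to.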

    Factoring block Fiedler Companion Matrices

    When Fiedler published his “A note on Companion matrices” in Linear Algebra and its Applications in 2003, he could not have foreseen the significance of this elegant factorization of a companion matrix into essentially two-by-two Gaussian transformations, which we name (scalar) elementary Fiedler factors. Since then, researchers have extended these results and studied the various resulting linearizations, the stability of Fiedler companion matrices, factorizations of block companion matrices, Fiedler pencils, and even extensions to non-monomial bases. In this chapter, we introduce a new way to factor block Fiedler companion matrices into the product of scalar elementary Fiedler factors. We use this theory to prove, for example, that a block (Fiedler) companion matrix can always be written as the product of several scalar (Fiedler) companion matrices. We demonstrate that this factorization in terms of elementary Fiedler factors can be used to construct new linearizations. Some of these linearizations have notable properties, such as low bandwidth, or allow for factoring the coefficient matrices into unitary-plus-low-rank matrices. Moreover, we provide bounds on the low-rank parts of the resulting unitary-plus-low-rank decomposition. To present these results in an easy-to-understand manner, we rely on the flow-graph representation for Fiedler matrices recently proposed by Del Corso and Poloni in Linear Algebra and its Applications, 2017.
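The scalar factorization Fiedler introduced can be sketched in a few lines (NumPy; one common indexing convention, and only the scalar case, not this chapter's block construction). Each elementary factor is an identity matrix with an embedded 2x2 block, and their product in the natural order reproduces the companion matrix:

```python
import numpy as np
from functools import reduce

def fiedler_factors(a):
    """Elementary Fiedler factors for the monic polynomial
    p(x) = x^n + a[n-1] x^(n-1) + ... + a[1] x + a[0].
    Multiplied in the returned order they give the (first) companion matrix."""
    n = len(a)
    factors = []
    for k in range(n - 1, 0, -1):
        M = np.eye(n)
        i = n - 1 - k                       # position of the 2x2 block
        M[i : i + 2, i : i + 2] = [[-a[k], 1.0], [1.0, 0.0]]
        factors.append(M)
    M0 = np.eye(n)
    M0[-1, -1] = -a[0]                      # the remaining scalar factor
    factors.append(M0)
    return factors

a = [5.0, 4.0, 3.0, 2.0]                    # p(x) = x^4 + 2x^3 + 3x^2 + 4x + 5
F = fiedler_factors(a)
Cmat = reduce(np.matmul, F)                 # the companion matrix of p
Crev = reduce(np.matmul, reversed(F))       # reversed order: its transpose
```

Because every factor is symmetric, reordering the product yields different matrices (different Fiedler linearizations) with the same characteristic polynomial; the reversed product above is simply the transpose of the companion matrix.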

    Connecting optimization with spectral analysis of tri-diagonal matrices

    We show that the global minimum (resp. maximum) of a continuous function on a compact set can be approximated from above (resp. from below) by computing the smallest (resp. largest) eigenvalue of a hierarchy of (r × r) tri-diagonal univariate moment matrices of increasing size. Equivalently, it reduces to computing the smallest (resp. largest) root of a certain univariate degree-r orthonormal polynomial. This provides a strong connection between the fields of optimization, orthogonal polynomials, numerical analysis and linear algebra, via asymptotic spectral analysis of tri-diagonal symmetric matrices.
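The flavor of the result can be sketched numerically (NumPy; an illustration of the idea, not the paper's exact construction). Instead of forming the tridiagonal Jacobi matrix explicitly, the sketch uses the equivalent fact that the generalized eigenvalues of the shifted pair of Hankel moment matrices of the pushforward measure of f are the Gauss nodes of that measure, i.e., the roots of its degree-r orthonormal polynomial; all of them lie in [min f, max f], so the smallest bounds the minimum from above:

```python
import numpy as np

def min_upper_bound(f, r, quad_pts=64):
    """Upper bound on min of f over [-1, 1]: smallest generalized eigenvalue
    of the r x r Hankel moment matrices of the pushforward of the uniform
    measure under f (equivalently, the smallest root of the degree-r
    orthonormal polynomial of that measure)."""
    # Gauss-Legendre quadrature gives the moments to near machine precision
    xg, wg = np.polynomial.legendre.leggauss(quad_pts)
    fx = f(xg)
    # m_j = (1/2) * integral over [-1, 1] of f(x)^j dx
    m = np.array([wg @ fx**j for j in range(2 * r)]) / 2.0
    H0 = np.array([[m[i + j] for j in range(r)] for i in range(r)])
    H1 = np.array([[m[i + j + 1] for j in range(r)] for i in range(r)])
    # generalized eigenvalues of (H1, H0) are the Gauss nodes of the measure;
    # they all lie in [min f, max f], and the smallest decreases toward
    # min f as r grows
    return float(np.min(np.real(np.linalg.eigvals(np.linalg.solve(H0, H1)))))

b1 = min_upper_bound(lambda x: x**2, 5)              # true minimum is 0
b2 = min_upper_bound(lambda x: (x - 0.3)**2 + 1, 5)  # true minimum is 1
```

By the interlacing of roots of orthogonal polynomials, increasing r can only move the smallest root downward, which is the monotone hierarchy of bounds described in the abstract.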