19 research outputs found

    Rayleigh-Ritz majorization error bounds of the mixed type

    Full text link
    The absolute change in the Rayleigh quotient (RQ) for a Hermitian matrix with respect to vectors is bounded in terms of the norms of the residual vectors and the angle between vectors in [DOI: 10.1137/120884468]. We substitute multidimensional subspaces for the vectors and derive new bounds on the absolute changes of eigenvalues of the matrix RQ in terms of singular values of residual matrices and principal angles between subspaces, using majorization. We show how our results relate to bounds for eigenvalues after discarding off-diagonal blocks or additive perturbations. Comment: 20 pages, 1 figure. Accepted to SIAM Journal on Matrix Analysis and Applications.
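
    As a rough illustration of the quantities the abstract refers to (an assumed numerical sketch, not the paper's argument), the Python/NumPy snippet below compares the changes in eigenvalues of the matrix RQ for two subspaces against the singular values of a residual matrix and the principal angles between the subspaces; the matrix sizes and the perturbation level are arbitrary illustrative choices.

        import numpy as np
        from scipy.linalg import subspace_angles

        rng = np.random.default_rng(0)
        n, k = 8, 3
        A = rng.standard_normal((n, n))
        A = (A + A.T) / 2                                     # symmetric test matrix (Hermitian case)

        X, _ = np.linalg.qr(rng.standard_normal((n, k)))      # orthonormal basis of one subspace
        Y, _ = np.linalg.qr(X + 0.1 * rng.standard_normal((n, k)))   # basis of a nearby subspace

        rq_eigs = lambda Q: np.linalg.eigvalsh(Q.T @ A @ Q)   # eigenvalues of the matrix RQ
        changes = np.abs(rq_eigs(X) - rq_eigs(Y))             # absolute changes, paired in sorted order

        R = A @ X - X @ (X.T @ A @ X)                         # residual matrix for X
        print("eigenvalue changes:    ", changes)
        print("residual sing. values: ", np.linalg.svd(R, compute_uv=False))
        print("principal angles (rad):", subspace_angles(X, Y))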

    Bounds on changes in Ritz values for a perturbed invariant subspace of a Hermitian matrix

    Full text link
    The Rayleigh-Ritz method is widely used for eigenvalue approximation. Given a matrix $X$ whose columns form an orthonormal basis for a subspace $\mathcal{X}$, and a Hermitian matrix $A$, the eigenvalues of $X^H A X$ are called Ritz values of $A$ with respect to $\mathcal{X}$. If the subspace $\mathcal{X}$ is $A$-invariant, then the Ritz values are some of the eigenvalues of $A$. If the $A$-invariant subspace $\mathcal{X}$ is perturbed to give rise to another subspace $\mathcal{Y}$, then the vector of absolute values of changes in Ritz values of $A$ represents the absolute eigenvalue approximation error using $\mathcal{Y}$. We bound the error in terms of principal angles between $\mathcal{X}$ and $\mathcal{Y}$. We capitalize on ideas from a recent paper [DOI: 10.1137/060649070] by A. Knyazev and M. Argentati, where the vector of absolute values of differences between Ritz values for subspaces $\mathcal{X}$ and $\mathcal{Y}$ was weakly (sub-)majorized by a constant times the sine of the vector of principal angles between $\mathcal{X}$ and $\mathcal{Y}$, the constant being the spread of the spectrum of $A$. In that result no assumption was made on either subspace being $A$-invariant. It was conjectured there that if one of the trial subspaces is $A$-invariant, then an analogous weak majorization bound should involve only terms of the order of the sine squared. Here we confirm this conjecture. Specifically, we prove that the absolute eigenvalue error is weakly majorized by a constant times the sine squared of the vector of principal angles between the subspaces $\mathcal{X}$ and $\mathcal{Y}$, where the constant is proportional to the spread of the spectrum of $A$. For many practical cases we show that the proportionality factor is simply one, and that this bound is sharp. For the general case we can only prove the result with a slightly larger constant, which we believe is artificial. Comment: 12 pages. Accepted to SIAM Journal on Matrix Analysis and Applications (SIMAX).
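
    The squared-sine behaviour is easy to observe numerically. The sketch below (illustrative sizes and perturbations, not taken from the paper) builds an $A$-invariant subspace from exact eigenvectors, perturbs it, and compares the largest Ritz-value error with the spectral spread times the largest squared sine of the principal angles.

        import numpy as np
        from scipy.linalg import subspace_angles

        rng = np.random.default_rng(1)
        n, k = 10, 3
        A = rng.standard_normal((n, n))
        A = (A + A.T) / 2
        evals, evecs = np.linalg.eigh(A)

        X = evecs[:, :k]                                    # exact A-invariant subspace
        ritz_X = np.linalg.eigvalsh(X.T @ A @ X)
        assert np.allclose(ritz_X, evals[:k])               # Ritz values are eigenvalues of A

        spread = evals[-1] - evals[0]                       # spread of the spectrum of A
        for eps in (1e-1, 1e-2, 1e-3):
            Y, _ = np.linalg.qr(X + eps * rng.standard_normal((n, k)))   # perturbed subspace
            err = np.abs(np.linalg.eigvalsh(Y.T @ A @ Y) - ritz_X).max()
            sin2 = np.sin(subspace_angles(X, Y)).max() ** 2
            print(f"eps={eps:.0e}  max Ritz error={err:.2e}  spread*sin^2={spread * sin2:.2e}")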

    Angles between subspaces and their tangents

    Full text link
    Principal angles between subspaces (PABS), also called canonical angles, serve as a classical tool in mathematics, statistics, and applications, e.g., data mining. Traditionally, PABS are introduced via their cosines. The cosines and sines of PABS are commonly defined using the singular value decomposition. We utilize the same idea for the tangents, i.e., we explicitly construct matrices such that their singular values are equal to the tangents of PABS, using several approaches: orthonormal and non-orthonormal bases for subspaces, as well as projectors. Such a construction has applications, e.g., in the analysis of convergence of subspace iterations for eigenvalue problems. Comment: 15 pages, 1 figure, 2 tables. Accepted to Journal of Numerical Mathematics.
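
    One such construction can be illustrated in a few lines, sketched here under the assumption that no principal angle equals pi/2 (so that X^T Y is invertible): the singular values of (X_perp^T Y)(X^T Y)^{-1} are the tangents of the principal angles between the two subspaces.

        import numpy as np
        from scipy.linalg import subspace_angles, null_space

        rng = np.random.default_rng(2)
        n, k = 7, 3
        X, _ = np.linalg.qr(rng.standard_normal((n, k)))
        Y, _ = np.linalg.qr(rng.standard_normal((n, k)))

        X_perp = null_space(X.T)                     # orthonormal basis of the orthogonal complement of X
        T = (X_perp.T @ Y) @ np.linalg.inv(X.T @ Y)  # assumes X.T @ Y is invertible (no angle equals pi/2)

        print("tangents via construction:", np.sort(np.linalg.svd(T, compute_uv=False)))
        print("tan(principal angles):    ", np.sort(np.tan(subspace_angles(X, Y))))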

    Angles Between Infinite Dimensional Subspaces with Applications to the Rayleigh-Ritz and Alternating Projectors Methods

    Get PDF
    We define angles from-to and between infinite dimensional subspaces of a Hilbert space, inspired by the work of E. J. Hannan, 1961/1962, for general canonical correlations of stochastic processes. The spectral theory of selfadjoint operators is used to investigate the properties of the angles, e.g., to establish connections between the angles corresponding to orthogonal complements. The classical gaps and angles of Dixmier and Friedrichs are characterized in terms of the angles. We introduce principal invariant subspaces and prove that they are connected by an isometry that appears in the polar decomposition of the product of corresponding orthogonal projectors. Point angles are defined by analogy with the point operator spectrum. We bound the Hausdorff distance between the sets of the squared cosines of the angles corresponding to the original subspaces and their perturbations. We show that the squared cosines of the angles from one subspace to another can be interpreted as Ritz values in the Rayleigh-Ritz method, where the former subspace serves as a trial subspace and the orthogonal projector of the latter subspace serves as an operator in the Rayleigh-Ritz method. The Hausdorff distance between the Ritz values, corresponding to different trial subspaces, is shown to be bounded by a constant times the gap between the trial subspaces. We prove a similar eigenvalue perturbation bound that involves the gap squared. Finally, we consider the classical alternating projectors method and propose its ultimate acceleration, using the conjugate gradient approach. The corresponding convergence rate estimate is obtained in terms of the angles. We illustrate a possible acceleration for the domain decomposition method with a small overlap for the 1D diffusion equation. Comment: 22 pages. Accepted to Journal of Functional Analysis.
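
    The Rayleigh-Ritz interpretation mentioned above has a direct finite-dimensional analogue, sketched below with arbitrary test subspaces: the Ritz values of the orthogonal projector onto one subspace, computed with the other subspace as the trial subspace, equal the squared cosines of the principal angles.

        import numpy as np
        from scipy.linalg import subspace_angles

        rng = np.random.default_rng(3)
        n, k = 9, 4
        X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # trial subspace basis
        Y, _ = np.linalg.qr(rng.standard_normal((n, k)))
        P_Y = Y @ Y.T                                       # orthogonal projector onto range(Y)

        ritz = np.linalg.eigvalsh(X.T @ P_Y @ X)            # Rayleigh-Ritz with operator P_Y, trial subspace X
        print("Ritz values of P_Y on X:", np.sort(ritz))
        print("cos^2(principal angles):", np.sort(np.cos(subspace_angles(X, Y)) ** 2))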

    Angles, Majorization, Wielandt Inequality and Applications

    Get PDF
    In this thesis we revisit two classical definitions of angle in an inner product space: real-part angle and Hermitian angle. Special attention is paid to Krein’s inequality and its analogue. Some applications are given, leading to a simple proof of a basic lemma for a trace inequality of unitary matrices and also its extension. A brief survey on recent results of angles between subspaces is presented. This naturally brings us to the world of majorization. After introducing the notion of majorization, we present some classical as well as recent results on eigenvalue majorization. Several new norm inequalities are derived by making use of a powerful decomposition lemma for positive semidefinite matrices. We also consider coneigenvalue majorization. Some discussion on the possible generalization of the majorization bounds for Ritz values is presented. We then turn to a basic notion in convex analysis, the Legendre-Fenchel conjugate. The convexity of a function is important in finding the explicit expression of the transform for certain functions. A sufficient convexity condition is given for the product of positive definite quadratic forms. When the number of quadratic forms is two, the condition is also necessary. The condition is in terms of the condition number of the underlying matrices. The key lemma in our derivation is found to have some connection with the generalized Wielandt inequality. A new inequality between angles in inner product spaces is formulated and proved. This leads directly to a concise statement and proof of the generalized Wielandt inequality, including a simple description of all cases of equality. As a consequence, several recent results in matrix analysis and inner product spaces are improved.
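
    For readers unfamiliar with the majorization notion used throughout, the following sketch (not a result from the thesis) checks weak majorization numerically and exercises the classical Lidskii-type statement that the vector of eigenvalue changes of a Hermitian matrix under an additive Hermitian perturbation B is majorized by the eigenvalues of B.

        import numpy as np

        def weakly_majorizes(y, x):
            """True if x is weakly (sub-)majorized by y: partial sums of the
            decreasing rearrangement of x never exceed those of y."""
            xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
            return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-9))

        rng = np.random.default_rng(4)
        n = 6
        A = rng.standard_normal((n, n)); A = (A + A.T) / 2
        B = rng.standard_normal((n, n)); B = (B + B.T) / 2

        change = np.linalg.eigvalsh(A + B) - np.linalg.eigvalsh(A)   # eigenvalue changes, sorted pairing
        print(weakly_majorizes(np.linalg.eigvalsh(B), change))       # True, by Lidskii's theorem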

    Author index to volumes 301–400

    Get PDF

    Computational methods for large-scale inverse problems:a survey on hybrid projection methods

    Get PDF
    This paper surveys an important class of methods that combine iterative projection methods and variational regularization methods for large-scale inverse problems. Iterative methods such as Krylov subspace methods are invaluable in the numerical linear algebra community and have proved important in solving inverse problems due to their inherent regularizing properties and their ability to handle large-scale problems. Variational regularization describes a broad and important class of methods that are used to obtain reliable solutions to inverse problems, whereby one solves a modified problem that incorporates prior knowledge. Hybrid projection methods combine iterative projection methods with variational regularization techniques in a synergistic way, providing researchers with a powerful computational framework for solving very large inverse problems. Although the idea of a hybrid Krylov method for linear inverse problems goes back to the 1980s, several recent advances on new regularization frameworks and methodologies have made this field ripe for extensions, further analyses, and new applications. In this paper, we provide a practical and accessible introduction to hybrid projection methods in the context of solving large (linear) inverse problems.
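
    To make the idea concrete, here is a deliberately simplified sketch of one hybrid scheme of this flavour (an assumed illustration, not a specific method from the survey): project the least-squares problem onto a Krylov subspace via Golub-Kahan bidiagonalization, then apply Tikhonov regularization to the small projected problem. The fixed regularization parameter lam is a placeholder for a parameter that would normally be chosen adaptively, e.g. by the discrepancy principle or GCV, and the test problem is purely illustrative.

        import numpy as np

        def hybrid_gk_tikhonov(A, b, k=10, lam=1e-2):
            """Project with k steps of Golub-Kahan bidiagonalization, then solve the
            Tikhonov-regularized projected problem (no reorthogonalization)."""
            m, n = A.shape
            U = np.zeros((m, k + 1)); V = np.zeros((n, k))
            alpha = np.zeros(k); beta = np.zeros(k + 1)
            beta[0] = np.linalg.norm(b); U[:, 0] = b / beta[0]
            for j in range(k):                                    # Golub-Kahan recurrence
                v = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0)
                alpha[j] = np.linalg.norm(v); V[:, j] = v / alpha[j]
                u = A @ V[:, j] - alpha[j] * U[:, j]
                beta[j + 1] = np.linalg.norm(u); U[:, j + 1] = u / beta[j + 1]
            B = np.zeros((k + 1, k))                              # projected bidiagonal matrix
            for j in range(k):
                B[j, j] = alpha[j]; B[j + 1, j] = beta[j + 1]
            rhs = np.zeros(k + 1); rhs[0] = np.linalg.norm(b)
            # Tikhonov on the projected problem: min ||B y - rhs||^2 + lam^2 ||y||^2
            y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
            return V @ y                                          # regularized solution in the Krylov subspace

        # Tiny illustrative test problem with a smooth true solution and small noise
        n = 50
        A = np.array([[1.0 / (1 + abs(i - j)) ** 2 for j in range(n)] for i in range(n)])
        x_true = np.sin(np.linspace(0, np.pi, n))
        b = A @ x_true + 1e-3 * np.random.default_rng(5).standard_normal(n)
        x = hybrid_gk_tikhonov(A, b, k=15, lam=1e-2)
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))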