46 research outputs found

    Two-sided Grassmann-Rayleigh quotient iteration

    The two-sided Rayleigh quotient iteration proposed by Ostrowski computes a pair of corresponding left-right eigenvectors of a matrix C. We propose a Grassmannian version of this iteration, i.e., its iterates are pairs of p-dimensional subspaces instead of the one-dimensional subspaces of the classical case. The new iteration generically converges locally cubically to the pairs of left-right p-dimensional invariant subspaces of C. Moreover, Grassmannian versions of the Rayleigh quotient iteration are given for the generalized Hermitian eigenproblem, the Hamiltonian eigenproblem and the skew-Hamiltonian eigenproblem. Comment: The text is identical to a manuscript that was submitted for publication on 19 April 200
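    For context, here is a minimal NumPy sketch of the classical (p = 1) two-sided iteration that the paper lifts to the Grassmannian; the test matrix and starting vectors are illustrative assumptions, not taken from the paper.

        import numpy as np

        def two_sided_rqi(C, u, v, iters=20, tol=1e-12):
            # Ostrowski's two-sided Rayleigh quotient iteration: refines a pair
            # (u, v) of left/right eigenvector estimates of C and converges
            # locally cubically to a simple eigentriple for generic C.
            n = C.shape[0]
            I = np.eye(n)
            for _ in range(iters):
                rho = (u.conj() @ C @ v) / (u.conj() @ v)  # two-sided Rayleigh quotient
                try:
                    v = np.linalg.solve(C - rho * I, v)             # right update
                    u = np.linalg.solve((C - rho * I).conj().T, u)  # left update
                except np.linalg.LinAlgError:
                    break  # the shift hit an eigenvalue exactly
                v /= np.linalg.norm(v)
                u /= np.linalg.norm(u)
                if np.linalg.norm(C @ v - rho * v) < tol:
                    break
            return rho, u, v

        rng = np.random.default_rng(0)
        C = rng.standard_normal((6, 6))
        rho, u, v = two_sided_rqi(C, rng.standard_normal(6), rng.standard_normal(6))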

    Lagrange Multipliers and Rayleigh Quotient Iteration in Constrained Type Equations

    We generalize the Rayleigh quotient iteration to a class of functions called vector Lagrangians. The convergence theorem we obtain generalizes classical and nonlinear Rayleigh quotient iterations, as well as iterations for tensor eigenpairs and constrained optimization. In the latter case, our generalized Rayleigh quotient is an estimate of the Lagrange multiplier. We discuss two methods for solving the updating equation associated with the iteration. One method leads to a generalization of the Riemannian Newton method for manifolds embedded in a Euclidean space, while the other leads to a generalization of the classical Rayleigh quotient formula. Applied to tensor eigenpairs, we obtain both an improvement over the state-of-the-art algorithm and a new quadratically convergent algorithm that computes all complex eigenpairs for sizes typical in applications. We also obtain a Rayleigh-Chebyshev iteration with a cubic convergence rate and give a clear criterion for RQI to converge cubically, providing a common framework for existing algorithms.
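    To make the multiplier-as-Rayleigh-quotient idea concrete, here is a sketch in generic notation (the paper's vector-Lagrangian setting is more general). For the constrained problem

        \min_x f(x) \quad \text{s.t.} \quad c(x) = 0, \qquad \nabla f(x) = J_c(x)^{\top} \lambda,

    the least-squares estimate of the multiplier at a trial point x is

        \lambda(x) = \bigl( J_c(x)\, J_c(x)^{\top} \bigr)^{-1} J_c(x)\, \nabla f(x).

    Taking f(x) = x^{\top} A x / 2 and c(x) = (x^{\top} x - 1)/2, so that J_c(x) = x^{\top}, this reduces to the classical Rayleigh quotient \lambda(x) = x^{\top} A x / x^{\top} x.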

    Riemannian preconditioning

    This paper exploits a basic connection between sequential quadratic programming and Riemannian gradient optimization to address the general question of selecting a metric in Riemannian optimization, in particular when the Riemannian structure is sought on a quotient manifold. The proposed method is shown to be particularly insightful and efficient in quadratic optimization with orthogonality and/or rank constraints, which covers most current applications of Riemannian optimization on matrix manifolds. Funding: Belgian Science Policy Office; FNRS (Belgium). This is the author accepted manuscript; the final version is available from the Society for Industrial and Applied Mathematics via http://dx.doi.org/10.1137/14097086
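    The connection can be sketched as follows (notation is illustrative, not the paper's): if H_x approximates the Hessian of the Lagrangian at a feasible point x, the SQP step minimizes \langle \nabla f(x), \eta \rangle + \tfrac12 \eta^{\top} H_x \eta over the tangent space T_x \mathcal{M}, which is exactly a steepest-descent step for the Riemannian metric

        g_x(\eta, \xi) := \eta^{\top} H_x\, \xi, \qquad g_x\bigl(\operatorname{grad} f(x), \eta\bigr) = \mathrm{D} f(x)[\eta] \quad \forall\, \eta \in T_x \mathcal{M}.

    Selecting the metric g, i.e., the preconditioner H_x, thus tailors the geometry to the cost function as well as to the constraint.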

    Rayleigh quotient iteration and simplified Jacobi-Davidson method with preconditioned iterative solves

    We show that, for the non-Hermitian eigenvalue problem, the simplified Jacobi–Davidson method with preconditioned iterative solves is equivalent to inexact Rayleigh quotient iteration in which the preconditioner is altered by a simple rank-one change. This extends existing equivalence results to the case of preconditioned iterative solves. Numerical experiments are shown to agree with the theory.
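    For context, the simplified Jacobi–Davidson correction equation in its standard form (not quoted from the paper): given a unit-norm approximate eigenvector x with Rayleigh quotient \theta = x^{*} A x, one solves, inexactly and with a preconditioner,

        (I - x x^{*})(A - \theta I)(I - x x^{*})\, s = -(A - \theta I)\, x, \qquad s \perp x,

    and updates x \leftarrow (x + s)/\|x + s\|. The equivalence result says that, with preconditioned iterative solves, this step matches an inexact Rayleigh quotient iteration step once the preconditioner is modified by a rank-one change.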

    Glosarium Matematika

    273 p.; 24 cm

    A Riemannian approach to large-scale constrained least-squares with symmetries

    This thesis deals with least-squares optimization on a manifold of equivalence relations, e.g., in the presence of symmetries, which arise frequently in many applications. While least-squares cost functions remain a popular way to model large-scale problems, the additional symmetry constraint should be interpreted as a way to make the modeling robust. Two fundamental examples are the matrix completion problem, a least-squares problem with rank constraints, and the generalized eigenvalue problem, a least-squares problem with orthogonality constraints. The possibly large-scale nature of these problems demands that the problem structure be exploited as much as possible in order to design numerically efficient algorithms. The constrained least-squares problems are tackled in the framework of Riemannian optimization, which has gained much popularity in recent years because orthogonality and rank constraints have particular symmetries. Previous work on Riemannian optimization has mostly focused on the search space, exploiting the differential geometry of the constraint but disregarding the role of the cost function. We, on the other hand, take both the cost and the constraints into account to build a tailored Riemannian geometry. This is achieved by proposing novel Riemannian metrics. To this end, we show a basic connection between sequential quadratic programming and Riemannian gradient optimization and address the general question of selecting a metric in Riemannian optimization. We revisit quadratic optimization problems with orthogonality and rank constraints by generalizing various existing methods, like power, inverse and Rayleigh quotient iterations, and by proposing novel ones that empirically compete with state-of-the-art algorithms. Overall, this thesis deals with exploiting two fundamental structures, least-squares and symmetry, in nonlinear optimization.
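    As a concrete anchor for the methods the thesis generalizes, here is a minimal sketch (symmetric case; the setup is illustrative) of one step each of the power, inverse and Rayleigh quotient iterations:

        import numpy as np

        def power_step(A, x):
            y = A @ x                    # converges linearly to the dominant eigenvector
            return y / np.linalg.norm(y)

        def inverse_step(A, x, sigma):
            # fixed shift sigma: linear convergence, fast near an eigenvalue
            y = np.linalg.solve(A - sigma * np.eye(len(x)), x)
            return y / np.linalg.norm(y)

        def rqi_step(A, x):
            rho = (x @ A @ x) / (x @ x)     # Rayleigh quotient as an adaptive shift
            return inverse_step(A, x, rho)  # cubically convergent for symmetric A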

    Randomized Riemannian Preconditioning for Orthogonality Constrained Problems

    Optimization problems with (generalized) orthogonality constraints are prevalent across science and engineering. For example, in computational science they arise in the symmetric (generalized) eigenvalue problem, in nonlinear eigenvalue problems, and in electronic structure computations, to name a few. In statistics and machine learning, they arise, for example, in canonical correlation analysis and in linear discriminant analysis. In this article, we consider using randomized preconditioning in the context of optimization problems with generalized orthogonality constraints. Our proposed algorithms are based on Riemannian optimization on the generalized Stiefel manifold equipped with a non-standard preconditioned geometry, which necessitates developing the geometric components required by this approach. Furthermore, we perform an asymptotic convergence analysis of the preconditioned algorithms, which helps to characterize the quality of a given preconditioner using second-order information. Finally, for the problems of canonical correlation analysis and linear discriminant analysis, we develop randomized preconditioners along with corresponding bounds on the relevant condition number.
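    As an illustration of the randomized-preconditioning idea, here is a minimal sketch under assumptions of my own (uniform row subsampling as the sketch; the paper's constructions for canonical correlation analysis and linear discriminant analysis, and its condition-number bounds, are more specific):

        import numpy as np

        # Randomized preconditioner for the generalized Stiefel constraint
        # U^T B U = I with B = X^T X / n, as in CCA/LDA-type problems:
        # a row-subsampled proxy of X yields a cheap B_tilde whose Cholesky
        # factor preconditions the Riemannian geometry.
        rng = np.random.default_rng(0)
        n, d, s = 10000, 50, 500
        X = rng.standard_normal((n, d)) * np.logspace(0, 3, d)  # ill-conditioned columns
        B = X.T @ X / n                   # exact Gram matrix (expensive at scale)

        idx = rng.choice(n, size=s, replace=False)
        B_tilde = X[idx].T @ X[idx] / s   # sketched proxy, cost O(s * d**2)

        L = np.linalg.cholesky(B_tilde)   # preconditioner factor
        M = np.linalg.solve(L, np.linalg.solve(L, B.T).T)  # L^{-1} B L^{-T}
        print(np.linalg.cond(B), np.linalg.cond(M))  # preconditioned matrix is far better conditioned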