
    A Canonical Form for Positive Definite Matrices

    We exhibit an explicit, deterministic algorithm for finding a canonical form for a positive definite matrix under unimodular integral transformations. We use characteristic sets of short vectors and partition-backtracking graph software. The algorithm runs in a number of arithmetic operations that is exponential in the dimension n, but it is practical and more efficient than canonical forms based on Minkowski reduction.
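    The sketch below only illustrates the setting, not the paper's algorithm: a positive definite integral matrix A and U^T A U, for an integral U with det(U) = ±1, describe the same lattice in different bases, and a canonical form selects one distinguished representative of that orbit. The brute-force check of two easy orbit invariants (the determinant and the arithmetic minimum of the form), and the specific matrices A and U, are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's canonical-form algorithm):
# A and U^T A U, with U integral and det(U) = +/-1, lie in the same
# unimodular orbit, so any canonical form must assign them the same
# representative.  Here we just verify two easy orbit invariants.
import itertools
import numpy as np

def arithmetic_minimum(A, box=3):
    """Smallest nonzero value of x^T A x over integer x with |x_i| <= box.
    For a large enough box this is the squared length of a shortest lattice
    vector, an invariant of the unimodular orbit."""
    n = A.shape[0]
    best = None
    for x in itertools.product(range(-box, box + 1), repeat=n):
        x = np.array(x)
        if not x.any():
            continue                      # skip the zero vector
        v = int(x @ A @ x)
        best = v if best is None else min(best, v)
    return best

A = np.array([[2, 1], [1, 2]])            # a positive definite integral form
U = np.array([[1, 1], [0, 1]])            # unimodular: integer entries, det = 1
B = U.T @ A @ U                           # same lattice, different basis

assert round(np.linalg.det(U)) in (1, -1)
assert round(np.linalg.det(A)) == round(np.linalg.det(B))   # determinant is invariant
assert arithmetic_minimum(A) == arithmetic_minimum(B)       # so is the arithmetic minimum
print(B, arithmetic_minimum(A))
```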

    Penalized Orthogonal Iteration for Sparse Estimation of Generalized Eigenvalue Problem

    We propose a new algorithm for sparse estimation of eigenvectors in generalized eigenvalue problems (GEP). The GEP arises in a number of modern data-analytic situations and statistical methods, including principal component analysis (PCA), multiclass linear discriminant analysis (LDA), canonical correlation analysis (CCA), sufficient dimension reduction (SDR) and invariant coordinate selection. We propose to modify the standard generalized orthogonal iteration with a sparsity-inducing penalty for the eigenvectors. To achieve this goal, we generalize the equation-solving step of orthogonal iteration to a penalized convex optimization problem. The resulting algorithm, called penalized orthogonal iteration, provides accurate estimation of the true eigenspace when it is sparse. Also proposed is a computationally more efficient alternative, which works well for PCA and LDA problems. Numerical studies reveal that the proposed algorithms are competitive, and that our tuning procedure works well. We demonstrate applications of the proposed algorithm to obtain sparse estimates for PCA, multiclass LDA, CCA and SDR. Supplementary materials are available online.
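    A minimal sketch of the underlying idea, not the authors' exact estimator: generalized orthogonal iteration for A v = λ B v alternates a solve step B W = A V with re-orthonormalization, and the paper replaces the solve with a penalized convex program. The toy code below substitutes crude soft-thresholding for that penalized step; the function name, the penalty level lam, and the spiked-covariance example are assumptions made for illustration.

```python
# Minimal sketch (not the paper's estimator): generalized orthogonal
# iteration with a soft-thresholding step standing in for the paper's
# penalized convex program at the equation-solving stage.
import numpy as np

def penalized_orthogonal_iteration(A, B, d, lam=0.05, n_iter=200, seed=0):
    """Estimate the d leading generalized eigenvectors of A v = lambda B v,
    with a crude sparsity-inducing soft-threshold on each iterate."""
    p = A.shape[0]
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((p, d)))     # random orthonormal start
    for _ in range(n_iter):
        W = np.linalg.solve(B, A @ V)                     # solve step: B W = A V
        W = np.sign(W) * np.maximum(np.abs(W) - lam, 0.0) # sparsity-inducing shrinkage
        V, _ = np.linalg.qr(W)                            # re-orthonormalize
    return V

# Toy example: sparse leading eigenvector of a spiked covariance
# (with B = I the GEP reduces to PCA).
p, d = 20, 1
v_true = np.zeros(p)
v_true[:3] = 1 / np.sqrt(3)                               # sparse signal direction
A = 5 * np.outer(v_true, v_true) + np.eye(p)
B = np.eye(p)
V = penalized_orthogonal_iteration(A, B, d)
print(np.round(V[:, 0], 2))   # mass concentrates on the first three coordinates
```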