
    Penalized Orthogonal Iteration for Sparse Estimation of Generalized Eigenvalue Problem

    We propose a new algorithm for sparse estimation of eigenvectors in generalized eigenvalue problems (GEP). The GEP arises in a number of modern data-analytic situations and statistical methods, including principal component analysis (PCA), multiclass linear discriminant analysis (LDA), canonical correlation analysis (CCA), sufficient dimension reduction (SDR) and invariant coordinate selection. We propose to modify the standard generalized orthogonal iteration with a sparsity-inducing penalty for the eigenvectors. To achieve this goal, we generalize the equation-solving step of orthogonal iteration to a penalized convex optimization problem. The resulting algorithm, called penalized orthogonal iteration, provides accurate estimation of the true eigenspace when it is sparse. Also proposed is a computationally more efficient alternative that works well for PCA and LDA problems. Numerical studies reveal that the proposed algorithms are competitive and that our tuning procedure works well. We demonstrate applications of the proposed algorithm to obtain sparse estimates for PCA, multiclass LDA, CCA and SDR. Supplementary materials are available online.
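
    The iteration alternates a linear-system solve with re-orthonormalization, with sparsity injected into the solve step. Below is a minimal Python sketch of that idea, in which the paper's penalized convex subproblem is replaced, as a simplifying assumption, by a plain solve followed by soft-thresholding (the proximal step for an ℓ1 penalty); the function names and toy problem are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(M, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def penalized_orthogonal_iteration(A, B, k, lam=0.05, n_iter=200, seed=0):
    """Sketch of orthogonal iteration for A v = lambda * B v with a sparsity step.

    The paper solves a penalized convex program at each iteration; here that
    step is approximated by solve-then-soft-threshold (an assumption made
    for brevity, not the authors' subproblem).
    """
    p = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((p, k)))  # random orthonormal start
    for _ in range(n_iter):
        Y = np.linalg.solve(B, A @ X)   # standard generalized-iteration step
        Y = soft_threshold(Y, lam)      # sparsity-inducing proximal step
        X, _ = np.linalg.qr(Y)          # re-orthonormalize the basis
    return X

# Toy check: a pair (A, B) whose leading eigenspace is sparse.
p, k = 30, 2
A = np.diag([5.0, 4.0] + [0.1] * (p - 2))
B = np.eye(p)
V = penalized_orthogonal_iteration(A, B, k)
print(np.round(V[:4], 2))   # mass concentrates on the first two coordinates
```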

    The Graphical Lasso: New Insights and Alternatives

    The graphical lasso (Friedman, Hastie and Tibshirani, 2007) is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ^{-1} (Banerjee et al., 2008; Yuan and Lin, 2007). The R package glasso (Friedman et al., 2007) is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of glasso can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior and propose new algorithms that appear to outperform glasso. By studying the "normal equations" we see that glasso is solving the dual of the graphical lasso penalized likelihood by block coordinate ascent, a result which can also be found in Banerjee et al. (2008). In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms P-GLASSO and DP-GLASSO that also operate by block-coordinate descent, where Θ is the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that DP-GLASSO is superior from several points of view.
    Comment: This is a revised version of our previous manuscript with the same name. ArXiv id: http://arxiv.org/abs/1111.547
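
    For readers who want to try the penalized-likelihood problem discussed here, scikit-learn ships a GraphicalLasso estimator following the same Friedman-Hastie-Tibshirani formulation as the R glasso package. A minimal usage sketch (not a reimplementation of P-GLASSO or DP-GLASSO):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))           # 200 samples, 10 variables
model = GraphicalLasso(alpha=0.1).fit(X)     # alpha is the l1 tuning parameter
Theta = model.precision_                     # estimated sparse precision matrix
Sigma = model.covariance_                    # estimated covariance matrix
print(np.count_nonzero(np.abs(Theta) > 1e-8), "nonzero entries in Theta")
```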

    L0 Sparse Inverse Covariance Estimation

    Recently, there has been a focus on penalized log-likelihood covariance estimation for sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex ℓ1 norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting "norm" is the non-convex ℓ0 penalty, but its lack of convexity has deterred its use in sparse maximum likelihood estimation. In this paper we consider non-convex ℓ0-penalized log-likelihood inverse covariance estimation and present a novel cyclic descent algorithm for its optimization. Convergence to a local minimizer is proved, which is highly non-trivial, and we demonstrate via simulations the reduced bias and superior quality of the ℓ0 penalty as compared to the ℓ1 penalty.
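
    The reduced-bias claim can be seen already at the level of scalar proximal operators: the ℓ1 prox (soft thresholding) shrinks every surviving coefficient, while the ℓ0 prox (hard thresholding) leaves survivors untouched. A small illustration of this contrast (not the paper's cyclic descent algorithm):

```python
import numpy as np

def prox_l1(z, lam):
    """argmin_x 0.5*(x - z)**2 + lam*|x|  ->  soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_l0(z, lam):
    """argmin_x 0.5*(x - z)**2 + lam*1[x != 0]  ->  hard thresholding.

    Keeping x = z costs lam; setting x = 0 costs 0.5*z**2, so z survives
    exactly when |z| > sqrt(2*lam).
    """
    return np.where(np.abs(z) > np.sqrt(2.0 * lam), z, 0.0)

z = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
print(prox_l1(z, 1.0))  # survivors shrunk by 1:  [-2., -0., 0., 0., 3.]
print(prox_l0(z, 1.0))  # survivors kept exactly: [-3., 0., 0., 0., 4.]
```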

    Sparse inverse covariance estimation with the lasso

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm that is remarkably fast: in the worst cases, it solves a 1000-node problem (~500,000 parameters) in about a minute, and is 50 to 2000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
    Comment: submitted.
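
    The inner engine of this approach is a coordinate-descent lasso solve applied to one row/column block of the covariance at a time. A self-contained sketch of such a solver for the generic lasso problem min_b 0.5*||y - Xb||^2 + lam*||b||_1 (illustrative; not the glasso block update itself):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for the lasso with soft-threshold updates."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)           # precompute ||x_j||^2
    r = y - X @ b                           # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]             # remove coordinate j's contribution
            rho = X[:, j] @ r               # correlation with partial residual
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]             # restore with updated coefficient
    return b

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_cd(X, y, lam=5.0), 2))  # recovers the sparse pattern
```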