
    Sparse Inverse Covariance Selection via Alternating Linearization Methods

    Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, solving a convex maximum likelihood problem with an $\ell_1$-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an $\epsilon$-optimal solution in $O(1/\epsilon)$ iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.
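
    To make this concrete, below is a minimal NumPy sketch of an alternating scheme with the two closed-form subproblems the abstract alludes to: an eigendecomposition step for the log-determinant term and a soft-thresholding step for the $\ell_1$ term. It is an ADMM-style illustration in the same spirit, not the authors' exact alternating linearization method; `lam`, `t`, and the fixed iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(A, t):
    """Entrywise soft-thresholding: prox of t*||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def prox_neglogdet(A, S, t):
    """Closed-form prox of t*(-logdet(X) + <S, X>) at A.

    Solves X - t*X^{-1} = A - t*S via one eigendecomposition;
    taking the positive roots keeps X positive definite.
    """
    d, U = np.linalg.eigh(A - t * S)
    x = (d + np.sqrt(d**2 + 4.0 * t)) / 2.0
    return (U * x) @ U.T

def sparse_inv_cov(S, lam=0.1, t=1.0, iters=200):
    """ADMM-style alternation for min -logdet(X) + <S,X> + lam*||X||_1."""
    p = S.shape[0]
    Z = np.eye(p)
    W = np.zeros((p, p))
    for _ in range(iters):
        X = prox_neglogdet(Z - W, S, t)      # eigendecomposition step
        Z = soft_threshold(X + W, lam * t)   # soft-thresholding step
        W += X - Z                           # dual (multiplier) update
    return Z

# toy usage on a sample covariance matrix
rng = np.random.default_rng(0)
S = np.cov(rng.standard_normal((200, 5)), rowvar=False)
print(np.round(sparse_inv_cov(S, lam=0.2), 3))
```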

    L0 Sparse Inverse Covariance Estimation

    Recently, there has been considerable focus on penalized log-likelihood covariance estimation for sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex $l_1$ norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting "norm" is the non-convex $l_0$ penalty, but its lack of convexity has deterred its use in sparse maximum likelihood estimation. In this paper we consider non-convex $l_0$-penalized log-likelihood inverse covariance estimation and present a novel cyclic descent algorithm for its optimization. Convergence to a local minimizer is proved, which is highly non-trivial, and we demonstrate via simulations the reduced bias and superior quality of the $l_0$ penalty as compared to the $l_1$ penalty.
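
    The exact coordinate updates for the precision-matrix log-likelihood are involved, so the sketch below isolates the key ingredient of an $l_0$ cyclic descent, the exact scalar decision between a coordinate's unpenalized minimizer and zero (a hard-thresholding rule), on an $l_0$-penalized least-squares surrogate rather than the authors' objective. The function name and all parameters are illustrative.

```python
import numpy as np

def l0_cyclic_descent(A, b, lam=0.5, sweeps=50):
    """Cyclic descent for 0.5*||Ax - b||^2 + lam*||x||_0.

    The scalar subproblem in coordinate j is solved exactly: the best
    nonzero value is the unpenalized minimizer z, and it beats zero
    iff the smooth decrease 0.5*||A_j||^2*z^2 exceeds the l0 price lam.
    """
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                   # running residual
    col_sq = (A**2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(n):
            r += A[:, j] * x[j]     # drop coordinate j's contribution
            z = A[:, j] @ r / col_sq[j]
            x[j] = z if 0.5 * col_sq[j] * z**2 > lam else 0.0
            r -= A[:, j] * x[j]     # restore with the new value
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.1 * rng.standard_normal(100)
print(np.round(l0_cyclic_descent(A, b), 2))
```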

    Alternating Direction Methods for Latent Variable Gaussian Graphical Model Selection

    Chandrasekaran, Parrilo and Willsky (2010) proposed a convex optimization problem to characterize graphical model selection in the presence of unobserved variables. This convex optimization problem aims to estimate an inverse covariance matrix that can be decomposed into a sparse matrix minus a low-rank matrix from sample data. Solving this convex optimization problem is very challenging, especially for large problems. In this paper, we propose two alternating direction methods for solving it. The first applies the classical alternating direction method of multipliers to the problem reformulated as a consensus problem. The second is a proximal-gradient-based alternating direction method of multipliers. Our methods exploit the special structure of the problem and can therefore solve large instances very efficiently. A global convergence result is established for the proposed methods. Numerical results on both synthetic data and gene expression data show that our methods usually solve problems with one million variables in one to two minutes, and are usually five to thirty-five times faster than a state-of-the-art Newton-CG proximal point algorithm.
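
    As a rough illustration of the first approach, the sketch below runs a three-block alternating direction scheme on the splitting $R = S - L$: an eigendecomposition step for the log-determinant term, soft-thresholding for the sparse block, and a projection onto the positive semidefinite cone for the low-rank block. It mirrors the structure the abstract describes but is not the authors' implementation; `alpha`, `beta`, and `mu` are illustrative.

```python
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def prox_neglogdet(A, Sig, t):
    """prox of t*(-logdet(R) + <Sig, R>) at A, via eigendecomposition."""
    d, U = np.linalg.eigh(A - t * Sig)
    x = (d + np.sqrt(d**2 + 4.0 * t)) / 2.0
    return (U * x) @ U.T

def psd_project(A):
    """Projection onto the positive semidefinite cone."""
    d, U = np.linalg.eigh((A + A.T) / 2.0)
    return (U * np.maximum(d, 0.0)) @ U.T

def latent_ggm(Sig, alpha=0.05, beta=0.1, mu=1.0, iters=300):
    """Alternating direction sketch for
    min -logdet(S-L) + <Sig, S-L> + alpha*||S||_1 + beta*tr(L), L >= 0,
    via the consensus constraint R - S + L = 0 (scaled dual U)."""
    p = Sig.shape[0]
    S = np.eye(p)
    L = np.zeros((p, p))
    U = np.zeros((p, p))
    for _ in range(iters):
        R = prox_neglogdet(S - L - U, Sig, 1.0 / mu)
        S = soft_threshold(R + L + U, alpha / mu)
        L = psd_project(S - R - U - (beta / mu) * np.eye(p))
        U += R - S + L              # dual ascent on the consensus gap
    return S, L
```

    Multi-block schemes of this kind need care: convergence does not follow from classical two-block ADMM theory, which is why the global convergence result mentioned in the abstract matters.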

    PRISMA: PRoximal Iterative SMoothing Algorithm

    Motivated by learning problems including max-norm regularized matrix completion and clustering, robust PCA, and sparse inverse covariance selection, we propose a novel optimization algorithm for minimizing a convex objective that decomposes into three parts: a smooth part, a simple non-smooth Lipschitz part, and a simple non-smooth non-Lipschitz part. We use a time-variant smoothing strategy that yields a guarantee requiring neither advance knowledge of the total number of iterations nor a bound on the domain.
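
    As a minimal sketch of the proximal-iterative-smoothing idea, consider a toy three-part instance (a nonnegative lasso): the smooth part is a least-squares fit, the non-smooth Lipschitz part $\lambda\|x\|_1$ is handled through its Moreau envelope with a smoothing parameter that decays over iterations, and the simple part is the nonnegativity indicator, whose prox is a projection. This omits PRISMA's acceleration and its precise parameter schedule; the problem instance and all parameters are assumptions for illustration.

```python
import numpy as np

def prisma_style(A, b, lam=0.5, beta0=1.0, iters=500):
    """Proximal gradient with time-variant smoothing for
      f(x) + g(x) + h(x),
      f = 0.5*||Ax - b||^2        (smooth),
      g = lam*||x||_1             (Lipschitz; Moreau-smoothed),
      h = indicator{x >= 0}       (simple prox: clipping).
    """
    x = np.zeros(A.shape[1])
    Lf = np.linalg.norm(A, 2)**2            # Lipschitz constant of grad f
    for k in range(iters):
        beta = beta0 / (k + 1.0)            # decaying smoothing parameter
        # gradient of the Moreau envelope of lam*||.||_1 (parameter beta)
        grad_g = np.where(np.abs(x) <= lam * beta,
                          x / beta, lam * np.sign(x))
        grad = A.T @ (A @ x - b) + grad_g
        step = 1.0 / (Lf + 1.0 / beta)      # matches current smoothness
        x = np.maximum(x - step * grad, 0.0)  # prox of h: projection
    return x
```

    Shrinking `beta` trades smoothing bias for curvature: early iterations take long, stable steps, while later ones track the true non-smooth objective more closely.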

    An Extragradient-Based Alternating Direction Method for Convex Minimization

    In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints. Classical alternating direction type methods usually assume that the two convex functions have relatively easy proximal mappings. However, many problems arising from statistics, image processing, and other fields have the structure that one of the two functions has an easy proximal mapping while the other is smooth and convex but does not have an easy proximal mapping; the classical alternating direction methods therefore cannot be applied. To deal with this difficulty, we propose an alternating direction method based on extragradients. Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an $\epsilon$-optimal solution within $O(1/\epsilon)$ iterations. We apply the proposed method to solve a new statistical model called fused logistic regression. Our numerical experiments show that the proposed method performs very well on the test problems. We also test its performance on the lasso problem arising from statistics and compare the results with several existing efficient solvers for this problem; the results are very encouraging.
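
    A hedged sketch of the idea on a lasso splitting: the $\ell_1$ block keeps its easy proximal mapping, while the smooth block, whose exact minimization would require a linear solve, instead takes an extragradient (predictor-corrector) step on the augmented Lagrangian. This follows the abstract's description in spirit only; the step size, penalty, and iteration count are illustrative, and the paper's actual updates and conditions may differ.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def eg_adm_lasso(A, b, lam=0.1, rho=1.0, iters=500):
    """Alternating direction sketch with an extragradient z-step for
    min lam*||x||_1 + 0.5*||Az - b||^2  s.t.  x - z = 0 (scaled dual u)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    tau = 1.0 / (np.linalg.norm(A, 2)**2 + rho)   # safe step size
    grad = lambda w: A.T @ (A @ w - b) - rho * (x - w + u)
    for _ in range(iters):
        x = soft_threshold(z - u, lam / rho)      # easy-prox block
        z_bar = z - tau * grad(z)                 # predictor step
        z = z - tau * grad(z_bar)                 # corrector, evaluated
        u += x - z                                #   at the predictor
    return x
```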

    Positive Definite $\ell_1$-Penalized Estimation of Large Covariance Matrices

    The thresholding covariance estimator has nice asymptotic properties for estimating sparse large covariance matrices, but it often has negative eigenvalues when used in real data analysis. To simultaneously achieve sparsity and positive definiteness, we develop a positive definite $\ell_1$-penalized covariance estimator for estimating sparse large covariance matrices. An efficient alternating direction method is derived to solve the challenging optimization problem, and its convergence properties are established. Under weak regularity conditions, non-asymptotic statistical theory is also established for the proposed estimator. The competitive finite-sample performance of our proposal is demonstrated by both simulation and real applications.
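
    The alternating direction idea here admits a compact sketch: split the estimator into a block carrying the $\ell_1$-penalized least-squares fit (a closed-form soft-thresholding update) and a block carrying the eigenvalue constraint $\Sigma \succeq \epsilon I$ (a projection by eigenvalue clipping). The splitting and the values of `lam`, `eps`, and `rho` below are assumptions for illustration, not the authors' exact formulation (which may, for instance, leave the diagonal unpenalized).

```python
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def psd_floor(A, eps):
    """Projection onto {X : X >= eps*I} via eigenvalue clipping."""
    d, U = np.linalg.eigh((A + A.T) / 2.0)
    return (U * np.maximum(d, eps)) @ U.T

def pd_l1_cov(S, lam=0.1, eps=1e-3, rho=1.0, iters=300):
    """Alternating direction sketch for
    min 0.5*||Sigma - S||_F^2 + lam*||Sigma||_1  s.t.  Sigma >= eps*I,
    via the splitting Sigma = Theta (scaled dual U)."""
    p = S.shape[0]
    Theta = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(iters):
        # closed form: quadratic fit plus l1 penalty => shrunken average
        Sigma = soft_threshold((S + rho * (Theta - U)) / (1.0 + rho),
                               lam / (1.0 + rho))
        Theta = psd_floor(Sigma + U, eps)   # enforce positive definiteness
        U += Sigma - Theta                  # scaled dual update
    return Theta

# toy usage: the output is sparse-ish yet has eigenvalues >= eps
rng = np.random.default_rng(2)
S_hat = np.cov(rng.standard_normal((50, 8)), rowvar=False)
print(np.linalg.eigvalsh(pd_l1_cov(S_hat, lam=0.2)).min())
```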