    Positive Definite $\ell_1$ Penalized Estimation of Large Covariance Matrices

    The thresholding covariance estimator has nice asymptotic properties for estimating sparse large covariance matrices, but it often has negative eigenvalues when used in real data analysis. To simultaneously achieve sparsity and positive definiteness, we develop a positive definite $\ell_1$-penalized covariance estimator for estimating sparse large covariance matrices. An efficient alternating direction method is derived to solve the challenging optimization problem, and its convergence properties are established. Under weak regularity conditions, non-asymptotic statistical theory is also established for the proposed estimator. The competitive finite-sample performance of our proposal is demonstrated by both simulation and real applications. Comment: accepted by JASA, August 201
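
    To make the splitting concrete, here is a minimal sketch in Python, assuming the estimator takes the common form min over Sigma >= eps*I of (1/2)*||Sigma - S||_F^2 + lam*|Sigma|_1 (off-diagonal penalty) solved by a two-block alternating direction method; the function name pd_l1_covariance, the splitting X = Y, and the parameters rho and eps are illustrative, and the paper's exact update rules may differ.

        import numpy as np

        def soft_threshold(A, t):
            # entrywise soft-thresholding, the l1 proximal mapping
            return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

        def pd_l1_covariance(S, lam, eps=1e-4, rho=1.0, n_iter=200):
            # X carries the positive-definiteness constraint, Y the sparsity;
            # U is the scaled dual variable for the coupling constraint X = Y.
            Y = S.copy()
            U = np.zeros_like(S)
            for _ in range(n_iter):
                # X-step: project a weighted average of S and (Y - U) onto
                # {X : lambda_min(X) >= eps} by clamping eigenvalues
                d, V = np.linalg.eigh((S + rho * (Y - U)) / (1.0 + rho))
                X = (V * np.maximum(d, eps)) @ V.T
                # Y-step: soft-threshold the off-diagonal entries only
                Y = soft_threshold(X + U, lam / rho)
                np.fill_diagonal(Y, np.diag(X + U))
                # dual ascent on the constraint X = Y
                U = U + X - Y
            return Y

    Both steps are cheap: the X-step costs one eigendecomposition and the Y-step is entrywise, which is what makes the alternating direction approach attractive here.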

    An Extragradient-Based Alternating Direction Method for Convex Minimization

    In this paper, we consider the problem of minimizing the sum of two convex functions subject to linear linking constraints. Classical alternating direction methods usually assume that both convex functions have relatively easy proximal mappings. However, many problems arising from statistics, image processing, and other fields have the structure that one of the two functions has an easy proximal mapping while the other is smooth and convex but does not; the classical alternating direction methods therefore cannot be applied. To deal with this difficulty, we propose an alternating direction method based on extragradients. Under the assumption that the smooth function has a Lipschitz continuous gradient, we prove that the proposed method returns an $\epsilon$-optimal solution within $O(1/\epsilon)$ iterations. We apply the proposed method to a new statistical model called fused logistic regression. Our numerical experiments show that the proposed method performs very well on the test problems. We also test its performance on the lasso problem arising from statistics and compare the results with several existing efficient solvers for this problem; the results are very encouraging.
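
    To illustrate the extragradient idea on the smooth block, here is a minimal sketch for the special case min f(x) + g(z) subject to x = z, where f is smooth and g has an easy proximal mapping; the names extragradient_adm, rho, and tau are illustrative, and the paper's method handles general linear linking constraints with specific step-size rules that this sketch does not reproduce.

        import numpy as np

        def extragradient_adm(grad_f, prox_g, x0, rho=1.0, tau=0.01, n_iter=500):
            # prox_g(v, t) must return argmin_z g(z) + (1/(2*t))*||z - v||^2
            x = x0.copy()
            z = x0.copy()
            u = np.zeros_like(x0)
            for _ in range(n_iter):
                # extragradient on the x-block of the augmented Lagrangian:
                # a prediction step, then a correction step that reuses the
                # gradient evaluated at the predicted point
                g1 = grad_f(x) + rho * (x - z + u)
                x_bar = x - tau * g1
                g2 = grad_f(x_bar) + rho * (x_bar - z + u)
                x = x - tau * g2
                # z-block: exact proximal mapping of g
                z = prox_g(x + u, 1.0 / rho)
                # scaled dual update for the constraint x = z
                u = u + x - z
            return x

        # usage sketch on the lasso: f(x) = 0.5*||Ax - b||^2, g(z) = lam*||z||_1
        A = np.random.randn(50, 100)
        b = np.random.randn(50)
        lam = 0.1
        grad_f = lambda x: A.T @ (A @ x - b)
        prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad_f
        x_hat = extragradient_adm(grad_f, prox_g, np.zeros(100), tau=1.0 / (L + 1.0))

    The step size tau is kept below the reciprocal of the Lipschitz constant of the x-gradient (here L + rho), the standard condition under which extragradient steps converge.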

    Alternating Direction Methods for Latent Variable Gaussian Graphical Model Selection

    Chandrasekaran, Parrilo and Willsky (2010) proposed a convex optimization problem to characterize graphical model selection in the presence of unobserved variables. This convex optimization problem aims to estimate an inverse covariance matrix that can be decomposed into a sparse matrix minus a low-rank matrix from sample data. Solving this convex optimization problem is very challenging, especially for large problems. In this paper, we propose two alternating direction methods for solving this problem. The first applies the classical alternating direction method of multipliers to the problem recast as a consensus problem; the second is a proximal-gradient-based alternating direction method of multipliers. Our methods exploit the special structure of the problem and thus can solve large problems very efficiently. Global convergence results are established for the proposed methods. Numerical results on both synthetic data and gene expression data show that our methods usually solve problems with one million variables in one to two minutes, and are usually five to thirty-five times faster than a state-of-the-art Newton-CG proximal point algorithm.
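
    A minimal sketch of the three closed-form pieces such a splitting uses, assuming the Chandrasekaran-Parrilo-Willsky objective min -logdet(R) + <SigmaHat, R> + alpha*||S||_1 + beta*tr(L) subject to R = S - L, L >= 0; the function name lvggm_admm, the sequential three-block update order, and the parameter mu are illustrative, and the paper's two methods (consensus ADMM and proximal-gradient ADMM) organize these same pieces differently.

        import numpy as np

        def lvggm_admm(SigmaHat, alpha, beta, mu=1.0, n_iter=200):
            p = SigmaHat.shape[0]
            R = np.eye(p)
            Ssp = np.eye(p)            # sparse component
            L = np.zeros((p, p))       # low-rank component
            U = np.zeros((p, p))       # scaled dual for R = Ssp - L
            for _ in range(n_iter):
                # R-step: -logdet plus a quadratic, closed form via the
                # eigendecomposition of the residual target
                d, V = np.linalg.eigh(Ssp - L - U - mu * SigmaHat)
                r_eig = (d + np.sqrt(d ** 2 + 4.0 * mu)) / 2.0
                R = (V * r_eig) @ V.T
                # S-step: l1 proximal mapping, entrywise soft-thresholding
                T = R + L + U
                Ssp = np.sign(T) * np.maximum(np.abs(T) - alpha * mu, 0.0)
                # L-step: trace penalty under a PSD constraint, solved by
                # shifting eigenvalues down and clipping at zero
                d, V = np.linalg.eigh(Ssp - R - U)
                L = (V * np.maximum(d - beta * mu, 0.0)) @ V.T
                # dual update for the constraint R = Ssp - L
                U = U + R - Ssp + L
            return R, Ssp, L

    Every step is an eigendecomposition or an entrywise operation, which is why such methods scale to the problem sizes the abstract reports.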

    Iteration Complexity Analysis of Multi-Block ADMM for a Family of Convex Minimization without Strong Convexity

    The alternating direction method of multipliers (ADMM) is widely used for solving structured convex optimization problems because of its superior practical performance. On the theoretical side, however, a counterexample in [7] shows that the multi-block ADMM for minimizing the sum of $N$ ($N \geq 3$) convex functions with $N$ block variables linked by linear constraints may diverge. It is therefore of great interest to investigate further sufficient conditions that guarantee convergence of the multi-block ADMM. Existing results typically require strong convexity of parts of the objective. In this paper, we present convergence and convergence rate results for the multi-block ADMM applied to certain $N$-block ($N \geq 3$) convex minimization problems without requiring strong convexity. Specifically, we prove the following two results: (1) the multi-block ADMM returns an $\epsilon$-optimal solution within $O(1/\epsilon^2)$ iterations by solving an associated perturbation of the original problem; (2) the multi-block ADMM returns an $\epsilon$-optimal solution within $O(1/\epsilon)$ iterations when applied to a certain sharing problem, under the condition that the augmented Lagrangian function satisfies the Kurdyka-Łojasiewicz property, which covers most convex optimization models except for some pathological cases. Comment: arXiv admin note: text overlap with arXiv:1408.426
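
    For reference, here is a minimal sketch of the N-block update pattern the analysis concerns, for min sum_i f_i(x_i) subject to sum_i A_i x_i = b; the helper name multiblock_admm and the caller-supplied block solvers are illustrative. As the abstract notes, this sequential scheme can diverge for N >= 3 without extra conditions such as those established in the paper.

        import numpy as np

        def multiblock_admm(block_solvers, A_list, b, x0_list, rho=1.0, n_iter=300):
            # block_solvers[i](v) must return
            #     argmin_{x_i} f_i(x_i) + (rho/2)*||A_i x_i - v||^2
            xs = [x.copy() for x in x0_list]
            lam = np.zeros_like(b)
            for _ in range(n_iter):
                for i, (solve_i, Ai) in enumerate(zip(block_solvers, A_list)):
                    # residual target for block i: what A_i x_i should match,
                    # excluding block i's own current contribution
                    r = b - sum(Aj @ xj
                                for j, (Aj, xj) in enumerate(zip(A_list, xs))
                                if j != i)
                    xs[i] = solve_i(r - lam / rho)
                # multiplier update on the full linking constraint
                lam = lam + rho * (sum(Ai @ xi for Ai, xi in zip(A_list, xs)) - b)
            return xs

    The sharing problem in result (2) is commonly of the form min sum_i f_i(x_i) + g(sum_i x_i), where the blocks are coupled only through their sum, which is what permits the stronger $O(1/\epsilon)$ rate.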

    Sparse Inverse Covariance Selection via Alternating Linearization Methods

    Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum likelihood problem with an $\ell_1$-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an $\epsilon$-optimal solution in $O(1/\epsilon)$ iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.
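
    To show what the closed-form subproblems look like, here is a minimal sketch of a closely related splitting for min -logdet(X) + <S, X> + rho_reg*||X||_1, in which one subproblem is solved by an eigendecomposition and the other by soft-thresholding; the name sparse_inv_cov_admm and the parameter mu are illustrative, and this is a standard alternating-direction variant rather than the paper's exact alternating linearization scheme.

        import numpy as np

        def sparse_inv_cov_admm(S, rho_reg, mu=1.0, n_iter=200):
            p = S.shape[0]
            X = np.eye(p)
            Y = np.eye(p)
            U = np.zeros((p, p))
            for _ in range(n_iter):
                # X-subproblem: -logdet(X) + <S, X> plus a quadratic; the
                # stationarity condition X - mu*inv(X) = K is solved exactly
                # in the eigenbasis of K = Y - U - mu*S
                d, V = np.linalg.eigh(Y - U - mu * S)
                x_eig = (d + np.sqrt(d ** 2 + 4.0 * mu)) / 2.0
                X = (V * x_eig) @ V.T
                # Y-subproblem: l1 term, closed-form soft-thresholding
                T = X + U
                Y = np.sign(T) * np.maximum(np.abs(T) - mu * rho_reg, 0.0)
                # scaled dual update for the constraint X = Y
                U = U + X - Y
            return Y

    The per-iteration cost is dominated by one eigendecomposition, matching the closed-form-subproblem structure the abstract highlights.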