    Scalable sparse covariance estimation via self-concordance

    We consider the class of convex minimization problems composed of a self-concordant function, such as the log-det metric, a convex data-fidelity term h(·), and a regularizing -- possibly non-smooth -- function g(·). Problems of this type have recently attracted a great deal of interest, mainly owing to their ubiquity in prominent applications. Under this locally Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes, allowing for possibly inexact evaluations. We prove attractive convergence-rate guarantees and enhance state-of-the-art optimization schemes to accommodate these developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in recovery efficiency and in complexity.

    Comment: 7 pages, 1 figure, Accepted at AAAI-1
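The composite objective described above -- a smooth self-concordant part plus a non-smooth regularizer -- can be made concrete with a toy sparse precision estimation problem, min_X −logdet(X) + tr(SX) + λ‖X‖₁. The sketch below uses a plain proximal-gradient loop as a first-order stand-in (the paper itself uses proximal Newton schemes with inexact evaluations); the function names, step size, and regularization weight are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(X, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_grad_sparse_precision(S, lam=0.1, step=0.1, iters=200):
    """Toy proximal-gradient loop (NOT the paper's proximal Newton method) for
       min_X  -logdet(X) + tr(S X) + lam * ||X||_1,  X symmetric positive definite.
       Gradient of the smooth part is S - X^{-1}."""
    p = S.shape[0]
    X = np.eye(p)
    for _ in range(iters):
        grad = S - np.linalg.inv(X)
        X = soft_threshold(X - step * grad, step * lam)
        X = (X + X.T) / 2  # re-symmetrize against round-off drift
    return X

# Small demo on a synthetic sample covariance.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
S = np.cov(A, rowvar=False) + 0.1 * np.eye(5)  # ridge keeps S well-conditioned
X = prox_grad_sparse_precision(S)
```

A fixed step size is used purely for brevity; a practical solver would use a line search (or, as in the paper, second-order proximal Newton steps) to guarantee the iterates stay positive definite.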

    Regularized Principal Component Analysis for Spatial Data

    In many atmospheric and earth sciences, it is of interest to identify dominant spatial patterns of variation based on data observed at p locations and n time points, with the possibility that p > n. While principal component analysis (PCA) is commonly applied to find the dominant patterns, the eigenimages produced by PCA may exhibit patterns that are too noisy to be physically meaningful when p is large relative to n. To obtain more precise estimates of eigenimages, we propose a regularization approach incorporating smoothness and sparseness of eigenimages while accounting for their orthogonality. Our method allows data taken at irregularly spaced or sparse locations. In addition, the resulting optimization problem can be solved using the alternating direction method of multipliers, which is easy to implement and applicable to large spatial datasets. Furthermore, the estimated eigenfunctions provide a natural basis for representing the underlying spatial process in a spatial random-effects model, from which spatial covariance function estimation and spatial prediction can be performed efficiently using a regularized fixed-rank kriging method. Finally, the effectiveness of the proposed method is demonstrated by several numerical examples.
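The alternating direction method of multipliers (ADMM) invoked in the abstract splits a composite objective into subproblems that are each easy to solve. As a minimal, self-contained illustration of that splitting idea (a lasso problem, not the authors' regularized-PCA formulation; all parameter values are illustrative assumptions):

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """Minimal ADMM sketch for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The smooth fit term and the non-smooth penalty are split into
    two subproblems, each with a cheap closed-form update."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(iters):
        # x-update: ridge-like linear solve via the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: elementwise soft-thresholding (closed form)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # dual (scaled multiplier) update
        u = u + x - z
    return z

# Toy demo: recover a sparse coefficient vector from noiseless data.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[[0, 5, 10]] = [2.0, -3.0, 1.5]
b = A @ x_true
z = admm_lasso(A, b)
```

The same pattern -- one subproblem handled by a linear solve, the other by a closed-form proximal step -- is what makes ADMM attractive for the large spatial problems the abstract targets.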

    Sparse Inverse Covariance Selection via Alternating Linearization Methods

    Gaussian graphical models are of great interest in statistical learning. Because conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum-likelihood problem with an ℓ1-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an ϵ-optimal solution in O(1/ϵ) iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.
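The key modeling fact in this abstract -- zeros in the inverse covariance matrix encode conditional independencies -- can be checked numerically on a toy example (not from the paper). For a chain graph 1 - 2 - 3, nodes 1 and 3 are conditionally independent given node 2, so the precision matrix has a zero in position (1, 3) even though the marginal covariance is dense:

```python
import numpy as np

# Precision (inverse covariance) matrix for the chain graph 1 - 2 - 3.
# Theta[0, 2] == 0 encodes: X1 and X3 are conditionally independent given X2.
Theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

# The marginal covariance is generally dense: X1 and X3 are still
# marginally correlated (through X2), which is why one estimates a
# sparse *inverse* covariance rather than a sparse covariance.
Sigma = np.linalg.inv(Theta)
```

This is exactly why the graph structure is read off the estimated precision matrix, and why the ℓ1 penalty is placed on its entries.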