8 research outputs found

    On the Optimality of Averaging in Distributed Statistical Learning

    Full text link
    A common approach to statistical learning with big data is to randomly split it among m machines and learn the parameter of interest by averaging the m individual estimates. In this paper, focusing on empirical risk minimization, or equivalently M-estimation, we study the statistical error incurred by this strategy. We consider two large-sample settings: first, a classical setting where the number of parameters p is fixed and the number of samples per machine n \to \infty; second, a high-dimensional regime where both p, n \to \infty with p/n \to \kappa \in (0, 1). For both regimes and under suitable assumptions, we present asymptotically exact expressions for this estimation error. In the fixed-p setting, under suitable assumptions, we prove that to leading order averaging is as accurate as the centralized solution. We also derive the second-order error terms and show that these can be non-negligible, notably for non-linear models. The high-dimensional setting, in contrast, exhibits a qualitatively different behavior: data splitting incurs a first-order accuracy loss, which to leading order increases linearly with the number of machines. The dependence of our error approximations on the number of machines traces an interesting accuracy-complexity tradeoff, allowing the practitioner an informed choice on the number of machines to deploy. Finally, we confirm our theoretical analysis with several simulations.
    Comment: Major changes from previous version, particularly on the second-order error approximation and its implications.
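
    To make the strategy concrete, here is a minimal sketch of the split-and-average approach to M-estimation, using ordinary least squares as the empirical-risk-minimization example; the simulated data, the sizes m, n, p, and the noise model are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch of split-and-average M-estimation, with ordinary least
    # squares as the example. All data and sizes below are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, p = 10, 500, 5               # machines, samples per machine, parameters
    theta_true = rng.normal(size=p)

    # Simulate the full dataset, then split it evenly among the m machines.
    X = rng.normal(size=(m * n, p))
    y = X @ theta_true + rng.normal(size=m * n)

    # Each machine solves its local least-squares problem independently.
    local_estimates = []
    for i in range(m):
        Xi, yi = X[i * n:(i + 1) * n], y[i * n:(i + 1) * n]
        theta_i, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
        local_estimates.append(theta_i)

    # The distributed estimate is the plain average of the local solutions.
    theta_avg = np.mean(local_estimates, axis=0)

    # Centralized solution on the pooled data, for comparison.
    theta_central, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("averaged error:   ", np.linalg.norm(theta_avg - theta_true))
    print("centralized error:", np.linalg.norm(theta_central - theta_true))
    ```

    In this fixed-p, large-n regime the two printed errors should be of comparable size, matching the paper's leading-order claim; the high-dimensional accuracy loss only appears when p grows with n.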

    Distributed Principal Component Analysis on Networks via Directed Graphical Models

    No full text
    We introduce an efficient algorithm for performing distributed principal component analysis (PCA) on directed Gaussian graphical models. By exploiting structured sparsity in the Cholesky factor of the inverse covariance (concentration) matrix, our proposed DDPCA algorithm computes a global principal subspace estimate through local computation and message passing. We show significant performance and computation/communication advantages of DDPCA for online principal subspace estimation and distributed anomaly detection in real-world computer networks.
    Index Terms: graphical models, principal component analysis, anomaly detection, distributed PCA, subspace tracking.
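
    The abstract does not spell out the algorithm, so the following is only a generic distributed-PCA sketch, in which each node ships its local scatter matrix to an aggregator for a single eigendecomposition. It is not the paper's DDPCA method, which instead exploits sparsity in the Cholesky factor of the concentration matrix; all names, sizes, and the data model are assumptions.

    ```python
    # Generic distributed-PCA pattern: every node computes a local scatter
    # matrix and sends it as its "message" to an aggregator, which forms the
    # global covariance and runs one eigendecomposition. NOT the paper's
    # DDPCA algorithm; everything below is illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    num_nodes, n_local, p, k = 4, 200, 8, 2   # nodes, samples/node, dim, rank

    # Shared low-dimensional structure plus noise, sampled per node.
    W = rng.normal(size=(p, k))
    datasets = [rng.normal(size=(n_local, k)) @ W.T
                + 0.1 * rng.normal(size=(n_local, p))
                for _ in range(num_nodes)]

    # Each node's message: its local scatter matrix (data are zero-mean here).
    scatters = [D.T @ D for D in datasets]
    total_n = num_nodes * n_local

    # Aggregate into the global covariance; the top-k eigenvectors give the
    # global principal subspace estimate.
    cov = sum(scatters) / total_n
    eigvals, eigvecs = np.linalg.eigh(cov)
    subspace = eigvecs[:, -k:]                # columns span the subspace

    print("top-k eigenvalues:", eigvals[-k:])
    ```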

    Distributed Learning, Prediction and Detection in Probabilistic Graphs

    Full text link
    Critical to high-dimensional statistical estimation is exploiting the structure in the data distribution. Probabilistic graphical models provide an efficient framework for representing complex joint distributions of random variables through their conditional dependency graph, and can be adapted to many high-dimensional machine learning applications. This dissertation develops probabilistic graphical modeling techniques for three statistical estimation problems arising in real-world applications: distributed and parallel learning in networks, missing-value prediction in recommender systems, and emerging topic detection in text corpora. The common theme behind all proposed methods is a combination of parsimonious representation of uncertainties in the data, optimization surrogates that lead to computationally efficient algorithms, and fundamental limits of estimation performance in high dimension. More specifically, the dissertation makes the following theoretical contributions: (1) We propose a distributed and parallel framework for learning the parameters in Gaussian graphical models that is free of iterative global message passing. The proposed distributed estimator is shown to be asymptotically consistent, to improve with increasing local neighborhood sizes, and to have a high-dimensional error rate comparable to that of the centralized maximum likelihood estimator. (2) We present a family of latent variable Gaussian graphical models whose marginal precision matrix has a "low-rank plus sparse" structure. Under mild conditions, we analyze the high-dimensional parameter error bounds for learning this family of models using regularized maximum likelihood estimation. (3) We consider a hypothesis testing framework for detecting emerging topics in topic models, and propose a novel surrogate test statistic for the standard likelihood ratio. By leveraging the theory of empirical processes, we prove asymptotic consistency for the proposed test and provide guarantees of the detection performance.
    PhD, Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/110499/1/mengzs_1.pd
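
    As one concrete instance of local, message-passing-free estimation in Gaussian graphical models, the sketch below runs a neighborhood lasso regression at each variable's node (Meinshausen-Buhlmann style): edges are recovered from nonzero local regression coefficients. This is a standard stand-in for illustration, not the dissertation's estimator; the chain-graph precision matrix, sizes, and penalty level are assumptions.

    ```python
    # Message-passing-free local estimation for a Gaussian graphical model via
    # per-node neighborhood lasso regressions. Not the dissertation's exact
    # estimator; graph, sizes, and penalty below are illustrative.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n, p = 1000, 6

    # Sparse chain-graph precision matrix; its inverse is the covariance.
    Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    Sigma = np.linalg.inv(Omega)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

    # Node j regresses X_j on all other variables using only a local problem;
    # nonzero lasso coefficients mark edges of the conditional dependency graph.
    edges = set()
    for j in range(p):
        others = [i for i in range(p) if i != j]
        fit = Lasso(alpha=0.05).fit(X[:, others], X[:, j])
        edges.update(tuple(sorted((i, j)))
                     for coef, i in zip(fit.coef_, others) if abs(coef) > 1e-3)

    print("recovered edges:", sorted(edges))
    ```

    On this chain graph the recovered edge set should be the consecutive pairs (0,1), (1,2), ..., (4,5), since each variable is conditionally dependent only on its chain neighbors.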