
    The dynamics of message passing on dense graphs, with applications to compressed sensing

    Approximate message passing algorithms have proved extremely effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics are accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper we provide the first rigorous foundation for state evolution. We prove that it indeed holds asymptotically in the large-system limit for sensing matrices with independent and identically distributed Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution plays for sparse graphs. The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with the large number of short loops in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory. (Comment: 41 pages)
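    As a concrete illustration of the algorithm class the abstract describes, here is a minimal sketch of AMP for compressed sensing with a soft-thresholding denoiser, together with one Monte Carlo step of the state-evolution recursion. The function names, the fixed threshold theta, and the choice of denoiser are illustrative assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def soft_threshold(u, t):
    """Soft-thresholding denoiser eta(u; t) = sign(u) * max(|u| - t, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, theta, n_iter=30):
    """AMP sketch for y = A @ x0 + noise, with i.i.d. Gaussian A.

    Iteration:
        x_{t+1} = eta(x_t + A^T z_t; theta)
        z_{t+1} = y - A x_{t+1} + (nnz(x_{t+1}) / m) * z_t

    The trailing term in the residual update is the Onsager correction;
    it is what keeps the effective noise Gaussian, so that state
    evolution tracks the iterates in the large-system limit.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ z, theta)
        # For soft thresholding <eta'> = nnz(x)/n, and 1/delta = n/m,
        # so the Onsager coefficient simplifies to nnz(x)/m.
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x

def state_evolution_step(tau2, delta, sigma2, theta, x0_samples, rng):
    """One step of the one-dimensional state-evolution recursion:
        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X0 + tau_t Z; theta) - X0)^2],
    with the expectation estimated by Monte Carlo over the signal prior."""
    x0 = rng.choice(x0_samples, size=100_000)
    zg = rng.standard_normal(x0.size)
    mse = np.mean((soft_threshold(x0 + np.sqrt(tau2) * zg, theta) - x0) ** 2)
    return sigma2 + mse / delta
```

    Iterating state_evolution_step from tau_0^2 gives the scalar sequence that, per the paper's main theorem, tracks the per-iteration mean-squared error of the matrix iteration above.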

    On Low-rank Trace Regression under General Sampling Distribution

    A growing number of modern statistical learning problems involve estimating a large number of parameters from a (smaller) number of noisy observations. In a subset of these problems (matrix completion, matrix compressed sensing, and multi-task learning) the unknown parameters form a high-dimensional matrix B*, and two popular approaches to estimation are convex relaxation via rank-penalized regression and non-convex optimization. It is also known that these estimators satisfy near-optimal error bounds under assumptions on the rank, coherence, or spikiness of the unknown matrix. In this paper, we introduce a unifying technique for analyzing all of these problems via both estimators, leading to short proofs of the existing results as well as new results. Specifically, we first introduce a general notion of spikiness for B*, consider a general family of estimators, and prove non-asymptotic bounds on their estimation error. Our approach relies on a generic recipe for proving restricted strong convexity for the sampling operator of the trace regression. Second, and most notably, we prove similar error bounds when the regularization parameter is chosen via K-fold cross-validation. This result is significant in that existing theory on cross-validated estimators does not apply to our setting, since our estimators are not known to satisfy the required notion of stability. Third, we study applications of our general results to four subproblems: (1) matrix completion, (2) multi-task learning, (3) compressed sensing with Gaussian ensembles, and (4) compressed sensing with factored measurements. For (1), (3), and (4) we recover error bounds matching those in the literature, and for (2) we obtain (to the best of our knowledge) the first such error bound. We also demonstrate how our framework applies to the exact recovery problem in (3) and (4). (Comment: 32 pages, 1 figure)
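    For illustration, a minimal sketch of the convex-relaxation estimator the abstract refers to: nuclear-norm penalized trace regression solved by proximal gradient descent, whose prox step is singular value thresholding. The function names, the fixed step size, and the stopping rule are assumptions for the sketch; the paper's distinctive contribution is analyzing the case where the regularization parameter lam is chosen by K-fold cross-validation, which the sketch simply takes as given.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_regression(X, y, lam, step=1.0, n_iter=200):
    """Proximal gradient (ISTA) for the nuclear-norm penalized estimator
        min_B  (1/(2N)) * sum_i (y_i - <X_i, B>)^2 + lam * ||B||_*,
    where X has shape (N, d1, d2), stacking the measurement matrices X_i.
    """
    N, d1, d2 = X.shape
    B = np.zeros((d1, d2))
    for _ in range(n_iter):
        resid = np.einsum('nij,ij->n', X, B) - y   # <X_i, B> - y_i
        grad = np.einsum('n,nij->ij', resid, X) / N
        # step should not exceed 1/L, where L is the Lipschitz constant of
        # the gradient (the squared operator norm of the sampling operator).
        B = svt(B - step * grad, step * lam)
    return B
```

    Matrix completion fits this template by taking each X_i to be a single-entry indicator matrix, and compressed sensing with Gaussian ensembles by taking X_i with i.i.d. Gaussian entries, which is how the paper's general bounds specialize to its four subproblems.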