
    Dynamical Functional Theory for Compressed Sensing

    We introduce a theoretical approach for designing generalizations of the approximate message passing (AMP) algorithm for compressed sensing that are valid for large observation matrices drawn from an invariant random matrix ensemble. By design, the fixed points of the algorithm obey the Thouless-Anderson-Palmer (TAP) equations corresponding to the ensemble. Using a dynamical functional approach, we are able to derive an effective stochastic process for the marginal statistics of a single component of the dynamics. This allows us to design memory terms in the algorithm in such a way that the resulting fields become Gaussian random variables, allowing for an explicit analysis. The asymptotic statistics of these fields are consistent with the replica ansatz of the compressed sensing problem. Comment: 5 pages, accepted for ISIT 201
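    For orientation, the following is a minimal sketch of the baseline AMP iteration for compressed sensing with an i.i.d. Gaussian sensing matrix, including the single-step Onsager correction; it is this memory term that the approach above generalizes to invariant matrix ensembles via the dynamical functional analysis. The soft-thresholding denoiser and the fixed threshold tau are illustrative assumptions, not the paper's construction.

    import numpy as np

    def soft_threshold(x, tau):
        """Elementwise soft-thresholding denoiser (a common choice for sparse priors)."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def amp(A, y, n_iter=30, tau=0.1):
        """Baseline AMP for y = A x + noise with an i.i.d. Gaussian A (sketch only)."""
        m, n = A.shape
        x = np.zeros(n)
        z = y.copy()
        for _ in range(n_iter):
            # Effective pseudo-data: matched filter applied to the residual plus the current estimate.
            r = x + A.T @ z
            x_new = soft_threshold(r, tau)
            # Onsager correction: previous residual scaled by the average denoiser derivative.
            onsager = (z / m) * np.count_nonzero(x_new)
            z = y - A @ x_new + onsager
            x = x_new
        return x

    In the generalized algorithms described in the abstract, this single-step correction is replaced by ensemble-specific memory terms over earlier iterates, chosen so that the fixed points satisfy the TAP equations of the ensemble.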

    On Capacity Optimality of OAMP: Beyond IID Sensing Matrices and Gaussian Signaling

    This paper investigates a large unitarily invariant system (LUIS) involving a unitarily invariant sensing matrix, an arbitrarily fixed signal distribution, and forward error control (FEC) coding. A universal Gram-Schmidt orthogonalization is considered for the construction of orthogonal approximate message passing (OAMP), which renders the results applicable to general prototypes without the differentiability restriction. For OAMP with Lipschitz continuous local estimators, we develop two variational single-input-single-output transfer functions, based on which we analyze the achievable rate of OAMP. Furthermore, when the state evolution of OAMP has a unique fixed point, we reveal that OAMP reaches the constrained capacity of the LUIS predicted by the replica method for an arbitrary signal distribution, provided matched FEC coding is used. The replica method is rigorous for LUIS with Gaussian signaling and for certain sub-classes of LUIS with arbitrary signal distributions. Several area properties are established based on the variational transfer functions of OAMP. Meanwhile, we elaborate a replica constrained capacity-achieving coding principle for LUIS, based on which irregular low-density parity-check (LDPC) codes are optimized for binary signaling in the simulation results. We show that OAMP with the optimized codes achieves significant performance improvements over the unoptimized ones and the well-known Turbo linear MMSE algorithm. For quadrature phase-shift keying (QPSK) modulation, replica constrained capacity-approaching bit error rate (BER) performance is observed under various channel conditions. Comment: Single column, 34 pages, 9 figures
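    As a rough illustration of the transfer-function viewpoint in the abstract above, the sketch below iterates a generic scalar state-evolution recursion to its fixed point. The particular functions phi and psi in the example (a toy scalar LMMSE module and a unit-variance Gaussian prior) are assumptions for illustration, not the paper's variational transfer functions.

    import numpy as np

    def se_fixed_point(phi, psi, v0=1.0, n_iter=100, tol=1e-10):
        """Generic scalar state-evolution recursion v <- psi(phi(v)) (sketch only).

        phi(v): MSE transfer function of the linear module as a function of the
                nonlinear module's output error variance v.
        psi(r): MSE transfer function of the nonlinear (denoiser/decoder) module
                as a function of its input error variance r.
        """
        v = v0
        for _ in range(n_iter):
            v_new = psi(phi(v))
            if abs(v_new - v) < tol:
                break
            v = v_new
        return v

    # Toy example (assumed forms): scalar LMMSE module at a given snr and the
    # MMSE of a unit-variance Gaussian prior observed in noise of variance r.
    snr = 10.0
    phi = lambda v: 1.0 / (snr + 1.0 / v)
    psi = lambda r: r / (1.0 + r)
    print(se_fixed_point(phi, psi))

    When the fixed point of such a recursion is unique, the abstract states that OAMP attains the replica-predicted constrained capacity of the LUIS under matched FEC coding.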

    Expectation Propagation for Approximate Inference: Free Probability Framework

    We study asymptotic properties of expectation propagation (EP) -- a method for approximate inference originally developed in the field of machine learning. Applied to generalized linear models, EP iteratively computes a multivariate Gaussian approximation to the exact posterior distribution. The computational complexity of the repeated update of covariance matrices severely limits the application of EP to large problem sizes. In this study, we present a rigorous analysis by means of free probability theory that allows us to overcome this computational bottleneck if specific data matrices in the problem fulfill certain properties of asymptotic freeness. We demonstrate the relevance of our approach on the gene selection problem of a microarray dataset. Comment: Both authors are co-first authors. The main body of this paper is accepted for publication in the proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT).
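    To make the computational bottleneck concrete, here is a minimal sketch of the covariance update in a Gaussian EP approximation for a simple linear-Gaussian likelihood; the model form and variable names are assumptions for illustration. The explicit inverse is the O(n^3)-per-iteration step that the free-probability analysis sidesteps when the data matrices satisfy suitable asymptotic freeness properties.

    import numpy as np

    def ep_covariance_update(X, lambda_site, sigma2):
        """One EP-style Gaussian covariance update for y = X w + noise (sketch only).

        lambda_site: vector of site precisions from the current EP approximation.
        sigma2: observation noise variance.
        """
        # Posterior precision of the Gaussian approximation: likelihood term plus site precisions.
        precision = X.T @ X / sigma2 + np.diag(lambda_site)
        # Full matrix inversion: the O(n^3) cost repeated at every EP iteration.
        return np.linalg.inv(precision)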

    Compressed Sensing with Upscaled Vector Approximate Message Passing

    The recently proposed Vector Approximate Message Passing (VAMP) algorithm demonstrates great reconstruction potential for solving compressed-sensing-related linear inverse problems. VAMP provides high per-iteration improvement, can utilize powerful denoisers like BM3D, has rigorously defined dynamics, and is able to recover signals sampled by highly undersampled and ill-conditioned linear operators. Yet its applicability is limited to relatively small problem sizes due to the necessity of computing the expensive LMMSE estimator at each iteration. In this work, we consider the problem of upscaling VAMP by utilizing Conjugate Gradient (CG) to approximate the intractable LMMSE estimator, and we propose a CG-VAMP algorithm that can efficiently recover large-scale data. We derive evolution models for certain key parameters of CG-VAMP and use the theoretical results to develop fast and practical tools for correcting, tuning, and accelerating the CG algorithm within CG-VAMP, preserving the main advantages of VAMP while maintaining a reasonable and controllable computational cost.
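    The core idea, replacing VAMP's exact LMMSE solve with a conjugate-gradient approximation, can be sketched as follows. The function names and the fixed CG iteration budget are illustrative assumptions, and the paper's correction, tuning, and acceleration tools for CG inside CG-VAMP are not shown.

    import numpy as np

    def cg_solve(mv, b, n_iter=20, tol=1e-8):
        """Plain conjugate gradient for mv(x) = b, where mv is a symmetric
        positive-definite operator given as a matrix-vector product."""
        x = np.zeros_like(b)
        r = b - mv(x)
        p = r.copy()
        rs = r @ r
        for _ in range(n_iter):
            Ap = mv(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    def lmmse_via_cg(A, y, r, gamma_w, gamma_r, n_cg=20):
        """Approximate the VAMP LMMSE stage
        x = (gamma_w A^T A + gamma_r I)^{-1} (gamma_w A^T y + gamma_r r)
        with CG instead of an explicit inverse (the idea behind CG-VAMP, sketched)."""
        mv = lambda v: gamma_w * (A.T @ (A @ v)) + gamma_r * v
        b = gamma_w * (A.T @ y) + gamma_r * r
        return cg_solve(mv, b, n_iter=n_cg)

    Each CG call only needs matrix-vector products with A and A^T, which is what makes the approach viable for large-scale data where forming or inverting the LMMSE system matrix is impractical.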