
    Compressed Sensing with Upscaled Vector Approximate Message Passing

    The recently proposed Vector Approximate Message Passing (VAMP) algorithm demonstrates great reconstruction potential for solving compressed-sensing-related linear inverse problems. VAMP provides high per-iteration improvement, can utilize powerful denoisers like BM3D, has rigorously defined dynamics, and is able to recover signals sampled by highly undersampled and ill-conditioned linear operators. Yet its applicability is limited to relatively small problem sizes, due to the necessity of computing an expensive LMMSE estimator at each iteration. In this work we consider the problem of upscaling VAMP by utilizing Conjugate Gradient (CG) to approximate the intractable LMMSE estimator, and we propose a CG-VAMP algorithm that can efficiently recover large-scale data. We derive evolution models of certain key parameters of CG-VAMP and use the theoretical results to develop fast and practical tools for correcting, tuning and accelerating the CG algorithm within CG-VAMP, preserving all the main advantages of VAMP while maintaining a reasonable and controllable computational cost.
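    The LMMSE stage the abstract identifies as the bottleneck amounts to solving a large symmetric positive-definite linear system at every VAMP iteration. The sketch below, a minimal plain-NumPy illustration, shows how a few CG iterations can stand in for the exact solve; the function name cg_lmmse, the argument names, and the stopping rule are illustrative assumptions, and the paper's CG-VAMP additionally corrects and tunes this inner loop rather than running bare CG.

        import numpy as np

        def cg_lmmse(A, y, r, gamma, sigma2, n_iters=20, tol=1e-9):
            """Approximate the VAMP LMMSE solve
                (A.T @ A / sigma2 + gamma * I) x = A.T @ y / sigma2 + gamma * r
            with a few Conjugate Gradient iterations instead of a direct inverse."""
            def matvec(x):
                # The system matrix is SPD, so plain CG applies.
                return A.T @ (A @ x) / sigma2 + gamma * x

            b = A.T @ y / sigma2 + gamma * r
            x = np.zeros_like(b)
            res = b - matvec(x)  # initial residual (= b since x = 0)
            p = res.copy()
            rs_old = res @ res
            for _ in range(n_iters):
                Ap = matvec(p)
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                res -= alpha * Ap
                rs_new = res @ res
                if np.sqrt(rs_new) < tol:
                    break
                p = res + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

    With a handful of inner iterations the per-VAMP-iteration cost drops from a cubic-cost matrix inverse to a few matrix-vector products, which is what makes the large-scale regime reachable.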

    Decentralized Generalized Approximate Message-Passing for Tree-Structured Networks

    Decentralized generalized approximate message-passing (GAMP) is proposed for compressed sensing from distributed generalized linear measurements in a tree-structured network. Consensus propagation is used to realize the average consensus required in GAMP via local communications between adjacent nodes. Decentralized GAMP is applicable to any tree-structured network, which need not have a central node connected to all other nodes. State evolution is used to analyze the asymptotic dynamics of decentralized GAMP for zero-mean independent and identically distributed Gaussian sensing matrices. The state evolution recursion for decentralized GAMP is proved to have the same fixed points as that for centralized GAMP when homogeneous measurements with an identical dimension in all nodes are considered. Furthermore, an existing long-memory proof strategy is used to prove that the state evolution recursion for decentralized GAMP with the Bayes-optimal denoisers converges to a fixed point. These results imply that the state evolution recursion for decentralized GAMP with the Bayes-optimal denoisers converges to the Bayes-optimal fixed point for homogeneous measurements when the fixed point is unique. Numerical results for decentralized GAMP are presented in the cases of linear measurements and clipping. As examples of tree-structured networks, a one-dimensional chain and a tree with no central nodes are considered.
    Comment: submitted to IEEE Trans. Inf. Theory
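    Consensus propagation, the primitive the abstract relies on, computes the exact network-wide average on a tree by passing a pair (K, mu) along each directed edge, where K counts the nodes behind the edge and mu is their running mean. The sketch below is a minimal standalone version of that update under the assumption of unit-weight messages; the function and variable names are illustrative, and the paper embeds this primitive inside the GAMP recursion rather than running it in isolation.

        def consensus_propagation(values, edges, n_rounds):
            """Average consensus on a tree: each directed edge (i, j) carries
            K = number of nodes behind the edge and mu = their running mean."""
            n = len(values)
            nbrs = [[] for _ in range(n)]
            for i, j in edges:
                nbrs[i].append(j)
                nbrs[j].append(i)
            K = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}
            mu = {key: 0.0 for key in K}
            for _ in range(n_rounds):
                newK, newmu = {}, {}
                for i in range(n):
                    for j in nbrs[i]:
                        # Aggregate messages from all neighbours except the target j.
                        ks = [K[(k, i)] for k in nbrs[i] if k != j]
                        ms = [mu[(k, i)] for k in nbrs[i] if k != j]
                        newK[(i, j)] = 1.0 + sum(ks)
                        newmu[(i, j)] = (values[i] + sum(a * b for a, b in zip(ks, ms))) / newK[(i, j)]
                K, mu = newK, newmu
            # Node estimate: weighted mean of the node's own value and the
            # incoming subtree means; exact average after diameter-many rounds.
            return [
                (values[i] + sum(K[(k, i)] * mu[(k, i)] for k in nbrs[i]))
                / (1.0 + sum(K[(k, i)] for k in nbrs[i]))
                for i in range(n)
            ]

        # One-dimensional chain, one of the paper's example topologies:
        print(consensus_propagation([1.0, 2.0, 3.0, 4.0, 5.0],
                                    [(0, 1), (1, 2), (2, 3), (3, 4)],
                                    n_rounds=4))  # -> [3.0, 3.0, 3.0, 3.0, 3.0]

    On a tree the recursion terminates with the exact average after a number of rounds equal to the network diameter, using only exchanges between adjacent nodes, which is why no central node is needed.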