    Upscaling Vector Approximate Message Passing

    Vector Approximate Message Passing for the Generalized Linear Model

    The generalized linear model (GLM), where a random vector $\boldsymbol{x}$ is observed through a noisy, possibly nonlinear, function of a linear transform output $\boldsymbol{z}=\boldsymbol{Ax}$, arises in a range of applications such as robust regression, binary classification, quantized compressed sensing, phase retrieval, photon-limited imaging, and inference from neural spike trains. When $\boldsymbol{A}$ is large and i.i.d. Gaussian, the generalized approximate message passing (GAMP) algorithm is an efficient means of MAP or marginal inference, and its performance can be rigorously characterized by a scalar state evolution. For general $\boldsymbol{A}$, though, GAMP can misbehave. Damping and sequential updating help to robustify GAMP, but their effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for additive white Gaussian noise channels. VAMP extends AMP's guarantees from i.i.d. Gaussian $\boldsymbol{A}$ to the larger class of rotationally invariant $\boldsymbol{A}$. In this paper, we show how VAMP can be extended to the GLM. Numerical experiments show that the proposed GLM-VAMP is much more robust to ill-conditioning in $\boldsymbol{A}$ than damped GAMP.
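
    As a concrete illustration of the observation model described in this abstract, the NumPy sketch below generates a GLM measurement from $\boldsymbol{z}=\boldsymbol{Ax}$ through a 1-bit quantization channel, one of the applications listed above. The dimensions, sparsity level, and noise scale are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 512, 256                                  # signal / measurement dimensions (assumed)

    x = rng.normal(size=n) * (rng.random(n) < 0.1)   # sparse random signal
    A = rng.normal(size=(m, n)) / np.sqrt(m)         # large i.i.d. Gaussian transform
    z = A @ x                                        # linear transform output

    # Noisy, nonlinear channel p(y|z): 1-bit quantization with
    # pre-quantization Gaussian noise, as in quantized compressed sensing.
    sigma_w = 0.05
    y = np.sign(z + sigma_w * rng.normal(size=m))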

    Compressed Sensing with Upscaled Vector Approximate Message Passing

    The recently proposed Vector Approximate Message Passing (VAMP) algorithm demonstrates great reconstruction potential for solving compressed-sensing-related linear inverse problems. VAMP provides high per-iteration improvement, can utilize powerful denoisers like BM3D, has rigorously defined dynamics, and is able to recover signals sampled by highly undersampled and ill-conditioned linear operators. Yet its applicability is limited to relatively small problem sizes due to the necessity of computing the expensive LMMSE estimator at each iteration. In this work, we consider the problem of upscaling VAMP by utilizing the Conjugate Gradient (CG) method to approximate the intractable LMMSE estimator, and we propose a CG-VAMP algorithm that can efficiently recover large-scale data. We derive evolution models of certain key parameters of CG-VAMP and use the theoretical results to develop fast and practical tools for correcting, tuning, and accelerating the CG algorithm within CG-VAMP, preserving the main advantages of VAMP while maintaining a reasonable and controllable computational cost.
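
    The expensive step referenced above is VAMP's LMMSE estimator, which for an AWGN model $\boldsymbol{y}=\boldsymbol{Ax}+\boldsymbol{w}$ takes the form $\hat{\boldsymbol{x}}=(\boldsymbol{A}^T\boldsymbol{A}/\sigma_w^2+\gamma\boldsymbol{I})^{-1}(\boldsymbol{A}^T\boldsymbol{y}/\sigma_w^2+\gamma\boldsymbol{r})$. The sketch below approximates this solve with a generic conjugate-gradient loop instead of an explicit inverse; it is a minimal illustration under assumed notation (gamma and r denote the extrinsic precision and mean passed to the LMMSE stage), not the corrected and tuned CG-VAMP algorithm of the paper.

    import numpy as np

    def cg_lmmse(A, y, r, gamma, sigma_w, n_iters=20):
        """Approximate (A^T A / sigma_w^2 + gamma*I)^{-1} b by CG,
        with b = A^T y / sigma_w^2 + gamma*r, never forming A^T A."""
        def H(v):  # matrix-vector product with the SPD system matrix
            return A.T @ (A @ v) / sigma_w**2 + gamma * v

        b = A.T @ y / sigma_w**2 + gamma * r
        x = np.zeros_like(b)
        res = b - H(x)                     # initial residual (equals b here)
        p = res.copy()                     # first search direction
        rs = res @ res
        for _ in range(n_iters):
            Hp = H(p)
            alpha = rs / (p @ Hp)          # step length along p
            x += alpha * p
            res -= alpha * Hp
            rs_new = res @ res
            p = res + (rs_new / rs) * p    # conjugate direction update
            rs = rs_new
        return x

    Truncating CG to a fixed, small number of inner iterations is what keeps the per-iteration cost controllable; the evolution models mentioned in the abstract are then what allow the inner loop to be corrected and tuned so that the outer VAMP dynamics are preserved.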
