Vector Approximate Message Passing for the Generalized Linear Model
The generalized linear model (GLM), in which a random vector is
observed through a noisy, possibly nonlinear, function of a linear transform
of that vector, arises in a range of applications such
as robust regression, binary classification, quantized compressed sensing,
phase retrieval, photon-limited imaging, and inference from neural spike
trains. When the transform matrix is large and i.i.d. Gaussian, the generalized
approximate message passing (GAMP) algorithm is an efficient means of MAP or
marginal inference, and its performance can be rigorously characterized by a
scalar state evolution. For general transform matrices, though, GAMP can
misbehave. Damping and sequential updating help to robustify GAMP, but their
effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for
additive white Gaussian noise channels. VAMP extends AMP's guarantees from
i.i.d. Gaussian matrices to the larger class of rotationally invariant
matrices. In this paper, we show how VAMP can be extended to the GLM.
Numerical experiments show that the proposed GLM-VAMP is much more robust to
ill-conditioning in the transform matrix than damped GAMP.
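The GLM measurement model described in the abstract can be sketched in a few lines: a linear transform followed by a componentwise, possibly nonlinear channel. The dimensions, noise level, and specific channels below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 256, 512                                # signal / measurement dims (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(n)   # large i.i.d. Gaussian transform
x = rng.standard_normal(n)                     # unknown random vector
z = A @ x                                      # linear transform output

# Componentwise measurement channels matching the listed applications:
y_awgn  = z + 0.1 * rng.standard_normal(m)     # (robust) linear regression
y_sign  = np.sign(z)                           # binary classification / 1-bit quantization
y_abs   = np.abs(z)                            # phase retrieval (real-valued analogue)
y_count = rng.poisson(np.exp(z - z.max()))     # photon-limited imaging / spike counts
```

Each channel keeps the same linear core `z = A @ x`; only the scalar output map changes, which is what lets GAMP- and VAMP-style algorithms handle all of them within one framework.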
Compressed Sensing with Upscaled Vector Approximate Message Passing
Recently proposed Vector Approximate Message Passing (VAMP) demonstrates a
great reconstruction potential at solving compressed sensing related linear
inverse problems. VAMP provides high per-iteration improvement, can utilize
powerful denoisers like BM3D, has rigorously defined dynamics and is able to
recover signals sampled by highly undersampled and ill-conditioned linear
operators. Yet, its applicability is limited to relatively small problem sizes
due to the necessity of computing the expensive LMMSE estimator at each iteration. In
this work we consider the problem of upscaling VAMP by utilizing Conjugate
Gradient (CG) to approximate the intractable LMMSE estimator and propose a
CG-VAMP algorithm that can efficiently recover large-scale data. We derive
evolution models of certain key parameters of CG-VAMP and use the theoretical
results to develop fast and practical tools for correcting, tuning and
accelerating the CG algorithm within CG-VAMP to preserve all the main
advantages of VAMP while keeping the computational cost of the algorithm
reasonable and controllable.