
    Vector Approximate Message Passing for the Generalized Linear Model

    The generalized linear model (GLM), where a random vector $\boldsymbol{x}$ is observed through a noisy, possibly nonlinear, function of a linear transform output $\boldsymbol{z}=\boldsymbol{Ax}$, arises in a range of applications such as robust regression, binary classification, quantized compressed sensing, phase retrieval, photon-limited imaging, and inference from neural spike trains. When $\boldsymbol{A}$ is large and i.i.d. Gaussian, the generalized approximate message passing (GAMP) algorithm is an efficient means of MAP or marginal inference, and its performance can be rigorously characterized by a scalar state evolution. For general $\boldsymbol{A}$, though, GAMP can misbehave. Damping and sequential updating help to robustify GAMP, but their effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for additive white Gaussian noise channels. VAMP extends AMP's guarantees from i.i.d. Gaussian $\boldsymbol{A}$ to the larger class of rotationally invariant $\boldsymbol{A}$. In this paper, we show how VAMP can be extended to the GLM. Numerical experiments show that the proposed GLM-VAMP is much more robust to ill-conditioning in $\boldsymbol{A}$ than damped GAMP.
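
    To make the structure concrete, the following is a minimal NumPy sketch of the AWGN-channel VAMP iteration that the paper builds on: it alternates a separable denoising stage with an LMMSE stage, exchanging extrinsic means and precisions between the two. The soft-thresholding denoiser, the regularization weight `lam`, and the initialization are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def soft_threshold(r, tau):
    """Soft-thresholding denoiser (prox of the l1 norm); a stand-in
    for the prior-dependent MMSE denoiser used in VAMP."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def vamp_awgn(y, A, gamma_w, lam=0.1, n_iter=50):
    """Minimal VAMP sketch for y = A x + w, with w ~ N(0, I / gamma_w).
    The denoiser, lam, and the initialization are assumptions."""
    M, N = A.shape
    AtA, Aty = A.T @ A, A.T @ y
    r1, gamma1 = A.T @ y, 1.0          # assumed initialization
    for _ in range(n_iter):
        # Denoising stage: estimate x from the pseudo-observation r1.
        x1 = soft_threshold(r1, lam / gamma1)
        # Average denoiser slope (clipped away from 0 and 1 for stability).
        alpha1 = np.clip(np.mean(np.abs(r1) > lam / gamma1), 1e-6, 1 - 1e-6)
        eta1 = gamma1 / alpha1
        gamma2 = eta1 - gamma1
        r2 = (eta1 * x1 - gamma1 * r1) / gamma2   # extrinsic message
        # LMMSE stage: combine the likelihood of y with the message r2.
        C = np.linalg.inv(gamma_w * AtA + gamma2 * np.eye(N))
        x2 = C @ (gamma_w * Aty + gamma2 * r2)
        alpha2 = gamma2 * np.trace(C) / N         # avg posterior shrinkage
        eta2 = gamma2 / alpha2
        gamma1 = eta2 - gamma2
        r1 = (eta2 * x2 - gamma2 * r2) / gamma1   # extrinsic message
    return x1
```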

    Phase Retrieval From Binary Measurements

    We consider the problem of signal reconstruction from quadratic measurements that are encoded as +1 or -1 depending on whether they exceed a predetermined positive threshold or not. Binary measurements are fast to acquire and inexpensive in terms of hardware. We formulate the problem of signal reconstruction using a consistency criterion, wherein one seeks a signal that agrees with the measurements. To enforce consistency, we construct a convex cost using a one-sided quadratic penalty and minimize it using an iterative accelerated projected gradient descent (APGD) technique. The PGD scheme reduces the cost in each iteration; incorporating momentum into PGD forfeits this descent guarantee, yet empirically converges faster than plain PGD. We refer to the resulting algorithm as binary phase retrieval (BPR). Considering additive white noise contamination prior to quantization, we also derive the Cramér-Rao bound (CRB) for the binary encoding model. Experimental results demonstrate that the BPR algorithm yields a signal-to-reconstruction error ratio (SRER) of approximately 25 dB in the absence of noise. In the presence of noise prior to quantization, the SRER is within 2 to 3 dB of the CRB.
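
    As a rough illustration of the machinery described above, the sketch below runs accelerated projected gradient descent on a one-sided quadratic consistency cost for binary-encoded quadratic measurements b_i = sign((a_i^T x)^2 - tau). The l2-ball constraint set, the fixed step size, and the random initialization are assumptions made for the sketch; the paper's exact cost and constraint set may differ.

```python
import numpy as np

def bpr_apgd(A, b, tau, radius=1.0, step=1e-3, n_iter=500):
    """Accelerated projected gradient descent on the one-sided
    quadratic consistency cost
        J(x) = sum_i max(0, -b_i * ((a_i^T x)^2 - tau))^2,
    which penalizes only measurements that contradict the binary
    codes b_i in {+1, -1}. Constraint set, step size, and
    initialization are illustrative assumptions."""
    M, N = A.shape
    x = np.random.randn(N) / np.sqrt(N)               # assumed random init
    x_prev = x.copy()
    for k in range(1, n_iter + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)  # Nesterov momentum
        z = A @ y
        slack = np.maximum(0.0, -b * (z**2 - tau))    # violated codes only
        grad = -4.0 * (A.T @ (b * slack * z))         # dJ/dx at y
        x_prev = x
        x = y - step * grad
        nrm = np.linalg.norm(x)                       # project onto l2 ball
        if nrm > radius:
            x *= radius / nrm
    return x
```

    With the momentum extrapolation disabled (y = x), the update reduces to the plain PGD scheme that the abstract contrasts against.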

    Measure What Should be Measured: Progress and Challenges in Compressive Sensing

    Is compressive sensing overrated? Or can it live up to our expectations? What will come after compressive sensing and sparsity? And what has Galileo Galilei got to do with it? Compressive sensing has taken the signal processing community by storm. A large corpus of research devoted to the theory and numerics of compressive sensing has been published in the last few years. Moreover, compressive sensing has inspired and initiated intriguing new research directions, such as matrix completion. Potential new applications emerge at a dazzling rate. Yet some important theoretical questions remain open, and seemingly obvious applications keep escaping the grip of compressive sensing. In this paper I discuss some of the recent progress in compressive sensing and point out key challenges and opportunities as the area of compressive sensing and sparse representations keeps evolving. I also attempt to assess the long-term impact of compressive sensing.