Vector Approximate Message Passing for the Generalized Linear Model
The generalized linear model (GLM), where a random vector x is
observed through a noisy, possibly nonlinear, function of a linear transform
output z = Ax, arises in a range of applications such
as robust regression, binary classification, quantized compressed sensing,
phase retrieval, photon-limited imaging, and inference from neural spike
trains. When A is large and i.i.d. Gaussian, the generalized
approximate message passing (GAMP) algorithm is an efficient means of MAP or
marginal inference, and its performance can be rigorously characterized by a
scalar state evolution. For general A, though, GAMP can
misbehave. Damping and sequential updating help to robustify GAMP, but their
effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for
additive white Gaussian noise channels. VAMP extends AMP's guarantees from
i.i.d. Gaussian A to the larger class of rotationally invariant
A. In this paper, we show how VAMP can be extended to the GLM.
Numerical experiments show that the proposed GLM-VAMP is much more robust to
ill-conditioning in A than damped GAMP.
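The measurement model the abstract describes can be illustrated with a small sketch (not the GLM-VAMP algorithm itself; all sizes and the choice of a 1-bit quantizer as the nonlinearity are hypothetical, chosen to match the quantized-compressed-sensing application mentioned above):

```python
import numpy as np

# GLM measurement model y = f(z + w) with z = A x, illustrated with a
# 1-bit quantizer as the componentwise nonlinearity f (quantized CS).
rng = np.random.default_rng(0)
n, m = 100, 200
x = rng.standard_normal(n)                     # unknown signal
A = rng.standard_normal((m, n)) / np.sqrt(n)   # i.i.d. Gaussian transform
z = A @ x                                      # linear transform output
w = 0.01 * rng.standard_normal(m)              # pre-quantization noise
y = np.sign(z + w)                             # 1-bit observations in {-1, +1}
```

Swapping `np.sign` for, e.g., `np.abs` (phase retrieval) or a Poisson draw (photon-limited imaging) recovers the other applications listed, which is exactly the generality the GLM provides.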
Reference-less measurement of the transmission matrix of a highly scattering material using a DMD and phase retrieval techniques
This paper investigates experimental means of measuring the transmission
matrix (TM) of a highly scattering medium, with the simplest optical setup.
Spatial light modulation is performed by a digital micromirror device (DMD),
allowing high rates and high pixel counts but only binary amplitude modulation.
We used intensity measurement only, thus avoiding the need for a reference
beam. Therefore, the phase of the TM has to be estimated through signal
processing techniques of phase retrieval. Here, we compare four different phase
retrieval principles on noisy experimental data. We validate our estimations of
the TM on three criteria: quality of prediction, distribution of singular
values, and quality of focusing. Results indicate that Bayesian phase retrieval
algorithms with variational approaches provide a good tradeoff between the
computational complexity and the precision of the estimates.
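The underlying estimation problem can be sketched as follows. This is not one of the paper's Bayesian/variational algorithms, but a classical Gerchberg-Saxton-style alternating projection applied to the same setting: recover one complex row t of the TM from intensity-only measurements y = |D t| under known binary DMD patterns D (all dimensions hypothetical):

```python
import numpy as np

# Intensity-only recovery of one TM row t from binary DMD patterns D.
rng = np.random.default_rng(1)
n, m = 32, 256
t_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # unknown TM row
D = rng.integers(0, 2, size=(m, n)).astype(float)              # binary patterns
y = np.abs(D @ t_true)                                         # intensities only

D_pinv = np.linalg.pinv(D)
t = rng.standard_normal(n) + 1j * rng.standard_normal(n)       # random init
res0 = np.linalg.norm(np.abs(D @ t) - y)                       # initial residual
for _ in range(500):
    z = D @ t
    t = D_pinv @ (y * np.exp(1j * np.angle(z)))  # keep phase, impose magnitudes
res = np.linalg.norm(np.abs(D @ t) - y)          # non-increasing over iterations
```

Note the estimate is only defined up to a global phase, so any comparison to a ground-truth row must first align phases; this ambiguity is inherent to all four phase retrieval principles compared in the paper.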
Blind Sensor Calibration using Approximate Message Passing
The ubiquity of approximately sparse data has led a variety of communities
to take great interest in compressed sensing algorithms. Although these are
very successful and well understood for linear measurements with additive
noise, applying them to real data can be problematic if imperfect sensing
devices introduce deviations from this ideal signal acquisition process,
caused by sensor decalibration or failure. We propose a message passing
algorithm called calibration approximate message passing (Cal-AMP) that can
treat a variety of such sensor-induced imperfections. In addition to deriving
the general form of the algorithm, we numerically investigate two particular
settings. In the first, a fraction of the sensors is faulty, giving readings
unrelated to the signal. In the second, sensors are decalibrated and each one
introduces a different multiplicative gain to the measurements. Cal-AMP shares
the scalability of approximate message passing, allowing it to treat large
instances of these problems, and experimentally exhibits a phase transition
between domains of success and failure.
Comment: 27 pages, 9 figures
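The two sensor-imperfection settings studied numerically can be written down directly (a toy forward model only; the Cal-AMP inference itself is not reproduced here, and all sizes, sparsity levels, and fault rates are hypothetical):

```python
import numpy as np

# Forward models for the two Cal-AMP settings: faulty sensors and
# per-sensor multiplicative gains, applied to a sparse signal.
rng = np.random.default_rng(2)
n, m = 50, 100
x = np.where(rng.random(n) < 0.1, rng.standard_normal(n), 0.0)  # sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(n)
z = A @ x

# Setting 1: a fraction of faulty sensors returns readings unrelated to z.
faulty = rng.random(m) < 0.05
y_faulty = np.where(faulty, rng.standard_normal(m), z)

# Setting 2: each sensor applies its own unknown multiplicative gain.
gains = 1.0 + 0.2 * rng.standard_normal(m)
y_gain = gains * z
```

In both cases the calibration variables (`faulty`, `gains`) are unknown at inference time and must be estimated jointly with x, which is what distinguishes blind calibration from standard compressed sensing.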
Cycle-based Cluster Variational Method for Direct and Inverse Inference
We elaborate on the idea that loop corrections to belief propagation could be
dealt with in a systematic way on pairwise Markov random fields, by using the
elements of a cycle basis to define regions in a generalized belief
propagation setting. The region graph is specified in such a way as to avoid
dual loops as much as possible, by discarding redundant Lagrange multipliers,
in order to facilitate convergence while avoiding instabilities associated
with minimal factor graph constructions. We end up with a two-level algorithm,
where a belief propagation algorithm is run alternately at the level of each
cycle and at the inter-region level. The inverse problem of finding the
couplings of a Markov random field from empirical covariances can be addressed
region-wise. It turns out that this can be done efficiently, in particular in
the Ising context, where fixed point equations can be derived along with a
one-parameter log-likelihood function to minimize. Numerical experiments
confirm the effectiveness of these considerations for both direct and inverse
MRF inference.
Comment: 47 pages, 16 figures
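The cycle basis that defines the regions can be computed with a standard spanning-tree construction: every non-tree edge (chord) closes exactly one fundamental cycle with the tree path between its endpoints. A stdlib-only sketch, assuming an undirected connected graph (the function name and the region-per-cycle reading are illustrative, not the paper's implementation):

```python
from collections import deque

def cycle_basis(nodes, edges):
    """Fundamental cycle basis of an undirected, connected graph."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent = {nodes[0]: None}
    tree_edges = set()
    q = deque([nodes[0]])
    while q:                              # BFS spanning tree
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree_edges.add(frozenset((u, v)))
                q.append(v)

    def path_to_root(v):                  # nodes from v up to the BFS root
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path

    cycles = []
    for u, v in edges:                    # each chord closes one cycle
        if frozenset((u, v)) not in tree_edges:
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            lca = next(x for x in pu if x in common)  # lowest common ancestor
            cu = [x for x in pu if x not in common]
            cv = [x for x in pv if x not in common]
            cycles.append(cu + [lca] + cv[::-1])
    return cycles
```

For a 4-node square graph this yields the single 4-cycle, and in general |E| - |V| + 1 independent cycles, each of which would serve as one region for the cycle-level belief propagation pass.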