Approximate Message Passing for Underdetermined Audio Source Separation
Approximate message passing (AMP) algorithms have shown great promise in
sparse signal reconstruction due to their low computational requirements and
fast convergence to an exact solution. Moreover, they provide a probabilistic
framework that is often more intuitive than alternatives such as convex
optimisation. In this paper, AMP is used for audio source separation from
underdetermined instantaneous mixtures. In the time-frequency domain, it is
typical to assume a priori that the sources are sparse, so we solve the
corresponding sparse linear inverse problem using AMP. We present a block-based
approach that uses AMP to process multiple time-frequency points
simultaneously. In particular, we evaluate two algorithms, AMP and vector AMP (VAMP). Results show that both are promising in terms of artefact suppression. Comment: Paper accepted for the 3rd International Conference on Intelligent Signal Processing (ISP 2017).
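The sparse linear inverse problem the abstract refers to can be solved with the classic AMP iteration: a matched-filter step, a soft-thresholding (denoising) step, and an Onsager correction to the residual. The sketch below is a generic textbook AMP, not the paper's block-based variant; the threshold rule and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_sparse(y, A, n_iter=30):
    # Basic AMP for y = A x + noise with a sparse x, assuming A has
    # i.i.d. Gaussian entries scaled so its columns have roughly unit norm.
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                          # pseudo-data estimate
        tau = np.linalg.norm(z) / np.sqrt(m)     # effective noise level
        x_new = soft_threshold(r, tau)
        # Onsager correction: residual is reweighted by the denoiser's
        # average divergence (fraction of surviving coefficients).
        z = y - A @ x_new + z * np.count_nonzero(x_new) / m
        x = x_new
    return x
```

In the time-frequency setting of the paper, each column block of measurements would be passed through an iteration of this form.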
Vector Approximate Message Passing for the Generalized Linear Model
The generalized linear model (GLM), where a random vector x is
observed through a noisy, possibly nonlinear, function of a linear transform
output z = Ax, arises in a range of applications such
as robust regression, binary classification, quantized compressed sensing,
phase retrieval, photon-limited imaging, and inference from neural spike
trains. When A is large and i.i.d. Gaussian, the generalized
approximate message passing (GAMP) algorithm is an efficient means of MAP or
marginal inference, and its performance can be rigorously characterized by a
scalar state evolution. For general A, though, GAMP can
misbehave. Damping and sequential updating help to robustify GAMP, but their
effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for
additive white Gaussian noise channels. VAMP extends AMP's guarantees from
i.i.d. Gaussian A to the larger class of rotationally invariant
A. In this paper, we show how VAMP can be extended to the GLM.
Numerical experiments show that the proposed GLM-VAMP is much more robust to
ill-conditioning in A than damped GAMP.
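The applications listed above all fit the same template: the unknown x is seen only through a (possibly nonlinear, noisy) output channel applied to z = Ax. A minimal sketch of such observation channels, with hypothetical noise levels chosen purely for illustration:

```python
import numpy as np

def glm_observe(A, x, channel, rng):
    # Illustrative GLM output channels y = f(A x): each branch matches one
    # of the applications named in the abstract (assumed noise scale 0.1).
    z = A @ x
    if channel == "awgn":       # ordinary linear regression
        return z + 0.1 * rng.standard_normal(z.shape)
    if channel == "probit":     # binary classification
        return np.sign(z + 0.1 * rng.standard_normal(z.shape))
    if channel == "onebit":     # 1-bit quantized compressed sensing
        return np.sign(z)
    if channel == "abs":        # noiseless phase retrieval magnitude
        return np.abs(z)
    raise ValueError(f"unknown channel: {channel}")
```

GLM-VAMP handles all of these by swapping in the appropriate channel-dependent denoiser while keeping the linear stage unchanged.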
Robust phase retrieval with the swept approximate message passing (prSAMP) algorithm
In phase retrieval, the goal is to recover a complex signal from the
magnitude of its linear measurements. While many well-known algorithms
guarantee deterministic recovery of the unknown signal using i.i.d. random
measurement matrices, they suffer serious convergence issues with some
ill-conditioned matrices. As an example, this happens in optical imagers using
binary intensity-only spatial light modulators to shape the input wavefront.
The problem of ill-conditioned measurement matrices has also been a topic of
interest for compressed sensing researchers during the past decade. In this
paper, using recent advances in generic compressed sensing, we propose a new
phase retrieval algorithm that adapts well to both i.i.d. Gaussian and binary
matrices, and to both sparse and dense input signals. This algorithm is also
robust to the strong noise levels found in some imaging applications.
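The ill-conditioning that trips up standard algorithms is easy to see numerically: a binary 0/1 matrix, like the one realized by an intensity-only spatial light modulator, has a large nonzero mean that behaves like a dominant rank-one component and inflates its condition number relative to an i.i.d. Gaussian matrix of the same size. A small illustrative comparison (sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 400, 200

# i.i.d. Gaussian measurement matrix vs. a binary (0/1) intensity-only
# matrix of the kind a binary spatial light modulator produces.
A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)
A_bin = rng.integers(0, 2, size=(m, n)).astype(float)

def cond(A):
    # Ratio of largest to smallest singular value.
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]
```

Here `cond(A_bin)` comes out far larger than `cond(A_gauss)`, which is the regime where prSAMP's robustness matters.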
Approximate Message Passing in Coded Aperture Snapshot Spectral Imaging
We consider a compressive hyperspectral imaging reconstruction problem, where
three-dimensional spatio-spectral information about a scene is sensed by a
coded aperture snapshot spectral imager (CASSI). The approximate message
passing (AMP) framework is utilized to reconstruct hyperspectral images from
CASSI measurements, and an adaptive Wiener filter is employed as a
three-dimensional image denoiser within AMP. We call our algorithm
"AMP-3D-Wiener." The simulation results show that AMP-3D-Wiener outperforms
existing widely-used algorithms such as gradient projection for sparse
reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST)
given the same amount of runtime. Moreover, in contrast to GPSR and TwIST,
AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction
process. Comment: to appear in GlobalSIP 201
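Plugging a general-purpose denoiser into AMP, as AMP-3D-Wiener does with an adaptive Wiener filter, follows the denoising-based AMP pattern: the denoiser replaces the scalar shrinkage step, and its divergence (needed for the Onsager term) is estimated by a Monte Carlo difference quotient. The sketch below is a hypothetical simplification of that structure, not the paper's 3D Wiener filter:

```python
import numpy as np

def denoising_amp(y, A, denoise, n_iter=20, eps=1e-3):
    # AMP with a plug-in denoiser: denoise(r, sigma) can be any estimator
    # of the signal from pseudo-data r at noise level sigma.
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        r = x + A.T @ z
        sigma = np.linalg.norm(z) / np.sqrt(m)
        x_new = denoise(r, sigma)
        # Monte Carlo estimate of the denoiser's divergence for the
        # Onsager correction (probe direction eta, step size eps).
        eta = rng.standard_normal(n)
        div = eta @ (denoise(r + eps * eta, sigma) - x_new) / eps
        z = y - A @ x_new + z * div / m
        x = x_new
    return x
```

With the Wiener filter swapped in for the denoiser, this is the shape of the AMP-3D-Wiener reconstruction loop.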
An Overview of Multi-Processor Approximate Message Passing
Approximate message passing (AMP) is an algorithmic framework for solving
linear inverse problems from noisy measurements, with exciting applications
such as reconstructing images, audio, hyperspectral images, and various other
signals, including those acquired in compressive signal acquisition systems. The
growing prevalence of big data systems has increased interest in large-scale
problems, which may involve huge measurement matrices that are unsuitable for
conventional computing systems. To address the challenge of large-scale
processing, multiprocessor (MP) versions of AMP have been developed. We provide
an overview of two such MP-AMP variants. In row-MP-AMP, each computing node
stores a subset of the rows of the matrix and processes corresponding
measurements. In column-MP-AMP, each node stores a subset of columns, and is
solely responsible for reconstructing a portion of the signal. We will discuss
pros and cons of both approaches, summarize recent research results for each,
and explain when each one may be a viable approach. Aspects that are
highlighted include some recent results on state evolution for both MP-AMP
algorithms, and the use of data compression to reduce communication in the MP
network.
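The core computational pattern of row-MP-AMP can be shown in a few lines: each node holds a horizontal slice of the matrix together with its own measurements, computes a local residual and partial back-projection, and a fusion step sums the pieces. This is an illustrative sketch of the data partitioning only, not a full MP-AMP implementation:

```python
import numpy as np

def split_rows(A, y, n_nodes):
    # Row partitioning: node p receives (A_p, y_p), a horizontal slice of A
    # and the corresponding measurements.
    return list(zip(np.array_split(A, n_nodes), np.array_split(y, n_nodes)))

def distributed_backprojection(parts, x):
    # Each node computes its local residual z_p = y_p - A_p x and partial
    # back-projection A_p^T z_p; the fusion center sums the partial results,
    # recovering exactly the centralized A^T (y - A x).
    return np.sum([Ap.T @ (yp - Ap @ x) for Ap, yp in parts], axis=0)
```

Column-MP-AMP partitions the other way: each node owns a vertical slice of A and reconstructs only its portion of the signal, trading a different communication pattern for the same overall computation.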
MMSE of probabilistic low-rank matrix estimation: Universality with respect to the output channel
This paper considers probabilistic estimation of a low-rank matrix from
non-linear element-wise measurements of its elements. We derive the
corresponding approximate message passing (AMP) algorithm and its state
evolution. Relying on non-rigorous but standard assumptions motivated by
statistical physics, we characterize the minimum mean squared error (MMSE)
achievable information theoretically and with the AMP algorithm. Unlike in
related problems of linear estimation, in the present setting the MMSE depends
on the output channel only through a single parameter: its Fisher information.
We illustrate this striking finding by analysis of submatrix localization, and
of detection of communities hidden in a dense stochastic block model. For this
example we locate the computational and statistical boundaries that are not
equal for rank larger than four. Comment: 10 pages, Allerton Conference on Communication, Control, and
Computing 201
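A minimal numerical picture of the low-rank estimation problem studied here, in the simplest AWGN output channel: observe a rank-one spike through symmetric Gaussian noise and estimate it with the leading eigenvector. This spectral baseline is the natural comparison point for the paper's AMP; the signal-to-noise value below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 1000, 3.0

# Rank-one spiked Wigner model: Y = (lam/n) x x^T + W, with x a Rademacher
# spike (entries +/-1) and W a symmetric Gaussian noise matrix.
x = rng.choice([-1.0, 1.0], size=n)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)
Y = (lam / n) * np.outer(x, x) + W

# Spectral estimate: the leading eigenvector of Y, and its normalized
# correlation (overlap) with the planted spike.
vals, vecs = np.linalg.eigh(Y)
v = vecs[:, -1]
overlap = abs(v @ x) / np.sqrt(n)
```

Above the spectral threshold the overlap is macroscopic; the paper's AMP analysis characterizes the MMSE-optimal counterpart of this quantity for general output channels, through their Fisher information alone.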