Compressive Imaging via Approximate Message Passing with Image Denoising
We consider compressive imaging problems, where images are reconstructed from
a reduced number of linear measurements. Our objective is to improve over
existing compressive imaging algorithms in terms of both reconstruction error
and runtime. To pursue our objective, we propose compressive imaging algorithms
that employ the approximate message passing (AMP) framework. AMP is an
iterative signal reconstruction algorithm that performs scalar denoising at
each iteration; in order for AMP to reconstruct the original input signal well,
a good denoiser must be used. We apply two wavelet-based image denoisers within
AMP. The first denoiser is the "amplitude-scale-invariant Bayes estimator"
(ABE), and the second is an adaptive Wiener filter; we call our AMP-based
algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results
show that both AMP-ABE and AMP-Wiener significantly improve over the state of
the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener
offers lower mean square error (MSE) than existing compressive imaging
algorithms. In contrast, AMP-ABE has higher MSE, because ABE does not denoise
as well as the adaptive Wiener filter.
Comment: 15 pages; 2 tables; 7 figures; to appear in IEEE Trans. Signal Process.
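The AMP recipe the abstract describes, alternating a linear step with scalar denoising plus an Onsager correction, can be sketched as follows. This is a generic illustration, not the paper's AMP-ABE or AMP-Wiener: the function names are hypothetical, and soft thresholding stands in for the ABE and adaptive Wiener denoisers.

```python
import numpy as np

def amp(y, A, denoise, iters=30):
    """Sketch of the AMP iteration with a pluggable scalar denoiser.

    `denoise(v, sigma)` must return the denoised estimate and the mean
    of the denoiser's derivative (needed for the Onsager correction).
    """
    m, n = A.shape
    x = np.zeros(n)          # current signal estimate
    z = y.copy()             # current residual
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)        # effective noise level
        x, deriv_mean = denoise(A.T @ z + x, sigma)   # scalar denoising step
        z = y - A @ x + (n / m) * deriv_mean * z      # Onsager-corrected residual
    return x

def soft_threshold(v, sigma, alpha=1.5):
    """Soft thresholding as a stand-in scalar denoiser (not ABE/Wiener)."""
    t = alpha * sigma
    x = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    # Derivative of soft thresholding is the indicator of the kept entries.
    return x, np.mean(np.abs(v) > t)
```

Swapping `soft_threshold` for a stronger image denoiser is exactly the design freedom the abstract exploits.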
Approximate Message Passing in Coded Aperture Snapshot Spectral Imaging
We consider a compressive hyperspectral imaging reconstruction problem, where
three-dimensional spatio-spectral information about a scene is sensed by a
coded aperture snapshot spectral imager (CASSI). The approximate message
passing (AMP) framework is utilized to reconstruct hyperspectral images from
CASSI measurements, and an adaptive Wiener filter is employed as a
three-dimensional image denoiser within AMP. We call our algorithm
"AMP-3D-Wiener." The simulation results show that AMP-3D-Wiener outperforms
existing widely-used algorithms such as gradient projection for sparse
reconstruction (GPSR) and two-step iterative shrinkage/thresholding (TwIST)
given the same amount of runtime. Moreover, in contrast to GPSR and TwIST,
AMP-3D-Wiener need not tune any parameters, which simplifies the reconstruction
process.
Comment: to appear in GlobalSIP 201
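For a sense of the adaptive Wiener denoiser mentioned above, here is a simplified 2D sketch: shrink each pixel toward its local mean in proportion to the locally estimated signal variance. The helper names are hypothetical, and this is a plain 2D stand-in for the paper's 3D spatio-spectral filter, not its implementation.

```python
import numpy as np

def box_mean(a, r=1):
    """Local mean over a (2r+1) x (2r+1) window, edge-padded."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def adaptive_wiener(v, sigma, r=1):
    """Empirical Wiener filter with locally estimated statistics:
    output = local mean + gain * (pixel - local mean), where the gain
    is the estimated signal variance over the total local variance."""
    mu = box_mean(v, r)
    var = np.maximum(box_mean(v ** 2, r) - mu ** 2 - sigma ** 2, 0.0)
    return mu + var / (var + sigma ** 2 + 1e-12) * (v - mu)
```

In flat regions the estimated signal variance is near zero, so the filter averages aggressively; near edges the gain rises and detail is preserved, which is what makes this family of denoisers effective inside AMP.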
An Overview of Multi-Processor Approximate Message Passing
Approximate message passing (AMP) is an algorithmic framework for solving
linear inverse problems from noisy measurements, with exciting applications
such as reconstructing images, audio, hyperspectral images, and various other
signals, including those acquired in compressive signal acquisition systems. The
growing prevalence of big data systems has increased interest in large-scale
problems, which may involve huge measurement matrices that are unsuitable for
conventional computing systems. To address the challenge of large-scale
processing, multiprocessor (MP) versions of AMP have been developed. We provide
an overview of two such MP-AMP variants. In row-MP-AMP, each computing node
stores a subset of the rows of the matrix and processes corresponding
measurements. In column-MP-AMP, each node stores a subset of columns, and is
solely responsible for reconstructing a portion of the signal. We will discuss
pros and cons of both approaches, summarize recent research results for each,
and explain when each one may be a viable approach. Aspects that are
highlighted include some recent results on state evolution for both MP-AMP
algorithms, and the use of data compression to reduce communication in the MP
network.
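The row-wise partitioning described above can be illustrated with a minimal sketch: each node holds a row block of the matrix and its measurements, back-projects its own residual, and a fusion step sums the per-node contributions before the centralized denoising step. The function and variable names are hypothetical, and the scalar denoiser is left pluggable.

```python
import numpy as np

def row_mp_amp_step(x, z_parts, A_parts, y_parts, denoise, n, m):
    """One iteration of row-partitioned MP-AMP (sketch).

    Node p holds the row block A_parts[p] and its measurements
    y_parts[p]; only the n-dimensional back-projections and a scalar
    need to be communicated to the fusion step.
    """
    # Fusion: sum the per-node back-projections of the residual blocks.
    r = x + sum(Ap.T @ zp for Ap, zp in zip(A_parts, z_parts))
    sigma = np.sqrt(sum(np.sum(zp ** 2) for zp in z_parts) / m)
    x_new, deriv_mean = denoise(r, sigma)          # centralized denoising
    # Each node updates its own residual with the shared Onsager term.
    z_new = [yp - Ap @ x_new + (n / m) * deriv_mean * zp
             for Ap, yp, zp in zip(A_parts, y_parts, z_parts)]
    return x_new, z_new
```

Because the per-node quantities sum exactly to their centralized counterparts, this distributed step reproduces the centralized AMP iteration, which is why compressing the communicated vectors (as discussed above) is the main remaining cost.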
Vector Approximate Message Passing for the Generalized Linear Model
The generalized linear model (GLM), where a random vector x is
observed through a noisy, possibly nonlinear, function of a linear transform
output z = Ax, arises in a range of applications such
as robust regression, binary classification, quantized compressed sensing,
phase retrieval, photon-limited imaging, and inference from neural spike
trains. When A is large and i.i.d. Gaussian, the generalized
approximate message passing (GAMP) algorithm is an efficient means of MAP or
marginal inference, and its performance can be rigorously characterized by a
scalar state evolution. For general A, though, GAMP can
misbehave. Damping and sequential updating help to robustify GAMP, but their
effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for
additive white Gaussian noise channels. VAMP extends AMP's guarantees from
i.i.d. Gaussian A to the larger class of rotationally invariant
A. In this paper, we show how VAMP can be extended to the GLM.
Numerical experiments show that the proposed GLM-VAMP is much more robust to
ill-conditioning in A than damped GAMP.
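To illustrate the matrix class in question, here is one way to sample a rotationally invariant matrix with a prescribed condition number: A = U diag(s) V^T with random orthogonal factors. The helper name is hypothetical, not code from the paper; it simply generates the kind of ill-conditioned test matrix on which the abstract compares GLM-VAMP against damped GAMP.

```python
import numpy as np

def rotationally_invariant_matrix(m, n, cond, rng):
    """Sample A = U diag(s) V^T with random orthogonal U, V and
    geometrically spaced singular values of condition number `cond`."""
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random orthogonal U
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal V
    k = min(m, n)
    s = np.logspace(0, -np.log10(cond), k)             # from 1 down to 1/cond
    S = np.zeros((m, n))
    S[:k, :k] = np.diag(s)
    return U @ S @ V.T
```

Setting `cond` to 1 recovers (a scaled) partial-orthogonal case, while large values of `cond` produce the ill-conditioned regime where plain GAMP tends to diverge.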