Scampi: a robust approximate message-passing framework for compressive imaging
Reconstruction of images from noisy linear measurements is a core problem in
image processing, for which convex optimization methods based on total
variation (TV) minimization have been the long-standing state-of-the-art. We
present an alternative probabilistic reconstruction procedure based on
approximate message-passing, Scampi, which operates in the compressive regime,
where the inverse imaging problem is underdetermined. While the proposed method
is related to the recently proposed GrAMPA algorithm of Borgerding, Schniter,
and Rangan, we further develop the probabilistic approach to compressive
imaging by introducing an expectation-maximization learning of model
parameters, making Scampi robust to model uncertainties. Additionally, our
numerical experiments indicate that Scampi can provide reconstruction
performance superior to both GrAMPA as well as convex approaches to TV
reconstruction. Finally, through exhaustive best-case experiments, we show that
in many cases the maximal performance of both Scampi and convex TV can be quite
close, even though the approaches are a priori distinct. The theoretical reasons
for this correspondence remain an open question. Nevertheless, the proposed
algorithm remains more practical, as it requires far less parameter tuning to
perform optimally.
Comment: Presented at the 2015 International Meeting on High-Dimensional Data
Driven Science, Kyoto, Japan.
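Scampi itself is not reproduced here, but the approximate message-passing family it builds on can be illustrated compactly. The sketch below runs textbook AMP with a soft-thresholding denoiser on a synthetic underdetermined problem; all dimensions, the i.i.d. Gaussian sensing model, and the threshold rule are illustrative assumptions, not the Scampi algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20                        # signal length, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # noiseless, underdetermined (m < n)

def soft(u, t):
    """Soft-thresholding denoiser used inside the AMP iteration."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(50):
    tau = np.linalg.norm(z) / np.sqrt(m)      # empirical effective-noise estimate
    x_new = soft(x + A.T @ z, tau)            # denoise the pseudo-data
    onsager = np.count_nonzero(x_new) / m     # <eta'> / (m/n): Onsager correction
    z = y - A @ x_new + onsager * z           # corrected residual
    x = x_new

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The Onsager term is what distinguishes AMP from plain iterative thresholding: it decorrelates the residual from the current estimate, which is what makes the scalar effective-noise estimate `tau` valid.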
Vector Approximate Message Passing for the Generalized Linear Model
The generalized linear model (GLM), where a random vector x is
observed through a noisy, possibly nonlinear, function of a linear transform
output z = Ax, arises in a range of applications such
as robust regression, binary classification, quantized compressed sensing,
phase retrieval, photon-limited imaging, and inference from neural spike
trains. When A is large and i.i.d. Gaussian, the generalized
approximate message passing (GAMP) algorithm is an efficient means of MAP or
marginal inference, and its performance can be rigorously characterized by a
scalar state evolution. For general A, though, GAMP can
misbehave. Damping and sequential-updating help to robustify GAMP, but their
effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for
additive white Gaussian noise channels. VAMP extends AMP's guarantees from
i.i.d. Gaussian A to the larger class of rotationally invariant
A. In this paper, we show how VAMP can be extended to the GLM.
Numerical experiments show that the proposed GLM-VAMP is much more robust to
ill-conditioning in A than damped GAMP.
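The applications listed in this abstract differ only in the output channel applied to z = Ax. A minimal sketch of how one observation model covers several of them (dimensions and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 300
A = rng.standard_normal((m, n)) / np.sqrt(n)   # linear transform
x = rng.standard_normal(n)                     # random signal vector
z = A @ x                                      # linear transform output z = Ax

# Different (possibly nonlinear, noisy) output channels give different GLMs:
y_robust = z + rng.standard_t(df=2, size=m)         # heavy-tailed noise: robust regression
y_1bit = np.sign(z + 0.1 * rng.standard_normal(m))  # 1-bit quantized compressed sensing
y_class = (z > 0).astype(int)                       # binary classification labels
y_phase = np.abs(z)                                 # magnitude only: phase retrieval
```

Inference algorithms such as GAMP or the proposed GLM-VAMP only need the scalar likelihood p(y_i | z_i) of the chosen channel; the linear part A is handled identically in every case.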
Sparse Analysis Recovery via Iterative Cosupport Detection Estimation
The cosparse analysis model (CAM) provides a new signal processing paradigm for recovering cosparse signals with respect to a given analysis operator from undersampled linear measurements, in the context of the emerging theory of compressed sensing (CS). Sparse analysis recovery (cosparse recovery) is a key problem raised by this paradigm. In this paper, we propose a new family of analysis pursuit algorithms for the sparse analysis recovery problem when the signals obey the cosparse analysis model, termed iterative cosupport detection estimation (ICDE). ICDE is an algorithmic framework that alternates between detecting a cosupport set of the unknown true signal and estimating the underlying signal by solving a truncated analysis pursuit problem on the detected cosupport. Further, we propose effective implementations of ICDE equipped with an efficient thresholding strategy for cosupport detection. Empirical comparisons show that ICDE is favorable relative to state-of-the-art sparse analysis recovery algorithms. Source code of ICDE has been made publicly available on GitHub: https://github.com/songhp/ICDE.
Funding: Beijing Natural Science Foundation (BNSF) under Grant No. 4194076, the Natural Science Foundation of Jiangsu Province under Grant No. BK20170558, and the China Scholarship Council (CSC, No. 202008320094).
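The detect-then-estimate alternation described above can be sketched in a few lines. The code below is a hypothetical simplification, not the authors' released ICDE implementation: it uses a 1-D finite-difference analysis operator, detects the cosupport by keeping the rows with the smallest analysis coefficients, and enforces that cosupport through a heavily weighted least-squares problem standing in for the truncated analysis pursuit step.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 60, 45
Omega = np.diff(np.eye(n), axis=0)        # 1-D finite-difference analysis operator
x_true = np.repeat(rng.standard_normal(4), n // 4)  # piecewise constant => cosparse
M = rng.standard_normal((m, n)) / np.sqrt(m)        # undersampled measurements
y = M @ x_true

x = np.linalg.lstsq(M, y, rcond=None)[0]  # initial minimum-norm estimate
for _ in range(15):
    # Step 1: cosupport detection -- keep rows with the smallest analysis coefficients
    coeffs = np.abs(Omega @ x)
    cosupport = coeffs <= np.quantile(coeffs, 0.85)
    # Step 2: signal estimation with the detected cosupport enforced as a
    # heavily weighted soft constraint (a stand-in for truncated analysis pursuit)
    weight = 1e3
    Astack = np.vstack([M, weight * Omega[cosupport]])
    bstack = np.concatenate([y, np.zeros(cosupport.sum())])
    x = np.linalg.lstsq(Astack, bstack, rcond=None)[0]

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The design point is that each half-step helps the other: a better signal estimate sharpens the analysis coefficients used for detection, and a better cosupport constrains the next estimate more accurately.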
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
On the Error in Phase Transition Computations for Compressed Sensing
Evaluating the statistical dimension is a common tool to determine the
asymptotic phase transition in compressed sensing problems with Gaussian
ensemble. Unfortunately, the exact evaluation of the statistical dimension is
very difficult and it has become standard to replace it with an upper-bound. To
ensure that this technique is suitable, [1] has introduced an upper-bound on
the gap between the statistical dimension and its approximation. In this work,
we first show that in some low-dimensional models, such as total variation and
analysis minimization, the error bound in [1] becomes excessively large.
Next, we develop a new error bound which significantly improves the estimation
gap compared to [1]. In particular, unlike the bound in [1] that is not
applicable to settings with overcomplete dictionaries, our bound exhibits a
decaying behavior in such cases.
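As a concrete illustration of the quantity being approximated, the sketch below numerically evaluates the standard upper bound on the statistical dimension of the l1 descent cone at a k-sparse point, which is the kind of bound at issue in [1]; the integration and search grids are arbitrary illustrative choices.

```python
import numpy as np

def l1_stat_dim_bound(n, k):
    """Upper bound on the statistical dimension of the l1 descent cone at a
    k-sparse point in R^n:
        inf_{tau >= 0}  k*(1 + tau^2) + (n - k) * E[(|g| - tau)_+^2],
    with g ~ N(0, 1), evaluated by grid search over tau."""
    t = np.linspace(0.0, 10.0, 20001)          # integration grid for |g|
    dt = t[1] - t[0]
    phi = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
    best = np.inf
    for tau in np.linspace(0.0, 5.0, 501):
        tail = 2.0 * np.sum(np.maximum(t - tau, 0.0)**2 * phi) * dt
        best = min(best, k * (1.0 + tau**2) + (n - k) * tail)
    return best

n, k = 1000, 100
frac = l1_stat_dim_bound(n, k) / n   # rough fraction of Gaussian measurements needed
print(frac)
```

Dividing the bound by n gives the normalized location of the phase transition: the number of Gaussian measurements at which l1 recovery of a 10%-sparse vector switches from failure to success with high probability.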