Orthonormal Expansion l1-Minimization Algorithms for Compressed Sensing
Compressed sensing aims at reconstructing sparse signals from a significantly
reduced number of samples, and a popular reconstruction approach is
ℓ1-norm minimization. In this correspondence, a method called orthonormal
expansion is presented to reformulate the basis pursuit problem for noiseless
compressed sensing. Two algorithms are proposed based on convex optimization:
one exactly solves the problem and the other is a relaxed version of the first
one. The latter can be considered a modified iterative soft thresholding
algorithm and is easy to implement. Numerical simulation shows that, in dealing
with noise-free measurements of sparse signals, the relaxed version is
accurate, fast, and competitive with recent state-of-the-art algorithms. Its
practical application is demonstrated in a more general case where signals of
interest are approximately sparse and measurements are contaminated with noise.
Comment: 7 pages, 2 figures, 1 table
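For concreteness, here is a minimal NumPy sketch of plain iterative soft thresholding, the baseline that the abstract's relaxed algorithm modifies; the exact ONE-L1 update is not reproduced, and the step size, regularization weight, and iteration count are illustrative choices:

```python
import numpy as np

def soft_threshold(x, tau):
    """Complex-safe soft thresholding (proximal operator of the l1 norm)."""
    mag = np.abs(x)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * x, 0)

def ista(A, y, lam=0.01, step=None, n_iter=500):
    """Baseline iterative soft thresholding for sparse recovery from y = A x.

    A    : (m, n) measurement matrix
    y    : (m,) measurements
    lam  : l1 regularization weight (illustrative default)
    step : gradient step size; defaults to 1 / ||A||_2^2 for convergence
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(n_iter):
        # gradient step on the residual, then shrink toward sparsity
        x = soft_threshold(x + step * A.conj().T @ (y - A @ x), step * lam)
    return x
```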
On Phase Transition of Compressed Sensing in the Complex Domain
The phase transition is a performance measure of the sparsity-undersampling
tradeoff in compressed sensing (CS). This letter reports our first observation
and evaluation of an empirical phase transition of the ℓ1-minimization
approach to complex-valued CS (CVCS), which is positioned well above the
known phase transition of real-valued CS in the phase plane. This result
can be considered an extension of the existing phase transition theory of
block-sparse CS (BSCS) based on the universality argument, since the CVCS
problem does not meet the condition required by the phase transition theory of
BSCS, yet its observed phase transition coincides with that of BSCS. Our result
is obtained by applying the recently developed ONE-L1 algorithms to the
empirical evaluation of the phase transition of CVCS.
Comment: 4 pages, 3 figures
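An empirical phase transition of this kind is typically measured by a Monte Carlo sweep over the phase plane of undersampling delta = m/N and sparsity rho = k/m. A minimal sketch, assuming any ℓ1 recovery routine (such as one of the ONE-L1 algorithms) is supplied as a callable; the grid, trial count, and tolerance below are illustrative, not the letter's settings:

```python
import numpy as np

def empirical_phase_transition(solver, N=128, grid=12, trials=20, tol=1e-3,
                               complex_valued=True, seed=0):
    """Estimate the success probability per (delta, rho) cell by solving
    random instances with `solver(A, y)` and checking exact recovery."""
    rng = np.random.default_rng(seed)
    deltas = np.linspace(0.1, 0.9, grid)   # undersampling fractions m/N
    rhos = np.linspace(0.1, 0.9, grid)     # sparsity fractions k/m
    success = np.zeros((grid, grid))
    for i, delta in enumerate(deltas):
        m = max(1, int(round(delta * N)))
        for j, rho in enumerate(rhos):
            k = max(1, int(round(rho * m)))
            for _ in range(trials):
                A = rng.standard_normal((m, N)) / np.sqrt(m)
                x0 = np.zeros(N, dtype=complex if complex_valued else float)
                support = rng.choice(N, size=k, replace=False)
                vals = rng.standard_normal(k)
                if complex_valued:
                    A = (A + 1j * rng.standard_normal((m, N)) / np.sqrt(m)) / np.sqrt(2)
                    vals = vals + 1j * rng.standard_normal(k)
                x0[support] = vals
                x_hat = solver(A, A @ x0)
                err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
                success[i, j] += err < tol
    return success / trials  # empirical recovery probability per cell
```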
Message Passing Algorithms for Compressed Sensing
Compressed sensing aims to undersample certain high-dimensional signals, yet
accurately reconstruct them by exploiting signal characteristics. Accurate
reconstruction is possible when the object to be recovered is sufficiently
sparse in a known basis. Currently, the best known sparsity-undersampling
tradeoff is achieved when reconstructing by convex optimization, which is
expensive in important large-scale applications. Fast iterative thresholding
algorithms have been intensively studied as alternatives to convex optimization
for large-scale problems. Unfortunately, known fast algorithms offer
substantially worse sparsity-undersampling tradeoffs than convex optimization.
We introduce a simple, costless modification to iterative thresholding that
makes the sparsity-undersampling tradeoff of the new algorithms equivalent to
that of the corresponding convex optimization procedures. The new
iterative-thresholding algorithms are inspired by belief propagation in
graphical models. Our empirical measurements of the sparsity-undersampling
tradeoff for the new algorithms agree with theoretical calculations. We show
that a state evolution formalism correctly derives the true
sparsity-undersampling tradeoff. There is a surprising agreement between
earlier calculations based on random convex polytopes and this new, apparently
very different theoretical formalism.
Comment: 6-page paper + 9 pages of supplementary information, 13 EPS figures.
Submitted to Proc. Natl. Acad. Sci. USA
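A minimal real-valued sketch of the iteration the abstract describes: iterative soft thresholding plus the Onsager reaction term that distinguishes approximate message passing (AMP) from plain thresholding. The threshold schedule below is a common heuristic, not necessarily the paper's tuning:

```python
import numpy as np

def soft(u, tau):
    """Soft-thresholding denoiser eta(u; tau)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def amp_l1(A, y, alpha=1.5, n_iter=30):
    """AMP for l1 recovery from y = A x, A of shape (m, n)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)  # heuristic threshold
        x_new = soft(x + A.T @ z, theta)
        # Onsager term: fraction of active coordinates times the old residual.
        # This single extra term is the "costless modification" relative to
        # plain iterative thresholding.
        z = y - A @ x_new + z * (np.count_nonzero(x_new) / m)
        x = x_new
    return x
```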
DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks
In this paper we develop a novel computational sensing framework for sensing
and recovering structured signals. When trained on a set of representative
signals, our framework learns to take undersampled measurements and recover
signals from them using a deep convolutional neural network. In other words, it
learns a transformation from the original signals to a near-optimal number of
undersampled measurements and the inverse transformation from measurements to
signals. This is in contrast to traditional compressive sensing (CS) systems
that use random linear measurements and convex optimization or iterative
algorithms for signal recovery. We compare our new framework with
ℓ1-minimization from the phase-transition point of view and demonstrate
that it outperforms ℓ1-minimization in regions of the phase transition
plot where ℓ1-minimization cannot recover the exact solution. In
addition, we experimentally demonstrate how learning measurements enhances the
overall recovery performance, speeds up training of the recovery framework, and
leads to fewer parameters to learn.
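A minimal PyTorch sketch of the idea: a trainable measurement operator and a convolutional decoder optimized jointly, so the network learns both the transformation to undersampled measurements and its inverse. The layer sizes, signal length, and training loop are illustrative stand-ins, not DeepCodec's actual architecture:

```python
import torch
import torch.nn as nn

class LearnedSensingRecovery(nn.Module):
    """Jointly learned sensing + recovery: a linear measurement layer
    (n -> m, m < n) followed by a small 1-D convolutional decoder."""
    def __init__(self, n=256, m=64):
        super().__init__()
        self.sense = nn.Linear(n, m, bias=False)   # learned measurement matrix
        self.decode = nn.Sequential(
            nn.Linear(m, n),                       # lift back to signal length
            nn.Unflatten(1, (1, n)),               # add channel dim for conv
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, 5, padding=2),
            nn.Flatten(1),
        )

    def forward(self, x):
        return self.decode(self.sense(x))

# Training step sketch: minimizing reconstruction error end to end trains the
# measurement operator and the decoder together.
model = LearnedSensingRecovery()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 256)          # stand-in batch of training signals
loss = nn.functional.mse_loss(model(x), x)
opt.zero_grad(); loss.backward(); opt.step()
```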
Compressive Parameter Estimation via Approximate Message Passing
The literature on compressive parameter estimation has been mostly focused on the use of sparsity dictionaries that encode a discretized sampling of the parameter space; these dictionaries, however, suffer from coherence issues that must be controlled for successful estimation. To bypass such issues with discretization, we propose the use of statistical parameter estimation methods within the Approximate Message Passing (AMP) algorithm for signal recovery. Our method leverages the recently proposed use of custom denoisers in place of the usual thresholding steps (which act as denoisers for sparse signals) in AMP. We introduce the design of analog denoisers that are based on statistical parameter estimation algorithms, and we focus on two commonly used examples: frequency estimation and bearing estimation, coupled with the Root MUSIC estimation algorithm. We first analyze the performance of the proposed analog denoiser for signal recovery, and then link the performance in signal estimation to that of parameter estimation. Numerical experiments show significant improvements in estimation performance over previously proposed approaches for compressive parameter estimation.
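A minimal sketch of the structure this abstract builds on: AMP with a pluggable denoiser in place of soft thresholding. A Root-MUSIC-based parameter-estimation denoiser would be passed in as `denoise` (it is not implemented here), and the Monte Carlo divergence estimate used for the Onsager term is a standard trick for black-box denoisers rather than the paper's own derivation:

```python
import numpy as np

def amp_with_denoiser(A, y, denoise, sigma_scale=1.0, n_iter=30, rng=None):
    """AMP where `denoise(r, sigma)` is any estimator mapping a noisy
    pseudo-signal r to a cleaned one, e.g. a parameter-estimation method."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        sigma = sigma_scale * np.linalg.norm(z) / np.sqrt(m)  # effective noise
        r = x + A.T @ z                                       # pseudo-data
        x_new = denoise(r, sigma)
        # Monte Carlo estimate of the denoiser's divergence for the
        # Onsager correction (finite-difference probe in a random direction).
        eps = np.linalg.norm(r) / 1000 + 1e-12
        probe = rng.standard_normal(n)
        div = probe @ (denoise(r + eps * probe, sigma) - x_new) / eps
        z = y - A @ x_new + z * (div / m)
        x = x_new
    return x
```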