Compressed Sensing over $\ell_p$-balls: Minimax Mean Square Error
We consider the compressed sensing problem, where the object $x_0 \in \mathbb{R}^N$
is to be recovered from incomplete measurements $y = A x_0 + z$; here the
sensing matrix $A$ is an $n \times N$ random matrix with iid Gaussian entries
and $n < N$. A popular method of sparsity-promoting reconstruction is
$\ell_1$-penalized least-squares reconstruction (aka LASSO, Basis Pursuit).
It is currently popular to consider the strict sparsity model, where the
object is nonzero in only a small fraction of entries. In this paper, we
instead consider the much more broadly applicable $\ell_p$-sparsity model,
where $x_0$ is sparse in the sense of having $\ell_p$ norm bounded by
$\xi \cdot N^{1/p}$ for some fixed $\xi > 0$ and $0 < p \le 1$.
We study an asymptotic regime in which $n$ and $N$ both tend to infinity with
limiting ratio $n/N = \delta \in (0,1)$, both in the noisy ($z \neq 0$) and
noiseless ($z = 0$) cases. Under weak assumptions on $x_0$, we are able to
precisely evaluate the worst-case asymptotic minimax mean-squared
reconstruction error (AMSE) for $\ell_1$-penalized least-squares: min over
penalization parameters, max over $\ell_p$-sparse objects $x_0$. We exhibit the
asymptotically least-favorable object (hardest sparse signal to recover) and
the maximin penalization.
Our explicit formulas unexpectedly involve quantities appearing classically
in statistical decision theory. Occurring in the present setting, they reflect
a deeper connection between $\ell_1$-penalized minimization and scalar soft
thresholding. This connection, which follows from earlier work of the authors
and collaborators on the AMP iterative thresholding algorithm, is carefully
explained.
Our approach also gives precise results under weak-$\ell_p$ ball coefficient
constraints, as we show here.
Comment: 41 pages, 11 pdf figures.
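The connection between $\ell_1$-penalized reconstruction and scalar soft thresholding via the AMP algorithm can be sketched as follows. This is a generic AMP recursion, not the paper's minimax-tuned version: the adaptive threshold rule $\alpha \|z\|/\sqrt{n}$ and the parameter `alpha` are common heuristic choices, labeled as assumptions here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Scalar soft thresholding: eta(x; tau) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amp_lasso(A, y, alpha=2.0, n_iter=50):
    """Generic AMP recursion for l1-penalized least squares.

    A is an (n, N) sensing matrix with iid N(0, 1/n) entries, y the
    measurements. The threshold alpha * ||z|| / sqrt(n) is a heuristic
    rule, not the minimax threshold analyzed in the paper.
    """
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)
        # Each iteration reduces to scalar soft thresholding of a
        # pseudo-observation x + A^T z.
        x_new = soft_threshold(x + A.T @ z, theta)
        # Onsager correction: (number of nonzeros in the estimate) / n.
        b = np.count_nonzero(x_new) / n
        z = y - A @ x_new + b * z
        x = x_new
    return x
```

The Onsager term `b * z` is what distinguishes AMP from plain iterative soft thresholding; it makes the pseudo-observations behave like the true signal plus iid Gaussian noise, which is what links the matrix problem to scalar thresholding.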
Atomic norm denoising with applications to line spectral estimation
Motivated by recent work on atomic norms in inverse problems, we propose a
new approach to line spectral estimation that provides theoretical guarantees
for the mean-squared-error (MSE) performance in the presence of noise and
without knowledge of the model order. We propose an abstract theory of
denoising with atomic norms and specialize this theory to provide a convex
optimization problem for estimating the frequencies and phases of a mixture of
complex exponentials. We show that the associated convex optimization problem
can be solved in polynomial time via semidefinite programming (SDP). We also
show that the SDP can be approximated by an l1-regularized least-squares
problem that achieves nearly the same error rate as the SDP but can scale to
much larger problems. We compare both SDP and l1-based approaches with
classical line spectral analysis methods and demonstrate that the SDP
outperforms the l1 optimization which outperforms MUSIC, Cadzow's, and Matrix
Pencil approaches in terms of MSE over a wide range of signal-to-noise ratios.
Comment: 27 pages, 10 figures. A preliminary version of this work appeared in
the Proceedings of the 49th Annual Allerton Conference in September 2011.
Numerous numerical experiments were added to this version in accordance with
suggestions by an anonymous reviewer.
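The l1-regularized approximation mentioned above can be sketched by discretizing the frequency line into a fine grid and solving a sparse regression over the resulting dictionary of candidate sinusoids. This is a minimal ISTA-based sketch, not the authors' exact formulation; the grid size, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def line_spectral_l1(y, grid_size, lam, n_iter=50):
    """Approximate atomic-norm denoising by l1-regularized least squares
    on a frequency grid, solved with ISTA.

    y: (n,) noisy samples of a mixture of complex exponentials.
    grid_size: number of candidate frequencies (at least n; finer grids
    reduce gridding error). lam: regularization weight.
    """
    n = len(y)
    freqs = np.arange(grid_size) / grid_size
    # Dictionary of candidate complex sinusoids, one column per frequency.
    F = np.exp(2j * np.pi * np.outer(np.arange(n), freqs)) / np.sqrt(n)
    L = np.linalg.norm(F, 2) ** 2        # Lipschitz constant of the gradient
    c = np.zeros(grid_size, dtype=complex)
    for _ in range(n_iter):
        grad = F.conj().T @ (F @ c - y)
        u = c - grad / L
        # Complex soft thresholding: shrink magnitudes toward zero.
        mag = np.abs(u)
        c = np.where(mag > lam / L, (1 - lam / (L * mag + 1e-12)) * u, 0)
    return freqs, c
```

Frequencies are then read off from the support of the recovered coefficient vector; the SDP formulation in the paper avoids the gridding error that this approximation introduces.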
Approximate Message Passing-based Compressed Sensing Reconstruction with Generalized Elastic Net Prior
In this paper, we study the compressed sensing reconstruction problem with a generalized elastic net prior (GENP), where a sparse signal is sampled via a noisy underdetermined linear observation system and an additional initial estimate of the signal (the GENP) is available during reconstruction. We first incorporate the GENP into the LASSO and approximate message passing (AMP) frameworks, denoted GENP-LASSO and GENP-AMP, respectively. We then focus on GENP-AMP and investigate its parameter selection, state evolution, and noise-sensitivity analysis. A practical parameterless version of GENP-AMP is also developed, which does not need to know the sparsity of the unknown signal or the variance of the GENP. Simulation results with 1-D data and two different imaging applications are presented to demonstrate the efficiency of the proposed schemes.
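One plausible way to incorporate an initial estimate into the LASSO, in the spirit of GENP-LASSO, is to add a quadratic penalty pulling the solution toward that estimate. The objective below is our reading of the idea, not the paper's exact formulation, and the weighting scheme is an assumption.

```python
import numpy as np

def genp_lasso(A, y, x_init, lam, mu, n_iter=500):
    """Illustrative GENP-LASSO-style solver via proximal gradient descent.

    Assumed objective (the exact weighting in the paper may differ):
        minimize 0.5*||y - A x||^2 + lam*||x||_1 + 0.5*mu*||x - x_init||^2
    where x_init is the initial estimate supplied by the GENP.
    """
    x = x_init.copy()
    # Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(A, 2) ** 2 + mu
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + mu * (x - x_init)
        u = x - grad / L
        # Proximal step for the l1 penalty: soft thresholding.
        x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
    return x
```

With `mu = 0` this reduces to plain LASSO via ISTA; a nonzero `mu` trades off fidelity to the measurements against fidelity to the prior estimate, which is the core tension the GENP-AMP analysis quantifies.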
Optimal Phase Transitions in Compressed Sensing
Compressed sensing deals with efficient recovery of analog signals from
linear encodings. This paper presents a statistical study of compressed sensing
by modeling the input signal as an i.i.d. process with known distribution.
Three classes of encoders are considered, namely optimal nonlinear, optimal
linear and random linear encoders. Focusing on optimal decoders, we investigate
the fundamental tradeoff between measurement rate and reconstruction fidelity
gauged by error probability and noise sensitivity in the absence and presence
of measurement noise, respectively. The optimal phase transition threshold is
determined as a functional of the input distribution and compared to suboptimal
thresholds achieved by popular reconstruction algorithms. In particular, we
show that Gaussian sensing matrices incur no penalty on the phase transition
threshold with respect to optimal nonlinear encoding. Our results also provide
a rigorous justification of previous results based on replica heuristics in the
weak-noise regime.
Comment: to appear in IEEE Transactions on Information Theory.
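The paper's phase transitions concern optimal decoders and are derived analytically, but the notion of a threshold in the measurement rate can be illustrated empirically with a suboptimal l1 decoder. The sketch below (all parameters and the FISTA solver are illustrative choices, not the paper's method) measures the success rate of noiseless l1 recovery at a given undersampling ratio and sparsity fraction.

```python
import numpy as np

def fista_l1(A, y, lam=1e-3, n_iter=500):
    """FISTA for l1-penalized least squares, used here as a stand-in
    decoder; the paper's optimal decoders are information-theoretic."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    v, t = x.copy(), 1.0
    for _ in range(n_iter):
        u = v - A.T @ (A @ v - y) / L
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        v = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

def empirical_success_rate(delta, rho, N=100, trials=5, tol=0.05, seed=0):
    """Fraction of noiseless trials in which l1 decoding recovers x_0,
    at measurement rate delta = n/N and sparsity fraction rho = k/n."""
    rng = np.random.default_rng(seed)
    n = int(delta * N)
    k = max(1, int(rho * n))
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((n, N)) / np.sqrt(n)
        x0 = np.zeros(N)
        x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
        x_hat = fista_l1(A, A @ x0)
        successes += np.linalg.norm(x_hat - x0) < tol * np.linalg.norm(x0)
    return successes / trials
```

Sweeping `(delta, rho)` over a grid and plotting the success rate traces out the empirical phase-transition curve of the l1 decoder, which can then be compared against the optimal thresholds characterized in the paper.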