Applications of sparse approximation in communications
Sparse approximation problems abound in scientific, mathematical, and engineering applications. These problems are defined by two competing notions: we approximate a signal vector as a linear combination of elementary atoms, and we require that the approximation be both as accurate and as concise as possible. We introduce two natural and direct applications of these problems, and of algorithmic solutions to them, in communications. We do so by constructing enhanced codebooks from base codebooks, and we show that these enhanced codebooks can be decoded in the presence of Gaussian noise. For MIMO wireless communication channels, we construct simultaneous sparse approximation problems and demonstrate that our algorithms can both decode the transmitted signals and estimate the channel parameters.
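Sparse approximation problems of this kind are commonly attacked with greedy pursuit methods. As an illustration only (the abstract does not name the authors' exact decoder, and the dictionary sizes below are arbitrary choices), here is a minimal Orthogonal Matching Pursuit sketch:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns (atoms) of A
    whose span approximates y, refitting by least squares at each step."""
    n = A.shape[1]
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # select the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares refit on all atoms chosen so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# toy demo: a 2-sparse combination of 50 random unit-norm atoms in R^20
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = A @ x_true
x_hat = omp(A, y, k=2)
```

The returned approximation is k-sparse by construction, and the least-squares refit guarantees the residual never grows from one iteration to the next.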
Mixed Operators in Compressed Sensing
Applications of compressed sensing motivate the possibility of using
different operators to encode and decode a signal of interest. Since it is
clear that the operators cannot be too different, we can view the discrepancy
between the two matrices as a perturbation. The stability of L1-minimization
and greedy algorithms to recover the signal in the presence of additive noise
is by now well-known. Recently, however, these methods have been analyzed with noise in the measurement matrix itself, which generates a multiplicative
noise term. This new framework of generalized perturbations (i.e., both
additive and multiplicative noise) extends the prior work on stable signal
recovery from incomplete and inaccurate measurements of Candes, Romberg and Tao
using Basis Pursuit (BP), and of Needell and Tropp using Compressive Sampling
Matching Pursuit (CoSaMP). We show, under reasonable assumptions, that the
stability of the reconstructed signal by both BP and CoSaMP is limited by the
noise level in the observation. Our analysis extends easily to arbitrary greedy methods.

Comment: CISS 2010 (44th Annual Conference on Information Sciences and Systems)
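To make the additive-vs-multiplicative distinction concrete, the following sketch (an illustration of the phenomenon, not the paper's analysis; the dimensions, noise scale, and oracle decoder are arbitrary choices) encodes with the true matrix A but decodes with a perturbed copy A + E, so the reconstruction error is driven by the perturbation acting on the signal rather than by an additive noise floor:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 15, 30
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns

support = [4, 11]                        # true support (oracle-known here)
x_true = np.zeros(n)
x_true[support] = [1.0, -2.0]
y = A @ x_true                           # encoded with the true operator A

eps = 1e-3                               # multiplicative-noise level (arbitrary)
E = eps * rng.standard_normal((m, n))
A_hat = A + E                            # the decoder only sees the perturbed matrix

# oracle decoder: least squares on the true support, but using A_hat
coef, *_ = np.linalg.lstsq(A_hat[:, support], y, rcond=None)
err = np.linalg.norm(coef - x_true[support])
# err scales like eps * ||x_true|| (times a conditioning factor),
# i.e., the error is multiplicative in the signal, not an additive offset
```

Halving `eps` roughly halves `err`, which is the signature of a multiplicative noise term.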
The Restricted Isometry Property of Subsampled Fourier Matrices
A matrix satisfies the restricted isometry property of order k with constant δ if it preserves the ℓ2 norm of all k-sparse vectors up to a factor of 1 ± δ. We prove that a matrix obtained by randomly sampling rows from an N × N Fourier matrix satisfies the restricted isometry property of order k with a fixed δ with high probability. This improves on Rudelson and Vershynin (Comm. Pure Appl. Math., 2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).

Comment: 16 pages
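A quick empirical illustration of the near-isometry being claimed (sizes and the sampling ratio here are arbitrary, and spot-checking random sparse vectors is of course far weaker than the uniform guarantee the paper proves):

```python
import numpy as np

rng = np.random.default_rng(2)
N, q, k = 64, 32, 4                       # ambient dim, sampled rows, sparsity

F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary N x N DFT matrix
rows = rng.choice(N, size=q, replace=False)
A = F[rows] * np.sqrt(N / q)              # rescale so E||Ax||^2 = ||x||^2

# spot-check near-isometry on random k-sparse vectors
ratios = []
for _ in range(200):
    x = np.zeros(N)
    idx = rng.choice(N, size=k, replace=False)
    x[idx] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
ratios = np.asarray(ratios)
delta_emp = float(np.max(np.abs(ratios**2 - 1.0)))  # empirical RIP-style constant
```

Because F is unitary and we keep half its rows (rescaled by √2), the squared ratio is always between 0 and 2, and for spread-out spectra it concentrates near 1, which is what `delta_emp` measures.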
Support Recovery of Sparse Signals
We consider the problem of exact support recovery of sparse signals via noisy
measurements. The main focus is the sufficient and necessary conditions on the
number of measurements for support recovery to be reliable. By drawing an
analogy between the problem of support recovery and the problem of channel
coding over the Gaussian multiple access channel, and exploiting mathematical
tools developed for the latter problem, we obtain an information theoretic
framework for analyzing the performance limits of support recovery. Sharp
sufficient and necessary conditions on the number of measurements in terms of
the signal sparsity level and the measurement noise level are derived.
Specifically, when the number of nonzero entries is held fixed, the exact
asymptotics of the number of measurements needed for support recovery are derived.
When the number of nonzero entries grows in certain regimes, we obtain
sufficient conditions tighter than existing results. In addition, we show that
the proposed methodology can handle a variety of sparse signal recovery models,
demonstrating its potential as an effective analytical tool.

Comment: 33 pages
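The quantity being analyzed, exact support recovery from noisy measurements, can be illustrated with even the crudest estimator. The sketch below (an illustration only; the estimator, dimensions, and noise level are arbitrary choices, not the paper's setup) picks the support as the k largest correlations |Aᵀy|, which becomes reliable once the number of measurements m is large relative to the sparsity and noise:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 20, 3
true_support = {2, 7, 13}

def correlate_and_pick(m, sigma):
    """Estimate the support as the indices of the k largest |A^T y|.
    A crude estimator, but enough to see reliability improve with m."""
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
    x = np.zeros(n)
    x[sorted(true_support)] = [1.0, -1.0, 1.5]
    y = A @ x + sigma * rng.standard_normal(m)      # noisy measurements
    scores = np.abs(A.T @ y)
    return set(np.argsort(scores)[-k:].tolist())

# with many measurements and mild noise, the correct support emerges
est = correlate_and_pick(m=400, sigma=0.01)
```

Shrinking `m` toward the information-theoretic limits studied in the abstract makes this estimator (and any estimator) start to fail.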
Highly Robust Error Correction by Convex Programming
This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors).
We show that if one encodes the information as Ax where A ∈ ℝ^(m × n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
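The linear-programming decoder amounts to ℓ1-regression: minimize ‖y − Ax‖₁ over x, which is robust to a fraction of arbitrarily large errors. A minimal sketch (the dimensions, corruption pattern, and the use of `scipy.optimize.linprog` are illustrative choices, not the paper's exact setup):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m = 10, 40                        # message length, codeword length (m > n)
A = rng.standard_normal((m, n))      # random coding matrix
x_true = rng.standard_normal(n)

y = A @ x_true
corrupt = [5, 17, 33]                # a few entries hit by arbitrary gross errors
y[corrupt] += np.array([10.0, -7.0, 25.0])

# LP decoder: minimize ||y - Ax||_1 over x.
# Variables z = [x (n entries), t (m entries)]; minimize sum(t)
# subject to -t <= y - Ax <= t componentwise.
c = np.concatenate([np.zeros(n), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[A, -I],            #  Ax - t <=  y
                 [-A, -I]])          # -Ax - t <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_hat = res.x[:n]
```

With only a few gross errors and ample redundancy (m = 4n here), the ℓ1 decoder typically recovers x exactly, whereas ordinary least squares would be thrown off by the large corruptions.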