    Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions

    We develop a robust uncertainty principle for finite signals in $\mathbb{C}^N$ which states that for almost all subsets $T, W$ of $\{0, \dots, N-1\}$ such that $|T| + |W| \sim (\log N)^{-1/2} N$, there is no signal $f$ supported on $T$ whose discrete Fourier transform is supported on $W$. In fact, we can make this uncertainty principle quantitative, in the sense that if $f$ is supported on $T$, then only a small percentage of the energy (less than half, say) of its Fourier transform is concentrated on $W$. As an application of this quantitative robust uncertainty principle (QRUP), we consider the problem of decomposing a signal into a sparse superposition of spikes and complex sinusoids. We show that if a generic signal $f$ has a decomposition using spike and frequency locations in $T$ and $W$ respectively, obeying $|T| + |W| \le C (\log N)^{-1/2} N$, then this is the unique sparsest possible decomposition (all other decompositions have more non-zero terms). In addition, if $|T| + |W| \le C (\log N)^{-1} N$, then this sparsest decomposition can be found by solving a convex optimization problem.
    Comment: 25 pages, 9 figures
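
    A minimal sketch of this decomposition problem (my own illustration, not the authors' code; the dictionary construction and the cvxpy solver are assumptions) poses Basis Pursuit over the concatenated spike/Fourier dictionary:

```python
import numpy as np
import cvxpy as cp

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix; columns are sinusoids
D = np.hstack([np.eye(N), F])            # spikes + complex sinusoids, N x 2N

# Synthesize a signal from a few spike and frequency locations.
rng = np.random.default_rng(0)
x_true = np.zeros(2 * N, dtype=complex)
x_true[rng.choice(2 * N, size=4, replace=False)] = rng.standard_normal(4)
f = D @ x_true

# Basis Pursuit: the convex surrogate for the sparsest decomposition.
x = cp.Variable(2 * N, complex=True)
cp.Problem(cp.Minimize(cp.norm1(x)), [D @ x == f]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```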

    Uncertainty Relations for Shift-Invariant Analog Signals

    The past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting, in which the goal is to decompose a finite-length vector over a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases simultaneously. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by exhibiting a bandlimited lowpass train that achieves it. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.
    Comment: Accepted to IEEE Trans. on Inform. Theory
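
    For reference, the finite-dictionary notion of coherence that this paper adapts to SI representations can be computed in a few lines of numpy (a sketch; the function name is mine):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct, normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.conj().T @ Dn)
    np.fill_diagonal(G, 0.0)   # ignore each column's inner product with itself
    return G.max()
```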

    The Sparsity Gap: Uncertainty Principles Proportional to Dimension

    In an incoherent dictionary, most signals that admit a sparse representation admit a unique sparse representation. In other words, there is no way to express the signal without using strictly more atoms. This work demonstrates that sparse signals typically enjoy a higher privilege: each nonoptimal representation of the signal requires far more atoms than the sparsest representation, unless it contains many of the same atoms as the sparsest representation. One impact of this finding is to confer a certain degree of legitimacy on the particular atoms that appear in a sparse representation. This result can also be viewed as an uncertainty principle for random sparse signals over an incoherent dictionary.
    Comment: 6 pages. To appear in the Proceedings of the 44th Ann. IEEE Conf. on Information Sciences and Systems

    Channel Protection: Random Coding Meets Sparse Channels

    Multipath interference is a ubiquitous phenomenon in modern communication systems. The conventional way to compensate for this effect is to equalize the channel, estimating its impulse response from a set of transmitted training symbols. The primary drawback of this approach is that it can be unreliable when the channel is changing rapidly. In this paper, we show that randomly encoding the signal can protect it against channel uncertainty when the channel is sparse. Before transmission, the signal is mapped into a slightly longer codeword using a random matrix. From the received signal, we are able to simultaneously estimate the channel and recover the transmitted signal. We discuss two schemes for the recovery, both of which exploit the sparsity of the underlying channel. We show that if the channel impulse response is sufficiently sparse, the transmitted signal can be recovered reliably.
    Comment: To appear in the proceedings of the 2009 IEEE Information Theory Workshop (Taormina)
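
    As a simplified sketch of one ingredient of such schemes (my own illustration; the paper's two recovery schemes estimate the channel and signal jointly, while this sketch assumes the random codeword is known and estimates only the sparse channel):

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n, m, L = 64, 96, 32                           # signal length, codeword length, channel length

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random coding matrix
c = A @ rng.standard_normal(n)                 # slightly longer random codeword

h_true = np.zeros(L)                           # sparse channel: few significant taps
h_true[rng.choice(L, size=3, replace=False)] = rng.standard_normal(3)
y = np.convolve(c, h_true)                     # received signal

# With c known, channel estimation is linear sparse recovery: y = C h,
# where C is the convolution matrix built from the codeword.
C = toeplitz(np.r_[c, np.zeros(L - 1)], np.r_[c[0], np.zeros(L - 1)])
h = cp.Variable(L)
cp.Problem(cp.Minimize(cp.norm1(h)), [C @ h == y]).solve()
print("channel estimation error:", np.linalg.norm(h.value - h_true))
```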

    Uncertainty Relations and Sparse Signal Recovery for Pairs of General Signal Sets

    We present an uncertainty relation for the representation of signals in two different general (possibly redundant or incomplete) signal sets. This uncertainty relation is relevant for the analysis of signals containing two distinct features, each of which can be described sparsely in a suitable general signal set. Furthermore, the new uncertainty relation is shown to lead to improved sparsity thresholds for recovery of signals that are sparse in general dictionaries. Specifically, our results improve on the well-known $(1+1/d)/2$ threshold for dictionaries with coherence $d$ by up to a factor of two. Furthermore, we provide probabilistic recovery guarantees for pairs of general dictionaries that also allow us to understand which parts of a general dictionary one needs to randomize over to "weed out" the sparsity patterns that prohibit breaking the square-root bottleneck.
    Comment: Submitted to IEEE Trans. Inf. Theory
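
    As a worked instance of the classical threshold that this result improves on (my example, not from the paper): for the spike/Fourier pair in $\mathbb{C}^N$ the coherence is $d = 1/\sqrt{N}$, so the $(1+1/d)/2$ threshold evaluates to $(1+\sqrt{N})/2$:

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT
D = np.hstack([np.eye(N), F])            # spike/Fourier dictionary
G = np.abs(D.conj().T @ D)
np.fill_diagonal(G, 0.0)
d = G.max()                              # coherence: 1/sqrt(64) = 0.125
print(d, (1 + 1 / d) / 2)                # classical threshold: 4.5 atoms
```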

    Decoding by Linear Programming

    This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector $f \in \mathbb{R}^n$ from corrupted measurements $y = Af + e$. Here, $A$ is an $m$ by $n$ (coding) matrix and $e$ is an arbitrary and unknown vector of errors. Is it possible to recover $f$ exactly from the data $y$? We prove that under suitable conditions on the coding matrix $A$, the input $f$ is the unique solution to the $\ell_1$-minimization problem (where $\|x\|_{\ell_1} := \sum_i |x_i|$) $$\min_{g \in \mathbb{R}^n} \|y - Ag\|_{\ell_1}$$ provided that the support of the vector of errors is not too large: $\|e\|_{\ell_0} := |\{i : e_i \neq 0\}| \le \rho \cdot m$ for some $\rho > 0$. In short, $f$ can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; $f$ is recovered exactly even in situations where a significant fraction of the output is corrupted.
    Comment: 22 pages, 4 figures, submitted
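
    The decoding program is easy to state concretely; a minimal sketch (mine, with cvxpy assumed as the solver; the paper recasts the same problem as a linear program):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 128, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)   # coding matrix
f = rng.standard_normal(n)                     # input vector

e = np.zeros(m)                                # arbitrary errors on ~10% of outputs
bad = rng.choice(m, size=m // 10, replace=False)
e[bad] = 10 * rng.standard_normal(bad.size)
y = A @ f + e

# l1 decoding: minimize || y - A g ||_1 over g.
g = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(y - A @ g))).solve()
print("decoding error:", np.linalg.norm(g.value - f))
```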

    Optimally Sparse Frames

    Frames have established themselves as a means to derive redundant, yet stable decompositions of a signal for analysis or transmission, while also promoting sparse expansions. However, when the signal dimension is large, computing the frame measurements of a signal typically requires a large number of additions and multiplications, which makes a frame decomposition intractable in applications with a limited computing budget. To address this problem, in this paper we focus on frames in finite-dimensional Hilbert spaces and introduce sparsity for such frames as a new paradigm. In our terminology, a sparse frame is a frame whose elements have a sparse representation in an orthonormal basis, thereby enabling low-complexity frame decompositions. To give a precise meaning to optimality, we take as our sparsity measure the total number of vectors from this orthonormal basis needed to expand all frame vectors. We then analyze the recently introduced Spectral Tetris algorithm for the construction of unit norm tight frames and prove that the tight frames generated by this algorithm are in fact optimally sparse with respect to the standard unit vector basis. Finally, we show that even the generalization of Spectral Tetris to the construction of unit norm frames associated with a given frame operator produces optimally sparse frames.
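
    The paper's sparsity measure and the tightness property it targets are straightforward to check numerically; a minimal sketch (function names and the toy frame are mine, and this does not implement Spectral Tetris itself):

```python
import numpy as np

def frame_sparsity(Phi, tol=1e-12):
    """Total number of nonzero coefficients of the frame vectors (rows of Phi)
    in the standard unit vector basis: the paper's sparsity measure."""
    return int(np.sum(np.abs(Phi) > tol))

def is_tight(Phi, tol=1e-10):
    """A frame is tight iff its frame operator Phi^T Phi is a multiple of I."""
    S = Phi.T @ Phi
    c = np.trace(S) / S.shape[0]
    return np.allclose(S, c * np.eye(S.shape[0]), atol=tol)

# A toy unit norm tight frame of 4 vectors in R^2 with only 4 nonzeros total.
Phi = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
print(is_tight(Phi), frame_sparsity(Phi))   # True 4
```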

    Geometric approach to error correcting codes and reconstruction of signals

    We develop an approach through geometric functional analysis to error correcting codes and to the reconstruction of signals from few linear measurements. An error correcting code encodes an n-letter word x into an m-letter word y in such a way that x can be decoded correctly when any r letters of y are corrupted. We prove that most linear orthogonal transformations Q from R^n into R^m form efficient and robust error correcting codes over the reals. The decoder (which corrects the corrupted components of y) is the metric projection onto the range of Q in the L_1 norm. An equivalent problem arises in signal processing: how can one reconstruct a signal that belongs to a small class from few linear measurements? We prove that for most sets of Gaussian measurements, all signals of small support can be exactly reconstructed by L_1 norm minimization. This is a substantial improvement on recent results of Donoho and of Candes and Tao. An equivalent problem in combinatorial geometry is the existence of a polytope with a fixed number of facets and a maximal number of lower-dimensional faces. We prove that most sections of the cube form such polytopes.
    Comment: 17 pages, 3 figures
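
    A minimal sketch of the signal reconstruction statement (my own illustration; the measurement ensemble sizes and the cvxpy solver are assumptions), recovering a small-support signal from Gaussian measurements by L_1 minimization:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m, k = 200, 80, 8                           # ambient dim, measurements, support size
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement ensemble

x_true = np.zeros(n)                           # signal of small support
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# L_1 norm minimization subject to the measurements.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
print("reconstruction error:", np.linalg.norm(x.value - x_true))
```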