
    Uncertainty Relations for Shift-Invariant Analog Signals

    The past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely-generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited lowpass train that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem. (Comment: Accepted to IEEE Trans. on Inform. Theory.)
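The finite-dimensional coherence notion that this abstract generalizes can be illustrated with a small sketch. This is standard material (the spike/Fourier pair and the Elad–Bruckstein bound), not the paper's SI-space definition: for the identity and orthonormal DFT bases in dimension n, the mutual coherence is 1/sqrt(n), and a signal with a nonzeros in one basis and b in the other must satisfy a + b >= 2/mu.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest absolute inner product between columns of two orthonormal bases."""
    G = Phi.conj().T @ Psi
    return np.max(np.abs(G))

n = 16
I = np.eye(n)                            # spike (identity) basis
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal DFT basis
mu = mutual_coherence(I, F)              # equals 1/sqrt(n) = 0.25 for this pair
# Finite uncertainty principle (Elad-Bruckstein): if a signal has a nonzeros
# in basis I and b nonzeros in basis F, then a + b >= 2/mu = 2*sqrt(n).
```

For n = 16 the bound says the sparsity counts in the two bases must sum to at least 8, which the analog SI-space result in the paper mirrors with a coherence adapted to infinite dictionaries.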

    Self-Dictionary Sparse Regression for Hyperspectral Unmixing: Greedy Pursuit and Pure Pixel Search are Related

    This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in that it allows simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise, including its identification of the (unknown) number of endmembers, under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.
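The self-dictionary greedy pursuit described above can be sketched in a few lines; this is a generic simultaneous-OMP loop with the data matrix as its own dictionary, under stated assumptions (noiseless pixels, known number of endmembers), not the paper's exact algorithm or its noise-robust stopping rule.

```python
import numpy as np

def somp_self_dictionary(Y, k):
    """Greedy SD-MMV sketch: the columns of Y (pixels) serve as the dictionary.
    Each step picks the pixel most correlated with the current residual across
    all measurement vectors, then deflates by orthogonal projection."""
    R = Y.copy()
    support = []
    for _ in range(k):
        # Row i = correlations of pixel i with every residual column.
        scores = np.linalg.norm(Y.T @ R, axis=1)
        scores[support] = -np.inf          # never re-select a chosen pixel
        support.append(int(np.argmax(scores)))
        A = Y[:, support]
        P = A @ np.linalg.pinv(A)          # projector onto span of chosen pixels
        R = Y - P @ Y                      # residual in the orthogonal complement
    return support

# Toy example: two pure pixels (columns 0 and 1) plus one 50/50 mixture.
E = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # endmember signatures
S = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]])     # abundances
Y = E @ S
picked = somp_self_dictionary(Y, 2)                   # recovers the pure pixels
```

The deflation step is what ties this to SPA: both methods repeatedly select an extreme column and project it out, which is the link the paper makes precise.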

    Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression

    The linear model, in which a set of observations is assumed to be given by a linear combination of columns of a matrix, has long been the mainstay of the statistics and signal processing literature. One particular challenge for inference under linear models is understanding the conditions on the dictionary under which reliable inference is possible. This challenge has attracted renewed attention in recent years since many modern inference problems deal with the "underdetermined" setting, in which the number of observations is much smaller than the number of columns in the dictionary. This paper makes several contributions for this setting when the set of observations is given by a linear combination of a small number of groups of columns of the dictionary, termed the "block-sparse" case. First, it specifies conditions on the dictionary under which most block subdictionaries are well conditioned. This result is fundamentally different from prior work on block-sparse inference because (i) it provides conditions that can be explicitly computed in polynomial time, (ii) the given conditions translate into near-optimal scaling of the number of columns of the block subdictionaries as a function of the number of observations for a large class of dictionaries, and (iii) it suggests that the spectral norm and the quadratic-mean block coherence of the dictionary (rather than the worst-case coherences) fundamentally limit the scaling of dimensions of the well-conditioned block subdictionaries. Second, this paper investigates the problems of block-sparse recovery and block-sparse regression in underdetermined settings. Near-optimal block-sparse recovery and regression are possible for certain dictionaries as long as the dictionary satisfies easily computable conditions and the coefficients describing the linear combination of groups of columns can be modeled through a mild statistical prior. (Comment: 39 pages, 3 figures. A revised and expanded version of the paper published in IEEE Transactions on Information Theory (DOI: 10.1109/TIT.2015.2429632); this revision includes corrections in the proofs of some of the results.)
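The block coherences contrasted in the abstract (worst-case versus quadratic-mean) can be computed explicitly, which is the point of contribution (i). The sketch below uses a common textbook definition (spectral norm of the cross-Gram blocks, scaled by block size) and assumes equal-size consecutive blocks; the paper's exact quadratic-mean definition may differ in detail.

```python
import numpy as np

def block_coherences(D, block_size):
    """Worst-case and quadratic-mean inter-block coherence of a dictionary
    whose columns form consecutive blocks of equal size (an assumption made
    here for simplicity). Both quantities are polynomial-time computable."""
    n, p = D.shape
    m = block_size
    blocks = [D[:, i * m:(i + 1) * m] for i in range(p // m)]
    vals = []
    for i in range(len(blocks)):
        for j in range(len(blocks)):
            if i != j:
                # Spectral norm of the cross-Gram block, normalized by block size.
                vals.append(np.linalg.norm(blocks[i].T @ blocks[j], 2) / m)
    worst_case = max(vals)
    quadratic_mean = float(np.sqrt(np.mean(np.square(vals))))
    return worst_case, quadratic_mean
```

For an orthonormal dictionary split into blocks, both coherences are zero; for redundant dictionaries the quadratic mean can be far below the worst case, which is why conditions based on it admit larger well-conditioned subdictionaries.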

    A Compact Formulation for the ℓ2,1 Mixed-Norm Minimization Problem

    Parameter estimation from multiple measurement vectors (MMVs) is a fundamental problem in many signal processing applications, e.g., spectral analysis and direction-of-arrival estimation. Recently, this problem has been addressed using prior information in the form of a jointly sparse signal structure. A prominent approach for exploiting joint sparsity considers mixed-norm minimization, in which, however, the problem size grows with the number of measurements and with the desired resolution. In this work we derive an equivalent, compact reformulation of the ℓ2,1 mixed-norm minimization problem which provides new insights into the relation between different existing approaches for jointly sparse signal reconstruction. The reformulation builds upon a compact parameterization, which models the row-norms of the sparse signal representation as parameters of interest, resulting in a significant reduction of the MMV problem size. Given the sparse vector of row-norms, the jointly sparse signal can be computed from the MMVs in closed form. For the special case of uniform linear sampling, we present an extension of the compact formulation for gridless parameter estimation by means of semidefinite programming. Furthermore, we derive in this case from our compact problem formulation the exact equivalence between the ℓ2,1 mixed-norm minimization and the atomic-norm minimization. Additionally, for the case of irregular sampling or a large number of samples, we present a low complexity, grid-based implementation based on the coordinate descent method.
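The ℓ2,1 mixed norm and the row-norms it penalizes are concrete objects worth pinning down. The sketch below shows the norm itself and its proximal operator (row-wise soft thresholding), a standard building block for solving such problems; it is generic background, not the paper's compact reformulation.

```python
import numpy as np

def l21_norm(X):
    """Sum of the l2 norms of the rows of X: small only when most rows are
    entirely zero, which is exactly the joint-sparsity prior on an MMV model."""
    return float(np.sum(np.linalg.norm(X, axis=1)))

def prox_l21(X, t):
    """Proximal operator of t * ||.||_{2,1}: shrink each row toward zero by t
    in l2 norm, zeroing rows whose norm is below t (row-wise soft threshold)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

The vector of row norms computed inside `prox_l21` is precisely the quantity the paper promotes to a parameter of interest: once those norms are known, the abstract notes, the jointly sparse signal follows from the MMVs in closed form.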

    A Joint Doppler Frequency Shift and DOA Estimation Algorithm Based on Sparse Representations for Colocated TDM-MIMO Radar

    We address the problem of joint Doppler frequency shift (DFS) and direction of arrival (DOA) estimation for colocated TDM-MIMO radar, a technology applied to autocruise and safe-driving systems in recent years. The signal model of colocated TDM-MIMO radar with few transmitter or receiver channels is described, and the “time-varying steering vector” model is established. Inspired by sparse representation theory, we present a new processing scheme for joint DFS and DOA estimation based on this input signal model. An overcomplete redundant dictionary for the angle-frequency space is constructed in order to obtain sparse representations of the input signal. The SVD-SR algorithm (joint estimation based on sparse representations, using SVD decomposition with the OMP algorithm) and the improved M-FOCUSS algorithm (which combines classical M-FOCUSS with a joint sparse recovery spectrum) are applied to the new signal model to solve the multiple measurement vectors (MMV) problem. The improved M-FOCUSS algorithm is more robust than the SVD-SR and JS-SR algorithms in terms of coherent-signal resolution and estimation accuracy. Finally, simulation experiments show that the proposed algorithms and schemes are feasible and can be applied in practice.
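For reference, the classical M-FOCUSS iteration that the paper's improved variant builds on can be sketched as follows. This is the textbook noiseless form (Cotter et al.), with illustrative defaults for `p` and the iteration count; it is not the paper's improved algorithm, which additionally exploits a joint sparse recovery spectrum.

```python
import numpy as np

def m_focuss(A, Y, p=0.8, n_iter=30, eps=1e-10):
    """Classical M-FOCUSS sketch for the MMV problem Y = A X with jointly
    row-sparse X. Each pass reweights the columns of A by the current row
    norms of X (raised to 1 - p/2) and solves a weighted least-squares step,
    driving most rows of X to zero."""
    X = np.linalg.pinv(A) @ Y              # minimum-norm initialization
    for _ in range(n_iter):
        c = np.linalg.norm(X, axis=1)      # row norms of the current estimate
        w = np.power(np.maximum(c, eps), 1.0 - p / 2.0)
        AW = A * w                         # scale column j of A by w[j]
        X = w[:, None] * (np.linalg.pinv(AW) @ Y)   # X = W (A W)^+ Y
    return X

# Toy MMV problem: 8 candidate angle-frequency atoms, 4 channels, 3 snapshots,
# with only rows 1 and 5 of the coefficient matrix active.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
X_true = np.zeros((8, 3))
X_true[1] = [1.0, 2.0, 3.0]
X_true[5] = [-1.0, 1.0, 0.5]
Y = A @ X_true
X_hat = m_focuss(A, Y)
```

In the paper's setting, the columns of `A` would be drawn from the overcomplete angle-frequency dictionary, so a recovered active row indexes a (DOA, DFS) pair jointly.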