    Block-Sparse Recovery via Convex Optimization

    Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal/image processing and computer vision applications, such as face recognition, we consider the block-sparse recovery problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two classes of non-convex optimization programs, which aim to minimize the number of nonzero coefficient blocks and the number of nonzero reconstructed vectors from the blocks, respectively. Since both classes of problems are NP-hard, we propose convex relaxations and derive conditions under which each class of convex programs is equivalent to the original non-convex formulation. Our conditions depend on the notions of mutual and cumulative subspace coherence of a dictionary, which are natural generalizations of the existing notions of mutual and cumulative coherence. We evaluate the performance of the proposed convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem improves the state-of-the-art results by 10% with only 25% of the training data. Comment: IEEE Transactions on Signal Processing
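    The convex relaxation in this line of work replaces the count of nonzero blocks with a sum of block norms, which can be solved by a proximal-gradient (block soft-thresholding) loop. The sketch below is a generic group-lasso solver under that relaxation, not the paper's exact programs; the block layout and parameter values are illustrative.

    import numpy as np

    def block_soft_threshold(c, tau):
        """Prox of tau*||.||_2: shrink a block toward zero, zeroing it if its norm <= tau."""
        n = np.linalg.norm(c)
        return np.zeros_like(c) if n <= tau else (1.0 - tau / n) * c

    def block_sparse_recover(B, blocks, y, lam, n_iter=500):
        """Minimize 0.5*||y - B x||^2 + lam * sum_i ||x[block_i]||_2 by proximal gradient.
        `blocks` is a list of index arrays partitioning the columns of B."""
        x = np.zeros(B.shape[1])
        step = 1.0 / np.linalg.norm(B, 2) ** 2   # 1/L, L = squared largest singular value
        for _ in range(n_iter):
            z = x - step * (B.T @ (B @ x - y))   # gradient step on the data-fit term
            for idx in blocks:
                z[idx] = block_soft_threshold(z[idx], step * lam)
            x = z
        return x

    # Toy example: 4 blocks of 8 atoms each in R^16; the signal uses only the second block.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((16, 32))
    B /= np.linalg.norm(B, axis=0)
    blocks = [np.arange(i, i + 8) for i in range(0, 32, 8)]
    x_true = np.zeros(32); x_true[8:16] = rng.standard_normal(8)
    x_hat = block_sparse_recover(B, blocks, B @ x_true, lam=1e-3)
    print([round(float(np.linalg.norm(x_hat[idx])), 3) for idx in blocks])  # energy concentrates on the second block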

    Greed is good: algorithmic results for sparse approximation

    This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
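    OMP itself is compact enough to state in a few lines: greedily pick the atom most correlated with the residual, then refit by least squares on the atoms chosen so far. A minimal NumPy sketch, assuming a dictionary with unit-norm columns:

    import numpy as np

    def omp(D, y, k):
        """Orthogonal matching pursuit: select k atoms of D to approximate y.
        D: (m, n) dictionary with unit-norm columns; returns (support, coefficients)."""
        residual = y.copy()
        support = []
        for _ in range(k):
            # Atom most correlated with the current residual.
            j = int(np.argmax(np.abs(D.T @ residual)))
            support.append(j)
            # Least-squares refit on the chosen atoms (the "orthogonal" step).
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        return support, coef

    For an exactly k-sparse input over a sufficiently incoherent dictionary, the article's condition guarantees that this loop recovers the optimal support.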

    A Multiple Hypothesis Testing Approach to Low-Complexity Subspace Unmixing

    Subspace-based signal processing traditionally focuses on problems involving a few subspaces. Recently, a number of problems in different application areas have emerged that involve a significantly larger number of subspaces relative to the ambient dimension. It becomes imperative in such settings to first identify a smaller set of active subspaces that contribute to the observation before further processing can be carried out. This problem of identifying a small set of active subspaces among a huge collection of subspaces from a single (noisy) observation in the ambient space is termed subspace unmixing. This paper formally poses the subspace unmixing problem under the parsimonious subspace-sum (PS3) model, discusses connections of the PS3 model to problems in wireless communications, hyperspectral imaging, high-dimensional statistics and compressed sensing, and proposes a low-complexity algorithm, termed marginal subspace detection (MSD), for subspace unmixing. The MSD algorithm turns the subspace unmixing problem for the PS3 model into a multiple hypothesis testing (MHT) problem, and the analysis in the paper helps control the family-wise error rate of this MHT problem at any level α ∈ [0, 1] under two random signal generation models. Some other highlights of the analysis of the MSD algorithm include: (i) it is applicable to an arbitrary collection of subspaces on the Grassmann manifold; (ii) it relies on properties of the collection of subspaces that are computable in polynomial time; and (iii) it allows for linear scaling of the number of active subspaces as a function of the ambient dimension. Finally, numerical results are presented in the paper to better understand the performance of the MSD algorithm. Comment: Submitted for journal publication; 33 pages, 14 figures
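    The marginal-detection idea reduces to one test per subspace: compare the energy of the observation's projection onto each subspace against a threshold chosen to control the family-wise error rate. The sketch below is a toy version; the Bonferroni-style split alpha/N and the threshold_fn parameter are stand-ins for the paper's model-specific thresholds.

    import numpy as np

    def marginal_subspace_detection(bases, y, alpha, threshold_fn):
        """Toy MSD: declare subspace i active when the projection energy of y
        onto it exceeds a per-test threshold.
        bases: list of (d, d_i) matrices with orthonormal columns.
        threshold_fn: maps a per-test level to a threshold (assumed here;
        the paper derives thresholds from its two signal generation models)."""
        level = alpha / len(bases)                 # assumed Bonferroni-style FWER split
        active = []
        for i, U in enumerate(bases):
            stat = np.linalg.norm(U.T @ y) ** 2    # marginal test statistic
            if stat > threshold_fn(level):
                active.append(i)
        return active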

    Learning incoherent dictionaries for sparse approximation using iterative projections and rotations

    This work was supported by a Queen Mary University of London School Studentship, the EU FET-Open project FP7-ICT-225913-SMALL (Sparse Models, Algorithms and Learning for Large-scale data), and a Leadership Fellowship from the UK Engineering and Physical Sciences Research Council (EPSRC).

    Global optimization methods for localization in compressive sensing

    The dissertation discusses compressive sensing and its applications to localization in multiple-input multiple-output (MIMO) radars. Compressive sensing is a paradigm at the intersection of signal processing and optimization. It advocates sensing "sparse" signals (i.e., signals represented using just a few terms from a basis expansion) at a sampling rate much lower than that required by the Nyquist-Shannon sampling theorem (i.e., twice the highest frequency present in the signal of interest). Low-rate sampling reduces implementation constraints and translates into cost savings, since fewer measurements are required. This is particularly true in localization applications, where the number of measurements is commensurate with the number of antenna elements. The theory of compressive sensing provides precise guidance on how the measurements should be acquired and which optimization algorithm should be used for signal recovery.

    The first part of the dissertation addresses the application of compressive sensing to localization in the spatial domain, specifically direction of arrival (DOA) estimation, using MIMO radar. A sparse localization framework is proposed for a MIMO array in which transmit and receive elements are placed at random. This allows for a dramatic reduction in the number of elements needed, while still attaining performance comparable to that of a filled (Nyquist) array. By leveraging properties of structured random matrices, a bound on the coherence of the resulting measurement matrix is obtained, and conditions under which the measurement matrix satisfies the so-called isotropy property are detailed. The coherence and isotropy concepts are used to establish uniform and non-uniform recovery guarantees within the proposed spatial compressive sensing framework. In particular, it is shown that non-uniform recovery is guaranteed if the product of the number of transmit and receive elements, MN (which is also the number of degrees of freedom), scales with K(log G)², where K is the number of targets and G is proportional to the array aperture and determines the angle resolution. In contrast with a filled virtual MIMO array, where the product MN scales linearly with G, the logarithmic dependence on G in the proposed framework supports the high resolution provided by the virtual array aperture while using a small number of MIMO radar elements.

    The second part of the dissertation focuses on the sparse recovery problem at the heart of compressive sensing. An algorithm, dubbed Multi-Branch Matching Pursuit (MBMP), is presented which combines three different paradigms: being a greedy method, it performs iterative signal support estimation; as a rank-aware method, it is able to exploit signal subspace information when multiple snapshots are available; and, as its name foretells, it possesses a multi-branch structure which allows it to trade off performance (e.g., measurements) for computational complexity. A sufficient condition under which MBMP can recover a sparse signal is obtained. This condition, named MB-coherence, is met when the columns of the measurement matrix are sufficiently "incoherent" and the signal-to-noise ratio is sufficiently high. The condition shows that successful recovery with MBMP is guaranteed for dictionaries which do not satisfy previously known conditions (e.g., coherence, cumulative coherence, or the Hanman relaxed coherence). Finally, by leveraging the MBMP algorithm, a framework for target detection from a set of compressive sensing radar measurements is established. The proposed framework does not require any prior information about the targets' scene, and it is competitive with state-of-the-art compressive sensing detection algorithms.
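    The multi-branch idea can be sketched as a tree search that, unlike plain OMP, keeps the top few candidate atoms at every step and returns the candidate support with the smallest residual. This is a simplified single-snapshot illustration of the paradigm, not the dissertation's exact MBMP (which is also rank-aware across multiple snapshots).

    import numpy as np

    def _refit(A, y, support):
        """Least-squares fit of y on the columns in `support`; returns the residual."""
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return y - A[:, support] @ coef

    def multibranch_mp(A, y, k, branches=2):
        """Grow a tree of candidate supports of size k, keeping `branches` best
        atoms per node instead of one. The tree has up to branches**k leaves,
        which is exactly the performance/complexity trade-off the abstract mentions."""
        frontier = [()]
        for _ in range(k):
            children = set()
            for supp in frontier:
                r = _refit(A, y, list(supp)) if supp else y
                corr = np.abs(A.T @ r)
                corr[list(supp)] = -np.inf          # do not re-pick chosen atoms
                for j in np.argsort(corr)[-branches:]:
                    children.add(tuple(sorted(supp + (int(j),))))
            frontier = list(children)               # sorted tuples merge duplicate branches
        return min(frontier, key=lambda s: np.linalg.norm(_refit(A, y, list(s))))

    With branches=1 this collapses to ordinary OMP; increasing branches buys robustness on more coherent dictionaries at exponential cost in k.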

    Compressed sensing for wide-field radio interferometric imaging

    For the next generation of radio interferometric telescopes it is of paramount importance to incorporate wide field-of-view (WFOV) considerations in interferometric imaging, otherwise the fidelity of reconstructed images will suffer greatly. We extend compressed sensing techniques for interferometric imaging to a WFOV and recover images in the spherical coordinate space in which they naturally live, eliminating any distorting projection. The effectiveness of the spread spectrum phenomenon, highlighted recently by one of the authors, is enhanced when going to a WFOV, while sparsity is promoted by recovering images directly on the sphere. Both of these properties act to improve the quality of reconstructed interferometric images. We quantify the performance of compressed sensing reconstruction techniques through simulations, highlighting the superior reconstruction quality achieved by recovering interferometric images directly on the sphere rather than the plane. Comment: 15 pages, 8 figures; replaced to match the version accepted by MNRAS
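    The reconstruction step in pipelines of this kind is typically an ℓ1 synthesis problem (basis pursuit denoising). A generic ISTA loop is sketched below; Phi and Psi are stand-ins for the paper's interferometric measurement and spherical sparsifying operators, which in practice are fast implicit operators rather than dense matrices.

    import numpy as np

    def soft(x, tau):
        """Elementwise soft-thresholding, the prox of tau*||.||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def ista_bpdn(Phi, Psi, y, lam, n_iter=300):
        """Minimize 0.5*||y - Phi Psi a||^2 + lam*||a||_1 over coefficients a,
        then synthesize the image as x = Psi a."""
        A = Phi @ Psi
        a = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/Lipschitz constant of the gradient
        for _ in range(n_iter):
            a = soft(a - step * (A.T @ (A @ a - y)), step * lam)
        return Psi @ a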

    Sparse and spurious: dictionary learning with noise and outliers

    A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have been few theoretical arguments supporting this empirical evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. In this paper, we consider a probabilistic model of sparse signals and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries, noisy signals, and possible outliers, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations. Comment: This is a substantially revised version of a first draft that appeared as a preprint titled "Local stability and robustness of sparse dictionary learning in the presence of noise", http://hal.inria.fr/hal-00737152. IEEE Transactions on Information Theory, Institute of Electrical and Electronics Engineers (IEEE), 2015, pp.2
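    The non-convex procedure in question alternates between sparse coding and a dictionary update. The sketch below is a generic MOD-style alternation with an ISTA coding step, given to make the object of the paper's analysis concrete; it is not the paper's method, which analyzes the optimization landscape rather than proposing an algorithm.

    import numpy as np

    def sparse_code(D, Y, lam, n_iter=100):
        """ISTA step for X in 0.5*||Y - D X||_F^2 + lam*||X||_1."""
        X = np.zeros((D.shape[1], Y.shape[1]))
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        for _ in range(n_iter):
            X = X - step * (D.T @ (D @ X - Y))
            X = np.sign(X) * np.maximum(np.abs(X) - step * lam, 0.0)
        return X

    def learn_dictionary(Y, n_atoms, lam, n_alt=30, seed=0):
        """Alternate sparse coding and a least-squares (MOD-style) dictionary
        update with column renormalization. The loop only reaches a local
        minimum, which is precisely why the landscape around the reference
        dictionary is worth analyzing."""
        rng = np.random.default_rng(seed)
        D = rng.standard_normal((Y.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(n_alt):
            X = sparse_code(D, Y, lam)
            D = Y @ X.T @ np.linalg.pinv(X @ X.T)    # least-squares dictionary update
            D /= np.linalg.norm(D, axis=0) + 1e-12   # keep atoms unit norm
        return D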