
    Nonparametric Simultaneous Sparse Recovery: an Application to Source Localization

    We consider the multichannel sparse recovery problem, where the objective is to recover jointly sparse unknown signal vectors from multiple measurement vectors that are different linear combinations of the same known elementary vectors. Many popular greedy or convex algorithms perform poorly under non-Gaussian heavy-tailed noise or in the face of outliers. In this paper, we propose using mixed ℓ_{p,q} norms on the data-fidelity (residual-matrix) term together with the conventional ℓ_{0,2}-norm constraint on the signal matrix to promote row-sparsity. We devise a greedy pursuit method based on the simultaneous normalized iterative hard thresholding (SNIHT) algorithm. Simulation studies highlight the effectiveness of the proposed approaches in coping with different noise environments (i.i.d., row-i.i.d., etc.) and with outliers. The usefulness of the methods is illustrated in a source localization application with sensor arrays.
    Comment: Paper appears in Proc. European Signal Processing Conference (EUSIPCO'15), Nice, France, Aug 31 -- Sep 4, 2015.

    Multichannel sparse recovery of complex-valued signals using Huber's criterion

    In this paper, we generalize Huber's criterion to the multichannel sparse recovery problem with complex-valued measurements, where the objective is to recover jointly sparse unknown signal vectors from multiple measurement vectors that are different linear combinations of the same known elementary vectors. This requires a careful characterization of robust complex-valued loss functions, as well as of Huber's criterion function for the multivariate sparse regression problem. We devise a greedy method based on the simultaneous normalized iterative hard thresholding (SNIHT) algorithm. Unlike the conventional SNIHT method, our algorithm, referred to as HUB-SNIHT, is robust under heavy-tailed non-Gaussian noise conditions, yet incurs a negligible performance loss compared to SNIHT under Gaussian noise. The usefulness of the method is illustrated in a source localization application with sensor arrays.
    Comment: To appear in CoSeRa'15 (Pisa, Italy, June 16-19, 2015). arXiv admin note: text overlap with arXiv:1502.0244

    Subspace Methods for Joint Sparse Recovery

    We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In the favorable situation, the unknown matrix formed by the jointly sparse signals has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning, which arises with a limited number of measurement vectors or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing methods consistently approaches the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with the subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC yields a computationally efficient algorithm with a performance guarantee, given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal-subspace estimation, which was missing in previous studies of MUSIC.
    Comment: Submitted to IEEE Transactions on Information Theory; revised version.

    An Intelligent Grey Wolf Optimizer Algorithm for Distributed Compressed Sensing

    Distributed Compressed Sensing (DCS) is an important research area within compressed sensing (CS). This paper addresses the DCS problem based on the mixed support model. When solving this problem, previously proposed greedy pursuit algorithms easily fall into suboptimal solutions. In this paper, an intelligent grey wolf optimizer (GWO) algorithm called DCS-GWO is proposed by combining GWO with the q-thresholding algorithm: the grey wolves' positions are initialized using q-thresholding and updated using the GWO update rules. Inheriting the global search ability of GWO, DCS-GWO is efficient at finding the global optimum. Simulation results illustrate that DCS-GWO achieves better recovery performance than previous greedy pursuit algorithms, at the expense of higher computational complexity.