
    Iterative Reweighted Algorithms for Sparse Signal Recovery with Temporally Correlated Source Vectors

    Iterative reweighted algorithms, as a class of algorithms for sparse signal recovery, have been found to perform better than their non-reweighted counterparts. However, for the multiple measurement vector (MMV) problem, existing reweighted algorithms do not account for temporal correlation among the source vectors, and their performance therefore degrades significantly when such correlation is present. In this work we propose an iterative reweighted sparse Bayesian learning (SBL) algorithm that exploits the temporal correlation, and, motivated by it, a strategy to improve existing reweighted $\ell_2$ algorithms for the MMV problem, namely replacing their row norms with a Mahalanobis distance measure. Simulations show that the proposed reweighted SBL algorithm has superior performance and that the proposed improvement strategy is effective for existing reweighted $\ell_2$ algorithms.
    Comment: Accepted by ICASSP 201
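    As a rough illustration of the reweighting strategy just described (a minimal sketch, not the authors' exact algorithm), the snippet below runs an IRLS-style reweighted $\ell_2$ loop for the MMV model $Y = \Phi X$ + noise and swaps the usual row-norm weights for a Mahalanobis measure built from an assumed-known temporal correlation matrix $B$; the function name, the ridge-style update, and all dimensions are illustrative choices.

```python
# Hypothetical sketch: reweighted l2 for the MMV problem Y = Phi @ X + noise, with the
# row-norm weights replaced by a Mahalanobis distance that uses a temporal correlation
# matrix B (assumed known here), in the spirit of the strategy described in the abstract.
import numpy as np

def reweighted_l2_mmv(Phi, Y, B, lam=1e-2, eps=1e-8, n_iter=50):
    """Phi: (m, n) dictionary, Y: (m, L) measurements, B: (L, L) temporal correlation."""
    n = Phi.shape[1]
    B_inv = np.linalg.inv(B)
    w = np.ones(n)                                   # per-row weights
    for _ in range(n_iter):
        # Weighted ridge step: argmin_X ||Y - Phi X||_F^2 + lam * sum_i w_i ||X_i||_2^2
        X = np.linalg.solve(Phi.T @ Phi + lam * np.diag(w), Phi.T @ Y)
        # Reweight with the Mahalanobis row measure sqrt(x_i B^{-1} x_i^T)
        # instead of the plain row norm ||x_i||_2.
        d = np.sqrt(np.einsum('il,lk,ik->i', X, B_inv, X))
        w = 1.0 / (d + eps)
    return X

# Toy usage: 3 active rows out of 60, L = 4 temporally correlated measurement vectors.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 60))
B = 0.9 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))   # AR(1)-style correlation
X_true = np.zeros((60, 4))
X_true[[5, 17, 42]] = rng.multivariate_normal(np.zeros(4), B, 3)
Y = Phi @ X_true + 0.01 * rng.standard_normal((20, 4))
X_hat = reweighted_l2_mmv(Phi, Y, B)
```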

    Compressive Source Separation: Theory and Methods for Hyperspectral Imaging

    With the proliferation of high-resolution data acquisition systems and the global need to lower energy consumption, the development of efficient sensing techniques has become critical. Recently, Compressed Sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images from fewer measurements than the traditional Nyquist sensing approach requires. However, multichannel signals such as hyperspectral images (HSI) have additional structure, such as inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, i.e. the assumption that the multichannel signal is a linear combination of sources, each with its own spectral signature, and propose new sampling schemes exploiting this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to reconstruct both the multichannel signal and its sources. We also propose optimization algorithms and report extensive experiments on our target application, HSI, showing that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.
    Comment: 32 pages
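    As a toy illustration of the linear-mixture assumption underlying these sampling schemes (a simplification, not the paper's actual method), the sketch below shows that when the spectral signatures $H$ are known, per-channel compressive measurements of the mixture can be unmixed directly in the compressed domain, so only the few sources, rather than every channel, need to be reconstructed. The variable names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_chan, k, m = 256, 32, 3, 80          # pixels, channels, sources, measurements per channel

S = rng.random((n_pix, k))                    # flattened source images
H = rng.random((n_chan, k))                   # spectral signatures (assumed known here)
X = S @ H.T                                   # multichannel (hyperspectral) data, pixels x channels

Phi = rng.standard_normal((m, n_pix)) / np.sqrt(m)   # shared compressive sensing operator
Y = Phi @ X                                   # m << n_pix measurements per channel

# Because X = S H^T, unmixing commutes with the sampling: Y pinv(H^T) equals Phi S, so
# only the k compressed sources need to be reconstructed afterwards (e.g. with the
# sparsity priors discussed in the paper; that reconstruction step is omitted here).
S_compressed = Y @ np.linalg.pinv(H.T)
print(np.allclose(S_compressed, Phi @ S))     # True
```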

    On the Sample Complexity of Multichannel Frequency Estimation via Convex Optimization

    The use of multichannel data in line spectral estimation (or frequency estimation) is common for improving estimation accuracy in array processing, structural health monitoring, wireless communications, and more. Recently proposed atomic norm methods have attracted considerable attention due to their provable superiority in accuracy, flexibility, and robustness compared with conventional approaches. In this paper, we analyze atomic norm minimization for multichannel frequency estimation from noiseless compressive data, showing that the sample size per channel that ensures exact estimation decreases as the number of channels increases, under mild conditions. In particular, given $L$ channels, on the order of $K(\log K)\left(1 + \frac{1}{L}\log N\right)$ samples per channel, selected randomly from $N$ equispaced samples, suffice to ensure with high probability exact estimation of $K$ frequencies that are normalized and mutually separated by at least $\frac{4}{N}$. Numerical results are provided corroborating our analysis.
    Comment: 14 pages, double column, to appear in IEEE Trans. Information Theory
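    The semidefinite characterization of the multichannel atomic norm can be prototyped directly; below is a small hedged sketch of the resulting completion problem (toy dimensions, CVXPY with its default solver, and a simple completion-error printout are my own choices; the frequency readout from a Vandermonde decomposition of the Toeplitz block is omitted).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
N, L, K, M = 16, 4, 2, 8                      # full grid, channels, frequencies, samples kept per channel
freqs = np.array([0.1, 0.4])                  # separated by more than 4/N
A = np.exp(2j * np.pi * np.outer(np.arange(N), freqs))
X_true = A @ (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L)))
Omega = sorted(rng.choice(N, size=M, replace=False))   # observed sample indices

# One Hermitian PSD block Q = [[T, X], [X^H, W]] with T constrained to be Toeplitz.
Q = cp.Variable((N + L, N + L), hermitian=True)
T, X, W = Q[:N, :N], Q[:N, N:], Q[N:, N:]
constraints = [Q >> 0]
constraints += [T[i, j] == T[i + 1, j + 1] for i in range(N - 1) for j in range(N - 1)]
constraints += [X[i, :] == X_true[i, :] for i in Omega]
objective = 0.5 * (cp.real(cp.trace(T)) / N + cp.real(cp.trace(W)))
prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve()

# In practice the K frequencies are read off T via a Vandermonde decomposition;
# here we only report how well the unobserved samples were completed.
print(prob.status, np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```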

    Frequency-modulated continuous-wave LiDAR compressive depth-mapping

    We present an inexpensive architecture for converting a frequency-modulated continuous-wave LiDAR system into a compressive-sensing-based depth-mapping camera. Instead of raster scanning to obtain depth maps, compressive sensing is used to significantly reduce the number of measurements. Ideally, our approach requires two difference detectors, but it can operate with only one at the cost of doubling the number of measurements. Due to the large flux entering the detectors, the signal amplification from heterodyne detection, and the effects of background subtraction from compressive sensing, the system can obtain higher signal-to-noise ratios than detector-array-based schemes while scanning a scene faster than is possible through raster scanning. Moreover, we show how a single total-variation minimization and two fast least-squares minimizations, instead of a single complex nonlinear minimization, can efficiently recover high-resolution depth maps with minimal computational overhead. By efficiently storing only $2m$ data points from $m < n$ measurements of an $n$-pixel scene, we can extract depths by solving only two linear equations with efficient convex-optimization methods.
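    A loose sketch of the "two linear problems" structure mentioned above: suppose one measurement stream is proportional to the scene reflectivity and a second to the depth-weighted reflectivity, recover both with a plain ridge solver standing in for the regularized solves, and read the depth map off their pointwise ratio. The pattern choice, solver, and sizes are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 32 * 32, 600                           # scene pixels, measurements (m < n)
r = rng.random(n) + 0.1                       # reflectivity map
d = 1.0 + 4.0 * rng.random(n)                 # depth map

Phi = rng.integers(0, 2, size=(m, n)).astype(float)   # binary illumination patterns
y1 = Phi @ r                                  # stream ~ total return per pattern
y2 = Phi @ (r * d)                            # stream ~ depth-weighted return per pattern
# Only these 2m numbers need to be stored, as in the abstract.

def ridge(A, y, lam=1e-3):
    """Plain ridge least squares, standing in for the regularised solves."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

r_hat = ridge(Phi, y1)                        # first linear problem
rd_hat = ridge(Phi, y2)                       # second linear problem
d_hat = rd_hat / np.clip(r_hat, 1e-6, None)   # pointwise ratio gives the depth estimate
print(np.median(np.abs(d_hat - d)))
```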

    Self-Calibration and Biconvex Compressive Sensing

    The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations $y = DAx$, where both $x$ and the diagonal matrix $D$ (which models the calibration error) are unknown. By "lifting" this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both $x$ and $D$ can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed, and numerical simulations are presented, confirming and complementing our theoretical analysis.
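    To make the lifting step concrete, here is a minimal sketch under the common subspace assumption $D = \mathrm{diag}(Bh)$ with $B$ known: the rank-one product $Z = hx^T$ turns each measurement into a linear constraint, and an $\ell_1$ minimization over $Z$ stands in for SparseLift's recovery. The dimensions, the CVXPY formulation, and the rank-one readout are my own illustrative choices.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
m, n, k, s = 60, 100, 3, 5                    # measurements, signal length, gain-subspace dim, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, k)) / np.sqrt(m)  # known subspace for the calibration gains
h = rng.standard_normal(k)                    # unknown calibration parameters
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = (B @ h) * (A @ x)                         # y = diag(Bh) A x, bilinear in (h, x)

# Lift: with Z = h x^T, every measurement y_i = b_i^T Z a_i becomes linear in Z.
Z = cp.Variable((k, n))
constraints = [cp.sum(cp.multiply(np.outer(B[i], A[i]), Z)) == y[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(Z))), constraints)
prob.solve()

# Read h and x back from the leading rank-one factor of the recovered Z (sign ambiguity remains).
U, sv, Vt = np.linalg.svd(Z.value)
h_hat, x_hat = U[:, 0] * np.sqrt(sv[0]), Vt[0] * np.sqrt(sv[0])
print(np.abs(np.vdot(x_hat, x)) / (np.linalg.norm(x_hat) * np.linalg.norm(x) + 1e-12))
```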