
    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays. Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing.
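
    A minimal sketch of the kind of alternating scheme such sparse errors-in-variables formulations suggest, assuming the simplified objective ||y - (A + E)x||_2^2 + ||E||_F^2 + lam*||x||_1; the function names, the ISTA inner solver, and the parameter defaults are illustrative, not the authors' exact S-TLS algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_tls(A, y, lam=0.1, n_outer=30, n_ista=100):
    """Alternate between a Lasso step in x (via ISTA) and a closed-form
    update of the matrix perturbation E for the simplified objective
        ||y - (A + E) x||_2^2 + ||E||_F^2 + lam * ||x||_1 .
    """
    m, n = A.shape
    x = np.zeros(n)
    E = np.zeros_like(A)
    for _ in range(n_outer):
        # Lasso step with the currently "calibrated" matrix A + E.
        B = A + E
        step = 1.0 / (np.linalg.norm(B, 2) ** 2 + 1e-12)
        for _ in range(n_ista):
            grad = B.T @ (B @ x - y)
            x = soft_threshold(x - step * grad, step * lam)
        # For fixed x, the minimizing perturbation has a closed form.
        r = y - A @ x
        E = np.outer(r, x) / (1.0 + x @ x)
    return x, E
```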

    Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations

    Applying the theory of compressive sensing in practice always requires taking different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), where the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, i.e., the least-squares solution with the locations of some of the largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms differ only in coefficients from the lower bound of oracle recovery for certain signals and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions. Comment: 27 pages, 4 figures, 5 tables.
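
    For concreteness, a minimal CoSaMP-style sketch of a greedy pursuit with replacement; the function name, stopping rule, and parameter defaults are illustrative, and SP and IHT differ in their selection and pruning steps:

```python
import numpy as np

def cosamp(A, y, k, n_iter=20, tol=1e-6):
    """Minimal CoSaMP sketch: recover a k-sparse x from y ~= A @ x,
    where A is assumed to satisfy a suitable restricted isometry property."""
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        # Proxy: correlate the residual with all columns, keep the 2k strongest.
        proxy = A.T @ residual
        omega = np.argpartition(np.abs(proxy), -2 * k)[-2 * k:]
        # Merge the candidates with the current support ("replacement" step).
        support = np.union1d(omega, np.flatnonzero(x))
        # Least-squares estimate restricted to the merged support.
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        # Prune back to the k largest entries and update the residual.
        x = np.zeros(n)
        keep = np.argpartition(np.abs(b), -k)[-k:]
        x[keep] = b[keep]
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x
```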

    Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing

    The random demodulator is a recent compressive sensing architecture providing efficient sub-Nyquist sampling of sparse band-limited signals. The compressive sensing paradigm requires an accurate model of the analog front-end to enable correct signal reconstruction in the digital domain. In practice, hardware devices such as filters deviate from their desired design behavior due to component variations. Existing reconstruction algorithms are sensitive to such deviations, which fall into the more general category of measurement matrix perturbations. This paper proposes a model-based technique that aims to calibrate filter model mismatches to facilitate improved signal reconstruction quality. The mismatch is considered to be an additive error in the discretized impulse response. We identify the error by sampling a known calibrating signal, enabling least-squares estimation of the impulse response error. The error estimate and the known system model are used to calibrate the measurement matrix. Numerical analysis demonstrates the effectiveness of the calibration method even for highly deviating low-pass filter responses. The performance of the proposed method is also compared with a state-of-the-art method based on discrete Fourier transform trigonometric interpolation. Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing.
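
    A hedged sketch of the least-squares calibration idea described above, assuming the calibration measurements depend linearly on the filter impulse response so that an operator C can be built from the known calibrating signal; the function names, interfaces, and the per-tap sensitivity representation are assumptions for illustration, not the paper's code:

```python
import numpy as np

def estimate_impulse_response_error(C, y_cal, h_nominal):
    """Estimate the additive impulse-response error by least squares.

    C          : (M, L) matrix mapping a length-L filter impulse response to
                 the M output samples produced when the *known* calibrating
                 signal is pushed through the (linear) acquisition chain.
    y_cal      : (M,) samples actually measured by the hardware.
    h_nominal  : (L,) designed (nominal) impulse response.
    Returns dh such that h_nominal + dh best explains the measurements.
    """
    residual = y_cal - C @ h_nominal      # part of y_cal the nominal model misses
    dh, *_ = np.linalg.lstsq(C, residual, rcond=None)
    return dh

def calibrate_measurement_matrix(Phi_nominal, Phi_sensitivities, dh):
    """Correct the CS measurement matrix: since it is assumed linear in the
    impulse response, the update is a weighted sum of per-tap sensitivities."""
    return Phi_nominal + sum(d * S for d, S in zip(dh, Phi_sensitivities))
```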

    Signal Recovery in Perturbed Fourier Compressed Sensing

    In many applications in compressed sensing, the measurement matrix is a Fourier matrix, i.e., it measures the Fourier transform of the underlying signal at some specified `base' frequencies $\{u_i\}_{i=1}^M$, where $M$ is the number of measurements. However, due to system calibration errors, the system may measure the Fourier transform at frequencies $\{u_i + \delta_i\}_{i=1}^M$ that are different from the base frequencies, where $\{\delta_i\}_{i=1}^M$ are unknown. Ignoring perturbations of this nature can lead to major errors in signal recovery. In this paper, we present a simple but effective alternating minimization algorithm to recover the perturbations in the frequencies in situ with the signal, which we assume is sparse or compressible in some known basis. In many cases, the perturbations $\{\delta_i\}_{i=1}^M$ can be expressed in terms of a small number of unique parameters $P \ll M$. We demonstrate that in such cases, the method leads to excellent quality results that are several times better than baseline algorithms (which are based on existing off-grid methods in the recent literature on direction of arrival (DOA) estimation, modified to suit the computational problem in this paper). Our results are also robust to noise in the measurement values. We also provide theoretical results for (1) the convergence of our algorithm, and (2) the uniqueness of its solution under some restrictions. Comment: New theoretical results about uniqueness and convergence now included. More challenging experiments now included.
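
    A toy alternating-minimization sketch in the spirit of the approach described above, assuming the signal is sparse in the canonical basis, using a thresholded least-squares stand-in for the sparse-recovery step, and searching each offset over an assumed grid; the paper's actual solver, its parameterization with $P \ll M$ unknowns, and its convergence guarantees are not reproduced here:

```python
import numpy as np

def fourier_matrix(freqs, N):
    """Rows measure a DFT-like transform of a length-N signal at the
    (possibly off-grid) frequencies in `freqs`."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(freqs, n) / N)

def recover_with_frequency_perturbations(y, u, N, k, n_outer=10, delta_grid=None):
    """Alternate between (a) sparse recovery of x for the current frequency
    estimates and (b) a per-measurement 1-D search over the unknown offsets."""
    if delta_grid is None:
        delta_grid = np.linspace(-0.5, 0.5, 51)   # assumed offset search range
    delta = np.zeros(len(u))
    for _ in range(n_outer):
        # (a) Sparse recovery with the current frequency estimates:
        # least squares followed by hard thresholding to the k largest entries.
        A = fourier_matrix(u + delta, N)
        x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
        x = np.zeros(N, dtype=complex)
        keep = np.argpartition(np.abs(x_ls), -k)[-k:]
        x[keep] = x_ls[keep]
        # (b) Refine each offset by minimizing its own measurement residual.
        for i in range(len(u)):
            cands = fourier_matrix(u[i] + delta_grid, N) @ x
            delta[i] = delta_grid[np.argmin(np.abs(y[i] - cands))]
    return x, delta
```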

    Perturbed Orthogonal Matching Pursuit

    Compressive Sensing theory details how a sparsely represented signal in a known basis can be reconstructed with an underdetermined linear measurement model. However, in reality there is a mismatch between the assumed and the actual bases due to factors such as discretization of the parameter space defining basis components, sampling jitter in A/D conversion, and model errors. Due to this mismatch, a signal may not be sparse in the assumed basis, which causes significant performance degradation in sparse reconstruction algorithms. To eliminate the mismatch problem, this paper presents a novel perturbed orthogonal matching pursuit (POMP) algorithm that performs controlled perturbation of selected support vectors to decrease the orthogonal residual at each iteration. Based on detailed mathematical analysis, conditions for successful reconstruction are derived. Simulations show that robust results with much smaller reconstruction errors can be obtained in the case of perturbed bases, as compared to standard sparse reconstruction techniques.
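
    A toy sketch in the spirit of perturbed OMP for a real-valued, parameterized dictionary: standard OMP on the nominal grid, followed by a controlled local perturbation of each selected atom's parameter to further shrink the residual. The atom(theta) interface, the candidate local search, and the step size are assumptions for illustration, not the paper's POMP derivation:

```python
import numpy as np

def pomp(atom, theta_grid, y, k):
    """Greedy selection on a parameterized dictionary with a controlled
    perturbation of each selected atom.

    atom(theta) -> (m,) real-valued dictionary column for parameter theta
    theta_grid  -> assumed coarse grid of parameter values (nominal basis)
    """
    step = 0.1 * (theta_grid[1] - theta_grid[0])   # assumed local search radius
    residual = y.copy()
    thetas, A_cols = [], []
    for _ in range(k):
        # Greedy selection on the nominal (unperturbed) grid.
        D = np.column_stack([atom(t) for t in theta_grid])
        t = theta_grid[np.argmax(np.abs(D.T @ residual))]
        # Controlled perturbation: local search around the selected parameter.
        cands = t + step * np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
        t = cands[np.argmax([np.abs(atom(c) @ residual) for c in cands])]
        thetas.append(t)
        A_cols.append(atom(t))
        # Orthogonal projection onto the span of all selected (perturbed) atoms.
        A = np.column_stack(A_cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return np.array(thetas), coef
```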