
    Signal Recovery in Perturbed Fourier Compressed Sensing

    In many applications of compressed sensing, the measurement matrix is a Fourier matrix, i.e., it measures the Fourier transform of the underlying signal at some specified `base' frequencies $\{u_i\}_{i=1}^M$, where $M$ is the number of measurements. However, due to system calibration errors, the system may measure the Fourier transform at frequencies $\{u_i + \delta_i\}_{i=1}^M$ that differ from the base frequencies, where $\{\delta_i\}_{i=1}^M$ are unknown. Ignoring perturbations of this nature can lead to major errors in signal recovery. In this paper, we present a simple but effective alternating minimization algorithm to recover the perturbations in the frequencies \emph{in situ} with the signal, which we assume is sparse or compressible in some known basis. In many cases, the perturbations $\{\delta_i\}_{i=1}^M$ can be expressed in terms of a small number of unique parameters $P \ll M$. We demonstrate that in such cases, the method leads to excellent quality results that are several times better than baseline algorithms (which are based on existing off-grid methods in the recent literature on direction of arrival (DOA) estimation, modified to suit the computational problem in this paper). Our results are also robust to noise in the measurement values. We also provide theoretical results for (1) the convergence of our algorithm, and (2) the uniqueness of its solution under some restrictions. Comment: New theoretical results about uniqueness and convergence are now included, as are more challenging experiments.
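
    A minimal sketch of the alternating idea described in this abstract, not the authors' implementation: it assumes the signal is sparse in the canonical basis, models measurement $i$ as a sample of the DFT at frequency $u_i + \delta_i$, and alternates an ISTA-style sparse update of the signal with a gradient step on per-measurement perturbations. The paper's parametric case with $P \ll M$ unique parameters would constrain $\delta$ further; the function name, step sizes, and model details below are illustrative assumptions.

```python
import numpy as np

def recover_perturbed_fourier(y, u, n, n_outer=20, n_ista=100, lam=0.05, step=1e-4):
    """Alternating-minimization sketch for perturbed Fourier compressed sensing.

    Assumed model: y_i = sum_k x_k * exp(-2j*pi*(u_i + delta_i)*k/n), with x sparse
    in the canonical basis and delta the unknown frequency perturbations.
    """
    M = len(y)
    k_idx = np.arange(n)
    delta = np.zeros(M)                  # start from the nominal (unperturbed) frequencies
    x = np.zeros(n, dtype=complex)

    def sensing_matrix(d):
        return np.exp(-2j * np.pi * np.outer(u + d, k_idx) / n)

    for _ in range(n_outer):
        A = sensing_matrix(delta)
        # Step 1: sparse update with delta fixed (ISTA on 0.5*||y - A x||^2 + lam*||x||_1).
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the data-fit gradient
        for _ in range(n_ista):
            z = x - (A.conj().T @ (A @ x - y)) / L
            mag = np.abs(z)
            x = np.where(mag > 0,
                         z * np.maximum(mag - lam / L, 0) / np.maximum(mag, 1e-12),
                         0)
        # Step 2: perturbation update with x fixed (one gradient step on ||y - A(delta) x||^2).
        r = A @ x - y                                # residual, shape (M,)
        dA_x = (A * (-2j * np.pi * k_idx / n)) @ x   # d/d(delta_i) of (A(delta) x)_i
        grad = 2.0 * np.real(np.conj(r) * dA_x)
        delta = delta - step * grad
    return x, delta
```
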

    Fast Stochastic Hierarchical Bayesian MAP for Tomographic Imaging

    Any image recovery algorithm attempts to achieve the highest quality reconstruction in a timely manner. The former can be achieved in several ways, among which is the incorporation of Bayesian priors that exploit natural image tendencies to cue in on relevant phenomena. The Hierarchical Bayesian MAP (HB-MAP) is one such approach, known to produce compelling results albeit at a substantial computational cost. We provide further analysis and insight into what makes HB-MAP work. While retaining the proficient nature of HB-MAP's Type-I estimation, we propose a stochastic approximation-based approach to Type-II estimation. The resulting algorithm, fast stochastic HB-MAP (fsHBMAP), requires dramatically fewer operations while retaining high reconstruction quality. We apply our fsHBMAP scheme to the problem of tomographic imaging and demonstrate that fsHBMAP furnishes promising results when compared to many competing methods. Comment: 5 pages, 4 figures, conference (accepted to Asilomar 2017).
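
    The abstract does not spell out the update equations, so the following is only a loose illustration of the "stochastic approximation for Type-II estimation" idea under a toy Gaussian model, not the paper's HB-MAP prior: a Type-I step computes the MAP image for fixed hyperparameters, and a Type-II step updates a prior precision with a Robbins-Monro step computed from a random subsample of coefficients. The prior, the single precision parameter, and the subsample size are all assumptions made for the sketch.

```python
import numpy as np

def fs_hb_map_sketch(y, A, n_iter=50, step0=0.1, rng=None):
    """Toy alternation of Type-I (MAP) and stochastic Type-II (hyperparameter) steps.

    Assumed model: y = A x + noise, x ~ N(0, (1/alpha) I), noise ~ N(0, (1/beta) I).
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    alpha, beta = 1.0, 1.0
    x = np.zeros(n)
    for t in range(1, n_iter + 1):
        # Type-I: MAP estimate of the image for the current hyperparameters.
        x = np.linalg.solve(beta * A.T @ A + alpha * np.eye(n), beta * A.T @ y)
        # Type-II: Robbins-Monro step on alpha from a random subsample of coefficients,
        # standing in for the stochastic-approximation idea in the abstract.
        idx = rng.choice(n, size=max(1, n // 10), replace=False)
        grad_alpha = 0.5 * (len(idx) / alpha - np.sum(x[idx] ** 2))  # subsampled d(log-prior)/d(alpha)
        alpha = max(alpha + (step0 / t) * grad_alpha * (n / len(idx)), 1e-8)
    return x, alpha, beta
```
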

    Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations

    Applying the theory of compressive sensing in practice requires taking different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, i.e., the least-squares solution with the locations of some of the largest-magnitude entries known a priori. The comparison shows that, for certain signals and perturbations, the error bounds of these algorithms differ from the lower bound of oracle recovery only in their coefficients, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions. Comment: 27 pages, 4 figures, 5 tables.
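
    Of the three pursuits named above, iterative hard thresholding is the simplest to sketch. The version below is the standard unperturbed algorithm; the paper's analysis concerns how its error bound degrades when the measurement vector and sensing matrix are contaminated. The step size and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def iht(y, A, s, n_iter=100, step=None):
    """Iterative hard thresholding: gradient step followed by keeping the s largest entries."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative step size
    x = np.zeros(n)
    for _ in range(n_iter):
        z = x + step * A.T @ (y - A @ x)          # gradient step on 0.5*||y - A x||^2
        support = np.argsort(np.abs(z))[-s:]      # re-select the support each iteration ("replacement")
        x = np.zeros(n)
        x[support] = z[support]
    return x
```
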

    Computational Methods for Sparse Solution of Linear Inverse Problems

    The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
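
    As a concrete instance of the greedy class covered by such surveys, here is a bare-bones orthogonal matching pursuit (OMP): at each step it selects the dictionary atom most correlated with the current residual and then re-fits all selected coefficients by least squares. The sparsity level `s` and dictionary `D` are placeholders, and this is a sketch rather than the survey's reference implementation.

```python
import numpy as np

def omp(y, D, s):
    """Orthogonal matching pursuit: greedy atom selection plus least-squares re-fitting."""
    m, n = D.shape
    support, residual = [], y.copy()
    x = np.zeros(n)
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```
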

    Efficient Least-squares Migration with Sparsity Promotion
