    Perturbed Orthogonal Matching Pursuit

    Compressive Sensing theory details how a signal that is sparse in a known basis can be reconstructed from an underdetermined linear measurement model. In practice, however, there is a mismatch between the assumed and the actual bases, due to factors such as the discretization of the parameter space defining the basis components, sampling jitter in A/D conversion, and model errors. Because of this mismatch, a signal may not be sparse in the assumed basis, which causes significant performance degradation in sparse reconstruction algorithms. To address the mismatch problem, this paper presents a novel perturbed orthogonal matching pursuit (POMP) algorithm that performs controlled perturbation of the selected support vectors to decrease the orthogonal residual at each iteration. Conditions for successful reconstruction are derived through detailed mathematical analysis. Simulations show that, in the case of perturbed bases, POMP yields robust results with much smaller reconstruction errors than standard sparse reconstruction techniques.
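
    For illustration, here is a minimal Python sketch of the POMP idea: a standard OMP selection over a discrete grid, followed by a local perturbation of the selected atoms' parameters to shrink the residual. The dictionary of sampled sinusoids parameterized by a single frequency, the grid, the step size, and the coordinate-wise local search are all assumptions of the sketch, not the paper's actual formulation.

    ```python
    import numpy as np

    def atom(freq, n):
        # Unit-norm sampled complex sinusoid; a stand-in for a parameterized basis atom.
        a = np.exp(2j * np.pi * freq * np.arange(n))
        return a / np.sqrt(n)

    def residual_norm(y, freqs):
        # Residual norm after least-squares projection of y onto the chosen atoms.
        A = np.column_stack([atom(f, len(y)) for f in freqs])
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.linalg.norm(y - A @ x)

    def pomp(y, grid, k, step=1e-3, sweeps=10):
        freqs = []
        for _ in range(k):
            # OMP step: pick the grid atom most correlated with the current residual.
            if freqs:
                A = np.column_stack([atom(f, len(y)) for f in freqs])
                r = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            else:
                r = y
            corr = [abs(np.vdot(atom(f, len(y)), r)) for f in grid]
            freqs.append(grid[int(np.argmax(corr))])
            # Perturbation step: nudge each selected parameter while it shrinks the residual.
            for _ in range(sweeps):
                for i in range(len(freqs)):
                    best = residual_norm(y, freqs)
                    for cand in (freqs[i] - step, freqs[i] + step):
                        trial = freqs[:i] + [cand] + freqs[i + 1:]
                        if residual_norm(y, trial) < best:
                            freqs, best = trial, residual_norm(y, trial)
        return freqs
    ```

    A final least-squares fit on the perturbed atoms then yields the coefficient estimates.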

    Greed is Fine: on Finding Sparse Zeros of Hilbert Operators

    We propose a generalization of the classical Orthogonal Matching Pursuit (OMP) algorithm for finding sparse zeros of Hilbert operators. First, we introduce a new condition, the restricted diagonal deviation property, which allows us to analyze the consistency of the estimated support and vector. Second, when a perturbed version of the operator is used, we show that partial recovery of the support is possible, and remains possible even if some steps of the algorithm are inexact. Finally, we discuss the links with recent work on other versions of OMP. A sketch of the inexact-step setting follows.
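
    The robustness-to-inexact-steps point can be made concrete with a small sketch: an OMP-style loop in which the least-squares update on the current support is solved only approximately, here by a few gradient iterations. The operator is just a matrix, and the inner iteration count and step size are assumptions of the sketch.

    ```python
    import numpy as np

    def inexact_omp(y, A, k, inner_iters=5):
        # OMP-style loop with a deliberately inexact projection step: the
        # coefficients on the current support are refined by a few gradient
        # iterations instead of an exact least-squares solve.
        support, x = [], np.zeros(A.shape[1])
        for _ in range(k):
            r = y - A @ x
            j = int(np.argmax(np.abs(A.T @ r)))    # greedy selection
            if j not in support:
                support.append(j)
            As = A[:, support]
            mu = 1.0 / np.linalg.norm(As, 2) ** 2  # safe gradient step size
            xs = x[support]
            for _ in range(inner_iters):           # inexact projection
                xs = xs + mu * As.T @ (y - As @ xs)
            x = np.zeros(A.shape[1])
            x[support] = xs
        return x, support
    ```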

    Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations

    Applying the theory of compressive sensing in practice always requires taking different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Greedy pursuits with replacement comprise three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery: the least squares solution computed with the locations of the largest entries in magnitude known a priori. The comparison shows that, for certain signals and perturbations, the error bounds of these algorithms differ only in their coefficients from the lower bound of oracle recovery, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions. (27 pages, 4 figures, 5 tables.)
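
    Of the three algorithms, IHT is the simplest to state. Below is a minimal Python sketch of its common textbook form, a gradient step followed by hard thresholding; the step size mu and iteration count are arbitrary choices for the sketch, not values from the paper.

    ```python
    import numpy as np

    def iht(y, A, k, mu=1.0, n_iter=200):
        # Iterative hard thresholding: a gradient step on ||y - Ax||^2
        # followed by projection onto the set of k-sparse vectors, so the
        # support estimate is re-evaluated at every iteration.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x + mu * A.T @ (y - A @ x)      # gradient step
            keep = np.argsort(np.abs(g))[-k:]   # indices of k largest magnitudes
            x = np.zeros_like(g)
            x[keep] = g[keep]                   # hard threshold (support replacement)
        return x
    ```

    Under the perturbed model, where y and A are both contaminated, the same loop runs unchanged on the contaminated inputs, which is exactly the setting the error bounds above cover.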

    Learning Active Basis Models by EM-Type Algorithms

    The EM algorithm is a convenient tool for maximum likelihood model fitting when the data are incomplete or when there are latent variables or hidden states. In this review article we explain that the EM algorithm is a natural computational scheme for learning image templates of object categories when the learning is not fully supervised. We represent an image template by an active basis model, a linear composition of a selected set of localized, elongated, and oriented wavelet elements that are allowed to slightly perturb their locations and orientations to account for deformations of object shapes. The model can easily be learned when the objects in the training images have the same pose and appear at the same location and scale; this is often called supervised learning. When the objects may appear at different unknown locations, orientations, and scales in the training images, we have to incorporate the unknown locations, orientations, and scales as latent variables in the image generation process and learn the template by EM-type algorithms. The E-step imputes the unknown locations, orientations, and scales based on the currently learned template. This step can be considered self-supervision: the current template is used to recognize the objects in the training images. The M-step then relearns the template based on the imputed locations, orientations, and scales, which is essentially the same as supervised learning. The EM learning process thus iterates between recognition and supervised learning. We illustrate this scheme with several experiments. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/09-STS281.
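
    As a rough illustration of the recognition/relearning alternation, the Python sketch below handles only unknown locations (not orientations or scales) and uses a plain image patch with a correlation score in place of the actual active basis template; these simplifications are assumptions of the sketch.

    ```python
    import numpy as np

    def em_template(images, tmpl, n_iter=10):
        # EM-style alternation for learning a template when object
        # locations are unknown.
        # E-step: locate each object with the current template (recognition).
        # M-step: average the aligned patches (supervised relearning).
        h, w = tmpl.shape
        for _ in range(n_iter):
            patches = []
            for img in images:
                # E-step: slide the template and impute the best location.
                best, best_score = (0, 0), -np.inf
                for i in range(img.shape[0] - h + 1):
                    for j in range(img.shape[1] - w + 1):
                        score = np.sum(img[i:i+h, j:j+w] * tmpl)  # correlation
                        if score > best_score:
                            best, best_score = (i, j), score
                i, j = best
                patches.append(img[i:i+h, j:j+w])
            # M-step: relearn the template from the imputed alignments.
            tmpl = np.mean(patches, axis=0)
        return tmpl
    ```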