
Normalised iterative hard thresholding: guaranteed stability and performance

By Thomas Blumensath and Mike E. Davies

Abstract

Sparse signal models are used in many signal processing applications. Estimating the sparsest coefficient vector in these models is a combinatorial problem, so efficient, often suboptimal, strategies have to be used. Fortunately, under certain conditions on the model, several algorithms can be shown to efficiently calculate near-optimal solutions. In this paper, we study one of these methods, the so-called Iterative Hard Thresholding algorithm. While this method has strong theoretical performance guarantees whenever certain conditions hold, empirical studies show that its performance degrades significantly when those conditions fail, and in this regime the algorithm also often fails to converge. As we are interested in applying the method to real-world problems, where it is generally not known whether the theoretical conditions are satisfied, we suggest a simple modification that guarantees convergence of the method even in this regime. With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance. Moreover, the modified algorithm retains theoretical performance guarantees similar to those of the original algorithm.
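
The abstract does not spell out the modification itself, so the following is only a minimal NumPy sketch of iterative hard thresholding with an adaptively normalised step size, for intuition. The function name, variable names (Phi, K), and the step-size rule mu = ||g_S||^2 / ||Phi g_S||^2 restricted to the current support S are illustrative assumptions; the paper's algorithm additionally safeguards the step size when the support changes between iterations, which this sketch omits.

    import numpy as np

    def normalised_iht(y, Phi, K, n_iters=100):
        """Sketch: iterative hard thresholding with a normalised (adaptive) step size.

        y   : observation vector, shape (m,)
        Phi : measurement matrix, shape (m, n)
        K   : target sparsity level
        """
        m, n = Phi.shape
        x = np.zeros(n)
        for _ in range(n_iters):
            g = Phi.T @ (y - Phi @ x)  # gradient of 0.5 * ||y - Phi x||^2
            # Support used to normalise the step: the current support, or the
            # K largest gradient entries while the estimate is still all-zero.
            support = np.flatnonzero(x) if np.any(x) else np.argsort(np.abs(g))[-K:]
            g_s = np.zeros(n)
            g_s[support] = g[support]
            denom = np.linalg.norm(Phi @ g_s) ** 2
            mu = (np.linalg.norm(g_s) ** 2 / denom) if denom > 0 else 1.0
            x_new = x + mu * g  # gradient step with the normalised step size
            # Hard threshold: keep only the K largest-magnitude coefficients.
            idx = np.argsort(np.abs(x_new))[-K:]
            x = np.zeros(n)
            x[idx] = x_new[idx]
        return x

Intuitively, choosing mu from the data at each iteration removes the dependence on a fixed step size (and hence on the scaling of Phi) that can cause the unmodified algorithm to diverge when the theoretical conditions fail.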

Year: 2010
OAI identifier: oai:eprints.soton.ac.uk:142499
Provided by: e-Prints Soton


