
Monte Carlo methods for adaptive sparse approximations of time-series

By Thomas Blumensath and Mike E. Davies

Abstract

This paper deals with adaptive sparse approximations of time-series. The work is based on a Bayesian specification of the shift-invariant sparse coding model. To learn approximations for a particular class of signals, two different learning strategies are discussed. The first method uses a gradient optimization technique commonly employed in sparse coding problems.

The other method is novel in this context and is based on a sampling estimate. To approximate the gradient in the first approach, we compare two Monte Carlo estimation techniques: Gibbs sampling and a novel importance sampling method. The second approach is based on a direct sample estimate and uses an extension of the Gibbs sampler used with the first approach. Both approaches allow the specification of different prior distributions, and here we introduce a novel mixture prior based on a modified Rayleigh distribution. Experiments demonstrate that all Gibbs-sampler-based methods show comparable performance.

The importance sampler was found to work nearly as well as the Gibbs sampler on smaller problems in terms of estimating the model parameters; however, it performed substantially worse at estimating the sparse coefficients. For large problems we found that combining a subset-selection heuristic with the Gibbs sampling approaches can outperform previously suggested methods.

In addition, the methods studied here are flexible and allow the incorporation of additional prior knowledge, such as the nonnegativity of the approximation coefficients, which was found to offer additional benefits where applicable.
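As a rough illustration of the generative side of the model summarised above, the sketch below draws shift-invariant coefficients from a spike-plus-Rayleigh mixture prior (each coefficient is zero with high probability, otherwise a nonnegative Rayleigh-distributed amplitude) and synthesizes a signal by convolving each dictionary atom with its sparse coefficient train. All names and parameter values (p_active, sigma_s, the random dictionary D) are illustrative assumptions, a plain Rayleigh stands in for the paper's modified Rayleigh distribution, and the dictionary learning and Gibbs/importance-sampling inference steps discussed in the abstract are omitted; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes and prior parameters (not taken from the paper):
# K atoms of length L, a coefficient train of length N per atom,
# activation probability p_active, Rayleigh scale sigma_s, noise level sigma_noise.
K, L, N = 3, 16, 256
p_active, sigma_s, sigma_noise = 0.02, 1.0, 0.05

# Dictionary of K short, unit-norm atoms (random here; the paper learns these).
D = rng.standard_normal((K, L))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def sample_coefficients(K, N, p_active, sigma_s):
    """Draw coefficients from a spike-plus-Rayleigh mixture prior.

    Each coefficient is zero with probability 1 - p_active; otherwise its
    amplitude is Rayleigh(sigma_s), which is nonnegative by construction.
    """
    active = rng.random((K, N)) < p_active
    amplitudes = rng.rayleigh(scale=sigma_s, size=(K, N))
    return np.where(active, amplitudes, 0.0)

def synthesize(D, S, sigma_noise):
    """Generate a signal as the sum of atoms convolved with their sparse
    coefficient trains, plus Gaussian noise (the shift-invariant model)."""
    y = np.zeros(S.shape[1] + D.shape[1] - 1)
    for k in range(D.shape[0]):
        y += np.convolve(S[k], D[k])
    return y + sigma_noise * rng.standard_normal(y.size)

S = sample_coefficients(K, N, p_active, sigma_s)
y = synthesize(D, S, sigma_noise)
print(f"{int((S > 0).sum())} active coefficients out of {S.size}")
```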

Year: 2007
OAI identifier: oai:eprints.soton.ac.uk:142527
Provided by: e-Prints Soton

