Learning sparse, overcomplete representations of time-varying natural images

By Bruno A. Olshausen

Abstract

I show how to adapt an overcomplete dictionary of space-time functions so as to represent time-varying natural images with maximum sparsity. The basis functions are considered as part of a probabilistic model of image sequences, with a sparse prior imposed over the coefficients. Learning is accomplished by maximizing the log-likelihood of the model, using natural movies as training data. The basis functions that emerge are space-time inseparable functions that resemble the motion-selective receptive fields of simple cells in mammalian visual cortex. When the coefficients are computed via matching pursuit in space and time, one obtains a punctate, spike-like representation of continuous, time-varying images. It is suggested that such a coding scheme may be at work in the visual cortex.
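
For readers who want a concrete picture of the procedure summarized above, here is a minimal Python sketch of sparse coding with greedy matching pursuit on flattened space-time patches. The dimensions (D, K), the number of atoms per patch, the learning rate, and the random stand-in patches are illustrative assumptions rather than values or code from the paper, and the update shown is a generic approximate maximum-likelihood dictionary step, not the paper's exact convolutional space-time learning rule.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions (assumptions): each space-time patch is flattened to a
    # vector of length D, and the dictionary holds K > D atoms (overcomplete).
    D, K = 64, 128
    dictionary = rng.standard_normal((K, D))
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)  # unit-norm atoms

    def matching_pursuit(patch, dictionary, n_atoms=8):
        """Greedy matching pursuit: repeatedly pick the atom most correlated
        with the residual and subtract its projection, giving a sparse,
        spike-like coefficient vector."""
        residual = patch.copy()
        coeffs = np.zeros(dictionary.shape[0])
        for _ in range(n_atoms):
            corr = dictionary @ residual           # inner products with all atoms
            k = int(np.argmax(np.abs(corr)))       # best-matching atom
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[k]    # remove its contribution
        return coeffs, residual

    def learning_step(patch, dictionary, lr=0.01):
        """One approximate maximum-likelihood update: infer sparse coefficients,
        then move the active atoms along the reconstruction residual and
        renormalize."""
        coeffs, residual = matching_pursuit(patch, dictionary)
        dictionary = dictionary + lr * np.outer(coeffs, residual)
        dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
        return dictionary

    # Example: adapt the dictionary to random "patches" standing in for
    # whitened space-time blocks drawn from natural movies.
    for _ in range(1000):
        patch = rng.standard_normal(D)
        dictionary = learning_step(patch, dictionary)

In this sketch the sparse prior enters only implicitly, through the hard limit on the number of atoms selected per patch; the paper's formulation instead places an explicit sparse prior over the coefficients of a probabilistic generative model.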

Year: 2003
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.4448
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://redwood.berkeley.edu/br...

