1,057 research outputs found
Stochastic trapping in a solvable model of on-line independent component analysis
Previous analytical studies of on-line Independent Component Analysis (ICA)
learning rules have focussed on asymptotic stability and efficiency. In
practice the transient stages of learning will often be more significant in
determining the success of an algorithm. This is demonstrated here with an
analysis of a Hebbian ICA algorithm which can find a small number of
non-Gaussian components given data composed of a linear mixture of independent
source signals. An idealised data model is considered in which the sources
comprise a number of non-Gaussian and Gaussian sources and a solution to the
dynamics is obtained in the limit where the number of Gaussian sources is
infinite. Previous stability results are confirmed by expanding around optimal
fixed points, where a closed form solution to the learning dynamics is
obtained. However, stochastic effects are shown to stabilise otherwise unstable
sub-optimal fixed points. Conditions required to destabilise one such fixed
point are obtained for the case of a single non-Gaussian component, indicating
that the initial learning rate \eta required to successfully escape is very low
(\eta = O(N^{-2}) where N is the data dimension) resulting in very slow
learning typically requiring O(N^3) iterations. Simulations confirm that this
picture holds for a finite system.
Comment: 17 pages, 3 figures. To appear in Neural Computation
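The abstract does not give the algorithm's exact update rule, but a generic one-unit online Hebbian ICA rule with a cubic nonlinearity, using the \eta = O(N^{-2}) learning-rate scaling mentioned above, might be sketched as follows. The data model, the nonlinearity, and all parameter values here are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                 # data dimension (assumed small for illustration)
eta = 1.0 / N**2       # initial learning rate, O(N^-2) as the abstract indicates
T = 10 * N**2          # number of online iterations (illustrative)

# Synthetic stand-in for the idealised data model: one non-Gaussian
# (Laplacian) source mixed into otherwise Gaussian data.
s = rng.laplace(size=T)                  # non-Gaussian source signal
a = np.zeros(N); a[0] = 1.0              # mixing direction of that source
X = rng.normal(size=(T, N)) + np.outer(s, a)
X /= X.std(axis=0)                       # crude per-dimension whitening

w = rng.normal(size=N)
w /= np.linalg.norm(w)                   # start on the unit sphere

for x in X:
    y = w @ x
    # Hebbian update with a cubic nonlinearity (a common ICA choice);
    # renormalising keeps the weight vector on the unit sphere.
    # This is an assumed, generic rule, not the paper's exact algorithm.
    w += eta * (y**3 * x - 3.0 * y * w)
    w /= np.linalg.norm(w)
```

With the O(N^{-2}) learning rate, escape from the sub-optimal fixed point near random initialisation is slow, which is consistent with the O(N^3) iteration count quoted in the abstract.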
Non-negative mixtures
This is the author's accepted pre-print of the article, first published as: M. D. Plumbley, A. Cichocki and R. Bro. Non-negative mixtures. In P. Comon and C. Jutten (Eds.), Handbook of Blind Source Separation: Independent Component Analysis and Applications. Chapter 13, pp. 515-547. Academic Press, Feb 2010. ISBN 978-0-12-374726-6. DOI: 10.1016/B978-0-12-374726-6.00018-7
Role of homeostasis in learning sparse representations
Neurons in the input layer of primary visual cortex in primates develop
edge-like receptive fields. One approach to understanding the emergence of this
response is to state that neural activity has to efficiently represent sensory
data with respect to the statistics of natural scenes. Furthermore, it is
believed that such an efficient coding is achieved using a competition across
neurons so as to generate a sparse representation, that is, where a relatively
small number of neurons are simultaneously active. Indeed, different models of
sparse coding, coupled with Hebbian learning and homeostasis, have been
proposed that successfully match the observed emergent response. However, the
specific role of homeostasis in learning such sparse representations is still
largely unknown. By quantitatively assessing the efficiency of the neural
representation during learning, we derive a cooperative homeostasis mechanism
that optimally tunes the competition between neurons within the sparse coding
algorithm. We apply this homeostasis while learning small patches taken from
natural images and compare its efficiency with state-of-the-art algorithms.
Results show that while different sparse coding algorithms give similar coding
results, the homeostasis provides an optimal balance for the representation of
natural images within the population of neurons. Competition in sparse coding
is optimized when it is fair. By helping to optimize statistical
competition across neurons, homeostasis is crucial in providing a more
efficient solution to the emergence of independent components.
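The cooperative homeostasis described above can be pictured as a per-neuron gain that equalises how often each neuron wins the sparse-coding competition. The sketch below pairs one greedy matching-pursuit-style selection step with such a gain; the update rule, the multiplicative gain form, and the parameter values are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

n_atoms, dim, n_samples = 16, 64, 500
D = rng.normal(size=(n_atoms, dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm dictionary atoms

gain = np.ones(n_atoms)        # homeostatic gains, initially equal
target = 1.0 / n_atoms         # target selection probability ("fair" competition)
counts = np.zeros(n_atoms)     # how often each neuron has won so far
eta_h = 0.01                   # homeostasis learning rate (assumed value)

for t in range(1, n_samples + 1):
    x = rng.normal(size=dim)   # stand-in for an image patch
    # Greedy selection: the gain rescales each neuron's correlation, so
    # over-active neurons win the competition less often.
    c = gain * (D @ x)
    k = int(np.argmax(np.abs(c)))
    counts[k] += 1
    # Homeostasis: nudge every gain toward equalising selection frequency.
    rate = counts / t
    gain *= np.exp(eta_h * (target - rate))
```

The multiplicative form keeps all gains positive; as a neuron's empirical selection rate rises above the fair-share target, its gain shrinks and other neurons become more competitive.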