    Stochastic trapping in a solvable model of on-line independent component analysis

    Previous analytical studies of on-line Independent Component Analysis (ICA) learning rules have focussed on asymptotic stability and efficiency. In practice, the transient stages of learning will often be more significant in determining the success of an algorithm. This is demonstrated here with an analysis of a Hebbian ICA algorithm which can find a small number of non-Gaussian components given data composed of a linear mixture of independent source signals. An idealised data model is considered in which the sources comprise a number of non-Gaussian and Gaussian sources, and a solution to the dynamics is obtained in the limit where the number of Gaussian sources is infinite. Previous stability results are confirmed by expanding around optimal fixed points, where a closed-form solution to the learning dynamics is obtained. However, stochastic effects are shown to stabilise otherwise unstable sub-optimal fixed points. Conditions required to destabilise one such fixed point are obtained for the case of a single non-Gaussian component, indicating that the initial learning rate $\eta$ required to successfully escape is very low ($\eta = O(N^{-2})$, where $N$ is the data dimension), resulting in very slow learning that typically requires $O(N^3)$ iterations. Simulations confirm that this picture holds for a finite system.
    Comment: 17 pages, 3 figures. To appear in Neural Computation.
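    The escape condition above suggests a concrete numerical experiment. Below is a minimal sketch assuming a one-unit cubic Hebbian rule with weight renormalisation (a common kurtosis-seeking choice; the paper's exact rule and nonlinearity may differ), with one Laplacian source mixed orthogonally among Gaussian sources, the learning rate set at the stated $O(N^{-2})$ scale, and training run for $O(N^3)$ iterations. All names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the scalings follow the abstract, the values do not.
N = 50                 # data dimension
eta = 1.0 / N**2       # learning rate at the stated O(N^{-2}) escape scale
T = N**3               # O(N^3) iterations of very slow learning

# Orthonormal mixing matrix, so the mixed data stay white.
A, _ = np.linalg.qr(rng.standard_normal((N, N)))

def sample():
    """One data vector: a single non-Gaussian (Laplacian) source
    linearly mixed with N-1 Gaussian sources."""
    s = rng.standard_normal(N)
    s[0] = rng.laplace() / np.sqrt(2.0)   # unit variance, positive excess kurtosis
    return A @ s

w = rng.standard_normal(N)
w /= np.linalg.norm(w)

for t in range(T):
    x = sample()
    y = w @ x
    w += eta * y**3 * x        # Hebbian update with cubic nonlinearity
    w /= np.linalg.norm(w)     # keep the weight vector on the unit sphere

# Overlap with the non-Gaussian source direction (column 0 of A);
# values near 1 indicate escape from the trapped regime.
print("overlap:", abs(w @ A[:, 0]))
```

    Raising eta well above the $N^{-2}$ scale in this sketch is a quick way to observe the trapping effect the abstract describes: the stochastic fluctuations then keep the weight vector pinned near a sub-optimal fixed point instead of letting it escape.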

    Shedding light on social learning

    Culture involves the origination and transmission of ideas, but the conditions in which culture can emerge and evolve are unclear. We constructed and studied a highly simplified neural-network model of these processes. In this model, ideas originate by individual learning from the environment and are transmitted by communication between individuals. Individuals (or "agents") comprise a single neuron which receives structured data from the environment via plastic synaptic connections. The data are generated in the simplest possible way: linear mixing of independently fluctuating sources, and the goal of learning is to unmix the data. To make this problem tractable we assume that at least one of the sources fluctuates in a non-Gaussian manner. Linear mixing creates structure in the data, and agents attempt to learn (from the data and possibly from other individuals) synaptic weights that will unmix, i.e., to "understand" the agent's world. For a variety of reasons even this goal can be difficult for a single agent to achieve; we studied one particular type of difficulty (created by imperfection in synaptic plasticity), though our conclusions should carry over to many other types of difficulty. We previously studied whether a small population of communicating agents, learning from each other, could more easily learn unmixing coefficients than isolated individuals learning only from their environment. We found, unsurprisingly, that if agents learn indiscriminately from any other agent (whether or not that agent has learned a good solution), communication does not enhance understanding. Here we extend the model slightly, by allowing successful learners to be more effective teachers, and find that now a population of agents can learn more effectively than isolated individuals. We suggest that a key factor in the onset of culture might be the development of selective learning.
    Comment: 11 pages, 8 figures.
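    The population mechanism can be caricatured in a few lines: each agent runs an imperfect (noisy) Hebbian rule on the mixed data, estimates its own success by the excess kurtosis of its output (an observable proxy for having found the non-Gaussian direction), and copies toward teachers drawn with probability proportional to that success, so successful learners teach more. Everything below, from the noise level to the kurtosis proxy and the update rules, is an illustrative assumption rather than the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters; the paper's model details may differ.
N, n_agents, T = 50, 10, 100_000
eta_env = 1.0 / N**2    # environmental (Hebbian) learning rate
eta_soc = 0.02          # social learning rate
noise = 1e-3            # imperfect synaptic plasticity

A, _ = np.linalg.qr(rng.standard_normal((N, N)))

def sample():
    s = rng.standard_normal(N)
    s[0] = rng.laplace() / np.sqrt(2.0)   # the one non-Gaussian source
    return A @ s

W = rng.standard_normal((n_agents, N))
W /= np.linalg.norm(W, axis=1, keepdims=True)
kurt = np.zeros(n_agents)   # running estimate of each agent's output excess kurtosis

for t in range(T):
    x = sample()
    y = W @ x
    # Imperfect individual learning: cubic Hebbian rule plus synaptic noise.
    W += eta_env * (y**3)[:, None] * x + noise * rng.standard_normal(W.shape)
    # Observable success proxy: |excess kurtosis| of an agent's output is
    # large only when its weights align with the non-Gaussian direction.
    kurt += 0.01 * ((y**4 - 3.0) - kurt)
    # Selective social learning: teachers are drawn with probability
    # proportional to success, so good learners are more effective teachers.
    p = np.abs(kurt) + 1e-9
    teachers = rng.choice(n_agents, size=n_agents, p=p / p.sum())
    W += eta_soc * (W[teachers] - W)
    W /= np.linalg.norm(W, axis=1, keepdims=True)

print("mean overlap with target:", np.abs(W @ A[:, 0]).mean())
```

    Setting eta_soc to zero recovers the isolated-learner baseline, and replacing the success-weighted teacher choice with a uniform one recovers the indiscriminate-learning case that the abstract reports as ineffective.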