    A perturbative approach to non-linearities in the information carried by a two layer neural network

    We evaluate the mutual information between the input and the output of a two-layer network in the case of a noisy and non-linear analogue channel. In the case where the non-linearity is small with respect to the variability in the noise, we derive an exact expression for the contribution to the mutual information given by the non-linear term at first order in perturbation theory. Finally, we show how the calculation can be simplified by means of a diagrammatic expansion. Our results suggest that perturbation theories applied to neural systems might give insight into the contribution of non-linearities to information transmission and, more generally, to neuronal dynamics. (Comment: Accepted as a preprint of ICTP, Trieste.)
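
    The structure of such an expansion can be made concrete with a short sketch. The channel form, the small parameter, and the names used below are illustrative assumptions, not the paper's notation:

```latex
% Hedged sketch of a weakly non-linear noisy channel and the resulting
% first-order expansion of the mutual information. All symbols here
% (\epsilon, g, \eta, I_0, I_1) are illustrative, not the paper's notation.
\[
  y = W x + \epsilon\, g(x) + \eta, \qquad \epsilon \ll \sigma_\eta
  \quad\Longrightarrow\quad
  I(X;Y) = I_0 + \epsilon\, I_1 + O(\epsilon^2),
\]
% where I_0 is the mutual information of the purely linear channel and
% \epsilon I_1 is the first-order contribution of the non-linear term.
```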

    Search for uncharged faster than light particles

    Searching for uncharged particles with spacelike four-momentum traveling faster than light.

    Symmetry considerations and development of pinwheels in visual maps

    Neurons in the visual cortex respond best to rod-like stimuli of a given orientation. While the preferred orientation varies continuously across most of the cortex, there are prominent pinwheel centers around which all orientations are present. Oriented segments abound in natural images and tend to be collinear; neurons are also more likely to be connected if their preferred orientations are aligned with their topographic separation. These are indications of a reduced symmetry requiring joint rotations of both orientation preference and the underlying topography. We verify that this requirement extends to cortical maps of monkey and cat by direct statistical analysis. Furthermore, analytical arguments and numerical studies indicate that pinwheels are generically stable in evolving field models which couple orientation and topography.
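
    Since the abstract turns on pinwheels as singular points of the orientation map, a standard way to make that concrete is to treat the map as a complex field and count phase winding. The sketch below uses a synthetic band-pass random field with illustrative parameters; it is not the authors' model or analysis code:

```python
import numpy as np

# Hedged sketch: locate pinwheel centers in a synthetic orientation-preference
# map. A pinwheel is a point around which the preferred orientation winds by
# +/- pi; equivalently, a zero of the complex field z, where the orientation
# itself would be theta = 0.5 * np.angle(z).

rng = np.random.default_rng(0)

# Synthetic smooth orientation map: band-pass filtered complex noise.
n = 128
noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
k = np.hypot(kx, ky)
band = np.exp(-((k - 0.1) / 0.02) ** 2)         # annulus in Fourier space
z = np.fft.ifft2(np.fft.fft2(noise) * band)     # complex field, zeros = pinwheels

def winding(z, i, j):
    """Winding number of arg(z) around the unit plaquette at (i, j)."""
    corners = [z[i, j], z[i, j + 1], z[i + 1, j + 1], z[i + 1, j]]
    total = 0.0
    for a, b in zip(corners, corners[1:] + corners[:1]):
        total += np.angle(b / a)                # phase step in (-pi, pi]
    return int(round(total / (2 * np.pi)))

pinwheels = [(i, j) for i in range(n - 1) for j in range(n - 1)
             if winding(z, i, j) != 0]
print(f"found {len(pinwheels)} pinwheel candidates")
```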

    Magnification Control in Winner Relaxing Neural Gas

    An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner-relaxing approach for the neural gas network. Originally, winner-relaxing learning is a slight modification of the self-organizing map learning rule that allows the magnification behavior to be adjusted by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is studied numerically. This approach to maps of maximal mutual information is interesting for applications, as the winner-relaxing term only adds computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches. (Comment: 14 pages, 2 figures.)
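
    A compact sketch of what a winner-relaxing step for neural gas might look like follows. The rank-based neural gas update is standard; the specific form of the winner-relaxing term (a weighted sum over the non-winning units with strength mu) is an illustrative reading of the approach, not the paper's exact rule:

```python
import numpy as np

# Hedged sketch of a winner-relaxing neural gas step. The rank-based update
# is the textbook neural gas rule; the winner-relaxing term below is an
# illustrative reading, not a verbatim transcription of the paper.

rng = np.random.default_rng(1)
W = rng.uniform(size=(20, 2))                # codebook: 20 units in 2-D

def ng_step(W, x, eps=0.05, lam=2.0, mu=0.1):
    d = np.linalg.norm(W - x, axis=1)
    ranks = np.argsort(np.argsort(d))        # rank 0 = winner
    h = np.exp(-ranks / lam)                 # neural gas neighborhood
    delta = eps * h[:, None] * (x - W)       # standard rank-based update
    winner = np.argmin(d)
    # Winner-relaxing term: the winner is additionally pulled against the
    # weighted sum of the other units' updates (control parameter mu).
    others = np.ones(len(W), dtype=bool)
    others[winner] = False
    delta[winner] -= mu * eps * (h[others, None] * (x - W[others])).sum(axis=0)
    return W + delta

for _ in range(5000):                        # toy training loop, uniform data
    W = ng_step(W, rng.uniform(size=2))
```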

    Computational Models of Adult Neurogenesis

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here we propose two models in which new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas. (Comment: To appear in Physica A, 7 pages.)
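
    As a loose illustration of "incorporation and removal of neurons as a computational tool", the toy sketch below periodically replaces the least useful hidden units of a random-feature regressor with fresh random ones. It is a generic stand-in under stated assumptions, not either of the paper's two models:

```python
import numpy as np

# Hedged toy illustration of neuron turnover: a random-feature regressor in
# which, every few epochs, the hidden units contributing least to the fit
# are deleted and replaced by fresh random units.

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                            # toy regression target

n_hidden = 30
Win = rng.normal(size=(1, n_hidden))               # fixed random input weights
b = rng.normal(size=n_hidden)

for epoch in range(20):
    H = np.tanh(X @ Win + b)                       # hidden activations
    wout, *_ = np.linalg.lstsq(H, y, rcond=None)   # synaptic learning step
    if epoch % 5 == 4 and epoch < 19:              # neurogenesis step:
        worst = np.argsort(np.abs(wout))[:3]       # least useful units die...
        Win[:, worst] = rng.normal(size=(1, 3))    # ...new ones are born
        b[worst] = rng.normal(size=3)

print("final MSE:", np.mean((np.tanh(X @ Win + b) @ wout - y) ** 2))
```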

    Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner

    The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed in the one-dimensional case, where the magnification law can be obtained analytically. The Winner-Enhancing case makes it possible to achieve a magnification exponent of one and therefore provides optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing algorithm requires minimal extra computation per learning step and is conveniently easy to implement. (Comment: 6 pages, 5 figures. For an extended version refer to cond-mat/0208414 (Neural Computation 17, 996-1009).)
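
    One plausible reading of the family of update rules the abstract describes is sketched below for a one-dimensional map: the standard Kohonen step plus an extra winner term whose sign selects relaxing (lam > 0) or enhancing (lam < 0) behaviour. The exact form and the parameter values are assumptions for illustration, not the paper's equations:

```python
import numpy as np

# Hedged sketch of one learning step for a 1-D winner-relaxing /
# winner-enhancing Kohonen map. The extra winner term (strength lam) is an
# illustrative reading of the rule family, not the paper's exact equation.

rng = np.random.default_rng(3)
n = 50
w = np.sort(rng.uniform(size=n))              # 1-D chain of weights

def som_step(w, x, eps=0.1, sigma=2.0, lam=-0.5):
    s = np.argmin(np.abs(w - x))              # winner index
    r = np.arange(n)
    h = np.exp(-0.5 * ((r - s) / sigma) ** 2) # neighborhood function
    delta = eps * h * (x - w)                 # standard Kohonen update
    # Relaxing/enhancing term: couple the winner to the updates the
    # neighborhood imposes on all other units (lam < 0 enhances the winner).
    delta[s] += -lam * eps * (np.delete(h, s) * (x - np.delete(w, s))).sum()
    return w + delta

for _ in range(10000):                        # toy training loop, uniform data
    w = som_step(w, rng.uniform())
```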

    Asymptotic Level Density of the Elastic Net Self-Organizing Feature Map

    Whereas the Kohonen Self-Organizing Map shows an asymptotic level density following a power law with a magnification exponent of 2/3, an exponent of 1 would be desirable in order to provide optimal mapping in the sense of information theory. In this paper, we study analytically and numerically the magnification behaviour of the Elastic Net algorithm as a model for self-organizing feature maps. In contrast to the Kohonen map, the Elastic Net shows no power law; for one-dimensional maps, however, the density nevertheless follows a universal magnification law, i.e. it depends only on the local stimulus density, is independent of position, and decouples from the stimulus density at other positions. (Comment: 8 pages, 10 figures. Link to publisher under http://link.springer.de/link/service/series/0558/bibs/2415/24150939.ht)
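
    For readers unfamiliar with the algorithm, a minimal one-dimensional sketch of the Durbin-Willshaw elastic net follows: net points are pulled toward stimuli with softmax-like weights while an elastic term keeps neighbouring points together. The parameter names and values (alpha, beta, kappa) are illustrative, not the paper's:

```python
import numpy as np

# Hedged sketch of the Durbin-Willshaw elastic net in one dimension, used
# as a self-organizing feature map: net points y are pulled toward stimuli
# x, and an elastic term keeps neighboring net points close.

rng = np.random.default_rng(4)
x = rng.uniform(size=200)                     # stimuli
y = np.linspace(0, 1, 40)                     # net points (open chain)

def elastic_net_step(y, x, alpha=0.2, beta=2.0, kappa=0.05):
    d2 = (x[:, None] - y[None, :]) ** 2       # (stimulus, net) distances
    w = np.exp(-d2 / (2 * kappa ** 2))
    w /= w.sum(axis=1, keepdims=True)         # each stimulus spreads weight 1
    pull = (w * (x[:, None] - y[None, :])).sum(axis=0) / len(x)
    tension = np.zeros_like(y)                # discrete second derivative
    tension[1:-1] = y[2:] - 2 * y[1:-1] + y[:-2]
    return y + alpha * pull + beta * kappa * tension

for _ in range(500):
    y = elastic_net_step(y, x)
```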

    Investigation of topographical stability of the concave and convex Self-Organizing Map variant

    We investigate, by a systematic numerical study, the parameter dependence of the stability of the Kohonen Self-Organizing Map and of the Zheng and Greenleaf concave and convex learning rule with respect to different input distributions and input and output dimensions.
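
    A hedged sketch of what the concave and convex variant may look like in one dimension is given below, assuming the common power-law reading in which the weight-stimulus difference enters the Kohonen update raised to an exponent xi (xi < 1 concave, xi > 1 convex, xi = 1 the standard map). This form is an assumption for illustration, not the rule as stated by Zheng and Greenleaf:

```python
import numpy as np

# Hedged sketch of concave/convex SOM learning: the usual Kohonen update,
# but with the weight-stimulus difference raised elementwise to a power xi.
# The power-law form is an illustrative reading, not the paper's verbatim rule.

rng = np.random.default_rng(5)
n = 50
w = np.sort(rng.uniform(size=n))              # 1-D chain of weights

def concave_convex_step(w, x, eps=0.1, sigma=2.0, xi=0.5):
    s = np.argmin(np.abs(w - x))              # winner index
    r = np.arange(n)
    h = np.exp(-0.5 * ((r - s) / sigma) ** 2) # neighborhood function
    diff = x - w
    return w + eps * h * np.sign(diff) * np.abs(diff) ** xi

for _ in range(10000):                        # toy training loop, uniform data
    w = concave_convex_step(w, rng.uniform())
```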

    Infotropism as the underlying principle of perceptual organization

    Whether perceptual organization favors the simplest or most likely interpretation of a distal stimulus has long been debated. An unbridgeable gulf has seemed to separate these, the Gestalt and Helmholtzian viewpoints. But in recent decades, the proposal that likelihood and simplicity are two sides of the same coin has been gaining ground, to the extent that their equivalence is now widely assumed. What then arises is a desire to know whether the two principles can be reduced to one. Applying Occam's Razor in this way is particularly desirable given that, as things stand, an account referencing one principle alone cannot be completely satisfactory. The present paper argues that unification of the two principles is possible, and that it can be achieved in terms of an incremental notion of 'information seeking' (infotropism). Perceptual processing that is infotropic can be shown to target both simplicity and likelihood. The ability to see perceptual organization as governed by either objective can then be explained in terms of its being an infotropic process. Infotropism can be identified as the principle which underlies, and thus generalizes, the principles of likelihood and simplicity.