6,580 research outputs found

    On a theorem by Treves

    According to a theorem of Treves, the conserved functionals of the KdV equation vanish on each formal Laurent series of the form 1/x^2 + u_0 + u_2 x^2 + u_3 x^3 + ... . We propose a new, very simple geometrical proof of this statement.
    Comment: 7 pages

    Production of gamma rays in the crab nebula pulsar

    The gamma-ray flux at energies above 50 MeV from NP0532 is evaluated for a pulsar model in which the optical and X-radiation are produced by the synchrotron effect. Synchrotron radiation and inverse Compton scattering are considered as production mechanisms for the gamma rays. The theoretical estimates are compared with the experimental values.

    Localized activity profiles and storage capacity of rate-based autoassociative networks

    We study analytically the effect of metrically structured connectivity on the behavior of autoassociative networks. We focus on three simple rate-based model neurons: threshold-linear, binary, and smoothly saturating units. For sufficiently short-range connectivity, the threshold-linear network shows localized retrieval states. The saturating and binary models also exhibit spatially modulated retrieval states if the highest activity level that they can achieve is above the maximum activity of the units in the stored patterns. In the zero quenched-noise limit, we derive an analytical formula for the critical value of the connectivity width below which one observes spatially non-uniform retrieval states. Localization reduces storage capacity, but only by a factor of 2 to 3. The approach presented here is generic, in the sense that it makes no specific assumptions about the single-unit input-output function or the exact connectivity structure.
    Comment: 4 pages, 4 figures
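A threshold-linear autoassociative network of the kind studied here can be sketched in a few lines. The sizes, sparseness, and covariance learning rule below are illustrative assumptions, not the paper's exact model; in particular the connectivity is all-to-all, with none of the metric structure the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of units (illustrative)
P = 5     # number of stored patterns
a = 0.2   # sparseness: fraction of active units per pattern

# Sparse binary patterns and a covariance-rule weight matrix.
patterns = (rng.random((P, N)) < a).astype(float)
J = (patterns - a).T @ (patterns - a) / (a * (1 - a) * N)
np.fill_diagonal(J, 0.0)  # no self-coupling

def threshold_linear(h, thr=0.0, gain=1.0):
    """Rate of a threshold-linear unit: gain * max(h - thr, 0)."""
    return gain * np.maximum(h - thr, 0.0)

# Retrieval: cue with a degraded copy of pattern 0 and iterate the dynamics.
v = patterns[0] * (rng.random(N) < 0.8)   # ~20% of active units deleted
for _ in range(15):
    v = threshold_linear(J @ v)

# The overlap with the cued pattern should dominate the others.
overlaps = (patterns - a) @ v / N
print(int(np.argmax(overlaps)))
```

With short-range connectivity in place of the full matrix J, the retrieved state would in addition be spatially localized, which is the regime the abstract describes.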

    How informative are spatial CA3 representations established by the dentate gyrus?

    In the mammalian hippocampus, the dentate gyrus (DG) is characterized by sparse and powerful unidirectional projections to CA3 pyramidal cells, the so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of the information they convey, what CA3 cells already receive from entorhinal cortex layer II cells, which project both to the dentate gyrus and to CA3. Computational models of episodic memory have hypothesized that the function of the mossy fibers is to enforce a new, well separated pattern of activity onto CA3 cells, to represent a new memory, prevailing over the interference produced by the traces of older memories already stored on CA3 recurrent collateral connections. Can this hypothesis also apply to spatial representations, as described by recent neurophysiological recordings in rats? To address this issue quantitatively, we estimate the amount of information DG can impart on a new CA3 pattern of spatial activity, using both mathematical analysis and computer simulations of a simplified model. We confirm that, also in the spatial case, the observed sparse connectivity and level of activity are most appropriate for driving memory storage, rather than for initiating retrieval. Surprisingly, the model also indicates that even when DG codes just for space, much of the information it passes on to CA3 acquires a non-spatial and episodic character, akin to that of a random number generator. It is suggested that further hippocampal processing is required to make full spatial use of DG inputs.
    Comment: 19 pages, 11 figures, 1 table, submitted
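For a rough sense of scale: the information a single imposed CA3 pattern can carry is bounded by the entropy of a sparse binary pattern, N times the binary entropy H(a). The population size and sparseness below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def binary_entropy_bits(a):
    """Entropy, in bits, of a binary unit active with probability a."""
    return -a * np.log2(a) - (1 - a) * np.log2(1 - a)

n_ca3 = 300   # illustrative CA3 population size
a = 0.05      # illustrative sparseness of the imposed CA3 pattern

# Upper bound on the information (bits) one imposed pattern can carry.
bound = n_ca3 * binary_entropy_bits(a)
print(round(bound, 1))  # 85.9 bits with these illustrative numbers
```

The actual information DG imparts is well below this bound, since mossy-fiber connectivity is itself sparse and noisy; that gap is what the paper quantifies.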

    Disappearance of Spurious States in Analog Associative Memories

    We show that symmetric n-mixture states, when they exist, are almost never stable in autoassociative networks with threshold-linear units. Only with a binary coding scheme could we find a limited region of the parameter space in which either 2-mixtures or 3-mixtures are stable attractors of the dynamics.
    Comment: 5 pages, 3 figures, accepted for publication in Phys Rev

    A theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions

    In a recent study, the initial rise of the mutual information between the firing rates of N neurons and a set of p discrete stimuli was evaluated analytically, under the assumption that neurons fire independently of one another to each stimulus and that each conditional distribution of firing rates is Gaussian. Yet real stimuli or behavioural correlates are high-dimensional, with both discrete and continuously varying features. Moreover, the Gaussian approximation allows negative firing rates, which are biologically implausible. Here, we generalize the analysis to the case where the stimulus or behavioural correlate has both a discrete and a continuous dimension. In the case of large noise we evaluate the mutual information, up to the quadratic approximation, as a function of population size. We then consider a more realistic distribution of firing rates, truncated at zero, and prove that the resulting correction, with respect to the Gaussian case, can be expressed simply as a renormalization of the noise parameter. Finally, we demonstrate the effect of averaging the distribution across the discrete dimension, evaluating the mutual information only with respect to the continuously varying correlate.
    Comment: 20 pages, 10 figures
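One concrete reading of "a distribution of firing rates truncated at zero" is a rectified Gaussian: draw from a Gaussian and clip negative values to zero, which puts a point mass at zero rate. This is a minimal sketch under that assumption (the parameters are arbitrary, and the paper's exact truncation scheme may differ).

```python
import numpy as np

rng = np.random.default_rng(0)

def rectified_gaussian_rates(mean, sigma, size):
    """Firing rates: Gaussian draws with negative values clipped to zero.

    Unlike a plain Gaussian, this never produces negative rates, at the
    cost of a point mass exactly at r = 0.
    """
    return np.maximum(rng.normal(mean, sigma, size), 0.0)

rates = rectified_gaussian_rates(mean=2.0, sigma=3.0, size=100_000)
print(rates.min())            # never negative
print((rates == 0.0).mean())  # sizeable probability mass exactly at zero
```

The mass at zero is Phi(-mean/sigma), about 0.25 for these parameters; the paper's result is that, for the mutual information, such a truncation acts simply as a renormalized noise parameter.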

    Attractor neural networks storing multiple space representations: a model for hippocampal place fields

    A recurrent neural network model storing multiple spatial maps, or "charts", is analyzed. A network of this type has been suggested as a model for the origin of place cells in the hippocampus of rodents. The extremely diluted and fully connected limits are studied, and the storage capacity and the information capacity are found. The important parameters determining the performance of the network are the sparsity of the spatial representations and the degree of connectivity, as already found for the storage of individual memory patterns in the general theory of autoassociative networks. These results suggest a quantitative parallel between theories of hippocampal function in different animal species, such as primates (episodic memory) and rodents (memory for space).
    Comment: 19 RevTeX pages, 8 eps figures
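The multiple-chart idea can be illustrated directly: give each cell an independent preferred location in each chart, so that population activity is position-correlated within a chart but decorrelated across charts. The ring geometry, Gaussian tuning, and parameters below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500        # number of place cells
sigma = 0.05   # place-field width on a unit ring (sets the sparsity)

def population_vector(x, centers):
    """Rates of all cells at position x, Gaussian tuning on a ring."""
    d = np.abs(x - centers)
    d = np.minimum(d, 1.0 - d)  # periodic (ring) distance
    return np.exp(-d**2 / (2 * sigma**2))

# Two charts: each cell gets an independent preferred location per chart.
chart1 = rng.random(n)
chart2 = rng.random(n)

# Nearby positions within one chart: strongly correlated activity.
same = np.corrcoef(population_vector(0.30, chart1),
                   population_vector(0.32, chart1))[0, 1]
# The same position read out in two different charts: near-zero correlation.
cross = np.corrcoef(population_vector(0.30, chart1),
                    population_vector(0.30, chart2))[0, 1]
print(round(same, 2), round(cross, 2))
```

It is this across-chart decorrelation that lets one recurrent weight matrix store several maps as quasi-independent continuous attractors, at the cost of the capacity reduction the paper computes.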

    Replica symmetric evaluation of the information transfer in a two-layer network in presence of continuous+discrete stimuli

    In a previous report we evaluated analytically the mutual information between the firing rates of N independent units and a set of multi-dimensional continuous+discrete stimuli, for a finite population size and in the limit of large noise. Here, we extend the analysis to the case of two interconnected populations, where input units activate output ones via Gaussian weights and a threshold-linear transfer function. We evaluate the information carried by a population of M output units, again about continuous+discrete correlates. The mutual information is evaluated by solving saddle-point equations under the assumption of replica symmetry, a method which, by taking into account only the term linear in N of the input information, is equivalent to assuming the noise to be large. Within this limitation, we analyze the dependence of the information on the ratio M/N, on the selectivity of the input units, and on the level of the output noise. We show analytically, and confirm numerically, that in the limit of a linear transfer function and of a small ratio between output and input noise, the output information approaches asymptotically the information carried by the input. Finally, we show that the information loss in output does not depend much on the structure of the stimulus, whether purely continuous, purely discrete or mixed, but only on the position of the threshold nonlinearity and on the ratio between input and output noise.
    Comment: 19 pages, 4 figures

    Representational capacity of a set of independent neurons

    The capacity with which a system of independent neuron-like units represents a given set of stimuli is studied by calculating the mutual information between the stimuli and the neural responses. Both discrete noiseless and continuous noisy neurons are analyzed. In both cases, the information grows monotonically with the number of neurons considered. Under the assumption that neurons are independent, the mutual information rises linearly from zero and approaches its maximum value exponentially. We find the dependence of the initial slope on the number of stimuli and on the sparseness of the representation.
    Comment: 19 pages, 6 figures, Phys. Rev. E, vol. 63, 11910-11924 (2000)
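For the discrete noiseless case, the mutual information has a simple combinatorial form: with deterministic responses, I(S;R) equals the entropy of the partition that the response patterns induce on the stimulus set, so it grows with the number of neurons and saturates at log2(p). A small numerical sketch (the stimulus count and the random binary responses are arbitrary choices, not from the paper):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

p = 16        # number of discrete stimuli, uniform prior
max_n = 12    # largest population size considered
# Each neuron's deterministic binary response to each stimulus.
responses = rng.integers(0, 2, size=(max_n, p))

def mutual_info(n):
    """I(S;R) in bits for the first n noiseless neurons.

    With deterministic responses, I(S;R) is the entropy of the partition
    that the n-neuron response patterns induce on the stimulus set.
    """
    patterns = [tuple(responses[:n, s]) for s in range(p)]
    probs = np.array(list(Counter(patterns).values())) / p
    return float(-(probs * np.log2(probs)).sum())

info = [mutual_info(n) for n in range(1, max_n + 1)]
# info rises with n and is bounded by log2(p) = 4 bits.
print([round(i, 2) for i in info])
```

The initial slope of this curve, and how it depends on p and on the sparseness of the responses, is exactly what the paper characterizes analytically.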