Disappearance of Spurious States in Analog Associative Memories
We show that symmetric n-mixture states, when they exist, are almost never
stable in autoassociative networks with threshold-linear units. Only with a
binary coding scheme could we find a limited region of the parameter space in
which either 2-mixtures or 3-mixtures are stable attractors of the dynamics.
Comment: 5 pages, 3 figures, accepted for publication in Phys Rev
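The instability of symmetric mixture states can be probed numerically. The following is a minimal sketch, not the paper's exact model: the network size, number of patterns, Hebbian learning rule, and normalization are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative assumptions throughout): a Hebbian network
# of threshold-linear units, initialized on a symmetric 2-mixture of
# stored analog patterns, iterated to see where the dynamics settles.
rng = np.random.default_rng(0)
N, P = 500, 3                          # units and stored patterns (assumed sizes)
xi = rng.random((P, N))                # analog patterns in [0, 1]
xi_c = xi - xi.mean()                  # centred patterns
J = xi_c.T @ xi_c / N                  # Hebbian couplings
np.fill_diagonal(J, 0.0)               # no self-coupling

def relu(x):                           # threshold-linear transfer function
    return np.maximum(x, 0.0)

v = xi[:2].mean(axis=0)                # symmetric 2-mixture initial state
for _ in range(200):
    v = relu(J @ v)                    # synchronous rate dynamics
    norm = np.linalg.norm(v)
    if norm > 0:
        v /= norm                      # keep the activity bounded

# Overlaps with the stored patterns: a stable symmetric mixture would keep
# the first two overlaps equal; generically the symmetry is broken.
overlaps = xi_c @ v / N
```

Inspecting `overlaps` after the loop shows whether the mixture survived or collapsed toward a single retrieved pattern.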
How informative are spatial CA3 representations established by the dentate gyrus?
In the mammalian hippocampus, the dentate gyrus (DG) is characterized by
sparse and powerful unidirectional projections to CA3 pyramidal cells, the
so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of
the information they convey, what CA3 cells already receive from entorhinal
cortex layer II cells, which project both to the dentate gyrus and to CA3.
Computational models of episodic memory have hypothesized that the function of
the mossy fibers is to enforce a new, well separated pattern of activity onto
CA3 cells, to represent a new memory, prevailing over the interference produced
by the traces of older memories already stored on CA3 recurrent collateral
connections. Can this hypothesis apply also to spatial representations, as
described by recent neurophysiological recordings in rats? To address this
issue quantitatively, we estimate the amount of information DG can impart on a
new CA3 pattern of spatial activity, using both mathematical analysis and
computer simulations of a simplified model. We confirm that, also in the
spatial case, the observed sparse connectivity and level of activity are most
appropriate for driving memory storage, but not for initiating retrieval.
Surprisingly, the model also indicates that even when DG codes just for space,
much of the information it passes on to CA3 acquires a non-spatial and episodic
character, akin to that of a random number generator. It is suggested that
further hippocampal processing is required to make full spatial use of DG
inputs.
Comment: 19 pages, 11 figures, 1 table, submitted
Localized activity profiles and storage capacity of rate-based autoassociative networks
We study analytically the effect of metrically structured connectivity on the
behavior of autoassociative networks. We focus on three simple rate-based model
neurons: threshold-linear, binary or smoothly saturating units. For a
connectivity that is sufficiently short range, the threshold-linear network
shows localized retrieval states. The saturating and binary models also exhibit
spatially modulated retrieval states if the highest activity level that they
can achieve is above the maximum activity of the units in the stored patterns.
In the zero quenched noise limit, we derive an analytical formula for the
critical value of the connectivity width below which one observes spatially
non-uniform retrieval states. Localization reduces storage capacity, but only
by a factor of 2 to 3. The approach we present here is generic, in the sense
that it makes no specific assumptions about the single-unit input-output
function or the exact connectivity structure.
Comment: 4 pages, 4 figures
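A toy numerical version of this setup can be sketched as follows. The ring geometry, gaussian kernel, and every parameter below are assumptions made for illustration, not the analytical model of the paper.

```python
import numpy as np

# Hedged sketch (all parameters are illustrative assumptions): threshold-
# linear units on a ring, with Hebbian couplings modulated by a short-range
# gaussian kernel, cued with a localized bump on one stored pattern.
rng = np.random.default_rng(3)
N, P, a, sigma = 400, 5, 0.2, 20.0     # units, patterns, sparsity, kernel width
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)               # circular distance on the ring
kernel = np.exp(-d**2 / (2 * sigma**2))

xi = (rng.random((P, N)) < a).astype(float)    # sparse binary patterns
J = kernel * ((xi - a).T @ (xi - a)) / N       # distance-modulated Hebbian weights
np.fill_diagonal(J, 0.0)

v = xi[0] * kernel[0]                  # local cue: pattern 0, strongest near site 0
for _ in range(100):
    v = np.maximum(J @ v, 0.0)         # threshold-linear rate dynamics
    s = v.sum()
    if s > 0:
        v *= (a * N) / s               # clamp mean activity to that of a pattern
```

Plotting the final `v` against position shows whether the retrieval activity stays spatially modulated or spreads uniformly over the ring as the kernel width is varied.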
Attractor neural networks storing multiple space representations: a model for hippocampal place fields
A recurrent neural network model storing multiple spatial maps, or
``charts'', is analyzed. A network of this type has been suggested as a model
for the origin of place cells in the hippocampus of rodents. The extremely
diluted and fully connected limits are studied, and the storage capacity and
the information capacity are found. The important parameters determining the
performance of the network are the sparsity of the spatial representations and
the degree of connectivity, as found already for the storage of individual
memory patterns in the general theory of auto-associative networks. Such
results suggest a quantitative parallel between theories of hippocampal
function in different animal species, such as primates (episodic memory) and
rodents (memory for space).
Comment: 19 RevTeX pages, 8 ps figures
Stability of the replica symmetric solution for the information conveyed by a neural network
The information that a pattern of firing in the output layer of a feedforward
network of threshold-linear neurons conveys about the network's inputs is
considered. A replica-symmetric solution is found to be stable for all but
small amounts of noise. The region of instability depends on the contribution
of the threshold and the sparseness: for distributed pattern distributions, the
unstable region extends to higher noise variances than for very sparse
distributions, for which it is almost nonexistent.
Comment: 19 pages, LaTeX, 5 figures. Also available at
http://www.mrc-bbc.ox.ac.uk/~schultz/papers.html . Submitted to Phys. Rev. E.
Minor change.
Representational capacity of a set of independent neurons
The capacity with which a system of independent neuron-like units represents
a given set of stimuli is studied by calculating the mutual information between
the stimuli and the neural responses. Both discrete noiseless and continuous
noisy neurons are analyzed. In both cases, the information grows monotonically
with the number of neurons considered. Under the assumption that neurons are
independent, the mutual information rises linearly from zero, and approaches
exponentially its maximum value. We find the dependence of the initial slope on
the number of stimuli and on the sparseness of the representation.
Comment: 19 pages, 6 figures, Phys. Rev. E, vol. 63, 11910-11924 (2000)
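The qualitative behaviour described above can be illustrated in the simplest, noiseless binary case. The stimulus set size, sparseness, and random response assignment below are illustrative assumptions; with deterministic responses, the mutual information is the entropy of the partition the first n neurons induce on the stimulus set, so it grows monotonically and saturates at log2(S) bits.

```python
import numpy as np

# Hedged sketch: S equiprobable stimuli, each mapped to a fixed random
# binary response of N noiseless neurons (sizes and sparseness assumed).
rng = np.random.default_rng(1)
S, N, sparseness = 16, 20, 0.5
responses = (rng.random((S, N)) < sparseness).astype(int)

def info_bits(n):
    # Entropy of the grouping of stimuli by their first-n responses:
    # for noiseless deterministic neurons this equals I(stimulus; response).
    _, counts = np.unique(responses[:, :n], axis=0, return_counts=True)
    p = counts / S
    return float(-np.sum(p * np.log2(p)))

# Information as a function of population size: rises from zero and
# saturates at log2(S) bits once all stimuli are distinguished.
curve = [info_bits(n) for n in range(1, N + 1)]
```

Varying `sparseness` changes the initial slope of `curve`, in the spirit of the dependence analyzed in the abstract.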
A theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions
In a recent study the initial rise of the mutual information between the
firing rates of N neurons and a set of p discrete stimuli has been analytically
evaluated, under the assumption that neurons fire independently of one another
to each stimulus and that each conditional distribution of firing rates is
gaussian. Yet real stimuli or behavioural correlates are high-dimensional, with
both discrete and continuously varying features. Moreover, the gaussian
approximation implies negative firing rates, which is biologically implausible.
Here, we generalize the analysis to the case where the stimulus or behavioural
correlate has both a discrete and a continuous dimension. In the case of large
noise we evaluate the mutual information up to the quadratic approximation as a
function of population size. Then we consider a more realistic distribution of
firing rates, truncated at zero, and we prove that the resulting correction,
with respect to the gaussian firing rates, can be expressed simply as a
renormalization of the noise parameter. Finally, we demonstrate the effect of
averaging the distribution across the discrete dimension, evaluating the mutual
information only with respect to the continuously varying correlate.
Comment: 20 pages, 10 figures
Estimating probabilities from experimental frequencies
Estimating the probability distribution 'q' that governs the behaviour of a
certain variable by sampling its value a finite number of times typically
involves an error. Successive measurements allow the construction of a
histogram, or frequency count 'f', of each of the possible outcomes. In this
work, the probability that the true distribution is 'q', given that the
frequency count 'f' was sampled, is studied. Such a probability may be written
as a Gibbs distribution. A thermodynamic potential, which allows an easy
evaluation of the mean Kullback-Leibler divergence between the true and
measured distribution, is defined. For a large number of samples, the
expectation value of any function of 'q' is expanded in powers of the inverse
number of samples. As an example, the moments, the entropy and the mutual
information are analyzed.
Comment: 10 pages, 3 figures, to be published in Physical Review
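The leading term of such an expansion can be checked by Monte Carlo. The sketch below uses the maximum-likelihood frequency count rather than the full Gibbs posterior, and the distribution, sample size, and trial count are illustrative assumptions; to first order in 1/n, the mean Kullback-Leibler divergence between frequency count and true distribution is (K - 1)/(2n) nats for K outcomes.

```python
import numpy as np

# Hedged Monte Carlo sketch: mean KL divergence between the measured
# frequency count 'f' and an assumed true distribution 'q'.
rng = np.random.default_rng(2)
q = np.array([0.4, 0.25, 0.15, 0.12, 0.08])   # assumed true distribution
K, n, trials = q.size, 2000, 4000              # outcomes, samples, repetitions

def kl(p, r):
    mask = p > 0                               # 0 * log 0 = 0 convention
    return float(np.sum(p[mask] * np.log(p[mask] / r[mask])))

# Draw frequency counts from q and average the divergence over trials.
divs = [kl(rng.multinomial(n, q) / n, q) for _ in range(trials)]
mean_kl = float(np.mean(divs))
leading = (K - 1) / (2 * n)                    # first term of the 1/n expansion
```

With these sizes, `mean_kl` lands close to `leading`, with the residual coming from higher orders in 1/n and Monte Carlo noise.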
Replica symmetric evaluation of the information transfer in a two-layer network in the presence of continuous+discrete stimuli
In a previous report we have evaluated analytically the mutual information
between the firing rates of N independent units and a set of multi-dimensional
continuous+discrete stimuli, for a finite population size and in the limit of
large noise. Here, we extend the analysis to the case of two interconnected
populations, where input units activate output ones via gaussian weights and a
threshold linear transfer function. We evaluate the information carried by a
population of M output units, again about continuous+discrete correlates. The
mutual information is evaluated solving saddle point equations under the
assumption of replica symmetry, a method which, by taking into account only the
term linear in N of the input information, is equivalent to assuming the noise
to be large. Within this limitation, we analyze the dependence of the
information on the ratio M/N, on the selectivity of the input units and on the
level of the output noise. We show analytically, and confirm numerically, that
in the limit of a linear transfer function and of a small ratio between output
and input noise, the output information approaches asymptotically the
information carried in input. Finally, we show that the information loss in
output does not depend much on the structure of the stimulus, whether purely
continuous, purely discrete or mixed, but only on the position of the threshold
nonlinearity, and on the ratio between input and output noise.
Comment: 19 pages, 4 figures
