Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey is focused mainly on the networks of Hopfield,
Willshaw and Potts, that have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. We discuss
the relation of these models to similarity search, the advantages and drawbacks
of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
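As a concrete reference point for the class of models surveyed, here is a minimal sketch of a Hopfield-style autoassociative memory with the standard Hebbian outer-product rule and iterative retrieval from locally available input; the function names and parameters below are ours, not from the survey.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: W = (1/N) * sum_p x_p x_p^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, probe, steps=20):
    """Iterative retrieval: each neuron updates from locally available input."""
    x = probe.copy()
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1          # break ties deterministically
        if np.array_equal(x_new, x):   # fixed point reached
            break
        x = x_new
    return x

# Usage: store two +/-1 patterns of dimension 100, retrieve from a noisy probe.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 100))
W = train_hopfield(patterns)
probe = patterns[0].copy()
probe[:10] *= -1                       # corrupt 10% of the bits
print(np.mean(recall(W, probe) == patterns[0]))  # 1.0 when retrieval succeeds
```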
Associative memory on a small-world neural network
We study a model of associative memory based on a neural network with
small-world structure. The efficacy of the network to retrieve one of the
stored patterns exhibits a phase transition at a finite value of the disorder.
The more ordered networks are unable to recover the patterns, and are always
attracted to mixture states. Besides, for a range of the number of stored
patterns, the efficacy has a maximum at an intermediate value of the disorder.
We also give a statistical characterization of the attractors for all values of
the disorder of the network.
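A quick way to reproduce the qualitative setup, assuming a standard Watts-Strogatz construction (our choice; the paper may parameterize disorder differently): restrict the Hebbian weights to the edges of a small-world graph and sweep the rewiring probability.

```python
import numpy as np
import networkx as nx

def small_world_hopfield(patterns, n, k=8, p=0.1, seed=0):
    """Hebbian weights masked by a Watts-Strogatz small-world graph.

    p is the rewiring probability: p=0 gives the ordered ring lattice,
    p=1 an essentially random graph; disorder increases with p.
    """
    A = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p, seed=seed))
    W = (patterns.T @ patterns) * A / k
    np.fill_diagonal(W, 0.0)
    return W

# Sweep the disorder p and measure the overlap with the first stored pattern;
# per the abstract, very ordered networks should settle into mixture states.
rng = np.random.default_rng(1)
n = 200
patterns = rng.choice([-1, 1], size=(3, n))
for p in (0.0, 0.1, 0.5, 1.0):
    W = small_world_hopfield(patterns, n, p=p)
    x = patterns[0] * rng.choice([1, -1], size=n, p=[0.9, 0.1])  # noisy probe
    for _ in range(30):                      # synchronous retrieval dynamics
        x = np.where(W @ x >= 0, 1, -1)
    print(p, np.mean(x == patterns[0]))      # retrieval overlap vs. disorder
```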
The chronotron: a neuron that learns to fire temporally-precise spike patterns
In many cases, neurons process information carried by the precise timing of spikes. Here we show how neurons can learn to generate specific temporally-precise output spikes in response to input spike patterns, thus processing and memorizing information that is fully temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that is analytically derived and highly efficient, and one that has a high degree of biological plausibility. We show how chronotrons can learn to classify their inputs and we study their memory capacity.
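The abstract does not spell out the learning rules, so the following is only a simplified ReSuMe-flavored stand-in, not the paper's E-learning or I-learning rule: a neuron with exponential postsynaptic potentials whose weights are nudged until it fires at a desired time. All names and constants are ours.

```python
import numpy as np

def psp(t, t_spike, tau=5.0):
    """Exponential postsynaptic kernel; zero before the input spike."""
    dt = t - t_spike
    return np.exp(-dt / tau) * (dt >= 0)

def train_to_fire_at(input_spikes, t_target, epochs=200, lr=0.05,
                     theta=1.0, dt=0.1, T=50.0, seed=0):
    """Adjust weights until the first output spike lands near t_target."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.05, 0.01, size=len(input_spikes))
    times = np.arange(0.0, T, dt)
    spikes = np.asarray(input_spikes)
    for _ in range(epochs):
        V = sum(w[i] * psp(times, s) for i, s in enumerate(spikes))
        fired = times[V >= theta]
        t_out = fired[0] if len(fired) else None
        if t_out is not None and abs(t_out - t_target) < dt:
            break                                  # fired on time
        # Potentiate inputs that precede the target time; depress inputs
        # that precede an erroneous output spike.
        w += lr * psp(t_target, spikes)
        if t_out is not None:
            w -= lr * psp(t_out, spikes)
    return w

# Usage: 10 input spikes at random times, target output spike at t = 30 ms.
w = train_to_fire_at(np.sort(np.random.default_rng(4).uniform(0, 40, 10)), 30.0)
```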
On the Inability of Markov Models to Capture Criticality in Human Mobility
We examine the non-Markovian nature of human mobility by exposing the
inability of Markov models to capture criticality in human mobility. In
particular, the assumed Markovian nature of mobility was used to establish a
theoretical upper bound on the predictability of human mobility (expressed as a
minimum error probability limit), based on temporally correlated entropy. Since
its inception, this bound has been widely used and empirically validated using
Markov chains. We show that recurrent-neural architectures can achieve
significantly higher predictability, surpassing this widely used upper bound.
To explain this anomaly, we shed light on several underlying assumptions in
previous research that have resulted in this bias. By
evaluating the mobility predictability on real-world datasets, we show that
human mobility exhibits scale-invariant long-range correlations, bearing
similarity to a power-law decay. This is in contrast to the initial assumption
that human mobility follows an exponential decay. This assumption of
exponential decay coupled with Lempel-Ziv compression in computing Fano's
inequality has led to an inaccurate estimation of the predictability upper
bound. We show that this approach inflates the entropy, consequently lowering
the upper bound on human mobility predictability. We finally highlight that
this approach tends to overlook long-range correlations in human mobility. This
explains why recurrent-neural architectures that are designed to handle
long-range structural correlations surpass the previously computed upper bound
on mobility predictability.
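For context, here is a compact sketch of the pipeline the abstract critiques: estimate the entropy rate of a location sequence with a Lempel-Ziv estimator, then solve Fano's inequality for the maximum predictability. The exact match convention and the numerical solver are our choices.

```python
import numpy as np
from scipy.optimize import brentq

def lz_entropy(seq):
    """Kontoyiannis-style Lempel-Ziv entropy-rate estimate (bits/symbol):
    S ~ n * log2(n) / sum_i Lambda_i, where Lambda_i is the length of the
    shortest substring starting at i never seen starting before i."""
    n = len(seq)
    lam_sum = 0
    for i in range(n):
        k = 1
        while i + k <= n and any(
                tuple(seq[j:j + k]) == tuple(seq[i:i + k]) for j in range(i)):
            k += 1
        lam_sum += k
    return n * np.log2(n) / lam_sum

def max_predictability(S, N):
    """Solve Fano's inequality S = H(p) + (1 - p) * log2(N - 1) for p."""
    def gap(p):
        h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy
        return h + (1 - p) * np.log2(N - 1) - S
    return brentq(gap, 1e-9, 1 - 1e-9)

# Usage on a toy periodic location sequence over N = 3 distinct places:
seq = [0, 1, 0, 1, 2, 0, 1, 0, 1, 2] * 10
S = lz_entropy(seq)
print(S, max_predictability(S, N=len(set(seq))))
```

The abstract's argument, in these terms, is that the Lempel-Ziv step inflates S on long-range-correlated mobility data, which pushes the resulting bound below what recurrent networks actually achieve.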
Model-free reconstruction of neuronal network connectivity from calcium imaging signals
A systematic assessment of global neural network connectivity through direct
electrophysiological assays has remained technically unfeasible even in
dissociated neuronal cultures. We introduce an improved algorithmic approach
based on Transfer Entropy to reconstruct approximations to network structural
connectivities from network activity monitored through calcium fluorescence
imaging. Based on information theory, our method requires no prior assumptions
on the statistics of neuronal firing and neuronal connections. The performance
of our algorithm is benchmarked on surrogate time-series of calcium
fluorescence generated by the simulated dynamics of a network with known
ground-truth topology. We find that the effective network topology revealed by
Transfer Entropy depends qualitatively on the time-dependent dynamic state of
the network (e.g., bursting or non-bursting). We thus demonstrate how
conditioning with respect to the global mean activity improves the performance
of our method. [...] Compared to other reconstruction strategies such as
cross-correlation or Granger Causality methods, our method based on improved
Transfer Entropy is remarkably more accurate. In particular, it provides a good
reconstruction of the network clustering coefficient, allowing one to
discriminate between weakly and strongly clustered topologies, whereas an
approach based on cross-correlations invariably detects artificially high
levels of clustering. Finally, we demonstrate the applicability of our method
to real recordings of in vitro cortical cultures. We show that these networks
are characterized by an elevated (though not extreme) level of clustering
compared to a random graph, and by markedly non-local connectivity.
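As a bare-bones illustration of the core quantity (not the paper's improved, state-conditioned estimator), here is a histogram estimate of lag-1 transfer entropy between two fluorescence-like time series; the binning and the single lag are our simplifications.

```python
import numpy as np

def transfer_entropy(x, y, n_bins=2):
    """Histogram estimate of TE(X -> Y) at lag 1, in bits:
    TE = sum p(y', y, x) * log2[ p(y' | y, x) / p(y' | y) ]."""
    edges = np.linspace(0, 1, n_bins + 1)[1:-1]
    xb = np.digitize(x, np.quantile(x, edges))
    yb = np.digitize(y, np.quantile(y, edges))
    y_next, y_now, x_now = yb[1:], yb[:-1], xb[:-1]
    te = 0.0
    for a in np.unique(y_next):
        for b in np.unique(y_now):
            for c in np.unique(x_now):
                p_abc = ((y_next == a) & (y_now == b) & (x_now == c)).mean()
                if p_abc == 0.0:
                    continue
                p_bc = ((y_now == b) & (x_now == c)).mean()
                p_ab = ((y_next == a) & (y_now == b)).mean()
                p_b = (y_now == b).mean()
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# Usage: y is driven by x with a one-step delay, so TE(x->y) >> TE(y->x).
rng = np.random.default_rng(2)
x = rng.random(5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.random(5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

The abstract's key refinement is to condition such estimates on the global mean activity, so that network-wide bursting does not dominate the inferred links.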
Supervised Learning in Multilayer Spiking Neural Networks
The current article introduces a supervised learning algorithm for multilayer
spiking neural networks. The algorithm presented here overcomes some
limitations of existing learning algorithms as it can be applied to neurons
firing multiple spikes and it can in principle be applied to any linearisable
neuron model. The algorithm is applied successfully to various benchmarks, such
as the XOR problem and the Iris data set, as well as complex classification
problems. The simulations also show the flexibility of this supervised learning
algorithm, which permits different encodings of the spike timing patterns,
including precise spike-train encoding.
Statistical Physics and Representations in Real and Artificial Neural Networks
This document presents the material of two lectures on statistical physics
and neural representations, delivered by one of us (R.M.) at the Fundamental
Problems in Statistical Physics XIV summer school in July 2017. In a first
part, we consider the neural representations of space (maps) in the
hippocampus. We introduce an extension of the Hopfield model, able to store
multiple spatial maps as continuous, finite-dimensional attractors. The phase
diagram and dynamical properties of the model are analyzed. We then show how
spatial representations can be dynamically decoded using an effective Ising
model capturing the correlation structure in the neural data, and compare
applications to data obtained from hippocampal multi-electrode recordings and
by (sub)sampling our attractor model. In a second part, we focus on the problem
of learning data representations in machine learning, in particular with
artificial neural networks. We start by introducing data representations
through some illustrations. We then analyze two important algorithms, Principal
Component Analysis and Restricted Boltzmann Machines, with tools from
statistical physics
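As a pointer to that second part, here is a minimal Bernoulli-Bernoulli Restricted Boltzmann Machine trained with one step of contrastive divergence (CD-1); the hyperparameters and names are ours, and the lectures' statistical-physics analysis is of course not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """Bernoulli-Bernoulli RBM with CD-1 updates."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def cd1_step(self, v0, lr=0.05):
        # Up: hidden representation of the data.
        ph0 = sigmoid(v0 @ self.W + self.c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Down-up: one Gibbs step yields the model's "fantasy" statistics.
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        ph1 = sigmoid(pv1 @ self.W + self.c)
        # Gradient = data statistics minus model statistics.
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b += lr * (v0 - pv1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

# Usage: fit sparse random binary data (a stand-in for real inputs).
data = (rng.random((64, 20)) < 0.3).astype(float)
rbm = RBM(n_visible=20, n_hidden=8)
for _ in range(100):
    rbm.cd1_step(data)
```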