Correction: Persistent Activity in Neural Networks with Dynamic Synapses
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One possible mechanism underlying persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
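The depression and facilitation dynamics invoked above are commonly formalized with the Tsodyks-Markram short-term plasticity model. The Python sketch below illustrates that standard model, not code from this paper; the parameter values and the function name stp_response are illustrative assumptions.

import numpy as np

# Minimal sketch of Tsodyks-Markram short-term plasticity: u tracks
# facilitation (decays to baseline U with time constant tau_f), x tracks
# available resources (recovers to 1 with time constant tau_d). Parameter
# values are illustrative, not taken from the paper.
def stp_response(spike_times, U=0.2, tau_f=0.6, tau_d=0.3):
    """Return the effective synaptic efficacy u*x at each presynaptic spike."""
    u, x = U, 1.0
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays to U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1.0 - u)      # spike-triggered facilitation
        efficacies.append(u * x)   # efficacy transmitted by this spike
        x = x * (1.0 - u)          # spike-triggered resource depletion
        last_t = t
    return efficacies

# A regular 20 Hz train: facilitation raises u while depression lowers x,
# so the product u*x can rise and then sag, depending on U, tau_f, tau_d.
print(stp_response(np.arange(0.0, 0.5, 0.05)))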
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a "basin" of attraction comprises all initial states leading to a given attractor upon relaxation, which makes attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
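As a plain-software illustration of the attractor retrieval described above, the sketch below uses a textbook Hopfield network (Hebbian weights, asynchronous updates) to relax a corrupted stimulus into the basin of a stored prototype. It is not a model of the VLSI chip or its plasticity; the network size, pattern count, and noise level are illustrative assumptions.

import numpy as np

# Hopfield-style associative memory: store P random prototypes in Hebbian
# weights, corrupt one of them, and let asynchronous updates relax the
# state back into the attractor.
rng = np.random.default_rng(0)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))   # stored prototypes
W = (patterns.T @ patterns) / N               # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                      # no self-connections

state = patterns[0].copy()
flip = rng.random(N) < 0.2                    # corrupt 20% of the bits
state[flip] *= -1                             # noisy initial state (stimulus)

for _ in range(10 * N):                       # asynchronous relaxation
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / N           # approaches 1 at the attractor
print(f"overlap with the stored pattern: {overlap:.2f}")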
Mammalian Brain As a Network of Networks
Acknowledgements: AZ, SG and AL acknowledge support from the Russian Science Foundation (16-12-00077). The authors thank T. Kuznetsova for Fig. 6.
A three-threshold learning rule approaches the maximal capacity of recurrent neural networks
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model has a poor storage capacity compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
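Read literally, the rule above admits a compact implementation. The sketch below is a hedged reconstruction for a single pattern presentation; the threshold values, learning rate, non-negativity clipping, and the helper name three_threshold_update are illustrative assumptions rather than the paper's actual parameters.

import numpy as np

# One presentation of a binary (0/1) pattern under the three-threshold rule:
# synapses from active inputs are potentiated when the postsynaptic local
# field lies between the intermediate and highest thresholds, depressed when
# it lies between the lowest and intermediate thresholds, and left unchanged
# outside the (theta_low, theta_high) band.
def three_threshold_update(W, pattern, theta_low, theta_mid, theta_high, lr=0.01):
    h = W @ pattern                                 # local fields under the pattern
    in_band = (h > theta_low) & (h < theta_high)    # plasticity window
    sign = np.where(h >= theta_mid, 1.0, -1.0)      # potentiate vs. depress
    active = (pattern > 0).astype(float)            # synapses from active inputs only
    dW = lr * np.outer(in_band * sign, active)
    W_new = np.clip(W + dW, 0.0, None)              # excitatory weights stay non-negative
    np.fill_diagonal(W_new, 0.0)                    # no self-connections
    return W_new

rng = np.random.default_rng(1)
N = 100
W = rng.random((N, N)) * 0.1                        # initial excitatory weights
xi = (rng.random(N) < 0.5).astype(float)            # one binary memory pattern
W = three_threshold_update(W, xi, theta_low=0.5, theta_mid=2.0, theta_high=4.0)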