Network Plasticity as Bayesian Inference
General results from statistical learning theory suggest that not only brain
computations but also brain plasticity should be understood as probabilistic inference.
But a model for that has been missing. We propose that inherently stochastic
features of synaptic plasticity and spine motility enable cortical networks of
neurons to carry out probabilistic inference by sampling from a posterior
distribution of network configurations. This model provides a viable
alternative to existing models that propose convergence of parameters to
maximum likelihood values. It explains how priors on weight distributions and
connection probabilities can be merged optimally with learned experience, how
cortical networks can generalize learned information so well to novel
experiences, and how they can compensate continuously for unforeseen
disturbances of the network. The resulting new theory of network plasticity
explains from a functional perspective a number of experimental data on
stochastic aspects of synaptic plasticity that previously appeared to be quite
puzzling.
Comment: 33 pages, 5 figures; the supplement is available on the author's web page http://www.igi.tugraz.at/kappe
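The sampling view proposed in this abstract can be illustrated with a toy sketch. This is an assumption-laden stand-in, not the authors' network model: it uses Langevin dynamics on a single scalar "synaptic weight" with a hypothetical Gaussian prior and Gaussian likelihood, and shows that the stochastic dynamics samples from the posterior distribution rather than converging to a maximum-likelihood point estimate.

```python
import random
import math

random.seed(0)

# Hypothetical setup (not from the paper): prior w ~ N(0, 1),
# observations y_i ~ N(w, sigma2).
data = [1.2, 0.8, 1.0, 1.1, 0.9]
sigma2 = 0.25

def grad_log_posterior(w):
    grad_prior = -w                                 # d/dw log N(w; 0, 1)
    grad_lik = sum((y - w) / sigma2 for y in data)  # d/dw log likelihood
    return grad_prior + grad_lik

# Langevin dynamics: drift up the log-posterior plus injected noise.
# The stationary distribution of this stochastic process is the posterior.
eta = 1e-3
w = 0.0
samples = []
for step in range(50_000):
    w += eta * grad_log_posterior(w) + math.sqrt(2 * eta) * random.gauss(0.0, 1.0)
    if step > 10_000:            # discard burn-in
        samples.append(w)

# Closed-form Gaussian posterior for comparison:
# precision = 1 (prior) + n / sigma2 (likelihood)
prec = 1.0 + len(data) / sigma2
mu_post = (sum(data) / sigma2) / prec   # exact posterior mean ≈ 0.952
mean_est = sum(samples) / len(samples)
print(round(mu_post, 3), round(mean_est, 3))
```

Because the noise term never vanishes, the weight keeps fluctuating around the posterior mean with the posterior's variance, which is the qualitative behaviour the abstract attributes to stochastic synaptic plasticity.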
A Heterosynaptic Learning Rule for Neural Networks
In this article we introduce a novel stochastic Hebb-like learning rule for
neural networks that is neurobiologically motivated. This learning rule
combines features of unsupervised (Hebbian) and supervised (reinforcement)
learning and is stochastic with respect to the selection of the time points
at which a synapse is modified. Moreover, the learning rule not only affects
the synapse between the pre- and postsynaptic neuron, which is called
homosynaptic plasticity, but also affects more remote synapses of the pre-
and postsynaptic neurons. This more complex form of synaptic plasticity has
recently come under investigation in neurobiology and is called heterosynaptic
plasticity. We demonstrate that this learning rule is useful for training neural
networks to learn parity functions, including the exclusive-or (XOR) mapping,
in a multilayer feed-forward network. We find that our stochastic learning
rule works well even in the presence of noise. Importantly, the mean learning
time increases only polynomially with the number of patterns to be learned,
indicating efficient learning.
Comment: 19 pages
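The paper's heterosynaptic rule itself is not reproduced here. As a hedged stand-in with the same general flavour (a stochastic choice of which synapse is modified and when, with a reward-like acceptance criterion), the sketch below trains a 2-2-1 feed-forward network on XOR by stochastic weight perturbation: a randomly selected synapse is perturbed at a random time point, and the change is retained only if the squared error over the four XOR patterns decreases. All architecture and parameter choices are illustrative assumptions.

```python
import random
import math

random.seed(1)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2-2-1 network: two tanh hidden units, one sigmoid output;
    # w holds all 9 parameters (6 hidden weights/biases, 3 output).
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    o = w[6] * h0 + w[7] * h1 + w[8]
    return 1.0 / (1.0 + math.exp(-o))

def error(w):
    return sum((forward(w, x) - t) ** 2 for x, t in XOR)

best_err = float("inf")
for restart in range(5):                  # restarts guard against local minima
    w = [random.uniform(-1.0, 1.0) for _ in range(9)]
    e = error(w)
    for _ in range(8_000):
        i = random.randrange(9)           # stochastic synapse selection
        old = w[i]
        w[i] += random.gauss(0.0, 0.3)    # stochastic modification
        e_new = error(w)
        if e_new < e:                     # keep only "rewarded" changes
            e = e_new
        else:
            w[i] = old                    # otherwise revert
    best_err = min(best_err, e)

print(round(best_err, 3))                 # near 0 when XOR has been learned
```

Unlike backpropagation, this scheme needs only a global scalar reward signal, which is the reinforcement-learning ingredient the abstract combines with Hebbian-style local updates.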
Six networks on a universal neuromorphic computing substrate
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks that cover a broad spectrum of both structure and functionality.
Death and rebirth of neural activity in sparse inhibitory networks
In this paper, we clarify the mechanisms underlying a general phenomenon
present in pulse-coupled heterogeneous inhibitory networks: inhibition can
induce not only suppression of the neural activity, as expected, but it can
also promote neural reactivation. In particular, for globally coupled systems,
the number of firing neurons decreases monotonically as the strength of
inhibition increases (neurons' death). However, the random pruning of the connections
is able to reverse the action of inhibition, i.e. in a sparse network a
sufficiently strong synaptic strength can surprisingly promote, rather than
depress, the activity of the neurons (neurons' rebirth). Thus the number of
firing neurons exhibits a minimum at some intermediate synaptic strength. We
show that this minimum signals a transition from a regime dominated by the
neurons with higher firing activity to a phase where all neurons are
effectively sub-threshold and their irregular firing is driven by current
fluctuations. We explain the origin of the transition by deriving an analytic
mean-field formulation of the problem that yields the fraction of active
neurons as well as the first two moments of their firing statistics. The
introduction of a synaptic time scale does not modify the main aspects of the
reported phenomenon. However, for sufficiently slow synapses the transition
becomes dramatic: the system passes from a perfectly regular evolution to an
irregular bursting dynamics. In this latter regime the model provides
predictions consistent with experimental findings for a specific class of
neurons, namely the medium spiny neurons in the striatum.
Comment: 19 pages, 10 figures, submitted to NJ
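The "neurons' death" branch described above can be illustrated with a schematic simulation. This is an assumption-laden sketch, not the paper's exact model or parameters: leaky integrate-and-fire neurons with heterogeneous excitabilities above threshold receive all-to-all delta-pulse inhibition of strength g, and stronger inhibition silences more of the low-excitability neurons. The rebirth effect additionally requires sparse random connectivity and is not reproduced here.

```python
import random

def active_neurons(g, n=100, steps=10_000, dt=0.01, seed=2):
    """Count neurons that still fire after the transient, for coupling g."""
    rng = random.Random(seed)
    a = [1.0 + rng.random() for _ in range(n)]    # excitabilities in (1, 2)
    v = [rng.random() for _ in range(n)]          # membrane potentials
    fired = [False] * n
    for step in range(steps):
        spikes = [i for i in range(n) if v[i] >= 1.0]
        for i in spikes:
            v[i] = 0.0                            # reset after spiking
            if step > steps // 2:                 # count after the transient
                fired[i] = True
        inh = g * len(spikes) / n                 # global inhibitory kick
        for i in range(n):
            v[i] += dt * (a[i] - v[i]) - inh      # LIF drift: dv/dt = a_i - v
    return sum(fired)

weak, strong = active_neurons(0.5), active_neurons(5.0)
print(weak, strong)   # stronger inhibition leaves fewer active neurons
```

The effective drive of each neuron is roughly a_i minus g times the population rate, so raising g pushes progressively more neurons below threshold, which is the monotonic "death" branch that sparse connectivity then reverses in the paper.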