An Online Unsupervised Structural Plasticity Algorithm for Spiking Neural Networks
In this article, we propose a novel Winner-Take-All (WTA) architecture
employing neurons with nonlinear dendrites and an online unsupervised
structural plasticity rule for training it. Further, to aid hardware
implementations, our network employs only binary synapses. The proposed
learning rule is inspired by spike-timing-dependent plasticity (STDP) but differs
for each dendrite based on its activation level. It trains the WTA network
through formation and elimination of connections between inputs and synapses.
To demonstrate the performance of the proposed network and learning rule, we
employ it to solve two-, four-, and six-class classification of random Poisson
spike-time inputs. The results indicate that by proper tuning of the inhibitory
time constant of the WTA, a trade-off between specificity and sensitivity of
the network can be achieved. We use the inhibitory time constant to set the
number of subpatterns per pattern we want to detect. We show that while the
percentage of successful trials is 92%, 88%, and 82% for two-, four-, and
six-class classification when no pattern subdivisions are made, it increases to
100% when each pattern is subdivided into 5 or 10 subpatterns. However, the
former scenario of no pattern subdivision is more jitter-resilient than the
latter ones.
Comment: 11 pages, 10 figures, journal
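The rewiring rule described in this abstract can be illustrated with a short sketch. The Python toy below is not the authors' algorithm: the quadratic dendritic nonlinearity, the "swap the least-driven synapse" heuristic, and all sizes and parameter values are assumptions made for illustration. It only shows the shape of the mechanism: dendrites respond nonlinearly to binary inputs, and learning proceeds by forming and eliminating binary connections on the most activated dendrite.

import numpy as np

rng = np.random.default_rng(0)
N_IN, N_DEND, SYN_PER_DEND = 100, 4, 8

# Binary connectivity: conn[d] lists the input lines wired to dendrite d.
conn = [rng.choice(N_IN, SYN_PER_DEND, replace=False) for _ in range(N_DEND)]

def dendrite_activations(x, conn):
    # Toy nonlinear dendrite: squared sum of its binary inputs (assumption).
    return np.array([float(np.sum(x[c])) ** 2 for c in conn])

def structural_update(x, conn):
    # Pick the dendrite with the highest activation level ...
    d = int(np.argmax(dendrite_activations(x, conn)))
    c = conn[d]
    # ... eliminate its synapse least driven by the current pattern ...
    weakest = c[int(np.argmin(x[c]))]
    # ... and form a connection to an active input not yet used anywhere.
    unused = np.setdiff1d(np.flatnonzero(x > 0), np.concatenate(conn))
    if unused.size > 0:
        c[c == weakest] = rng.choice(unused)

# Usage: present one fixed sparse binary pattern repeatedly.
pattern = (rng.random(N_IN) < 0.2).astype(int)
for _ in range(50):
    structural_update(pattern, conn)
print(dendrite_activations(pattern, conn))  # activations grow as wiring adapts

After repeated presentations, the rewiring concentrates connections on inputs that are active in the pattern, which is the sense in which structural plasticity, rather than analog weight changes, does the learning here.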
Network Plasticity as Bayesian Inference
General results from statistical learning theory suggest that not only brain
computations but also brain plasticity should be understood as probabilistic
inference. A model for this, however, has been missing. We propose that inherently stochastic
features of synaptic plasticity and spine motility enable cortical networks of
neurons to carry out probabilistic inference by sampling from a posterior
distribution of network configurations. This model provides a viable
alternative to existing models that propose convergence of parameters to
maximum likelihood values. It explains how priors on weight distributions and
connection probabilities can be merged optimally with learned experience, how
cortical networks can generalize learned information so well to novel
experiences, and how they can compensate continuously for unforeseen
disturbances of the network. The resulting new theory of network plasticity
explains, from a functional perspective, a number of experimental findings on
stochastic aspects of synaptic plasticity that previously appeared quite
puzzling.
Comment: 33 pages, 5 figures, the supplement is available on the author's web page http://www.igi.tugraz.at/kappe
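The core mechanism proposed here, plasticity as sampling from a posterior over network configurations, can be sketched with a scalar toy model. The snippet below runs Langevin dynamics on a single "synaptic parameter" with a Gaussian prior and Gaussian likelihood; with drift proportional to the gradient of the log posterior and matched noise, the stationary distribution of the parameter is the posterior itself. This is an illustration of the sampling principle under assumed toy distributions, not the paper's spiking-network model, and all names and values here are ours.

import numpy as np

rng = np.random.default_rng(1)

def grad_log_posterior(theta, data, prior_var=1.0):
    # Toy model: prior theta ~ N(0, prior_var), each data point ~ N(theta, 1).
    # grad log p(theta | data) = -theta / prior_var + sum_i (x_i - theta)
    return -theta / prior_var + np.sum(data - theta)

def synaptic_sampling(data, steps=20000, dt=1e-3, b=1.0):
    # Langevin dynamics: d(theta) = b * grad log p*(theta) dt + sqrt(2 b) dW.
    # Its stationary distribution is the posterior p(theta | data).
    theta, trace = 0.0, np.empty(steps)
    for t in range(steps):
        noise = rng.normal(0.0, np.sqrt(2.0 * b * dt))
        theta += b * grad_log_posterior(theta, data) * dt + noise
        trace[t] = theta
    return trace

data = rng.normal(0.8, 1.0, size=10)      # ten noisy observations
samples = synaptic_sampling(data)[5000:]  # drop burn-in
# Analytic posterior for this toy: mean = sum(data)/(n + 1), variance = 1/(n + 1).
print(samples.mean(), samples.var())

Rather than converging to a single maximum-likelihood value, the parameter keeps wandering, and the time it spends in each region matches the posterior probability of that region, which is the paper's alternative to convergence-based models.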
Learning, Inference, and Replay of Hidden State Sequences in Recurrent Spiking Neural Networks
Learning to recognize, predict, and generate spatio-temporal patterns and sequences of spikes is a key feature of nervous systems, and essential for solving basic tasks like localization and navigation. How this can be done by a spiking network, however, remains an open question. Here we present an STDP-based framework, extending a previous model [1], that can simultaneously learn to abstract hidden states from sensory inputs and learn transition probabilities [2] between these states in recurrent connection weights.
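As a caricature of that last point, the sketch below shows how a local, Hebbian-style update can make a recurrent weight matrix encode transition probabilities between hidden states: the observed transition is potentiated and the alternatives depressed, so each row of the weight matrix converges to the corresponding row of the generating Markov chain. It is rate-based and non-spiking, standing in for (not reproducing) the STDP mechanism of the paper; the update rule and all values are assumptions.

import numpy as np

rng = np.random.default_rng(2)
N_STATES = 3

# Recurrent weights, row-normalised so W[i] plays the role of P(next | current=i).
W = np.full((N_STATES, N_STATES), 1.0 / N_STATES)

def hebbian_transition_update(W, prev, curr, eta=0.05):
    # Depress all outgoing weights of the previous state, then potentiate the
    # observed transition; each row stays normalised and tracks a running
    # average of the one-hot transitions it sees.
    W[prev] *= (1.0 - eta)
    W[prev, curr] += eta

# Usage: drive the rule with a ground-truth Markov chain and compare.
P_true = np.array([[0.1, 0.8, 0.1],
                   [0.6, 0.2, 0.2],
                   [0.3, 0.3, 0.4]])
state = 0
for _ in range(20000):
    nxt = rng.choice(N_STATES, p=P_true[state])
    hebbian_transition_update(W, state, nxt)
    state = nxt
print(np.round(W, 2))  # rows approximate the rows of P_true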