Network Plasticity as Bayesian Inference
General results from statistical learning theory suggest that not only brain
computations, but also brain plasticity, should be understood as probabilistic
inference. However, a concrete model for this has been missing. We propose that inherently stochastic
features of synaptic plasticity and spine motility enable cortical networks of
neurons to carry out probabilistic inference by sampling from a posterior
distribution of network configurations. This model provides a viable
alternative to existing models that propose convergence of parameters to
maximum likelihood values. It explains how priors on weight distributions and
connection probabilities can be merged optimally with learned experience, how
cortical networks can generalize learned information so well to novel
experiences, and how they can compensate continuously for unforeseen
disturbances of the network. The resulting new theory of network plasticity
explains from a functional perspective a number of experimental data on
stochastic aspects of synaptic plasticity that previously appeared to be quite
puzzling.
Comment: 33 pages, 5 figures; the supplement is available on the author's web page http://www.igi.tugraz.at/kappe
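As a rough illustration of the central idea, that stochastic plasticity can be read as sampling from a posterior over network configurations rather than converging to maximum-likelihood parameters, the following sketch runs Langevin-style weight updates whose drift follows the gradient of a log-prior plus a log-likelihood and whose noise keeps the weights fluctuating around the posterior. The Gaussian prior, the logistic likelihood, and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_prior(w, sigma_prior=1.0):
    """Gradient of an assumed Gaussian log-prior over synaptic weights."""
    return -w / sigma_prior**2

def grad_log_likelihood(w, x, y):
    """Gradient of a logistic log-likelihood for one input pattern,
    standing in for an activity-dependent plasticity signal."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return x * (y - p)

def synaptic_sampling_step(w, x, y, eta=1e-2, temperature=1.0):
    """One Langevin step: drift toward high posterior density plus noise.
    With this noise scale the weights keep fluctuating around the posterior
    p(w | data) instead of converging to a maximum-likelihood point."""
    drift = grad_log_prior(w) + grad_log_likelihood(w, x, y)
    noise = rng.normal(0.0, np.sqrt(2.0 * eta * temperature), size=w.shape)
    return w + eta * drift + noise

# Toy usage: present random patterns with an arbitrary teacher signal and
# watch the weights wander through the posterior rather than settling.
w = rng.normal(size=5)
for _ in range(1000):
    x = rng.normal(size=5)
    y = float(x[0] + 0.5 * x[1] > 0)
    w = synaptic_sampling_step(w, x, y)
print("sampled weights:", np.round(w, 2))
```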
A dynamic system approach to spiking second order memristor networks
Second-order memristors are two-terminal devices whose conductance depends on two kinds of variables, namely the geometric parameters and the internal temperature. They have been shown to mimic some specific features of neural synapses, notably spike-timing-dependent plasticity (STDP), and are consequently good candidates for neuromorphic computing. In particular, memristor crossbar structures appear to be suitable for implementing locally competitive algorithms and for tackling classification problems by exploiting temporal learning techniques. On the other hand, neuromorphic studies and experiments have revealed the existence of different kinds of plasticity and have shown the effect of calcium concentration on synaptic changes. Computational studies have investigated the behavior of spiking networks in the context of supervised, unsupervised, and reinforcement learning. In this paper, we first derive a simplified, almost analytical, model of a second-order memristor involving only two variables, the mem-conductance and the temperature, directly attributable to the synaptic efficacy and the calcium concentration, respectively. We then study in detail the response of a single memristive synapse to the most relevant plasticity models, including cycles of spike pairs, triplets, and quadruplets at different frequencies. Finally, we accurately characterize memristor spiking networks as discrete nonlinear dynamical systems, with mem-conductances as state variables and pre- and postsynaptic spikes as inputs and outputs, respectively. The results show that the model developed in this manuscript can explain and accurately reproduce a significant portion of observed synaptic behaviors, including those not captured by classical spike-pair-based STDP models. Furthermore, under such an approach, the global dynamic behavior of memristor networks and the related learning mechanisms can be analyzed in depth by employing advanced nonlinear dynamics techniques.
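The simplified second-order model described above couples a mem-conductance to a temperature-like internal variable. The sketch below implements a toy version of that structure, assuming a calcium-style update in which pre- and postsynaptic spikes transiently raise the temperature variable and threshold crossings potentiate or depress the conductance; all thresholds, time constants, and amplitudes are invented for illustration and are not taken from the paper.

```python
import numpy as np

def simulate_memristor_synapse(pre_spikes, post_spikes, dt=1e-4, t_end=1.0,
                               tau_T=0.02, c_pre=0.6, c_post=1.2,
                               theta_p=1.3, theta_d=1.0,
                               gamma_p=5.0, gamma_d=3.0, tau_G=1.0):
    """Toy two-variable synapse: G is the mem-conductance (synaptic efficacy),
    T a temperature-like variable driven by pre/post spikes (calcium analogue).
    Potentiation/depression occur when T crosses the thresholds theta_p/theta_d.
    All parameter names and values are illustrative assumptions."""
    pre = {int(round(t / dt)) for t in pre_spikes}
    post = {int(round(t / dt)) for t in post_spikes}
    G, T = 0.5, 0.0
    for k in range(int(t_end / dt)):
        T += -T / tau_T * dt          # temperature relaxes toward baseline
        if k in pre:
            T += c_pre                # presynaptic spike heats the device
        if k in post:
            T += c_post               # postsynaptic spike heats it more strongly
        if T > theta_p:
            G += gamma_p * (1.0 - G) * dt / tau_G   # high T: potentiation
        elif T > theta_d:
            G -= gamma_d * G * dt / tau_G           # intermediate T: depression
    return G

# Pre-before-post pairs with a 10 ms lag, repeated at 5 Hz.
pre = [0.05 + i * 0.2 for i in range(4)]
post = [t + 0.01 for t in pre]
print("final conductance:", round(simulate_memristor_synapse(pre, post), 3))
```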
Learning to Discriminate Through Long-Term Changes of Dynamical Synaptic Transmission
Short-term synaptic plasticity is modulated by long-term synaptic
changes. There is, however, no general agreement on the computational
role of this interaction. Here, we derive a learning rule for the release
probability and the maximal synaptic conductance in a circuit model
with combined recurrent and feedforward connections that allows learning
to discriminate among natural inputs. Short-term synaptic plasticity
thereby provides a nonlinear expansion of the input space of a linear
classifier, whereas the random recurrent network serves to decorrelate
the expanded input space. Computer simulations reveal that the twofold
increase in the number of input dimensions through short-term synaptic
plasticity improves the performance of a standard perceptron by up to 100%.
The distributions of release probabilities and maximal synaptic conductances
at the capacity limit strongly depend on the balance between excitation
and inhibition. The model also suggests a new computational
interpretation of spikes evoked by stimuli outside the classical receptive
field. These neuronal activities may reflect decorrelation of the expanded stimulus space by intracortical synaptic connections.
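To make the "nonlinear expansion of the input space" concrete, here is a minimal sketch that doubles the input dimension by appending a short-term-plasticity-filtered copy of each input rate (using steady-state Tsodyks-Markram-style expressions as a stand-in nonlinearity) and trains a classic perceptron on the expanded vectors. The paper's learning rule for release probability and maximal conductance is not reproduced here; the parameters and the random-label task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def stp_drive(rates, U=0.2, tau_rec=0.5, tau_fac=0.5):
    """Steady-state Tsodyks-Markram-style response to a constant presynaptic
    rate: release probability u and available resources x depend nonlinearly
    on the rate, so u * x * rate is a nonlinear feature of the input."""
    u = U * (1.0 + tau_fac * rates) / (1.0 + U * tau_fac * rates)
    x = 1.0 / (1.0 + u * tau_rec * rates)
    return u * x * rates

def expand(rates):
    """Twofold expansion of the input space: raw rates plus STP-filtered drive."""
    return np.concatenate([rates, stp_drive(rates)], axis=-1)

def train_perceptron(X, y, epochs=200):
    """Classic perceptron training; returns the final training error rate."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:
                w += yi * xi
                b += yi
    return float(np.mean(np.sign(X @ w + b) != y))

# Random-label capacity test on toy presynaptic rates (Hz).
n, d = 60, 20
X = rng.uniform(1.0, 50.0, size=(n, d))
y = np.sign(rng.normal(size=n))
print("training error, raw inputs:     ", train_perceptron(X, y))
print("training error, expanded inputs:", train_perceptron(expand(X), y))
```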
Homeostatic plasticity and external input shape neural network dynamics
In vitro and in vivo spiking activity clearly differ. Whereas networks in
vitro develop strong bursts separated by periods of very little spiking
activity, in vivo cortical networks show continuous activity. This is puzzling
considering that both networks presumably share similar single-neuron dynamics
and plasticity rules. We propose that the defining difference between in vitro
and in vivo dynamics is the strength of external input. In vitro, networks are
virtually isolated, whereas in vivo every brain area receives continuous input.
We analyze a model of spiking neurons in which the input strength, mediated by
spike rate homeostasis, determines the characteristics of the dynamical state.
In more detail, our analytical and numerical results on various network
topologies show consistently that under increasing input, homeostatic
plasticity generates distinct dynamic states, from bursting to close-to-critical, reverberating, and irregular states. This implies that the
dynamic state of a neural network is not fixed but can readily adapt to the
input strengths. Indeed, our results match experimental spike recordings in
vitro and in vivo: the in vitro bursting behavior is consistent with a state
generated by very low network input (< 0.1%), whereas in vivo activity suggests
that on the order of 1% of recorded spikes are input-driven, resulting in
reverberating dynamics. Importantly, this predicts that one can abolish the
ubiquitous bursts of in vitro preparations, and instead impose dynamics
comparable to in vivo activity by exposing the system to weak long-term
stimulation, thereby opening new paths to establish an in vivo-like assay in
vitro for basic as well as neurological studies.
Comment: 14 pages, 8 figures, accepted at Phys. Rev.
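A minimal numerical caricature of the mechanism: binary neurons receive external drive with probability h_ext per step, and a homeostatic rule slowly adjusts the recurrent coupling toward a target firing rate, so that weak input pushes the effective branching parameter close to 1 (bursty, near-critical dynamics) while stronger input lowers it. Network size, rates, and the homeostatic gain are illustrative assumptions, not the parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(h_ext, n=100, steps=20000, target_rate=0.01, eta=1e-4):
    """Toy binary-neuron network with homeostatic regulation of the recurrent
    coupling w. h_ext is the per-step probability of external drive; the
    homeostatic rule nudges w so the population rate approaches target_rate.
    Returns the late-time mean rate and the effective branching parameter w*n.
    All parameters are illustrative assumptions, not those of the study."""
    w = 0.5 / n                                   # start well below criticality
    state = rng.random(n) < target_rate
    rates = []
    for _ in range(steps):
        p_fire = h_ext + w * state.sum()          # external plus recurrent drive
        state = rng.random(n) < min(p_fire, 1.0)
        w += eta * (target_rate - state.mean())   # homeostatic adjustment
        w = max(w, 0.0)
        rates.append(state.mean())
    return float(np.mean(rates[steps // 2:])), w * n

# Weak external input forces the recurrent coupling close to the critical
# point (bursty dynamics); stronger input lowers it (reverberating/irregular).
for h in (1e-5, 1e-4, 1e-3):
    rate, m = simulate(h)
    print(f"h_ext={h:.0e}  mean rate={rate:.4f}  effective branching={m:.3f}")
```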
Astrocytes: Orchestrating synaptic plasticity?
Synaptic plasticity is the capacity of a preexisting connection between two neurons to change in strength as a function of neural activity. Because synaptic plasticity is the major candidate mechanism for learning and memory, the elucidation of its constituting mechanisms is of crucial importance in many aspects of normal and pathological brain function. In particular, a prominent aspect that remains debated is how the plasticity mechanisms, which encompass a broad spectrum of temporal and spatial scales, come to play together in a concerted fashion. Here we review and discuss evidence that points to a possible non-neuronal, glial candidate for such orchestration: the regulation of synaptic plasticity by astrocytes.
Distributed ARTMAP
Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. This system incorporates elements of the unsupervised dART model as well as new features, including a content-addressable memory (CAM) rule. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression. The model's computational learning rules correspond to paradoxical cortical data.
Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
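For orientation, the sketch below shows the winner-take-all fuzzy ART building block (complement coding, category choice, vigilance test, fast learning) that dARTMAP generalizes by distributing activation across categories; the distributed dART dynamics and the CAM rule themselves are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def complement_code(a):
    """Standard ART preprocessing: concatenate the input with its complement."""
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """One winner-take-all fuzzy ART presentation: rank categories by the
    choice function, test them against the vigilance rho, and either update
    the resonating category or recruit a new one. dARTMAP replaces this
    winner-take-all code with distributed activation (not shown here)."""
    if not weights:
        weights.append(I.copy())
        return 0, weights
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(T)[::-1]:             # search categories in choice order
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match >= rho:                       # resonance: fast learning
            weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
            return int(j), weights
    weights.append(I.copy())                   # no resonance: new category
    return len(weights) - 1, weights

# Toy usage with complement-coded 2-D inputs: similar inputs share a category.
weights = []
for a in ([0.1, 0.2], [0.15, 0.25], [0.9, 0.8]):
    j, weights = fuzzy_art_step(complement_code(np.array(a)), weights)
    print("input", a, "-> category", j)
```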