Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling.
The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal
pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does
this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that
the neuron learns two components: (1) the level of average background activity and (2) specific
spike times of a pattern.
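The pair-based form of STDP that such analyses commonly assume can be sketched as follows. This is an illustrative textbook formulation, not the thesis's fitted model; the learning rates and time constants are assumed values.

```python
# Minimal sketch of pair-based STDP: a synapse is potentiated when the
# presynaptic spike precedes the postsynaptic spike, and depressed otherwise,
# with exponentially decaying temporal windows. All parameters are assumptions.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation/depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:  # pre before post: potentiation, decaying with the delay
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)  # post before pre: depression
```

Repeated application of this rule over a recurring input pattern is what drives the synaptic selectivity analysed in the thesis.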
Taking advantage of these findings, a network is developed that can train recognisers for longer
spatio-temporal input signals using spike-timing dependent plasticity. With a number of neurons
mutually connected by plastic synapses and subject to a global winner-takes-all mechanism,
chains of neurons can form in which each neuron is selective to a different segment of a
repeating input pattern, and the neurons are feedforwardly connected such that both the correct
stimulus and the firing of the previous neuron are required to activate the next neuron in the
chain. This is akin to a simple class of finite state automata.
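The chain behaviour described above can be caricatured as a finite state automaton in which each unit advances the state only when its preferred segment arrives in the right order. The sketch below is our toy illustration (with hypothetical string-valued segments), not the thesis's spiking network.

```python
# Toy finite-state caricature of the neuron chain: unit k is selective to
# segment 'k', and the chain only advances when the expected segment arrives;
# any mismatch resets it, mimicking the requirement that both the correct
# stimulus and the previous neuron's firing are needed.

def run_chain(stimulus, n_units):
    """Return how far along the chain a stimulus string activates."""
    state = 0  # number of chain units activated so far
    for seg in stimulus:
        if state == n_units:
            break  # chain fully activated
        state = state + 1 if seg == str(state) else 0  # advance or reset
    return state
```

For example, the in-order stimulus "012" drives a three-unit chain to completion, while a shuffled stimulus resets it.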
Following this, a novel resource-based STDP learning rule is introduced. The learning rule
has several advantages over typical implementations of STDP and produces synaptic statistics
that compare favourably with experimental observations: for example, the resulting synaptic
weight distributions and the presence of silent synapses match experimental data.
The role of excitation and inhibition in learning and memory formation
The neurons in the mammalian brain can be classified into two broad categories: excitatory and inhibitory neurons. The former have historically been associated with information processing, whereas the latter have been linked to network homeostasis. More recently, inhibitory neurons have been related to several computational roles, such as the gating of signal propagation, the mediation of network competition, and learning. However, the ways in which excitation and inhibition can regulate learning have not been exhaustively explored. Here we explore several model systems to investigate the role of excitation and inhibition in learning and memory formation. Additionally, we investigate the effect that third factors, such as neuromodulators and network state, exert over this process.

Firstly, we explore the effect of neuromodulators on excitatory neurons and excitatory plasticity. Next, we investigate the plasticity rules governing excitatory connections while the neural network oscillates in a sleep-like cycle, shifting between Up and Down states. We observe that this plasticity rule depends on the state of the network. To study the role of inhibitory neurons in learning, we then investigate the mechanisms underlying place field emergence and consolidation. Our simulations suggest that dendrite-targeting interneurons play an important role both in promoting the emergence of new place fields and in ensuring place field stabilization. Soma-targeting interneurons, on the other hand, appear to be related to quick, context-specific changes in the assignment of place and silent cells. We next investigate the mechanisms underlying the plasticity of synaptic connections from specific types of interneurons. Our experiments suggest that different types of interneurons undergo different synaptic plasticity rules. Using a computational model, we implement these plasticity rules in a simplified network.
Our simulations indicate that the interaction between the different forms of plasticity accounts for the development of stable place fields across multiple environments. Moreover, these plasticity rules seem to be gated by the postsynaptic membrane voltage. Inspired by these findings, we propose a voltage-based inhibitory synaptic plasticity rule. As a consequence of this rule, network activity is kept under control by the imposition of a maximum pyramidal cell firing rate. Remarkably, this rule does not constrain the postsynaptic firing rate to a narrow range. Overall, through multiple stages of interaction between experiments and computational simulations, we investigate the effect of excitation and inhibition on learning. We propose mechanistic explanations for experimental data and suggest possible functional implications of experimental findings. Finally, we propose a voltage-based inhibitory synaptic plasticity model as a mechanism for flexible network homeostasis.
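The flavour of a voltage-gated inhibitory plasticity rule can be sketched as follows. This is a deliberately simplified caricature under our own assumptions (the gating threshold, learning rate, and update form are illustrative), not the rule proposed in the thesis.

```python
# Caricature of voltage-gated inhibitory plasticity: the inhibitory weight
# grows only while the postsynaptic membrane voltage exceeds a gating
# threshold, so inhibition caps excessive activity without clamping the
# postsynaptic rate to a narrow range. All parameters are assumptions.

ETA = 0.05      # learning rate (assumed)
V_GATE = -50.0  # gating voltage in mV (assumed)

def inhib_dw(v_post, pre_active):
    """Inhibitory weight change for one time step."""
    if pre_active and v_post > V_GATE:
        return ETA * (v_post - V_GATE)  # strengthen inhibition above the gate
    return 0.0  # below the gate (or silent presynaptic cell): no change
```

Because the update vanishes below the gate, the rule only intervenes when activity approaches the imposed maximum, consistent with the "cap, not clamp" behaviour described above.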
Tomosyn-1 is a Novel Molecular Target of the Ubiquitin-Proteasome System and Underlies Synaptic Architecture
The efficacy of information transfer at synaptic contacts between excitatory central neurons undergoes continual modification in response to neuronal activity and physiological state. This plasticity in synaptic transmission may involve changes in presynaptic release probability, postsynaptic receptor number and sensitivity, and/or synaptic morphology. The molecular mechanisms influencing these distinctive targets are an investigative focus given their importance in learning, memory, and cognitive function. Much attention has focused on transcriptional and translational regulation of the synapse, but post-translational modification and directed turnover of specific protein components is also recognized as critical. Central to targeted protein degradation is the ubiquitin-proteasome system (UPS). While an increasing number of synaptic proteins are known to be susceptible to activity-dependent regulation by the UPS, relatively little work has focused on the action of the UPS on known negative regulators of synaptic function. The SNARE protein Tomosyn-1 (Tomo-1) directly inhibits evoked release at central synapses, but it is also present postsynaptically, where no function has yet been identified. It was recently discovered that the related Tomosyn-2 protein is subject to ubiquitination and degradation in neuroendocrine pancreatic beta cells, suggesting that their secretory activity may be under the control of the UPS. The general hypothesis of this dissertation is that a central mechanism underlying modulation of the synapse is the targeted degradation of Tomo-1.
This dissertation made use of a series of complementary biochemical, molecular, and imaging technologies in hippocampal neuronal culture. We demonstrate that Tomo-1 protein level, independently of its SNARE domain, positively correlates with postsynaptic dendritic spine density in vivo. The data also indicate that the UPS regulates steady-state Tomo-1 level and function. Immunoprecipitated Tomo-1 was ubiquitinated and co-precipitated the E3 ligase HRD1, and both effects dramatically increased upon proteasome inhibition. The interaction was also found in situ, via fixed-cell proximity ligation assay. In vitro reactions indicated direct, HRD1 concentration-dependent Tomo-1 ubiquitination. Furthermore, we demonstrated that neuronal HRD1 knockdown increased Tomo-1 level, and consequently, dendritic spine density. This effect was abrogated by concurrent knockdown of Tomo-1, strongly suggesting a direct HRD1/Tomo-1 effector relationship. We confirmed Tomo-1 is a UPS substrate by identifying 12 lysine residues which are ubiquitinated by HRD1 and generated a non-ubiquitinatable Tomo-1 mutant. Finally, we performed Tomo-1 isoform and homologue comparisons, protein structure modeling, and antibody-based domain targeting of Tomo-1 in neuronal lysates to identify four lysine residues which are highly likely to be ubiquitinated in vivo. In summary, the results of this dissertation indicate that the UPS participates in tuning synaptic efficacy via the precise regulation of neuronal Tomo-1 and spine density. These findings implicate Tomo-1 as a prime target of UPS-mediated degradation in the implementation of morphological plasticity in central neurons.
PhD, Neuroscience, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143947/1/jsaldate_1.pd
Improving Associative Memory in a Network of Spiking Neurons
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information such models can store is quite limited due to restrictions caused by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.
The mammalian brain consists of complex resistive-capacitive electrical circuitry which is formed by the interconnection of large numbers of neurons. A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally fewer in number than the pyramidal cells and which form connections with pyramidal cells and other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and the effect of incorporating spatially dependent connectivity, on the network during recall of previously stored information.
In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), in which mathematical transforms are applied to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity applied, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. Firstly, we apply localised disynaptic inhibition, which scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, we add a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells which receive slightly more excitation (most likely high units) over the firing threshold. Finally, we implement spatial characteristics of the dendritic tree, which allow a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses, simulating the loss of potential in synapses located in the outer dendritic regions due to increased resistance.
To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and finally, 100% basket cells providing feedback inhibition. These networks are compared and contrasted for their effect on recall quality and network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the role of inhibition and cellular dynamics is pivotal in learning and memory.
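The classical binary auto-associative network that such spiking models build upon (in the Willshaw style underlying the Graham and Willshaw work cited above) can be sketched as clipped Hebbian storage followed by one-step threshold recall from a partial cue. The pattern sizes below are illustrative, far smaller than the 100- or 1000-cell networks in the thesis.

```python
# Sketch of a Willshaw-style binary auto-associative memory: weights are
# clipped Hebbian (0/1), and recall thresholds each unit's dendritic sum at
# the number of active cue units. Self-connections are kept, as in the
# classical formulation. Pattern values here are illustrative only.
import numpy as np

def store(patterns):
    """Clipped Hebbian weight matrix from binary patterns (one per row)."""
    n = patterns.shape[1]
    w = np.zeros((n, n), dtype=int)
    for p in patterns:
        w |= np.outer(p, p)  # clip: a connection is either present or absent
    return w

def recall(w, cue):
    """One-step recall: a unit fires if it hears from every active cue unit."""
    return (w @ cue >= cue.sum()).astype(int)

pats = np.array([[1, 1, 0, 0, 1],
                 [0, 1, 1, 1, 0]])
w = store(pats)
cue = np.array([1, 1, 0, 0, 0])  # partial version of the first pattern
```

Recall from this partial cue reconstructs the full first pattern; the thesis replaces these two-state units with conductance-based spiking cells while keeping the same storage-and-recall logic.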
Facilitatory neural dynamics for predictive extrapolation
Neural conduction delay is a serious issue for organisms that need to act in real
time. Perceptual phenomena such as the flash-lag effect (FLE, where the position of
a moving object is perceived to be ahead of a brief flash when they are actually colocalized)
suggest that the nervous system may perform extrapolation to compensate
for delay. However, the precise neural mechanism for extrapolation has not been fully
investigated.
The main hypothesis of this dissertation is that facilitating synapses, with their
dynamic sensitivity to the rate of change in the input, can serve as a neural basis for
extrapolation. To test this hypothesis, computational and biologically inspired models
are proposed in this dissertation. (1) The facilitatory activation model (FAM) was
derived and tested in the motion FLE domain, showing that FAM with smoothing
can account for human data. (2) FAM was given a neurophysiological ground by
incorporating a spike-based model of facilitating synapses. The spike-based FAM was
tested in the luminance FLE domain, successfully explaining extrapolation in both
increasing and decreasing luminance conditions. Also, inhibitory backward masking
was suggested as a potential cellular mechanism accounting for the smoothing effect.
(3) The spike-based FAM was extended by combining it with spike-timing-dependent
plasticity (STDP), which allows facilitation to go across multiple neurons. Through STDP, facilitation can selectively propagate to a specific direction, which enables the
multi-neuron FAM to express behavior consistent with orientation FLE. (4) FAM
was applied to a modified 2D pole-balancing problem to test whether the biologically
inspired delay compensation model can be utilized in engineering domains. Experimental
results suggest that facilitating activity greatly enhances real time control
performance under various forms of input delay as well as under increasing delay and
input blank-out conditions.
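The facilitation dynamics underlying FAM can be sketched with a Tsodyks-Markram-style synapse, in which an efficacy variable grows with each spike and decays between spikes, so faster-rising inputs produce disproportionately larger responses. The parameter values below are illustrative assumptions, not those fitted in the dissertation.

```python
# Sketch of a facilitating synapse: efficacy u jumps at each spike and decays
# back toward its baseline U between spikes, so higher input rates leave less
# time for decay and accumulate more facilitation. Parameters are assumptions.
import math

U = 0.2        # baseline release probability (assumed)
TAU_F = 200.0  # facilitation decay time constant in ms (assumed)

def facilitated_responses(spike_times):
    """Synaptic efficacy u sampled at each spike time (ms)."""
    u, last_t, out = U, None, []
    for t in spike_times:
        if last_t is not None:
            u = U + (u - U) * math.exp(-(t - last_t) / TAU_F)  # decay toward U
        u = u + U * (1.0 - u)  # spike-triggered facilitation increment
        out.append(u)
        last_t = t
    return out

# The same number of spikes at a higher rate yields stronger facilitation:
fast = facilitated_responses([0, 10, 20, 30])
slow = facilitated_responses([0, 100, 200, 300])
```

This rate sensitivity is exactly the property FAM exploits: the synapse's response encodes the input's rate of change, providing the raw signal for extrapolation.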
The main contribution of this dissertation is that it shows an intimate link between
the organism-level problem of delay compensation, the perceptual phenomenon of
FLE, the computational function of extrapolation, and the neurophysiological mechanism
of facilitating synapses (and STDP). The results are expected to shed new light on
real-time and predictive processing in the brain, and to help in understanding specific
neural processes such as facilitating synapses.
A stochastic model of hippocampal synaptic plasticity with geometrical readout of enzyme dynamics
Discovering the rules of synaptic plasticity is an important step for understanding
brain learning. Existing plasticity models are either (1) top-down and interpretable, but not
flexible enough to account for experimental data, or (2) bottom-up and biologically realistic, but
too intricate to interpret and hard to fit to data. To avoid the shortcomings of these approaches, we
present a new plasticity rule based on a geometrical readout mechanism that flexibly maps synaptic
enzyme dynamics to predict plasticity outcomes. We apply this readout to a multi-timescale model
of hippocampal synaptic plasticity induction that includes electrical dynamics, calcium, CaMKII
and calcineurin, and accurate representation of intrinsic noise sources. Using a single set of model
parameters, we demonstrate the robustness of this plasticity rule by reproducing nine published ex
vivo experiments covering various spike-timing and frequency-dependent plasticity induction
protocols, animal ages, and experimental conditions. Our model also predicts that in vivo-like spike
timing irregularity strongly shapes plasticity outcome. This geometrical readout modelling approach
can be readily applied to other excitatory or inhibitory synapses to discover their synaptic plasticity rules.
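The idea of a geometrical readout can be caricatured as classifying where the joint enzyme trajectory dwells in the (CaMKII, calcineurin) plane. The region boundaries and dwell threshold below are our illustrative assumptions, not the published model's fitted geometry.

```python
# Caricature of a geometrical readout: the plasticity outcome is decided by
# how long the joint (CaMKII, calcineurin) activity trajectory dwells inside
# predefined regions of the enzyme plane. Region shapes and the dwell
# threshold are assumptions for illustration only.

def in_ltp_region(k, p):
    return k > 0.7 and p < 0.4  # high kinase, low phosphatase (assumed bounds)

def in_ltd_region(k, p):
    return p > 0.7 and k < 0.4  # high phosphatase, low kinase (assumed bounds)

def readout(trajectory, dwell_threshold=3):
    """Return 'LTP', 'LTD', or 'no change' from (CaMKII, CaN) samples."""
    ltp = sum(1 for k, p in trajectory if in_ltp_region(k, p))
    ltd = sum(1 for k, p in trajectory if in_ltd_region(k, p))
    if ltp >= dwell_threshold and ltp > ltd:
        return "LTP"
    if ltd >= dwell_threshold and ltd > ltp:
        return "LTD"
    return "no change"  # trajectory never dwelt long enough in either region
```

Because the outcome depends on dwell time rather than instantaneous enzyme levels, irregular spike timing that reshapes the trajectory can flip the predicted outcome, as the abstract notes for in vivo-like inputs.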
Emergent Dynamics in Neocortical Microcircuits
Interactions among neurons can take place in a wide variety of forms. It is the goal of this thesis to investigate the properties and implications of a number of these interactions that we believe are relevant for information processing in the brain. Neuroscience has progressed considerably in identifying the diverse neuronal cell types and providing detailed information about their individual morphological, genetic and electrophysiological properties. It remains a great challenge to identify how this diversity of cells interacts at the microcircuit level. This task is made more complex by the fact that the forms of interaction are not always obvious or simple to observe, even with advanced scientific equipment. In order to achieve a better understanding and envision possible implications of the concerted activity of multiple neurons, experiments and models must often be used jointly and iteratively. In this thesis I first present the development of a computer-assisted system for multi-electrode patch-clamp that enabled new kinds of experiments, allowing qualitatively different information to be obtained concerning the interaction of multiple neurons. In the following chapters I describe the different questions addressed and approaches utilized in the investigation of neuronal interactions using multi-electrode patch-clamp experiments. The principles behind the clustered organization of synaptic connectivity in Layer V of the somatosensory cortex are the first experimental finding presented. I then quantify the ephaptic coupling between neurons and how apparently minute signals might help correlate the activity of many neurons. Next, the ubiquity of a neocortical microcircuit responsible for frequency-dependent disynaptic inhibition is demonstrated, and the summation properties of this microcircuit are then analyzed. Finally, a model to explain the interactions between gap junctions and synaptic transmission in the olfactory bulb is proposed.