    A Computational Investigation of Neural Dynamics and Network Structure

    With the overall goal of illuminating the relationship between neural dynamics and neural network structure, this thesis presents a) a computer model of a network infrastructure capable of global broadcast and competition, and b) a study of various convergence properties of spike-timing-dependent plasticity (STDP) in a recurrent neural network. The first part of the thesis explores the parameter space of a possible Global Neuronal Workspace (GNW) realised in a novel computational network model using stochastic connectivity. The structure of this model is analysed in light of the characteristic dynamics of a GNW: broadcast, reverberation, and competition. It is found that even with careful consideration of the balance between excitation and inhibition, the structural choices do not allow agreement with the GNW dynamics, and the implications of this are addressed. An additional level of competition – access competition – is added, discussed, and found to be more conducive to winner-takes-all competition. The second part of the thesis investigates the formation of synaptic structure due to neural and synaptic dynamics. From previous theoretical and modelling work, it is predicted that homogeneous stimulation in a recurrent neural network with STDP will create a self-stabilising equilibrium amongst synaptic weights, while heterogeneous stimulation will induce structured synaptic changes. A new factor in modulating the synaptic weight equilibrium is suggested from the experimental evidence presented: anti-correlation due to inhibitory neurons. It is observed that the synaptic equilibrium creates competition amongst synapses, and those specifically stimulated during heterogeneous stimulation win out. Further investigation is carried out to assess the effect that more complex STDP rules would have on synaptic dynamics, varying parameters of a trace STDP model. There is little qualitative effect on synaptic dynamics under low-frequency (< 25 Hz) conditions, justifying the use of simple STDP until further experimental or theoretical evidence suggests otherwise.
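
    The trace STDP model studied in the second part can be illustrated with a minimal pair-based sketch, in which pre- and postsynaptic spikes leave decaying traces that the opposite spike reads out. All parameter values below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np

# Pair-based STDP with exponential windows: a pre-before-post spike pair
# potentiates the synapse, a post-before-pre pair depresses it.
A_plus, A_minus = 0.01, 0.012     # potentiation/depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # trace time constants (ms)
dt = 1.0                          # simulation step (ms)

def stdp_step(w, x_pre, x_post, pre_spike, post_spike):
    """One time step of trace-based pair STDP for a single synapse."""
    # Decay the pre- and postsynaptic eligibility traces, bump them on spikes.
    x_pre += -x_pre * dt / tau_plus + (1.0 if pre_spike else 0.0)
    x_post += -x_post * dt / tau_minus + (1.0 if post_spike else 0.0)
    # A postsynaptic spike reads the pre trace (potentiation);
    # a presynaptic spike reads the post trace (depression).
    if post_spike:
        w += A_plus * x_pre
    if pre_spike:
        w -= A_minus * x_post
    return float(np.clip(w, 0.0, 1.0)), x_pre, x_post

# A pre spike 10 ms before a post spike yields net potentiation.
w, x_pre, x_post = 0.5, 0.0, 0.0
for t in range(100):
    w, x_pre, x_post = stdp_step(w, x_pre, x_post, t == 20, t == 30)
print(f"final weight: {w:.4f}")
```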

    Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks

    The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs, to investigate which functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity removes connections between uncorrelated excitatory cells and replaces them at random. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks producing neurons most responsive to specific paired inputs – a building block of computation and an essential function of cortex – contain the excessive clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks’ ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective, and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli and in networks undergoing raised levels of spontaneous activity when structural plasticity is combined with functional plasticity. In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
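
    The Hebbian structural-plasticity step described above (prune connections between uncorrelated excitatory cells, replace them at random) can be sketched as follows; the pruning threshold, network size, and use of a precomputed correlation matrix are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def structural_plasticity_step(W, corr, prune_thresh=0.05):
    """One rewiring step of a Hebbian structural-plasticity rule.

    W    : (N, N) binary excitatory connectivity, W[i, j] = 1 if j -> i.
    corr : (N, N) pairwise activity correlations measured from the network.
    Synapses between weakly correlated cells are pruned and replaced by new
    random connections, keeping the total synapse count constant.
    """
    N = W.shape[0]
    pre, post = np.nonzero(W.T)                 # existing synapses
    weak = corr[post, pre] < prune_thresh
    n_pruned = int(weak.sum())
    W[post[weak], pre[weak]] = 0                # prune uncorrelated pairs
    added = 0
    while added < n_pruned:                     # random replacement
        i, j = rng.integers(N), rng.integers(N)
        if i != j and W[i, j] == 0:
            W[i, j] = 1
            added += 1
    return W

# Example: random sparse network, random symmetric correlation matrix.
N = 50
W = (rng.random((N, N)) < 0.1).astype(int)
np.fill_diagonal(W, 0)
corr = rng.random((N, N))
corr = (corr + corr.T) / 2
W = structural_plasticity_step(W, corr)
print("synapses after rewiring:", W.sum())
```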

    The role of excitation and inhibition in learning and memory formation

    The neurons in the mammalian brain can be classified into two broad categories: excitatory and inhibitory neurons. The former have historically been associated with information processing, whereas the latter have been linked to network homeostasis. More recently, inhibitory neurons have been related to several computational roles such as the gating of signal propagation, mediation of network competition, or learning. However, the ways in which excitation and inhibition can regulate learning have not been exhaustively explored. Here we explore several model systems to investigate the role of excitation and inhibition in learning and memory formation. Additionally, we investigate the effect that third factors such as neuromodulators and network state exert over this process. Firstly, we explore the effect of neuromodulators on excitatory neurons and excitatory plasticity. Next, we investigate the plasticity rules governing excitatory connections while the neural network oscillates in a sleep-like cycle, shifting between Up and Down states. We observe that this plasticity rule depends on the state of the network. To study the role of inhibitory neurons in learning, we then investigate the mechanisms underlying place field emergence and consolidation. Our simulations suggest that dendrite-targeting interneurons play an important role both in promoting the emergence of new place fields and in ensuring place field stabilization. Soma-targeting interneurons, on the other hand, appear to be related to quick, context-specific changes in the assignment of place and silent cells. We next investigate the mechanisms underlying the plasticity of synaptic connections from specific types of interneurons. Our experiments suggest that different types of interneurons undergo different synaptic plasticity rules. Using a computational model, we implement these plasticity rules in a simplified network. Our simulations indicate that the interaction between the different forms of plasticity accounts for the development of stable place fields across multiple environments. Moreover, these plasticity rules seem to be gated by the postsynaptic membrane voltage. Inspired by these findings, we propose a voltage-based inhibitory synaptic plasticity rule. As a consequence of this rule, network activity is kept under control by the imposition of a maximum pyramidal cell firing rate. Remarkably, this rule does not constrain the postsynaptic firing rate to a narrow range. Overall, through multiple stages of interaction between experiments and computational simulations, we investigate the effect of excitation and inhibition on learning. We propose mechanistic explanations for experimental data and suggest possible functional implications of experimental findings. Finally, we propose a voltage-based inhibitory synaptic plasticity model as a mechanism for flexible network homeostasis.
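
    A minimal sketch of what a voltage-gated inhibitory plasticity rule of this kind could look like: inhibition onto a pyramidal cell strengthens when the postsynaptic voltage is high and weakens when it is low, imposing a ceiling on firing without pinning the rate to a narrow band. The exact functional form and constants are assumptions for illustration, not the thesis's rule:

```python
# Sketch of a voltage-gated inhibitory plasticity update for one synapse
# from an interneuron onto a pyramidal cell. Constants are illustrative.
eta = 0.001        # inhibitory learning rate
v_theta = -55.0    # postsynaptic voltage threshold (mV)

def inhib_plasticity_step(w_inh, v_post, pre_inh_spike):
    """On an inhibitory presynaptic spike, strengthen inhibition if the
    postsynaptic cell is depolarized above v_theta, otherwise weaken it."""
    if pre_inh_spike:
        if v_post > v_theta:
            w_inh += eta * (v_post - v_theta)  # too active: add inhibition
        else:
            w_inh -= eta                       # quiet: release inhibition
    return max(w_inh, 0.0)

# A depolarized cell recruits more inhibition; a hyperpolarized one sheds it.
for v in (-70.0, -50.0):
    print(v, round(inhib_plasticity_step(0.2, v, pre_inh_spike=True), 4))
```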

    Distinct Effects of Perceptual Quality on Auditory Word Recognition, Memory Formation and Recall in a Neural Model of Sequential Memory

    Adults with sensory impairment, such as reduced hearing acuity, have an impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing-dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor-quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that, within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word context cells permits reverse recall. Also, only with such coactive context cells does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
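
    The core hypothesis (a weaker stimulus rises more slowly to the identification threshold, so the pre-stimulus context activity has decayed further and the Hebbian strengthening is smaller) can be sketched with a single leaky rate unit. The time constants, threshold, and learning rate are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

dt, tau = 1.0, 50.0          # ms
theta = 0.8                  # identification threshold on the item pool
tau_ctx, eta = 200.0, 0.1    # context-trace decay and Hebbian learning rate

def identify(quality, t_max=2000.0):
    """Integrate a leaky item unit toward `quality`; return (latency, rate)."""
    r, t = 0.0, 0.0
    while t < t_max:
        r += dt / tau * (quality - r)
        t += dt
        if r >= theta:
            return t, r
    return np.inf, r          # stimulus never identified

# A lower-quality stimulus crosses threshold later, so the pre-stimulus
# context activity has decayed more and the Hebbian increment is smaller.
for q in (1.0, 0.85):
    latency, r = identify(q)
    ctx = np.exp(-latency / tau_ctx)
    print(f"quality={q}: latency={latency:.0f} ms, dw={eta * r * ctx:.4f}")
```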

    Development of a Bio-Inspired Computational Astrocyte Model for Spiking Neural Networks

    The mammalian brain is the most capable and complex computing entity known today. For many years there has been research focused on reproducing the brain's processing capabilities. An early example of this endeavor was the perceptron, which has become the core building block of neural network models in the deep learning era. Deep learning has had tremendous success in well-defined tasks like object detection, games like Go and chess, and automatic speech recognition. In fact, some deep learning models can match and even outperform humans in specific situations. However, in general, they require much more training, have higher power consumption, are more susceptible to noise and adversarial perturbations, and behave very differently from their biological counterparts. In contrast, spiking neural network models take a step closer to biology, and in some cases behave identically to measurements of real neurons. Though there has been advancement, spiking neural networks are far from reaching their full potential, in part because the full picture of their biological underpinnings is unclear. This work attempts to reduce that gap further by exploring a bio-inspired configuration of spiking neurons coupled with a computational astrocyte model. Astrocytes, initially thought to be passive support cells in the brain, are now known to actively participate in neural processing. They are believed to be critical for some processes, such as neural synchronization, self-repair, and learning. The developed astrocyte model is geared towards synaptic plasticity and is shown to improve upon existing local learning rules, as well as to create a generalized approach to local spike-timing-dependent plasticity. Beyond generalizing existing learning approaches, the astrocyte is able to leverage temporal and spatial integration to improve convergence and tolerance to noise. The astrocyte model is expanded to influence multiple synapses and configured for a specific learning task. A single astrocyte paired with a single leaky integrate-and-fire neuron is shown to converge on a solution in 2-, 3-, and 4-synapse configurations. Beyond the more concrete improvements in plasticity, this work provides a foundation for exploring supervisory astrocyte-like elements in spiking neural networks, and a framework to implement and extend many three-factor learning rules. Overall, this work brings the field a bit closer to leveraging some of the distinct advantages of biological neural networks.
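
    One way to picture an astrocyte-like third factor is as a slow variable that integrates coincident pre/post activity and gates the size of the STDP update at a leaky integrate-and-fire synapse. This is a sketch of the general three-factor idea only; the gating form and every constant are assumptions, not the dissertation's model:

```python
import numpy as np

dt = 1.0                                 # ms
tau_m, v_drive, v_th = 20.0, 0.9, 1.0    # LIF parameters (sub-threshold drive)
tau_ast = 500.0                          # slow astrocyte integration (ms)
tau_trace, A = 20.0, 0.02                # STDP trace constant and amplitude

rng = np.random.default_rng(1)
v, w, ast = 0.0, 0.5, 0.0
x_pre = x_post = 0.0

for step in range(3000):
    pre = rng.random() < 0.02            # sparse presynaptic spike train
    v += dt / tau_m * (v_drive - v) + (w if pre else 0.0)
    post = v >= v_th
    if post:
        v = 0.0                          # reset after an output spike
    # Decay the pair-STDP eligibility traces.
    x_pre -= x_pre * dt / tau_trace
    x_post -= x_post * dt / tau_trace
    # Three-factor update: the slow astrocyte variable scales the STDP step,
    # so plasticity only becomes effective after sustained coincident activity.
    if post:
        w += ast * A * x_pre
    if pre:
        w -= ast * A * x_post
    w = min(max(w, 0.0), 1.0)
    # Bump traces with the current spikes; the astrocyte integrates coincidences.
    x_pre += 1.0 if pre else 0.0
    x_post += 1.0 if post else 0.0
    ast += dt / tau_ast * (-ast) + (0.1 if (pre and post) else 0.0)

print(f"final weight: {w:.3f}, astrocyte activation: {ast:.3f}")
```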

    Synaptic Plasticity and Hebbian Cell Assemblies

    Synaptic dynamics are critical to the function of neuronal circuits on multiple timescales. In the first part of this dissertation, I tested the roles of action potential timing and NMDA receptor composition in long-term modifications to synaptic efficacy. In a computational model I showed that the dynamics of the postsynaptic [Ca2+] time course can be used to map the timing of pre- and postsynaptic action potentials onto experimentally observed changes in synaptic strength. Using dual patch-clamp recordings from cultured hippocampal neurons, I found that NMDAR subtypes can map combinations of pre- and postsynaptic action potentials onto either long-term potentiation (LTP) or depression (LTD). LTP and LTD could even be evoked by the same stimuli, and in such cases the plasticity outcome was determined by the availability of NMDAR subtypes. The expression of LTD was increasingly presynaptic as synaptic connections became more developed. Finally, I found that spike-timing-dependent potentiability is history-dependent, with a non-linear relationship to the number of pre- and postsynaptic action potentials. After LTP induction, subsequent potentiability recovered on a timescale of minutes, and was dependent on the duration of the previous induction. While activity-dependent plasticity is putatively involved in circuit development, I found that it was not required to produce small networks capable of exhibiting rhythmic persistent activity patterns called reverberations. However, positive synaptic scaling produced by network inactivity yielded increased quantal synaptic amplitudes, connectivity, and potentiability, all favoring reverberation. These data suggest that chronic inactivity upregulates synaptic efficacy by both quantal amplification and by the addition of silent synapses, the latter of which are rapidly activated by reverberation. Reverberation in previously inactivated networks also resulted in activity-dependent outbreaks of spontaneous network activity. Applying a model of short-term synaptic dynamics to the network level, I argue that these experimental observations can be explained by the interaction between presynaptic calcium dynamics and short-term synaptic depression on multiple timescales. Together, the experiments and modeling indicate that ongoing activity, synaptic scaling and metaplasticity are required to endow networks with a level of synaptic connectivity and potentiability that supports stimulus-evoked persistent activity patterns but avoids spontaneous activity.
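
    The mapping from spike timing through the postsynaptic [Ca2+] time course to LTP or LTD can be sketched with a two-threshold calcium-control model: the weight drifts up while calcium exceeds a high threshold and down while it sits between a low and the high threshold. The influx amplitudes, thresholds, and drift rates below are illustrative, not values fitted in the dissertation:

```python
dt, tau_ca = 1.0, 30.0              # ms
theta_d, theta_p = 0.6, 1.0         # LTD and LTP calcium thresholds
gamma_d, gamma_p = 0.002, 0.01      # depression/potentiation drift rates
c_pre, c_post = 0.5, 0.8            # calcium influx per pre/post spike

def run_pair(delta_t, T=300):
    """Weight change from one pre/post pair, delta_t = t_post - t_pre (ms)."""
    ca, w = 0.0, 0.5
    t_pre, t_post = 100.0, 100.0 + delta_t
    for step in range(T):
        t = step * dt
        ca -= ca * dt / tau_ca                 # calcium transient decays
        if abs(t - t_pre) < dt / 2:
            ca += c_pre                        # pre spike: NMDA-like influx
        if abs(t - t_post) < dt / 2:
            ca += c_post                       # post spike: bAP-driven influx
        if ca > theta_p:
            w += gamma_p * dt                  # high calcium: potentiation
        elif ca > theta_d:
            w -= gamma_d * dt                  # mid-range calcium: depression
    return w - 0.5

for d in (+10, -10):   # pre-before-post vs post-before-pre
    print(f"delta_t = {d:+d} ms -> dw = {run_pair(d):+.4f}")
```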

    Learning in large-scale spiking neural networks

    Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback. In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks. In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task. In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
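
    For reference, a minimal rate-based sketch of the BCM rule with its sliding modification threshold (the spiking adaptation in the thesis is more involved); all constants are illustrative:

```python
import numpy as np

# dw/dt = eta * x * y * (y - theta), with theta sliding to track <y^2>:
# activity above theta potentiates, below depresses, and the moving
# threshold keeps the postsynaptic rate from running away.
rng = np.random.default_rng(0)
eta, tau_theta, dt = 1e-4, 100.0, 1.0

w = rng.random(10) * 0.1       # weights from 10 presynaptic units
theta = 1.0                    # initial modification threshold
for step in range(5000):
    x = rng.random(10)                          # presynaptic rates
    y = float(w @ x)                            # postsynaptic rate (linear)
    w = np.clip(w + dt * eta * x * y * (y - theta), 0.0, 1.0)
    theta += dt / tau_theta * (y ** 2 - theta)  # sliding threshold

print(f"final rate: {y:.3f}, threshold: {theta:.3f}")
```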

    Fast unsupervised learning for visual pattern recognition using spike timing dependent plasticity

    Real-time learning requires algorithms that run at speeds comparable to human or animal perception, which is a major challenge for processing visual inputs. Research shows that a biological brain can process complicated real-life recognition scenarios on a millisecond scale. Inspired by biological systems, in this paper we propose a novel real-time learning method combining a spike-timing-based feed-forward spiking neural network (SNN) with a fast unsupervised spike-timing-dependent plasticity learning rule that uses dynamic post-synaptic thresholds. Fast cross-validated experiments on the MNIST database showed the high efficiency of the proposed method at an acceptable accuracy.
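
    A sketch of the dynamic post-synaptic threshold idea: each output neuron's firing threshold jumps when it spikes and decays back to baseline, so no single neuron dominates during unsupervised STDP. The simplified one-sided STDP step and all constants are assumptions for illustration, not the paper's method:

```python
import numpy as np

dt, tau_m, tau_th = 1.0, 20.0, 200.0   # ms
theta0, theta_plus = 1.0, 0.3          # baseline threshold and post-spike jump

rng = np.random.default_rng(0)
n_in, n_out = 100, 10
W = rng.random((n_out, n_in)) * 0.05   # input -> output weights
v = np.zeros(n_out)
theta = np.full(n_out, theta0)

for step in range(1000):
    spikes_in = (rng.random(n_in) < 0.05).astype(float)  # random input spikes
    v += dt / tau_m * (-v) + W @ spikes_in
    fired = v >= theta
    v[fired] = 0.0                                       # reset after spiking
    theta += dt / tau_th * (theta0 - theta)              # decay to baseline...
    theta[fired] += theta_plus                           # ...jump on a spike
    if fired.any():
        # Simplified STDP: strengthen synapses from just-active inputs.
        W[fired] += 0.01 * spikes_in
        W = np.clip(W, 0.0, 0.1)   # crude weight bound in place of depression

print("mean adaptive threshold:", round(float(theta.mean()), 3))
```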

    Synaptic consolidation: from synapses to behavioral modeling

    Synaptic plasticity, a key process for memory formation, manifests itself across different time scales ranging from a few seconds for plasticity induction up to hours or even years for consolidation and memory retention. We developed a three-layered model of synaptic consolidation that accounts for data across a large range of experimental conditions. Consolidation occurs in the model through the interaction of the synaptic efficacy with a scaffolding variable by a read-write process mediated by a tagging-related variable. Plasticity-inducing stimuli modify the efficacy, but the state of tag and scaffold can only change if a write protection mechanism is overcome. Our model makes a link from depotentiation protocols in vitro to behavioral results regarding the influence of novelty on inhibitory avoidance memory in rats.
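
    A minimal sketch of a three-variable read-write cascade in this spirit: a fast efficacy, a tag written only when a write-protection barrier is overcome, and a slow scaffold that copies the tag. The functional forms, time constants, and barrier value are illustrative assumptions, not the published model:

```python
import numpy as np

dt = 1.0                             # minutes
tau_w, tau_z, tau_s = 30.0, 200.0, 2000.0
barrier = 0.4                        # write-protection threshold on |w - s|

def consolidate(stim, w=0.0, z=0.0, s=0.0):
    """Run the cascade over a stimulation series `stim` (drive applied to w)."""
    for drive in stim:
        w += dt / tau_w * (s - w) + drive   # efficacy relaxes toward scaffold
        if abs(w - s) > barrier:            # protection overcome: write the tag
            z += dt / tau_z * (w - z)
        s += dt / tau_s * (z - s)           # scaffold slowly copies the tag
    return w, z, s

# Strong induction sets the tag and the scaffold follows, so the change
# persists; weak induction never overcomes the write protection and decays.
for name, drive in (("strong", 0.05), ("weak", 0.01)):
    stim = np.concatenate([np.full(30, drive), np.zeros(4000)])
    w, z, s = consolidate(stim)
    print(f"{name}: efficacy={w:.3f}, tag={z:.3f}, scaffold={s:.3f}")
```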