Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling.
The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal
pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does
this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that
the neuron learns two components: (1) the level of average background activity and (2) specific
spike times of a pattern.
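The pair-based STDP rule referred to above can be sketched minimally as follows (the amplitudes and time constant are assumed for illustration, not the thesis model):

```python
import math

# Minimal pair-based STDP curve (assumed parameters, not the thesis
# model): a synapse is potentiated when its presynaptic spike precedes
# the postsynaptic spike, and depressed otherwise.
A_PLUS, A_MINUS = 0.01, 0.012   # amplitude parameters (assumed)
TAU = 20.0                      # decay time constant in ms (assumed)

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:   # pre before post: potentiation, decaying with the delay
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)   # post before pre: depression
```

Under such a rule, synapses that consistently fire just before the postsynaptic neuron are strengthened, which is consistent with the neuron latching onto specific spike times of the pattern.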
Taking advantage of these findings, a network is developed that can train recognisers for longer
spatio-temporal input signals using spike-timing dependent plasticity. From a number of neurons
that are mutually connected by plastic synapses and subject to a global winner-takes-all
mechanism, chains of neurons can form in which each neuron is selective to a different segment
of a repeating input pattern, and the neurons are connected in a feed-forward manner such that
both the correct stimulus and the firing of the previous neurons are required in order to activate
the next neuron in the chain. This is akin to a simple class of finite state automata.
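The chain behaviour can be caricatured as the finite state automaton the text alludes to (the segment labels and the reset rule are illustrative assumptions, not the thesis network):

```python
# Toy automaton standing in for the neuron chain: "neuron" k fires only
# when its preferred input segment arrives while the previous chain
# neuron fired on the preceding step. Labels are illustrative.

def run_chain(segments, preferred):
    """Return how many chain neurons fired in order by the end of input."""
    state = 0                                       # neurons activated so far
    for seg in segments:
        if state < len(preferred) and seg == preferred[state]:
            state += 1                              # correct segment, right state
        else:
            state = 1 if seg == preferred[0] else 0  # otherwise reset
    return state
```

Only the correct segment arriving while the chain is in the matching state advances it; any other input resets the chain, so the final neuron fires only after the whole pattern has been presented in order.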
Following this, a novel resource-based STDP learning rule is introduced. The learning rule
has several advantages over typical implementations of STDP and yields synaptic statistics
that compare favourably with those observed experimentally; for example, the synaptic weight
distributions and the presence of silent synapses match experimental data.
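The thesis' resource-based rule is not reproduced here, but the general idea behind such rules can be sketched (the update below is a generic illustration, not the actual rule): potentiation draws on a finite per-synapse resource, so weights stay bounded and some synapses can settle near zero.

```python
# Generic resource-constrained weight update (illustrative assumption):
# potentiation moves resource into the weight, depression moves it back,
# so weight + resource is conserved and the weight stays bounded.

def update(weight, resource, dw):
    if dw > 0:
        dw = min(dw, resource)   # potentiation limited by available resource
    else:
        dw = max(dw, -weight)    # depression cannot push the weight below 0
    return weight + dw, resource - dw
```

A synapse whose resource is exhausted stops potentiating, which naturally bounds the weight distribution; a synapse driven to zero weight remains present but silent.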
Limits to the Development of Feed-Forward Structures in Large Recurrent Neuronal Networks
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it
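The role of an unstable fixed point can be illustrated with a toy one-dimensional weight dynamics (the cubic drift term, its parameters, and the bounds are assumptions for illustration, not the paper's mean-field equations):

```python
# Toy weight dynamics with an unstable fixed point at W_STAR (an
# illustration, not the paper's theory): dw/dt = w * (w - W_STAR) * (1 - w).
# Weights starting just above or below W_STAR diverge to the bounds
# instead of settling, so structure near the fixed point cannot persist.
W_STAR = 0.5   # assumed location of the unstable fixed point

def drift(w, dt=0.01, steps=4000):
    for _ in range(steps):
        w += dt * w * (w - W_STAR) * (1.0 - w)
        w = min(max(w, 0.0), 1.0)   # hard weight bounds
    return w
```

Starting at 0.51 the weight climbs to the upper bound while 0.49 decays to zero; only a weight exactly at W_STAR stays put, which is the kind of instability the theory identifies.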
A reafferent and feed-forward model of song syntax generation in the Bengalese finch
Adult Bengalese finches generate a variable song that obeys a distinct and individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of the song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory to the motor control area. Propagating synfire activity in the HVC codes for individual syllables of the song and priming signals from the auditory network reduce the competition between syllables to allow only those transitions that are permitted by the syntax. Both imprinting of song syntax within HVC and the interaction of the reafferent signal with an efference copy of the motor command are sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model also reproduces for the first time experimental findings on the influence of altered auditory feedback on the song syntax generation, and predicts song- and species-specific low frequency components in the LFP. This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
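The priming mechanism described above can be sketched abstractly (the syllable names and transition table are illustrative, not Bengalese finch data): a transition table stands in for the reafferent connections, and priming restricts the syllable competition to permitted successors.

```python
import random

# Sketch of syntax-constrained syllable generation (toy syntax, not
# real birdsong data): at each step the winner of the syllable
# competition is drawn only from the primed (syntax-permitted)
# successors of the current syllable.
SYNTAX = {"a": ["b"], "b": ["b", "c"], "c": ["a"]}

def sing(start, length, rng):
    song = [start]
    for _ in range(length - 1):
        primed = SYNTAX[song[-1]]        # priming gates the competition
        song.append(rng.choice(primed))  # winner among primed syllables
    return song
```

Removing the priming, so that every syllable competes at every step, would mimic the syntax loss after deafening: transitions become unconstrained while the syllables themselves persist.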
Topological Effects of Synaptic Time Dependent Plasticity
We show that the local Spike Timing-Dependent Plasticity (STDP) rule has the
effect of regulating the trans-synaptic weights of loops of any length within a
simulated network of neurons. We show that depending on STDP's polarity,
functional loops are formed or eliminated in networks driven to normal spiking
conditions by random, partially correlated inputs, where functional loops
comprise weights that exceed a non-zero threshold. We further prove that STDP
is a form of loop-regulating plasticity for the case of a linear network
comprising random weights drawn from certain distributions. Thus a notable
local synaptic learning rule makes a specific prediction about synapses in the
brain in which standard STDP is present: that under normal spiking conditions,
they should participate in predominantly feed-forward connections at all
scales. Our model implies that any deviations from this prediction would
require a substantial modification to the hypothesized role for standard STDP.
Given its widespread occurrence in the brain, we predict that STDP could also
regulate long range synaptic loops among individual neurons across all brain
scales, up to, and including, the scale of global brain network topology.
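The notion of a functional loop can be made concrete as a directed-cycle check on the thresholded weight graph (a sketch; the dictionary representation and threshold value are illustrative assumptions):

```python
# A connection is "functional" when its weight exceeds a threshold; the
# network is purely feed-forward exactly when the graph of functional
# connections has no directed cycle (checked here by depth-first search).

def has_functional_loop(weights, theta):
    """weights: dict mapping (pre, post) -> weight. True iff a cycle exists."""
    adj = {}
    for (u, v), w in weights.items():
        if w > theta:
            adj.setdefault(u, []).append(v)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}

    def dfs(u):
        colour[u] = GREY
        for v in adj.get(u, []):
            c = colour.get(v, WHITE)
            if c == GREY or (c == WHITE and dfs(v)):
                return True      # back edge: a directed loop exists
        colour[u] = BLACK
        return False

    return any(dfs(u) for u in list(adj) if colour.get(u, WHITE) == WHITE)
```

Note that raising the threshold can eliminate functional loops without deleting any synapse, mirroring the claim that STDP's polarity decides whether functional loops form or are eliminated.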
Synaptic modification and entrained phase are phase dependent in STDP
Synapse strength can be modified in an activity-dependent manner, in which the temporal relationship between pre- and post-synaptic spikes plays a major role. This spike timing dependent plasticity (STDP) has profound implications for neural coding, computation and functionality, and this line of research has boomed in recent years. Many functional roles of STDP have been put forward. Because the STDP learning curve is strongly nonlinear, the initial state may have a great impact on the eventual state of the system. However, this feature has not been explored before. This paper proposes two possible functional roles of STDP by considering the influence of the initial state in modeling studies. First, STDP could lead to the phase-dependent synaptic modification that has been reported in experiments. Second, rather than leading to a fixed phase relation between pre- and post-synaptic neurons, STDP that includes suppression between the effects of spike pairs leads to a distributed entrained phase which also depends on the initial relative phase. This simple mechanism is proposed here to have the ability to organize temporal firing patterns into dynamic cell assemblies in a probabilistic manner and cause cell assemblies to update in a deterministic manner. It has been demonstrated that the olfactory system in locusts, and even other sensory systems, adopts the strategy of combining probabilistic cell assemblies with their deterministic update to encode information. These results suggest that the STDP rule is a potentially powerful mechanism by which higher network functions emerge.
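One way to see how an entrained phase can depend on the initial phase is a toy phase map with more than one stable fixed point (the map below is an assumption for illustration, not the paper's suppression model):

```python
import math

# Toy phase map (an assumed illustration): the update has stable fixed
# points at 0 and pi and unstable ones at pi/2 and 3*pi/2, so the phase
# the system entrains to depends on the initial relative phase.

def entrain(phi0, steps=200):
    phi = phi0
    for _ in range(steps):
        phi = (phi - 0.2 * math.sin(2.0 * phi)) % (2.0 * math.pi)
    return phi
```

Initial phases below pi/2 settle at 0, while phases between pi/2 and 3*pi/2 settle at pi: a distribution of entrained phases determined by where the relative phase starts, as the abstract argues.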
Multi-Scale Expressions of One Optimal State Regulated by Dopamine in the Prefrontal Cortex
The prefrontal cortex (PFC), which plays key roles in many higher cognitive processes, is a hierarchical system consisting of multi-scale organizations. Optimizing the working state at each scale is essential for PFC's information processing. Typical optimal working states at different scales have been separately reported, including the dopamine-mediated inverted-U profile of the working memory (WM) at the system level, critical dynamics at the network level, and detailed balance of excitatory and inhibitory currents (E/I balance) at the cellular level. However, it remains unclear whether these states are scale-specific expressions of the same optimal state and, if so, what is the underlying mechanism for its regulation traversing across scales. Here, by studying a neural network model, we show that the optimal performance of WM co-occurs with the critical dynamics at the network level and the E/I balance at the level of individual neurons, suggesting the existence of a unified, multi-scale optimal state for the PFC. Importantly, such a state could be modulated by dopamine at the synaptic level through a series of U or inverted-U profiles. These results suggest that seemingly different optimal states for specific scales are multi-scale expressions of one condition regulated by dopamine. Our work suggests a cross-scale perspective to understand the PFC function and its modulation.
Timing in the cerebellum: a matter of network inhibition
The motor functions of an animal require precisely timed and coordinated sequences of movements. The cerebellum is crucial for performing these functions with precision. To investigate cerebellar computations involved in precise motor movements, behavioral paradigms such as delay eyelid conditioning have been used. Delay eyelid conditioning trains an animal to close its eye in response to a previously neutral stimulus. The timing of the eyelid closure responses suggests that the cerebellum is capable of keeping track of the elapsed time since the onset of the stimulus. This dissertation proposes a network mechanism for cerebellar timing based on biologically informed simulations of the cerebellum. In chapter 2, a simulation with over a million cells is described. This simulation approximates the observed cerebellar connectivity in several well-studied mammals. Graphics processing units (GPUs) provide the computational power necessary to perform this simulation at a practical speed. This chapter describes simulation algorithms that efficiently utilize GPUs. In
chapter 3, the simulation is used to explore cerebellar timing mechanisms. The lateral inhibition among cerebellar Golgi cells is observed to be a potential mechanism for robust timing. Lateral Golgi inhibition enables the simulation to better replicate animal eyelid conditioning behavior for longer inter-stimulus intervals. In chapter 4, the emergent network mechanisms of lateral Golgi inhibition are analyzed by decomposing the network into its individual components. This component analysis demonstrates that nonreciprocal connectivity (where one Golgi cell inhibits another but does not receive inhibition in return) is useful for timing. Specifically, removing nonreciprocal connectivity greatly degrades the simulation's ability to keep track of time. This implies that the aforementioned component analyses are relevant to the emergent timing mechanisms of the network. Finally, in chapter 5, this dissertation discusses the relevance and limitations of the computational approach, biological predictions, and component analysis presented in previous chapters.
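The nonreciprocal-connectivity analysis can be sketched as a simple set operation (the adjacency set below is a toy example, not the dissertation's simulated Golgi network):

```python
# A Golgi-to-Golgi connection i -> j is nonreciprocal when i inhibits j
# but j does not inhibit i back. The example connections are illustrative.

def nonreciprocal_pairs(connections):
    """connections: set of directed (i, j) inhibitory links."""
    return {(i, j) for (i, j) in connections if (j, i) not in connections}
```

Deleting exactly the pairs this function returns would correspond roughly to the manipulation described above, in which removing nonreciprocal connectivity degrades the network's ability to keep track of time.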