
    Neuronal Models of Motor Sequence Learning in the Songbird

    Communication of complex content is an important ability in everyday life. For communication to be possible, several requirements must be met. The individual communicated to has to learn to associate a certain meaning with a given sound. In the brain, this sound is represented as a spatio-temporal pattern of spikes, which therefore has to be associated with a different spike pattern representing its meaning. Models for such associative learning in spiking neurons are introduced in chapters 6 and 7, where a new biologically plausible learning mechanism is proposed: a property of the neuronal dynamics, the hyperpolarization of a neuron after each spike it produces, is coupled with a homeostatic plasticity mechanism that acts to balance the inputs into the neuron. In chapter 6, the mechanism used is a version of spike-timing-dependent plasticity (STDP), an experimentally observed property whereby the direction and amplitude of synaptic change depend on the precise timing of pre- and postsynaptic spiking activity. This mechanism is applied to associative learning of output spikes in response to purely spatial spiking patterns. In chapter 7, a new learning rule is introduced, derived from the objective of a balanced membrane potential. This learning rule is shown to be equivalent to a version of STDP and is applied to associative learning of precisely timed output spikes in response to spatio-temporal input patterns.
    The individual communicating has to learn to reproduce certain sounds (which can be associated with a given meaning). To that end, a memory of the sound sequence has to be formed. Since sound sequences are represented as sequences of activation patterns in the brain, learning a given sequence of spike patterns is an interesting problem for theoretical study. Here, it is shown that the biologically plausible learning mechanism introduced for associative learning enables recurrently coupled networks of spiking neurons to learn to reproduce given spike sequences. These results are presented in chapter 9.
    Finally, the communicator has to translate the sensory memory into motor actions that reproduce the target sound. This process is investigated in the framework of inverse model learning, where the learner inverts the action-perception cycle by mapping perceptions back onto the actions that caused them. Two setups for inverse model learning are investigated. In chapter 5, a simple setup is coupled with the learning algorithm used for perceptron learning in chapter 6, and it is shown that models of the sound generation and perception process that are non-linear and non-local in time can be inverted, provided the distribution of time delays of self-generated inputs caused by an individual motor spike is not too wide. This limitation is mitigated by the model introduced in chapter 8. Both models have experimentally testable consequences, namely a dip in the autocorrelation function of the spike times in the motor population with a width given by the loop delay, i.e. the time it takes for a motor activation to cause a sound, and thus a sensory activation, plus the time this sensory activation takes to be looped back to the motor population. Furthermore, both models predict neurons that are active during sound generation and, delayed by the loop delay, during passive playback of the sound. The inverse model presented in chapter 8 additionally predicts mirror neurons without a time delay. Both types of mirror neurons have been observed in the songbird [GKGH14, PPNM08], a popular animal model for vocal imitation learning.
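    As a rough illustration of the kind of pair-based STDP rule described in this abstract (a minimal Python sketch with assumed parameter values and spike times, not the thesis's actual model), each afferent weight can be nudged according to the timing of its input spike relative to a teacher-enforced output spike:

        import numpy as np

        # Illustrative pair-based STDP update: potentiate when a presynaptic
        # spike precedes the postsynaptic spike, depress when it follows.
        # All parameter values below are assumptions for the sketch.
        A_PLUS, A_MINUS = 0.01, 0.012      # learning amplitudes
        TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)

        def stdp_dw(t_pre, t_post):
            """Weight change for one pre/post spike pair (times in ms)."""
            dt = t_post - t_pre
            if dt >= 0:   # pre before post -> potentiation
                return A_PLUS * np.exp(-dt / TAU_PLUS)
            else:         # post before pre -> depression
                return -A_MINUS * np.exp(dt / TAU_MINUS)

        # Associating an input pattern with a desired output spike at t_out:
        t_out = 50.0
        pre_spikes = np.array([42.0, 48.0, 55.0, 60.0])  # hypothetical input times
        w = np.full(4, 0.5)
        w += np.array([stdp_dw(t, t_out) for t in pre_spikes])

    In this toy rule, inputs arriving shortly before the desired output spike are strengthened and late inputs are weakened, which is the basic ingredient the abstract's associative-learning setups build on.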

    Investigating the storage capacity of a network with cell assemblies

    Cell assemblies are cooperating groups of neurons believed to exist in the brain. Their existence was proposed by the neuropsychologist D. O. Hebb, who also formulated a mechanism by which they could form, now known as Hebbian learning. Evidence for the existence of Hebbian learning and cell assemblies in the brain is accumulating as investigation tools improve. Researchers have also simulated cell assemblies as neural networks in computers. This thesis describes simulations of networks of cell assemblies. The feasibility of simulated cell assemblies that possess all the predicted properties of biological cell assemblies is established. Cell assemblies can be coupled together with weighted connections to form hierarchies in which a group of basic assemblies, termed primitives, are connected in such a way that they form a compound cell assembly. The component assemblies of these hierarchies can be ignited independently, i.e. activated by signals passed entirely within the network, but if a sufficient number of them are activated, they cooperate to ignite the remaining primitives in the compound assembly.
    Various experiments are described in which networks of simulated cell assemblies are subjected to external activation, with cells in those assemblies stimulated artificially to a high level. These cells then fire, i.e. produce a spike of activity analogous to the spiking of biological neurons, and in this way pass their activity to other cells. Connections are established between cells within primitives and across different ones (learned in some experiments, set artificially in others), and these connections allow activity to pass from one primitive to another. In this way, activating one or more primitives may cause others to ignite. Experiments are described in which spontaneous activation of cells aids the recruitment of uncommitted cells into a neighbouring assembly. The strong relationship between cell assemblies and Hopfield nets is described.
    A network of simulated cells can support different numbers of assemblies depending on the complexity of those assemblies. Assemblies are classified in terms of how many primitives are present in each compound assembly and the minimum number needed to complete it: a 2-3 assembly contains 3 primitives, any 2 of which will complete it. A network of N cells can hold on the order of N assemblies of the 2-3 type, and an architecture is proposed that contains O(N²) 3-4 assemblies. Experiments show that the number of connections emanating from each cell must be scaled up linearly as the number of primitives in a network increases in order to maintain the same mean number of connections between each pair of primitives. Restricting each cell to a maximum number of connections leads to severe loss of performance as the size of the network increases. It is shown that the architecture can be duplicated with Hopfield nets, but that there are severe restrictions on the carrying capacity of either a hierarchy of cell assemblies or a Hopfield net storing 3-4 patterns, and that the promise of N² patterns is largely illusory: when the number of connections from each cell is fixed as the number of primitives is increased, only O(N) cell assemblies can be stored.
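    The completion property of a 2-3 assembly can be illustrated with a toy threshold model (a minimal Python sketch with an assumed coupling matrix and threshold, not the thesis's simulator): three mutually excitatory primitives with a threshold set so that one active neighbour is insufficient but two suffice.

        import numpy as np

        # Toy compound-assembly completion. A "2-3 assembly" has 3 primitives;
        # external activation of any 2 drives the third over its ignition
        # threshold via the coupling matrix W. Values are assumptions.
        N_PRIM = 3
        W = np.ones((N_PRIM, N_PRIM)) - np.eye(N_PRIM)  # each primitive excites the others
        THETA = 1.5   # one active neighbour (drive 1.0) is not enough,
                      # two (drive 2.0) are -- hence "any 2 of 3 complete it"

        def ignite(active):
            """Iterate until no further primitive crosses threshold."""
            active = active.copy()
            while True:
                drive = W @ active
                new = ((drive >= THETA) | (active > 0)).astype(float)
                if np.array_equal(new, active):
                    return active
                active = new

        print(ignite(np.array([1.0, 1.0, 0.0])))  # -> [1. 1. 1.]: compound completes
        print(ignite(np.array([1.0, 0.0, 0.0])))  # -> [1. 0. 0.]: one primitive is not enough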

    Framework of hierarchy for neural theory


    Analyses at microscopic, mesoscopic, and mean-field scales

    Hippocampal activity during sleep or rest is characterized by sharp wave-ripples (SPW-Rs): transient (50–100 ms) periods of elevated neuronal activity modulated by a fast oscillation, the ripple (140–220 Hz). SPW-Rs have been linked to memory consolidation, but their generation mechanism remains unclear; multiple potential mechanisms have been proposed, relying on excitation and/or inhibition as the main pacemaker. This thesis analyzes ripple oscillations in inhibitory network models at micro-, meso-, and macroscopic scales and elucidates how the ripple dynamics depends on the excitatory drive, the inhibitory coupling strength, and the noise model.
    First, an interneuron network under strong drive and strong delayed coupling is analyzed. A theory is developed that captures the drift-mediated spiking dynamics in the mean-field limit. The ripple frequency, as well as the underlying dynamics of the membrane potential distribution, is approximated analytically as a function of the external drive and the network parameters. The theory explains why the ripple frequency decreases over the course of an event (intra-ripple frequency accommodation, IFA). Furthermore, numerical analysis shows that an alternative inhibitory ripple model, based on a transient ringing effect in a weakly coupled interneuron population, cannot account for IFA under biologically realistic assumptions. IFA can thus guide model selection and provides new support for strong, delayed inhibitory coupling as a mechanism for ripple generation.
    Finally, a recently proposed mesoscopic integration scheme is tested as a potential tool for the efficient numerical simulation of ripple dynamics in networks of finite size. This approach requires a switch of the noise model, from noisy input to stochastic output spiking mediated by a hazard function. It is demonstrated how the choice of hazard function affects the linear response of single neurons and therefore the ripple dynamics in a recurrent interneuron network.
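    The switch from noisy input to hazard-mediated output spiking can be illustrated with a single leaky integrate-and-fire neuron (a minimal Python sketch assuming an exponential escape-noise hazard and made-up parameter values, not the thesis's model): the neuron integrates its input deterministically and fires stochastically at a rate set by its distance from a soft threshold.

        import numpy as np

        # Sketch of hazard-based ("escape noise") spiking: instead of noisy
        # input, the neuron fires stochastically with an instantaneous rate
        # that depends on its membrane potential. All values are assumptions.
        rng = np.random.default_rng(0)

        DT, T = 0.1, 200.0                # time step and duration (ms)
        TAU_M, V_TH, V_RESET = 10.0, 1.0, 0.0
        RHO_0, DELTA_U = 1.0, 0.1         # hazard scale (1/ms), threshold softness
        I_EXT = 1.2                       # suprathreshold constant drive

        def hazard(v):
            """Exponential escape rate: a soft threshold instead of a hard one."""
            return RHO_0 * np.exp((v - V_TH) / DELTA_U)

        v, spikes = 0.0, []
        for step in range(int(T / DT)):
            v += DT / TAU_M * (-v + I_EXT)                    # leaky integration
            if rng.random() < 1.0 - np.exp(-hazard(v) * DT):  # stochastic firing
                spikes.append(step * DT)
                v = V_RESET
        print(f"{len(spikes)} spikes, mean rate {1000 * len(spikes) / T:.1f} Hz")

    The shape parameters of the hazard (here RHO_0 and DELTA_U) control how sharply firing probability rises near threshold, which is the single-neuron property the abstract links to the network's ripple dynamics.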