
    Learning-based Approaches for Controlling Neural Spiking

    We consider the problem of controlling populations of interconnected neurons using extrinsic stimulation. This problem, relevant to applications in both basic neuroscience and brain medicine, is challenging due to the nonlinearity of neuronal dynamics and the highly unpredictable structure of the underlying neuronal networks. Compounding this difficulty, most neurostimulation technologies offer only a single degree of freedom with which to actuate tens to hundreds of interconnected neurons. To meet these challenges, we consider an adaptive, learning-based approach to controlling neural spike trains. Rather than explicitly modeling neural dynamics and designing optimal controls, we synthesize a so-called control network (CONET) that interacts with the spiking network by maximizing the Shannon mutual information between itself and the realized spiking outputs. The CONET thus learns a representation of the spiking network that subsequently allows it to learn suitable control signals through a reinforcement-type mechanism. We demonstrate the feasibility of the approach by controlling networks of stochastic spiking neurons, inducing desired patterns at neuron-to-actuator ratios in excess of 10 to 1.
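    As a toy illustration of the learning-based control setting, the sketch below drives a population of Bernoulli spiking neurons through a single shared actuator and adapts the stimulation waveform with a REINFORCE-style policy-gradient update. One substitution is made deliberately: instead of the paper's mutual-information objective, the sketch rewards proximity to a target spike pattern, and the network parameters W and b and the exploration scale sigma are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic spiking population: N neurons, one shared stimulation channel
# (a neuron-to-actuator ratio of 20 to 1 in this example).
N, T = 20, 50
W = rng.normal(0, 1, (N, N)) * 0.1   # hypothetical recurrent coupling
b = rng.normal(0, 1, N)              # per-neuron gain on the single actuator

def simulate(u):
    """Simulate Bernoulli spiking for a stimulation waveform u of length T."""
    s, spikes = np.zeros(N), np.zeros((T, N))
    for t in range(T):
        drive = W @ s + b * u[t] - 1.0            # net input; bias sets baseline rate
        p = 1.0 / (1.0 + np.exp(-drive))          # per-neuron spike probability
        s = (rng.random(N) < p).astype(float)
        spikes[t] = s
    return spikes

# Desired pattern: first half of the population active, the rest silent.
target = np.zeros((T, N)); target[:, :N // 2] = 1.0

# REINFORCE over a parameterized waveform; reward = -(mismatch to target).
theta, sigma, lr, baseline = np.zeros(T), 0.5, 0.05, 0.0
for episode in range(2000):
    eps = rng.normal(0, sigma, T)
    u = theta + eps                                # sampled control signal
    reward = -np.mean((simulate(u) - target) ** 2)
    baseline += 0.05 * (reward - baseline)         # running reward baseline
    theta += lr * (reward - baseline) * eps / sigma**2   # policy-gradient step

print("final mismatch:", np.mean((simulate(theta) - target) ** 2))
```

    How close the controller gets depends on how controllable the random network happens to be through one channel; the point of the sketch is only the model-free learning loop, not the CONET architecture itself.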

    Contributions of synaptic filters to models of synaptically stored memory

    The question of how neural systems encode memories in one shot, without immediately disrupting previously stored information, has puzzled theoretical neuroscientists for years, and it is the central topic of this thesis. Previous work on this topic has proposed that synapses update probabilistically in response to plasticity-inducing stimuli, effectively delaying the degradation of old memories in the face of ongoing memory storage. Indeed, experiments have shown that synapses do not immediately respond to plasticity-inducing stimuli: these must be presented many times before synaptic plasticity is expressed. Such a delay could be due to the stochastic nature of synaptic plasticity, or it could arise because induction signals are integrated before overt strength changes occur. The latter approach has previously been applied to control fluctuations in neural development by low-pass filtering induction signals before plasticity is expressed. In this thesis we consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals to a threshold before expressing plasticity. We report novel recall dynamics and considerable improvements in memory lifetimes over a prominent model of synaptically stored memory. With integrating synapses, the memory trace initially rises before reaching a maximum and then falls. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. Furthermore, we find that integrating synapses possess natural timescales that can be used to consider the transition to late-phase plasticity under spaced repetition patterns known to lead to optimal storage conditions. We find that threshold-crossing statistics differentiate between massed and spaced memory repetition patterns. However, isolated integrative synapses obtain an insufficient statistical sample to detect the stimulation pattern within a few memory repetitions. We extend the model to consider the cooperation of well-known intracellular signalling pathways in detecting storage conditions by utilizing the profile of postsynaptic depolarization. We find that neuron-wide signalling and local synaptic signals can be combined to detect optimal storage conditions that lead to stable forms of plasticity in a synapse-specific manner. These models can be further extended to consider heterosynaptic and neuromodulatory interactions for late-phase plasticity.
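    A minimal sketch of the integrate-to-threshold idea, under hypothetical parameters: each synapse accumulates stochastic plasticity-induction events in a hidden leaky integrator and expresses plasticity only when the integrator crosses a threshold. The threshold-crossing statistics then separate massed from spaced repetition patterns, as the abstract describes; the direction of the difference here simply reflects the assumed leak.

```python
import numpy as np

rng = np.random.default_rng(1)

THETA = 4.0       # integration threshold before plasticity is expressed
N_SYN = 10_000    # synapses simulated per condition
DECAY = 0.9       # per-step leak of the hidden integrator

def crossing_fraction(pattern):
    """Fraction of synapses whose integrator ever crosses THETA for a 0/1
    stimulation pattern; each stimulated step delivers an induction event
    with probability 0.5 (the stochastic induction signal)."""
    c = np.zeros(N_SYN)
    crossed = np.zeros(N_SYN, dtype=bool)
    for stim in pattern:
        c *= DECAY                                # leak between induction events
        if stim:
            c += (rng.random(N_SYN) < 0.5)        # stochastic induction
        crossed |= c >= THETA
    return crossed.mean()

reps = 8
massed = [1] * reps              # back-to-back repetitions
spaced = [1, 0, 0, 0] * reps     # the same repetitions, each followed by a pause

print("massed:", crossing_fraction(massed))   # a nonzero fraction crosses
print("spaced:", crossing_fraction(spaced))   # with this leak, none can cross
```

    With these numbers the spaced pattern can never reach threshold (the integrator decays too far between repetitions), while a noticeable fraction of synapses crosses under massed stimulation, so the crossing statistics alone carry information about the repetition pattern.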

    Memory formation and recall in recurrent spiking neural networks

    Our brain has the capacity to analyze a visual scene in a split second, to learn how to play an instrument, and to remember events, faces and concepts. Neurons underlie all of these diverse functions. Neurons, cells within the brain that generate and transmit electrical activity, communicate with each other through chemical synapses. These synaptic connections change dynamically with experience, a process referred to as synaptic plasticity, which is thought to be at the core of the brain's ability to learn and process the world in sophisticated ways. Our understanding of the rules of synaptic plasticity remains quite limited. To enable efficient computations among neurons or to serve as a trace of memory, synapses must create stable connectivity patterns between neurons. However, there is still no sufficient theoretical explanation of how stable connectivity patterns can form in the presence of synaptic plasticity. Since the dynamics of recurrently connected neurons depend on their connections, which themselves change in response to the network dynamics, synaptic plasticity and network dynamics have to be treated as a compound system. Due to the nonlinear nature of this system, analysis is challenging, and network simulations that model the interplay between connectivity and synaptic plasticity provide valuable insights. However, many existing network models that implement biologically relevant forms of plasticity become unstable. This suggests that current models do not accurately describe biological networks, which have no difficulty functioning without succumbing to exploding network activity. The instability in these simulations could originate from the fact that theoretical studies have focused almost exclusively on Hebbian plasticity at excitatory synapses. Hebbian plasticity causes connected neurons that are active together to increase the connection strength between them. Biological networks, however, display a large variety of forms of synaptic plasticity and homeostatic mechanisms beyond Hebbian plasticity. Furthermore, inhibitory cells can undergo synaptic plasticity as well. These diverse forms of plasticity are active at the same time, and our understanding of the computational role of most of these synaptic dynamics remains elusive. This raises the important question of whether forms of plasticity that have not been previously considered could, in combination with Hebbian plasticity, lead to stable network dynamics. Here we illustrate that by combining multiple forms of plasticity with distinct roles, a recurrently connected spiking network model self-organizes to distinguish and extract multiple overlapping external stimuli. Moreover, we show that the acquired network structures remain stable over hours while plasticity is active. This long-term stability allows the network to function as an associative memory and to correctly classify distorted or partially cued stimuli. During intervals in which no stimulus is shown, the network dynamically remembers the last stimulus as selective delay activity. Taken together, this work suggests that multiple forms of plasticity and homeostasis, acting on different timescales, have to work together to create stable connectivity patterns in neuronal networks that enable them to perform relevant computations.
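    The stabilization argument can be made concrete in a deliberately reduced, rate-based sketch (a simplification, not the paper's spiking model): Hebbian plasticity at excitatory synapses grows without bound on its own, while an inhibitory plasticity rule that adjusts inhibition toward a target rate, in the spirit of Vogels et al., keeps the postsynaptic rate bounded even as excitation drifts. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# One postsynaptic unit, 100 excitatory inputs, one plastic inhibitory input.
n_in, r_inh, r_target = 100, 10.0, 5.0
w_exc = np.full(n_in, 0.5)       # excitatory weights, purely Hebbian
w_inh = 1.0                      # inhibitory weight, homeostatic
eta_e, eta_i = 1e-5, 1e-3

rates = []
for step in range(20_000):
    x = rng.poisson(5.0, n_in).astype(float)            # presynaptic rates
    r = max((w_exc @ x) / n_in - w_inh * r_inh, 0.0)    # postsynaptic rate
    w_exc += eta_e * x * r                              # Hebbian: always growing
    w_inh = max(w_inh + eta_i * r_inh * (r - r_target), 0.0)  # pulls r to target
    rates.append(r)

# Excitation has drifted upward, yet the rate is pinned near the target.
print("mean w_exc:", w_exc.mean(), " mean rate (last 1k):", np.mean(rates[-1000:]))
```

    Without the homeostatic rule, nothing opposes the Hebbian growth, and the rate either runs away or the unit stays silenced, depending on the fixed inhibition; with the rule active, the drift is continuously compensated, which is the kind of division of labor between plasticity mechanisms the abstract argues for.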

    Online Training of Spiking Recurrent Neural Networks with Phase-Change Memory Synapses

    Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is mainly due to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. To address these challenges and enable online learning in memristive neuromorphic RNNs, we present a simulation framework of differential-architecture crossbar arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model. We train a spiking RNN whose weights are emulated in the presented simulation framework, using the recently proposed e-prop learning rule. Although e-prop locally approximates the ideal synaptic updates, the updates are difficult to implement on the memristive substrate due to substantial PCM non-idealities. We compare several widely adopted weight update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients enables online and efficient training of spiking RNNs on memristive substrates.
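    The accumulation scheme that the abstract singles out can be sketched under hypothetical device parameters: gradients are accumulated in high precision, and only whole programming pulses, of granularity EPS and with multiplicative write noise on a potentiation-only differential device pair, are transferred to the emulated PCM conductances. The 'gradient' below is a stand-in (distance to a target weight vector) rather than a real e-prop eligibility trace.

```python
import numpy as np

rng = np.random.default_rng(3)

EPS = 0.05     # conductance change of one programming pulse (device granularity)
NOISE = 0.3    # relative write noise per pulse

class PCMPair:
    """Differential PCM weight w = g_plus - g_minus; conductance can only be
    ramped up, so negative updates pulse the partner device instead."""
    def __init__(self, n):
        self.gp, self.gm = np.zeros(n), np.zeros(n)

    def pulse(self, idx, sign):
        dg = EPS * (1 + NOISE * rng.normal(size=idx.sum()))  # noisy pulse sizes
        if sign > 0:
            self.gp[idx] += dg
        else:
            self.gm[idx] += dg

    @property
    def w(self):
        return self.gp - self.gm

n = 1000
true_w = rng.normal(0, 1, n)         # target the stand-in 'training' should approach
dev, acc = PCMPair(n), np.zeros(n)   # device pair + high-precision accumulator

for step in range(5000):
    grad = dev.w - true_w      # stand-in gradient (real use: e-prop updates)
    acc -= 0.01 * grad         # accumulate in high precision
    up, down = acc >= EPS, acc <= -EPS
    dev.pulse(up, +1);  acc[up] -= EPS      # transfer only full pulses
    dev.pulse(down, -1); acc[down] += EPS

print("RMS weight error:", np.sqrt(np.mean((dev.w - true_w) ** 2)))
```

    Because sub-granularity updates survive in the accumulator instead of being rounded away at every step, the scheme tolerates coarse, noisy pulses, which is the intuition behind its advantage over writing each raw update to the devices directly.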

    Homeostatische Plastizität - algorithmische und klinische Konsequenzen (Homeostatic Plasticity: Algorithmic and Clinical Consequences)

    Plasticity supports the remarkable adaptability and robustness of cortical processing. It allows the brain to learn and remember patterns in the sensory world, to refine motor control, to predict and obtain reward, or to recover function after injury. Behind this great flexibility lies a range of plasticity mechanisms, affecting different aspects of neuronal communication. However, little is known about the precise computational roles of some of these mechanisms. Here, we show that the interaction between spike-timing-dependent plasticity (STDP), intrinsic plasticity and synaptic scaling enables neurons to learn efficient representations of their inputs. In the context of reward-dependent learning, the same mechanisms allow a neural network to solve a working memory task. Moreover, although we make no a priori assumptions about the encoding used to represent inputs, the network activity resembles that of brain regions known to be associated with working memory, suggesting that reward-dependent learning may be a central force in working memory development. Lastly, we investigate some of the clinical implications of synaptic scaling and show that, paradoxically, there are situations in which the very mechanisms normally required to preserve the balance of the system may act as a destabilizing factor and lead to seizures. Our model offers a novel explanation for the increased incidence of seizures following chronic inflammation.

    The human brain is able to adapt to dramatic changes in its environment. A variety of learning mechanisms underlies this adaptability. Some of these mechanisms are already relatively well understood, while for others it is still barely known what role they play in the brain's information processing. Here we show that the interplay of spike-timing-dependent plasticity (STDP) with two further processes, synaptic scaling and intrinsic plasticity (IP), enables nerve cells to encode information efficiently. The same mechanisms lead a network of neurons to develop a working memory for past stimuli. By combining reward-dependent STDP with homeostatic mechanisms, the network learns to keep stimulus representations available over several time steps. Although our model design contains no information about the preferred form of encoding, after training we find neural representations that resemble those from many working-memory experiments. Our model shows that such representations can arise through learning, and that reward-dependent processes can play a central role in the development of working memory. Finally, the clinical consequences of some of these learning processes are examined. We show that the same mechanism that normally keeps brain activity in balance can, in particular situations, also destabilize it and trigger epileptic seizures. The model presented here offers a novel explanation for the emergence of epileptic seizures under chronic inflammation.
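    The clinical point, that a stabilizing mechanism can itself destabilize the network, can be illustrated with a deliberately simple rate sketch that is not the thesis's actual model: multiplicative synaptic scaling holds a recurrent loop's steady-state rate at a set point, compensates for a period of reduced external drive (a crude stand-in for the effect of chronic inflammation) by ramping the recurrent gain up, and then produces runaway activity when the drive returns. All numbers are hypothetical.

```python
# Minimal sketch of 'paradoxical' homeostatic scaling in a linear rate loop.
r_target, eta = 10.0, 0.01

def rate(g, drive):
    # Steady state of r = drive + g * r, valid while the loop gain g < 1.
    return drive / (1 - g)

g = 0.5  # starts at the set point for drive = 5 (rate = 5 / (1 - 0.5) = 10)
for phase, drive in [("baseline", 5.0),
                     ("input suppressed", 1.0),   # stand-in for reduced drive
                     ("input restored", 5.0)]:
    print(f"{phase}: onset rate = {rate(g, drive):.1f} (target {r_target})")
    for _ in range(500):
        r = rate(g, drive)
        g *= 1 + eta * (r_target - r) / r_target  # multiplicative scaling
        g = min(g, 0.99)                          # guard: keep loop gain < 1
```

    At the onset of the restored-input phase the rate overshoots the target several-fold, because scaling had compensated for the suppressed drive by raising the recurrent gain; the same rule that stabilized the quiet network transiently destabilizes the active one.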

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One

    Get PDF