
    Analysis of the intraspinal calcium dynamics and its implications on the plasticity of spiking neurons

    The influx of calcium ions into dendritic spines through N-methyl-D-aspartate (NMDA) channels is believed to be the primary trigger for various forms of synaptic plasticity. In this paper, the authors analytically calculate the mean values of the calcium transients elicited in a spiking neuron described by a simple model of ionic currents and back-propagating action potentials. The relative variability of these transients, due to the stochastic nature of synaptic transmission, is further considered using a simple Markov model of NMDA receptors. Both the mean value and the variability are found to depend on the timing between pre- and postsynaptic action potentials. These results could have implications for the expected form of the synaptic-plasticity curve and can form a basis for a unified theory of spike-timing-dependent and rate-based plasticity.
    Comment: 14 pages, 10 figures. A few changes in Section IV and addition of a new figure.
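    As a rough illustration of the timing dependence described above, the sketch below (all constants hypothetical, not taken from the paper) approximates the relative calcium influx as the product of an exponentially decaying NMDA conductance trace, triggered by the presynaptic spike, and the depolarization of a back-propagating action potential:

        import numpy as np

        TAU_NMDA = 50.0   # ms, decay of the bound-glutamate (NMDA) trace
        TAU_BAP = 3.0     # ms, decay of the back-propagating action potential (bAP)

        def calcium_influx(dt, t_max=300.0, h=0.1):
            """Relative Ca2+ influx when the postsynaptic spike lags the
            presynaptic one by dt ms (presynaptic spike at t = 0)."""
            t = np.arange(0.0, t_max, h)
            g_nmda = np.exp(-t / TAU_NMDA)          # NMDA conductance trace
            v_bap = np.where(t >= dt, np.exp(-(t - dt) / TAU_BAP), 0.0)
            return np.sum(g_nmda * v_bap) * h       # influx ~ conductance x voltage

        for dt in (-20, 0, 10, 50):                 # dt > 0: pre before post
            print(f"dt = {dt:+3d} ms -> Ca influx {calcium_influx(dt):.2f} (a.u.)")

    Pre-before-post timings yield large transients because the bAP arrives while glutamate is still bound; this asymmetry is what shapes the timing-dependent plasticity curve.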

    Beyond spike timing: the role of nonlinear plasticity and unreliable synapses

    Spike-timing-dependent plasticity (STDP) strengthens synapses that are activated immediately before a postsynaptic spike, and weakens those that are activated after a spike. To prevent uncontrolled growth of the synaptic strengths, weakening must dominate strengthening for uncorrelated spike times. However, this weight-normalization property would preclude Hebbian potentiation when the pre- and postsynaptic neurons are strongly active without specific spike-time correlations. We show that nonlinear STDP, as inherent in the data of Markram et al. [(1997) Science 275:213–215], can preserve the benefits of both weight normalization and Hebbian plasticity, and hence can account for learning based on spike-time correlations and on mean firing rates. As examples we consider the moving-threshold property of the Bienenstock–Cooper–Munro rule, the development of direction-selective simple cells through changes in short-term synaptic depression, and the joint adaptation of axonal and dendritic delays. Without threshold nonlinearity at low frequencies, the development of direction selectivity does not stabilize in a natural stimulation environment. Without synaptic unreliability, there is no causal development of axonal and dendritic delays.
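    The weight-normalization constraint can be sketched with a minimal pair-based kernel (amplitudes and time constants invented for illustration, not taken from the paper): the depression window carries more area than the potentiation window, so uncorrelated spike pairs depress the synapse on average while causally correlated pairs potentiate it:

        import numpy as np

        rng = np.random.default_rng(0)

        A_PLUS, TAU_PLUS = 0.005, 17.0      # potentiation amplitude, window (ms)
        A_MINUS, TAU_MINUS = 0.00525, 34.0  # depression dominates: A-*tau- > A+*tau+

        def stdp(dt):
            """Weight change for one pre/post pair, dt = t_post - t_pre (ms)."""
            if dt >= 0:
                return A_PLUS * np.exp(-dt / TAU_PLUS)
            return -A_MINUS * np.exp(dt / TAU_MINUS)

        # Uncorrelated pairs: dt spread over a wide window -> net depression.
        dts = rng.uniform(-100, 100, 100_000)
        print("uncorrelated drift:", np.mean([stdp(dt) for dt in dts]))

        # Causal pairs (post follows pre by ~5 ms) -> net potentiation.
        dts = rng.normal(5.0, 2.0, 100_000)
        print("correlated drift:  ", np.mean([stdp(dt) for dt in dts]))

    A linear kernel like this one is locked into that trade-off; the paper's point is that a nonlinearity in the rule lets the synapse also potentiate under strong, uncorrelated (rate-based) firing.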

    Implementing and investigating a biologically realistic neuron model

    Calcium-based single-neuron models have been shown to elicit different modes of synaptic plasticity. In the present study, one such model was implemented and its learning behaviour studied. The behaviour of the implemented neuron agreed qualitatively with prior work in all regards except selectivity to correlation in the input. The neuron was found to implement a linear filter, responding linearly to partial presentations of learned patterns. Simulating probabilistic neurotransmitter release had the expected effect of decorrelating the input and was found to improve the efficiency of information transfer. In the regimes explored, the neuron was found to be incapable of performing principal component analysis. The sensitivity of the results to exact parameter values was largely untested. The neuron did not exhibit more advanced information-processing capabilities in the tests conducted. However, the implemented neuron model is capable of meaningful information processing and forms a good basis for further research.
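    The linear-filter finding can be pictured with a toy readout (purely illustrative, not the calcium-based model itself): when the learned weights align with a stored pattern, the response to a partial presentation scales with the fraction of the pattern shown:

        import numpy as np

        rng = np.random.default_rng(1)

        pattern = rng.random(100)                        # stored input pattern
        weights = pattern / np.linalg.norm(pattern)**2   # idealized learned weights

        for frac in (0.25, 0.5, 0.75, 1.0):
            partial = pattern.copy()
            partial[int(frac * pattern.size):] = 0.0     # present only a prefix
            print(f"{frac:.0%} of pattern -> response {weights @ partial:.2f}")

    The printed responses grow roughly in proportion to the fraction of the pattern presented, which is the linear behaviour reported for the implemented neuron.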

    Spike-Based Reinforcement Learning in Continuous State and Action Space: When Policy Gradient Methods Fail

    Changes of synaptic connections between neurons are thought to be the physiological basis of learning. These changes can be gated by neuromodulators that encode the presence of reward. We study a family of reward-modulated synaptic learning rules for spiking neurons on a learning task in continuous space inspired by the Morris water maze. The synaptic update rule modifies the release probability of synaptic transmission and depends on the timing of presynaptic spike arrival and postsynaptic action potentials, as well as on the membrane potential of the postsynaptic neuron. The family of learning rules includes an optimal rule derived from policy gradient methods as well as reward-modulated Hebbian learning. The synaptic update rule is implemented in a population of spiking neurons using a network architecture that combines feedforward input with lateral connections. Actions are represented by a population of hypothetical action cells with strong Mexican-hat connectivity and are read out at theta frequency. We show that in this architecture, a standard policy gradient rule fails to solve the Morris water maze task, whereas a variant with a Hebbian bias can learn the task within 20 trials, consistent with experiments. This result does not depend on implementation details such as the size of the neuronal populations. Our theoretical approach shows how learning new behaviors can be linked to reward-modulated plasticity at the level of single synapses, and it makes predictions about the voltage and spike-timing dependence of synaptic plasticity and the influence of neuromodulators such as dopamine. It is an important step towards connecting formal theories of reinforcement learning with neuronal and synaptic properties.
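    The flavour of such rules can be conveyed schematically (hypothetical constants, and a simplification of the paper's actual update): pre/post spike coincidences accumulate in a decaying eligibility trace, and the reward delivered at the end of a trial gates whether the trace becomes a weight change:

        import numpy as np

        rng = np.random.default_rng(2)

        ETA, TAU_E = 0.05, 200.0   # learning rate; eligibility decay (ms)

        def trial_update(w, pre, post, reward, dt=1.0):
            """One trial: pre/post are 0/1 spike trains sampled every dt ms."""
            e = 0.0
            for x, y in zip(pre, post):
                # A pure policy-gradient rule would use (y - p_spike) here;
                # replacing it with the spike count y adds the Hebbian bias.
                e += -e * dt / TAU_E + x * y
            return w + ETA * reward * e     # reward gates the eligibility trace

        w = 0.5
        pre = (rng.random(500) < 0.1).astype(float)
        post = (rng.random(500) < 0.1).astype(float)
        print("weight after a rewarded trial:", trial_update(w, pre, post, +1.0))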

    Paradoxical Results of Long-Term Potentiation explained by Voltage-based Plasticity Rule

    Experiments have shown that the same stimulation pattern that causes Long-Term Potentiation in proximal synapses will induce Long-Term Depression in distal ones. To understand these and other surprising observations, we use a phenomenological model of Hebbian plasticity at the location of the synapse. Our computational model describes the Hebbian condition of joint activity of pre- and postsynaptic neuron in a compact form as the interaction of the glutamate trace left by a presynaptic spike with the time course of the postsynaptic voltage. We test the model using experimentally recorded dendritic voltage traces from hippocampus and neocortex. We find that the time course of the voltage in the neighborhood of a stimulated synapse is a reliable predictor of whether a stimulated synapse undergoes potentiation, depression, or no change. Our model can explain the existence of different -- at first glance seemingly paradoxical -- outcomes of synaptic potentiation and depression experiments depending on the dendritic location of the synapse and the frequency or timing of the stimulation.
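    A minimal sketch of a voltage-based rule of this kind (thresholds and amplitudes are illustrative, not values fitted in the paper): a presynaptic spike leaves a decaying glutamate trace, and the local postsynaptic voltage sets the sign of the change, with strong depolarization driving potentiation and moderate depolarization driving depression:

        import numpy as np

        TAU_GLU = 20.0                        # ms, decay of the glutamate trace
        THETA_LTD, THETA_LTP = -70.0, -45.0   # mV, depression/potentiation thresholds
        A_LTD, A_LTP = 1e-4, 2e-4             # illustrative amplitudes

        def weight_change(pre_spikes, voltage, dt=0.1):
            """Net change from a presynaptic spike train and the local
            postsynaptic voltage trace (mV), sampled every dt ms."""
            glu, dw = 0.0, 0.0
            for s, v in zip(pre_spikes, voltage):
                glu += dt * (-glu / TAU_GLU) + s               # glutamate trace
                ltp = A_LTP * max(v - THETA_LTP, 0.0)
                ltd = A_LTD * max(min(v, THETA_LTP) - THETA_LTD, 0.0)
                dw += dt * glu * (ltp - ltd)
            return dw

        spikes = np.zeros(3000); spikes[200] = 1.0             # one presynaptic spike
        for label, level in (("strong depolarization", -30.0),
                             ("moderate depolarization", -60.0)):
            v = np.full(3000, -75.0); v[200:1200] = level
            print(f"{label}: dw = {weight_change(spikes, v):+.4f}")

    Feeding recorded dendritic voltage traces into such a rule, in place of the synthetic steps used here, mirrors the test reported in the abstract: the sign of the predicted change distinguishes potentiation, depression, and no change.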

    Towards a Brain-inspired Information Processing System: Modelling and Analysis of Synaptic Dynamics

    Biological neural systems (BNS) in general, and the central nervous system (CNS) in particular, exhibit strikingly efficient computational power along with an extremely flexible and adaptive basis for acquiring and integrating new knowledge. Gaining more insight into the actual mechanisms of information processing within the BNS and into their computational capabilities is a core objective of modern computer science, the computational sciences and neuroscience. One of the main reasons for this drive to understand the brain is to improve the quality of life of people who suffer from partial or complete loss of brain or spinal cord functions. Brain-computer interfaces (BCI), neural prostheses and similar approaches are potential solutions, helping these patients through therapy or advancing their rehabilitation. There is, however, a significant lack of knowledge regarding basic information processing within the CNS. Without a better understanding of the fundamental operations and sequences leading to cognitive abilities, applications like BCI and neural prostheses will keep struggling to find a proper and systematic way to help patients. To gain more insight into these basic information processing methods, this thesis presents an approach that makes a formal distinction between the essence of being intelligent (as for the brain) and classical artificial intelligence, e.g. expert systems. It investigates the underlying mechanisms that allow the CNS to perform a massive number of computational tasks with sustained efficiency and flexibility; this ability to learn, adapt and invent is the essence of being intelligent. The approach is based on the hypothesis that the brain, or specifically a biological neural circuit in the CNS, is a dynamic system (network) featuring emergent capabilities, and that these capabilities can be imported into spiking neural networks (SNN) by emulating the dynamic neural system. Emulating the dynamic system requires simulating both the inner workings of the system and the framework within which it performs its information processing tasks. The work therefore comprises two main parts. The first part introduces a novel dynamic synaptic model as a vital constituent of the inner workings of the dynamic neural system. The model balances the necessary biophysical detail against computational cost: biophysical grounding lets it inherit the abilities of the target dynamic system, while simplicity keeps it usable in large-scale simulations and in future hardware implementations. In addition, the energy-related aspects of synaptic dynamics are studied and linked to the behaviour of networks seeking stable states of activity. The second part is concerned with importing the processing framework of the dynamic system into the SNN environment. It builds on the well-established concept of binding by synchrony to address the information binding problem and proposes the concept of synchrony states within SNN. The idea of computing with states is then extended into a computational model based on finite-state machines and reservoir computing. Biologically plausible validations of the introduced model and frameworks are performed.
    The results and discussion of these validations indicate that this study is a significant step toward understanding the mechanisms underpinning the computational power of the CNS. It also outlines a roadmap for adopting biological computational capabilities in computational science in general and in biologically inspired spiking neural networks in particular. Large-scale simulations and the development of neuromorphic hardware are work in progress and future work. Applications of the introduced work include neural prostheses and bionic automation systems.
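    Of the ingredients above, the reservoir-computing framework is the easiest to isolate in a small sketch (a generic rate-based echo-state network, not the thesis's spiking implementation): a fixed random recurrent network keeps a fading memory of the input stream, and only a linear readout is trained, here on a 3-step delayed-recall task:

        import numpy as np

        rng = np.random.default_rng(3)

        N, STEPS = 200, 1000
        W_in = rng.uniform(-0.5, 0.5, N)
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # enforce fading memory

        u = rng.uniform(-1, 1, STEPS)                     # input signal
        x = np.zeros(N)
        states = np.empty((STEPS, N))
        for t in range(STEPS):
            x = np.tanh(W @ x + W_in * u[t])              # fixed, untrained dynamics
            states[t] = x

        target = np.roll(u, 3)                            # task: recall u(t - 3)
        W_out, *_ = np.linalg.lstsq(states[10:], target[10:], rcond=None)
        print("readout MSE:", np.mean((states[10:] @ W_out - target[10:]) ** 2))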

    Synaptic Transmission Optimization Predicts Expression Loci of Long-Term Plasticity

    Long-term modifications of neuronal connections are critical for reliable memory storage in the brain. However, their locus of expression—pre- or postsynaptic—is highly variable. Here we introduce a theoretical framework in which long-term plasticity performs an optimization of the postsynaptic response statistics toward a given mean with minimal variance. Consequently, the state of the synapse at the time of plasticity induction determines the ratio of pre- and postsynaptic modifications. Our theory explains the experimentally observed expression loci of the hippocampal and neocortical synaptic potentiation studies we examined. Moreover, the theory predicts presynaptic expression of long-term depression, consistent with experimental observations. At inhibitory synapses, the theory suggests a statistically efficient excitatory-inhibitory balance in which changes in inhibitory postsynaptic response statistics specifically target the mean excitation. Our results provide a unifying theory for understanding the expression mechanisms and functions of long-term synaptic transmission plasticity.
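    The statistical logic can be seen in a small worked example using the standard binomial model of synaptic transmission (numbers are illustrative): with N release sites, release probability p and quantal amplitude q, the response mean is N*p*q and its variance N*p*(1-p)*q**2, so for a fixed target mean, raising p (presynaptic expression) lowers the trial-to-trial variance while raising q (postsynaptic expression) does not:

        # Binomial synapse: mean = N*p*q, variance = N*p*(1-p)*q**2.
        N, TARGET_MEAN = 10, 5.0

        for p in (0.2, 0.5, 0.8):
            q = TARGET_MEAN / (N * p)       # choose q so the mean hits the target
            var = N * p * (1 - p) * q**2
            print(f"p={p:.1f}, q={q:.2f} -> mean={N*p*q:.1f}, variance={var:.3f}")

    At a fixed mean, the variance equals mean*q*(1-p), so the synapse's state at induction (its current p and q) determines which locus of change is the statistically efficient one, matching the framework's prediction.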