
    A synaptic learning rule for exploiting nonlinear dendritic computation

    Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
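The "single neuron as a two-layer network" picture can be illustrated with a toy model. This is a sketch only, not the cable-theory learning rule the paper derives: a handful of sigmoidal dendritic "branches" feed a sigmoidal soma, and plain gradient descent on the synaptic weights learns an XOR-style feature-binding task (respond to either feature alone, but not to both together or neither). All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Feature-binding task (XOR-like): respond to either feature alone, not both or neither.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

n_branch = 4                              # number of dendritic "subunits"
W = rng.normal(0.0, 1.0, (2, n_branch))   # synapse -> branch weights
v = rng.normal(0.0, 1.0, n_branch)        # branch -> soma weights

def forward():
    h = sigmoid(X @ W)                    # nonlinear branch activations
    return h, sigmoid(h @ v)              # somatic output

_, out = forward()
loss_before = float(np.mean((out - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward()
    d_out = (out - y) * out * (1.0 - out)             # error signal at the soma
    d_v = h.T @ d_out                                 # gradient for branch->soma weights
    d_W = X.T @ (d_out[:, None] * v * h * (1.0 - h))  # gradient for synapse->branch weights
    v -= lr * d_v
    W -= lr * d_W

_, out = forward()
loss_after = float(np.mean((out - y) ** 2))
print(loss_before, "->", loss_after)
```

A single linear unit cannot solve this task; the nonlinear branch layer is what gives the "neuron" the needed capacity, which is the point the abstract makes about dendritic nonlinearities.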

    Analog VLSI circuit design of spike-timing-dependent synaptic plasticity

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Cataloged from PDF version of thesis. Includes bibliographical references (p. 61-63). Synaptic plasticity is the ability of a synaptic connection to change in strength and is believed to be the basis for learning and memory. Two main forms of synaptic plasticity are currently recognized. The first is spike-timing-dependent plasticity (STDP), a timing-based protocol in which the efficacy of a synaptic connection is modulated by the relative timing between presynaptic and postsynaptic stimuli. The second is the Bienenstock-Cooper-Munro (BCM) learning rule, a classical rate-based protocol in which the rate of presynaptic stimulation modulates the synaptic strength. Several theoretical models have been developed to explain the two forms of plasticity, but none has come close to identifying the biophysical mechanism of plasticity. Other studies focused instead on developing neuromorphic systems of synaptic plasticity. These systems used simple curve-fitting methods that could reproduce some types of STDP but still failed to shed light on the biophysical basis of STDP. Furthermore, none of these neuromorphic systems could reproduce the various forms of STDP and relate them to the BCM rule. However, a recent discovery resulted in a new unified model that explains the general biophysical process governing synaptic plasticity using fundamental ideas about the biochemical reactions and kinetics within the synapse. This unified model accounts for all types of STDP and relates them to the BCM rule, giving a fresh approach for constructing a system that overcomes the challenges existing neuromorphic systems faced.
Here, we propose a novel analog very-large-scale-integration (aVLSI) circuit that accurately captures the whole picture of synaptic plasticity based on the results of this unified model. Our circuit was tested for all types of STDP, and in each test our design reproduced the results predicted by the model. The system requires two inputs: a glutamate signal that carries information about the presynaptic stimuli, and a dendritic action potential signal that contains information about the postsynaptic stimuli. These two inputs give rise to changes in the excitatory postsynaptic current, which represents the modifiable synaptic efficacy output. Finally, we also present several techniques and alternative circuit designs that will further improve the performance of our neuromorphic system. By Joshua Jen C. Monzon. M.Eng.
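For reference, the classic pair-based STDP window that such circuits aim to reproduce can be written in a few lines. This is the textbook exponential window, not the thesis's unified biophysical model, and the amplitude and time-constant values are illustrative:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    dt_ms = t_post - t_pre. Positive dt (pre before post) potentiates,
    negative dt (post before pre) depresses; the effect decays
    exponentially with the spike-time difference. Parameter values are
    illustrative, not taken from the thesis.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)   # LTP branch
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus) # LTD branch
    return 0.0

print(stdp_dw(10.0), stdp_dw(-10.0))
```

A slight LTD/LTP asymmetry (a_minus > a_plus), as used above, is a common choice to keep weights from growing without bound under uncorrelated firing.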

    Shaping of Spike-Timing-Dependent Plasticity curve using interneuron and calcium dynamics

    The field of Computational Neuroscience is where neuroscience and computational modelling merge. It is a growing area of research in which the level of biological modelling ranges from small-scale cellular models to larger network-scale models. This MSc thesis details research on a small network of two neurons, modelled in a high level of detail with the intention of using the model to study the phenomenon of Spike-Timing-Dependent Plasticity (STDP). STDP is the strengthening or weakening of the connection between two neurons depending on the temporal order of stimulation between them. A major part of the work focuses on the mechanisms responsible for these changes in plasticity, with the goal of capturing those mechanisms in a single learning rule. The results can be compared directly with data from previous in-vitro experiments. The research then looks at further applications of the model, in particular certain deficits seen in people with schizophrenia. We modify the model to include these cellular impairments, then observe how this affects the standard STDP curve and thus the strengthening/weakening between the two neurons.
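A common way to tie calcium dynamics to the STDP curve, in the spirit of calcium-control models, is a two-threshold rule: intermediate calcium depresses the synapse, high calcium potentiates it. The sketch below is a generic version of that idea, not the thesis's actual learning rule; the threshold and gain values are invented placeholders:

```python
def dw_from_calcium(ca, theta_d=0.35, theta_p=0.55, eta=0.1):
    """Weight change as a function of the peak postsynaptic calcium level.

    Below theta_d nothing happens; intermediate calcium gives depression
    (LTD); high calcium gives potentiation (LTP). Thresholds and gain are
    illustrative placeholders, not fitted values.
    """
    if ca >= theta_p:
        return eta * (ca - theta_p)      # LTP regime
    if ca >= theta_d:
        return -eta * (ca - theta_d)     # LTD regime
    return 0.0                           # subthreshold: no change

print(dw_from_calcium(0.2), dw_from_calcium(0.45), dw_from_calcium(0.7))
```

Because pre-before-post pairings typically produce larger calcium transients than post-before-pre pairings, a rule of this shape maps spike timing onto the familiar STDP curve, and altering the calcium dynamics (as with the modelled cellular impairments) shifts where the LTD and LTP regions fall.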

    Advances in Neural Signal Processing

    Neural signal processing is a specialized area of signal processing aimed at extracting information or decoding intent from neural signals recorded from the central or peripheral nervous system. It has significant applications in neuroscience and neural engineering, most famously in brain–machine interfaces. This book presents recent advances in this flourishing field of neural signal processing, with demonstrative applications.

    Neural Integrator: A Sandpile Model

    We investigated a model of the neural integrator based on hysteretic units connected by positive feedback. Hysteresis is assumed to emerge from the intrinsic properties of the cells. We consider recurrent networks containing either bistable or multistable neurons. We apply our analysis to the oculomotor velocity-to-position neural integrator, which calculates eye position from inputs carrying information about eye angular velocity. By analyzing this system in parameter space, we show the following. The direction of hysteresis in the neuronal response may be reversed in a system with recurrent connections compared to the case of unconnected neurons. Thus, for NMDA receptor-based bistability, the firing rates after ON saccades may be higher than after OFF saccades for the same eye position. The reversal of hysteresis occurs in this model only when the size of hysteresis differs from neuron to neuron. We also relate the macroscopic leak time constant of the integrator to the rate of microscopic spontaneous noise-driven transitions in the hysteretic units. Finally, we investigate the conditions under which the hysteretic integrator may have no threshold for integration.
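The core mechanism, bistable units with staggered thresholds and positive feedback turning a velocity pulse into a persistent position signal, can be sketched in a few lines. This is a minimal deterministic caricature of a sandpile-style integrator, with made-up thresholds and gain, not the paper's analyzed model:

```python
import numpy as np

n = 100
gain = 0.01                              # recurrent feedback per active unit
on_thr = gain * np.arange(n) + gain / 2  # staggered ON thresholds, one per unit
hyst = 0.005                             # OFF threshold sits hyst below the ON threshold
state = np.zeros(n, dtype=bool)

def step(velocity):
    """One time step: units switch ON above on_thr, OFF below on_thr - hyst,
    and otherwise keep their previous state (hysteresis)."""
    global state
    drive = gain * state.sum() + velocity
    state = (drive >= on_thr) | (state & (drive >= on_thr - hyst))
    return int(state.sum())

# Each velocity pulse recruits more units; the active count (the "eye position")
# then persists under zero input because feedback plus hysteresis holds it.
counts = [step(0.05), step(0.0), step(0.05), step(0.0)]
print(counts)  # -> [5, 5, 10, 10]
```

With thresholds spaced exactly gain apart, every active-unit count is a fixed point at zero input, so the network holds any position it is driven to; making the spacing or hysteresis irregular across units is what opens the door to the leak and threshold effects the abstract discusses.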


    Assessment, integration and implementation of computationally efficient models to simulate biological neuronal networks on parallel hardware

    The simulation of large-scale biological spiking neural networks (SNNs) is computationally onerous. In this work we develop a simulation tool that exploits the computational capabilities of modern graphics processors (GPUs) to speed up simulations of SNNs that closely mimic physiological phenomena. Different models of these phenomena are analyzed, and those best in terms of predictions and compatibility with the SIMD architecture of GPUs are implemented. The performance of the simulator is evaluated.
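The reason SNN simulation maps well onto SIMD hardware is that the membrane update is the same arithmetic for every neuron, so the whole population can be advanced in one vectorized pass. The sketch below uses a generic leaky integrate-and-fire network in NumPy as a stand-in (the thesis does not specify its models here, and NumPy vectorization only illustrates the SIMD-friendly formulation, not an actual GPU kernel); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                 # neurons
dt, tau = 0.1, 10.0                      # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV
v = np.full(n, v_rest)
w = rng.normal(0.0, 0.5, (n, n)) / np.sqrt(n)     # random recurrent weights

def step(i_ext):
    """Advance every neuron by one Euler step in a single vectorized pass."""
    global v
    spiked = v >= v_thresh
    v[spiked] = v_reset                               # reset neurons that spiked
    syn = w @ spiked.astype(float)                    # recurrent input from last spikes
    v = v + dt * ((v_rest - v) / tau + i_ext + syn)   # leaky integration
    return spiked

total = int(sum(step(20.0).sum() for _ in range(1000)))  # 100 ms of simulated time
print("total spikes:", total)
```

Every line in `step` is a dense elementwise or matrix operation with no per-neuron branching, which is exactly the shape of computation that a GPU's SIMD lanes execute efficiently; data-dependent control flow per neuron is what such an implementation must avoid.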