
    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then give an overview of the simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource that facilitates identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks. Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
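
    As a concrete illustration of the clock-driven strategy mentioned above, the following minimal sketch integrates a single current-based leaky integrate-and-fire neuron with a fixed time step. It is not the benchmark code accompanying the review; the parameter values and the input spike train are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of clock-driven (fixed time step) integration of one
# current-based leaky integrate-and-fire neuron; all parameter values are
# illustrative and not taken from the benchmarks in the review.
dt, T = 0.1, 200.0                                         # time step, duration (ms)
tau_m, v_rest, v_th, v_reset = 20.0, -70.0, -54.0, -70.0   # membrane parameters (ms, mV)
tau_syn, w = 5.0, 4.0                                      # synaptic time constant (ms), weight (mV)

# pooled presynaptic spike train (stands in for many converging afferents)
pre_steps = set(np.round(np.arange(5.0, T, 1.0) / dt).astype(int))

v, i_syn, out_spikes = v_rest, 0.0, []
for step in range(int(T / dt)):
    i_syn += -dt * i_syn / tau_syn            # exponentially decaying synaptic drive
    if step in pre_steps:
        i_syn += w
    v += dt * (-(v - v_rest) + i_syn) / tau_m  # forward-Euler membrane update
    if v >= v_th:                              # threshold crossing: spike and reset
        out_spikes.append(step * dt)
        v = v_reset

print(f"{len(out_spikes)} output spikes at", [round(t, 1) for t in out_spikes])
```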

    Synaptic plasticity in medial vestibular nucleus neurons: comparison with computational requirements of VOR adaptation

    Background: Vestibulo-ocular reflex (VOR) gain adaptation, a longstanding experimental model of cerebellar learning, utilizes sites of plasticity in both the cerebellar cortex and the brainstem. However, the mechanisms by which the activity of cortical Purkinje cells may guide synaptic plasticity in brainstem vestibular neurons are unclear. Theoretical analyses indicate that vestibular plasticity should depend upon the correlation between Purkinje cell and vestibular afferent inputs, so that, in gain-down learning for example, increased cortical activity should induce long-term depression (LTD) at vestibular synapses. Methodology/Principal Findings: Here we expressed this correlational learning rule in its simplest form, as an anti-Hebbian, heterosynaptic spike-timing dependent plasticity interaction between excitatory (vestibular) and inhibitory (floccular) inputs converging on medial vestibular nucleus (MVN) neurons (input-spike-timing dependent plasticity, iSTDP). To test this rule, we stimulated vestibular afferents to evoke EPSCs in rat MVN neurons in vitro. Control EPSC recordings were followed by an induction protocol in which membrane hyperpolarizing pulses, mimicking IPSPs evoked by flocculus inputs, were paired with single vestibular nerve stimuli. A robust LTD developed at vestibular synapses when the afferent EPSPs coincided with membrane hyperpolarization, while EPSPs occurring before or after the simulated IPSPs induced no lasting change. Furthermore, the iSTDP rule also successfully predicted the effects of a complex protocol using EPSP trains designed to mimic classical conditioning. Conclusions: These results, in strong support of theoretical predictions, suggest that the cerebellum alters the strength of vestibular synapses on MVN neurons through heterosynaptic, anti-Hebbian iSTDP. Since the iSTDP rule does not depend on postsynaptic firing, it suggests a possible mechanism for VOR adaptation without compromising gaze-holding and VOR performance in vivo.
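
    A minimal sketch of the anti-Hebbian iSTDP interaction described above: depression of the vestibular (excitatory) synapse is strongest when the EPSP coincides with the simulated IPSP and vanishes for large timing offsets. The Gaussian window shape, amplitude and time constant below are assumptions for illustration, not the measured kernel.

```python
import numpy as np

def istdp_weight_change(dt_ms, a_ltd=0.5, sigma=10.0):
    """Illustrative anti-Hebbian iSTDP window (assumed Gaussian shape).

    dt_ms : timing of the excitatory (vestibular) EPSP relative to the
            inhibitory (floccular) hyperpolarization, in ms.
    Depression is maximal when EPSP and IPSP coincide (dt_ms ~ 0) and
    falls off for EPSPs arriving well before or after the IPSP.
    """
    return -a_ltd * np.exp(-dt_ms**2 / (2.0 * sigma**2))

for dt in (-40.0, -10.0, 0.0, 10.0, 40.0):
    print(dt, round(istdp_weight_change(dt), 3))
```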

    Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of neoHebbian Three-Factor Learning Rules

    Most elementary behaviors such as moving the arm to grasp an object or walking into the next room to explore a museum evolve on the time scale of seconds; in contrast, neuronal action potentials occur on the time scale of a few milliseconds. Learning rules of the brain must therefore bridge the gap between these two different time scales. Modern theories of synaptic plasticity have postulated that the co-activation of pre- and postsynaptic neurons sets a flag at the synapse, called an eligibility trace, that leads to a weight change only if an additional factor is present while the flag is set. This third factor, signaling reward, punishment, surprise, or novelty, could be implemented by the phasic activity of neuromodulators or specific neuronal inputs signaling special events. While the theoretical framework has been developed over the last decades, experimental evidence in support of eligibility traces on the time scale of seconds has been collected only during the last few years. Here we review, in the context of three-factor rules of synaptic plasticity, four key experiments that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules.
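
    The eligibility-trace idea summarised above can be written down in a few lines: co-activation of pre- and postsynaptic spikes sets a slowly decaying flag, and the weight changes only if a third factor arrives while the flag is still non-zero. The sketch below is a generic caricature with assumed parameter values, not a rule taken from any of the four reviewed experiments.

```python
import numpy as np

# Minimal sketch of a neoHebbian three-factor update with an assumed
# exponentially decaying eligibility trace; parameter values are illustrative.
dt = 1.0            # simulation step (ms)
tau_e = 1000.0      # eligibility trace decays on the time scale of seconds
eta = 0.01          # learning rate

w, e = 0.5, 0.0
rng = np.random.default_rng(0)

for step in range(5000):
    pre = rng.random() < 0.02        # presynaptic spike this step?
    post = rng.random() < 0.02       # postsynaptic spike this step?
    # co-activation of pre- and postsynaptic neuron sets the "flag"
    e += -dt * e / tau_e + (1.0 if (pre and post) else 0.0)
    # third factor (e.g. phasic neuromodulation signalling reward or surprise)
    third_factor = 1.0 if step == 3000 else 0.0
    # the weight changes only when the third factor arrives while e > 0
    w += eta * third_factor * e

print("final weight:", round(w, 4))
```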

    Connectivity reflects coding: A model of voltage-based spike-timing-dependent-plasticity with homeostasis

    Electrophysiological connectivity patterns in cortex often show a few strong connections in a sea of weak connections. In some brain areas a large fraction of strong connections are bidirectional, in others they are mainly unidirectional. In order to explain these connectivity patterns, we use a model of Spike-Timing-Dependent Plasticity where synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential. The model describes several nonlinear effects in STDP experiments, as well as the voltage dependence of plasticity under voltage clamp and classical paradigms of LTP/LTD induction. We show that in a simulated recurrent network of spiking neurons our plasticity rule leads not only to receptive field development, but also to connectivity patterns that reflect the neural code: for temporal coding paradigms strong connections are predominantly unidirectional, whereas they are bidirectional under rate coding. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas.
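
    A simplified caricature of a voltage-based plasticity update of the kind described above: depression is gated by presynaptic spike arrival and a low-pass-filtered postsynaptic voltage, while potentiation is gated by a presynaptic trace and the instantaneous voltage crossing a higher threshold. The threshold and amplitude values below are placeholders, not the fitted parameters of the published model.

```python
def voltage_stdp_step(w, u, u_bar_minus, u_bar_plus, x_bar, pre_spike,
                      theta_minus=-70.6, theta_plus=-45.3,
                      a_ltd=1e-4, a_ltp=1e-4):
    """One illustrative update of a voltage-based plasticity rule
    (simplified caricature; thresholds and amplitudes are placeholders).

    u           : postsynaptic membrane potential (mV)
    u_bar_*     : low-pass-filtered versions of u (mV)
    x_bar       : low-pass-filtered presynaptic spike train
    pre_spike   : 1.0 if a presynaptic spike arrives this step, else 0.0
    """
    # depression: presynaptic spike while the filtered voltage is depolarized
    ltd = a_ltd * pre_spike * max(u_bar_minus - theta_minus, 0.0)
    # potentiation: presynaptic trace while the voltage is strongly depolarized
    ltp = a_ltp * x_bar * max(u - theta_plus, 0.0) * max(u_bar_plus - theta_minus, 0.0)
    return w - ltd + ltp

w = voltage_stdp_step(0.5, u=-40.0, u_bar_minus=-55.0, u_bar_plus=-50.0,
                      x_bar=0.8, pre_spike=1.0)
print(round(w, 6))
```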

    Spike Timing-Dependent Plasticity as the Origin of the Formation of Clustered Synaptic Efficacy Engrams

    Synapse location, dendritic active properties and synaptic plasticity are all known to play some role in shaping the different input streams impinging onto a neuron. It remains unclear, however, how the magnitude and spatial distribution of synaptic efficacies emerge from this interplay. Here, we investigate this interplay using a biophysically detailed neuron model of a reconstructed layer 2/3 pyramidal cell and spike timing-dependent plasticity (STDP). Specifically, we focus on the issue of how the efficacy of synapses contributed by different input streams are spatially represented in dendrites after STDP learning. We construct a simple feed-forward network where a detailed model neuron receives synaptic inputs independently from multiple, equally sized groups of afferent fibers with correlated activity, mimicking the spike activity from different neuronal populations encoding, for example, different sensory modalities. Interestingly, following STDP learning, we observe that for all afferent groups, STDP leads to synaptic efficacies arranged into spatially segregated clusters effectively partitioning the dendritic tree. These segregated clusters possess a characteristic global organization in space, where they form a tessellation in which each group dominates mutually exclusive regions of the dendrite. Put simply, the dendritic imprint from different input streams left after STDP learning effectively forms what we term a “dendritic efficacy mosaic.” Furthermore, we show how variations of the inputs and STDP rule affect such an organization. Our model suggests that STDP may be an important mechanism for creating a clustered plasticity engram, which shapes how different input streams are spatially represented in dendrites.
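
    The input side of such a simulation can be sketched compactly: each afferent group fires correlated spike trains, here generated by thinning a shared “mother” train. This construction and its parameters are assumptions for illustration; the paper's detailed compartmental neuron model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_group(n_fibers, rate_hz, corr, duration_s, dt=0.001):
    """Illustrative correlated Poisson inputs for one afferent group,
    generated by thinning a shared 'mother' spike train (an assumed
    construction, not necessarily the one used in the paper)."""
    n_steps = int(duration_s / dt)
    mother = rng.random(n_steps) < rate_hz * dt / corr   # shared events
    spikes = np.zeros((n_fibers, n_steps), dtype=bool)
    for i in range(n_fibers):
        # each fiber keeps a fraction `corr` of the mother events,
        # giving within-group correlation through the shared events
        spikes[i] = mother & (rng.random(n_steps) < corr)
    return spikes

group_a = correlated_group(n_fibers=20, rate_hz=5.0, corr=0.3, duration_s=10.0)
print(int(group_a.sum()), "spikes in group A")
```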

    Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface

    Experiments show that spike-triggered stimulation performed with Bidirectional Brain-Computer Interfaces (BBCIs) can artificially strengthen connections between separate neural sites in motor cortex (MC). What are the neuronal mechanisms responsible for these changes, and how does targeted stimulation by a BBCI shape population-level synaptic connectivity? The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both neural and synaptic activity statistics relevant to BBCI conditioning protocols. When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites are strengthened for spike-stimulus delays consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules. However, the relationship between STDP mechanisms at the network level and their modification with neural implants remains poorly understood. Using our model, we successfully reproduce key experimental results and complement them with analytical derivations and novel experimental data. We then derive optimal operational regimes for BBCIs and formulate predictions concerning the efficacy of spike-triggered stimulation in different regimes of cortical activity. Comment: 35 pages, 9 figures.
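
    The link between the spike-stimulus delay and the sign and size of the induced change can be illustrated with a standard pair-based STDP window: in a spike-triggered protocol, the trigger-site spike and the stimulus-evoked target-site spike are separated by the programmed delay, so the expected change of the trigger-to-target connection roughly tracks the window evaluated at that delay. The double-exponential window and its parameters below are generic assumptions, not those fitted in the paper.

```python
import numpy as np

def stdp_window(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Standard double-exponential STDP window (illustrative parameters).
    delta_t = t_post - t_pre in ms."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

# Each recorded spike at the trigger site evokes a stimulus (and hence
# spikes) at the target site after a fixed delay, so the expected change
# of the trigger->target connection roughly follows the window at that delay.
for delay_ms in (5.0, 10.0, 20.0, 50.0, 100.0):
    print(delay_ms, float(stdp_window(delay_ms)))
```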

    Modelling plasticity in dendrites: from single cells to networks

    One of the key questions in neuroscience is how our brain self-organises to efficiently process information. To answer this question, we need to understand the underlying mechanisms of plasticity and their role in shaping synaptic connectivity. Theoretical neuroscience typically investigates plasticity at the level of neural networks. Neural network models often consist of point neurons, completely neglecting neuronal morphology for reasons of simplicity. However, during the past decades it has become increasingly clear that inputs are locally processed in the dendrites before they reach the cell body. Dendritic properties enable local interactions between synapses and location-dependent modulations of inputs, making the position of synapses on dendrites highly important. These insights changed our view of neurons, such that we now think of them as small networks of nearly independent subunits rather than as simple points. Here, we propose that understanding how the brain processes information requires answering the following questions: which plasticity mechanisms are present in the dendrites, and how do they enable the self-organisation of synapses across the dendritic tree for efficient information processing? Ultimately, dendritic plasticity mechanisms can be studied in networks of neurons with dendrites, possibly uncovering unknown mechanisms that shape the connectivity in our brains.