
    PyNEST: A Convenient Interface to the NEST Simulator

    The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10⁴ neurons and 10⁷ to 10⁹ synapses. NEST is implemented in C++ and can be used on a wide range of architectures, from single-core laptops and multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
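    As an illustration of the interface this abstract describes, a minimal PyNEST session might look like the following sketch. Model and call names follow NEST 3.x; all parameter values are arbitrary assumptions, and the import is guarded so the script degrades gracefully where NEST is not installed.

```python
# Hedged sketch of a PyNEST session (NEST 3.x naming assumed).
try:
    import nest  # PyNEST; requires a local NEST installation
    HAVE_NEST = True
except ImportError:
    HAVE_NEST = False

if HAVE_NEST:
    nest.ResetKernel()
    # 100 leaky integrate-and-fire neurons driven by Poisson noise.
    neurons = nest.Create("iaf_psc_alpha", 100)
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})
    rec = nest.Create("spike_recorder")
    # Connect the stimulus to all neurons and all neurons to the recorder.
    nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
    nest.Connect(neurons, rec)
    nest.Simulate(1000.0)  # simulate 1000 ms
    print("recorded spikes:", rec.n_events)
```

    The same create/connect/simulate structure would be considerably more verbose in SLI, which is the comparison the abstract draws.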

    Breaking Synchrony by Heterogeneity in Complex Networks

    For networks of pulse-coupled oscillators with complex connectivity, we demonstrate that in the presence of coupling heterogeneity, precisely timed periodic firing patterns replace the state of global synchrony that exists only in homogeneous networks. With increasing disorder, these patterns persist until they reach a critical temporal extent that is of the order of the interaction delay. For stronger disorder these patterns cease to exist and only asynchronous, aperiodic states are observed. We derive self-consistency equations to predict the precise temporal structure of a pattern from the network heterogeneity. Moreover, we show how to design heterogeneous coupling architectures to create an arbitrary prescribed pattern.

    The Scientific Case for Brain Simulators

    A key element of the European Union’s Human Brain Project (HBP) and other large-scale brain research projects is the simulation of large-scale model networks of neurons. Here, we argue why such simulations will likely be indispensable for bridging the scales between the neuron and system levels in the brain, and why a set of brain simulators based on neuron models at different levels of biological detail should therefore be developed. To allow for systematic refinement of candidate network models by comparison with experiments, the simulations should be multimodal in the sense that they should predict not only action potentials, but also the electric, magnetic, and optical signals measured at the population and system levels.

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We give an overview of the different simulators and simulation environments presently available (restricted to those that are freely available, open source, and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. (Review article; Journal of Computational Neuroscience, in press, 2007.)
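    To make the distinction concrete, the clock-driven strategy mentioned in the abstract can be sketched as a fixed-step loop for a single current-based leaky integrate-and-fire neuron. This is a minimal illustration, not one of the paper's benchmarks, and all parameter values are assumptions.

```python
# Clock-driven (fixed-step, forward-Euler) integration of a current-based
# leaky integrate-and-fire neuron. Illustrative parameters only.
dt = 0.1                                       # time step (ms)
tau_m = 20.0                                   # membrane time constant (ms)
v_rest, v_reset, v_th = -70.0, -70.0, -50.0    # potentials (mV)
drive = 25.0                                   # effective input R*I (mV)

v = v_rest
spikes = []
for step in range(int(1000.0 / dt)):           # 1 s of biological time
    t = step * dt
    v += dt * (-(v - v_rest) + drive) / tau_m  # forward-Euler update
    if v >= v_th:                              # threshold crossing -> spike
        spikes.append(t)
        v = v_reset                            # reset (no refractory period)

print("spikes in 1 s:", len(spikes))
```

    An event-driven simulator would instead jump analytically from one spike time to the next; the review's precision comparison turns on exactly this difference, since the fixed grid above quantizes spike times to multiples of dt.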

    Theory of Interaction of Memory Patterns in Layered Associative Networks

    A synfire chain is a network that can generate repeated spike patterns with millisecond precision. Although synfire chains with only one activity propagation mode have been intensively analyzed with several neuron models, those with several stable propagation modes have not been thoroughly investigated. Using the leaky integrate-and-fire neuron model, we constructed a layered associative network embedded with memory patterns. We analyzed the network dynamics with the Fokker-Planck equation. First, we addressed the stability of one memory pattern as a propagating spike volley. We showed that memory patterns propagate as pulse packets. Second, we investigated the activity when we activated two different memory patterns. Simultaneous activation of two memory patterns with the same strength led the propagating pattern to a mixed state. In contrast, when the activations had different strengths, the pulse packet converged to a two-peak state. Finally, we studied the effect of the preceding pulse packet on the following one. The following pulse packet was modified from its originally activated memory pattern and converged to a two-peak state, a mixed state, or a non-spike state, depending on the time interval.
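    The idea of memory patterns embedded in feedforward weights can be illustrated with a toy stand-in: binary threshold units in place of the paper's leaky integrate-and-fire neurons, and simple Hebbian weights in place of its stochastic dynamics. Everything here (pattern sizes, threshold, layer width) is an assumption for illustration.

```python
import random

# Toy layered associative retrieval: two memory patterns stored in binary
# Hebbian feedforward weights; activating a pattern in one layer retrieves
# it in the next. Binary units are a simplification of the paper's model.
random.seed(1)
n = 200                                      # units per layer
pat_a = set(random.sample(range(n), 10))     # memory pattern A
pat_b = set(random.sample(range(n), 10))     # memory pattern B

# w[j][i] = 1 iff units i and j are co-active in some stored pattern.
w = [[0] * n for _ in range(n)]
for pat in (pat_a, pat_b):
    for i in pat:
        for j in pat:
            w[j][i] = 1

def propagate(active, theta=5):
    """One feedforward step: unit j fires if its summed input reaches theta."""
    return {j for j in range(n) if sum(w[j][i] for i in active) >= theta}

print(propagate(pat_a) == pat_a)                    # A should reproduce itself
print(pat_a | pat_b <= propagate(pat_a | pat_b))    # joint activation -> mixed state
```

    The second line mirrors the abstract's observation that activating two patterns with equal strength drives the propagating activity into a mixed state containing both.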

    Signal Propagation in Feedforward Neuronal Networks with Unreliable Synapses

    In this paper, we systematically investigate both synfire propagation and firing rate propagation in feedforward neuronal networks coupled in an all-to-all fashion. In contrast to most earlier work, where only reliable synaptic connections are considered, we mainly examine the effects of unreliable synapses on both types of neural activity propagation. We first study networks composed of purely excitatory neurons. Our results show that both the successful transmission probability and the excitatory synaptic strength largely influence the propagation of these two types of neural activity, and that suitable tuning of these synaptic parameters lets the considered network support stable signal propagation. It is also found that noise has significant but different impacts on these two types of propagation. Additive Gaussian white noise tends to reduce the precision of the synfire activity, whereas noise with appropriate intensity can enhance the performance of firing rate propagation. Further simulations indicate that the propagation dynamics of the considered neuronal network is not simply determined by the average amount of neurotransmitter received by each neuron at a given instant, but is also largely influenced by the stochastic nature of neurotransmitter release. Second, we compare our results with those obtained in corresponding feedforward neuronal networks connected with reliable synapses but coupled in a random fashion, and confirm that some differences can be observed between these two network models. Finally, we study signal propagation in feedforward neuronal networks consisting of both excitatory and inhibitory neurons, and demonstrate that inhibition also plays an important role in signal propagation in the considered networks. (Published in the Journal of Computational Neuroscience.)
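    The core mechanism studied here, Bernoulli-unreliable transmission in an all-to-all feedforward chain, can be sketched with binary threshold units in place of the paper's spiking neurons. The release probability, weight, threshold, and layer sizes below are all illustrative assumptions; with these values the firing rate propagates stably and saturates.

```python
import random

# Firing-rate propagation through an all-to-all feedforward chain with
# unreliable (Bernoulli) synapses. Binary threshold units stand in for
# spiking neurons; all parameter values are illustrative.
random.seed(0)
n_layers, n_per_layer = 8, 100
p_release = 0.5          # probability a spike is transmitted at a synapse
weight = 1.0             # excitatory synaptic strength (arbitrary units)
threshold = 20.0         # summed input required for a unit to fire

def step(active_count):
    """Advance activity one layer: each downstream unit fires if enough of
    the upstream spikes are actually released onto its synapses."""
    fired = 0
    for _ in range(n_per_layer):
        released = sum(1 for _ in range(active_count)
                       if random.random() < p_release)
        if released * weight >= threshold:
            fired += 1
    return fired

active = 60              # initial stimulus: 60 of 100 units fire
for layer in range(n_layers):
    active = step(active)
print("active units in final layer:", active)
```

    Lowering p_release or weight below a critical combination makes the activity die out across layers instead, which is the tuning trade-off the abstract describes.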