
    NESTML – Creating a Neuron Modeling Language and Generating Efficient Code for the NEST Simulator with MontiCore

    No full text
    Neuroscientists use computer simulations as one way to research the brain and brain activity. They have developed and published numerous neuron and synapse models with different levels of detail to be used in simulations of single neurons or large biological neuronal networks. Besides the neuron and synapse models, the neuroscience community has developed several simulators with different scope and, mostly, incompatible model description languages. This makes it hard to develop and publish new neuron and synapse models, and even harder to compare and verify findings across simulators, since each model must be implemented and adjusted for every simulator. This thesis describes the design of NESTML and its development with the MontiCore framework. NESTML is an extendable modeling language for the neuroscience domain. It allows modeling spiking point neuron models in a clean and concise syntax. An associated processing tool performs static analysis on NESTML models to check for programmatic correctness and thus supports neuroscientists in creating new neuron models. Furthermore, it generates efficient code for the NEST simulator and the NEST module infrastructure, which makes it easy to compile and load the generated code into NEST. This reduces the work required to create and maintain neuron models for NEST and, by adding more simulator targets in the future, across simulators.
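
    As a rough illustration of the kind of target code such a generator addresses, the following is a minimal, self-contained C++ sketch of a leaky integrate-and-fire point neuron update loop. It is an illustration only and does not reproduce the model classes NESTML actually generates for NEST; all names, parameter values, and the integration scheme are chosen here for the example.

        // Illustrative sketch only -- not the code NESTML generates for NEST.
        // A leaky integrate-and-fire point neuron advanced with exact integration
        // for piecewise-constant input current.
        #include <cmath>
        #include <cstdio>

        struct IafNeuron {
          // Parameters (values chosen for the example).
          double tau_m   = 10.0;   // membrane time constant [ms]
          double C_m     = 250.0;  // membrane capacitance [pF]
          double V_th    = -55.0;  // spike threshold [mV]
          double V_reset = -70.0;  // reset potential [mV]
          double E_L     = -70.0;  // resting potential [mV]
          // State.
          double V_m = -70.0;      // membrane potential [mV]

          // Advance the state by one time step dt [ms] with input current I [pA].
          // Returns true if the neuron emitted a spike in this step.
          bool update(double dt, double I) {
            const double P = std::exp(-dt / tau_m);  // decay factor over one step
            V_m = E_L + (V_m - E_L) * P + (I * tau_m / C_m) * (1.0 - P);
            if (V_m >= V_th) {
              V_m = V_reset;
              return true;
            }
            return false;
          }
        };

        int main() {
          IafNeuron n;
          const double dt = 0.1;                    // simulation resolution [ms]
          int spikes = 0;
          for (int step = 0; step < 10000; ++step)  // 1 s of biological time
            spikes += n.update(dt, 400.0);          // constant 400 pA drive
          std::printf("spikes in 1 s: %d\n", spikes);
          return 0;
        }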

    Massively Parallel Neuronal Network Model Construction

    No full text
    Biological neuronal network models can be investigated with the NEST simulator (Gewaltig and Diesmann, 2007). Being a hybrid OpenMP and MPI parallel application, NEST is already capable of simulating networks of spiking point neurons at the size of 1% of the human brain (Kunkel et al., 2014). Besides efficient parallel simulation of these networks, their construction becomes more relevant. Current neuronal network sizes span multiple orders of magnitude, and future investigations of the brain will require more complex and larger networks. While Kunkel et al. (2014) presented highly optimized data structures that allow the representation and simulation of neuronal networks on the scale of rodent and cat brains, the time required to create these networks in the simulator becomes impractical. Hence, efficient parallel construction algorithms, which exploit the capabilities of current and future compute hardware, are necessary to perform these large-scale simulations. We present here our ongoing work to provide efficient and scalable algorithms to construct brain-scale neuronal networks.
    The number of cores on single compute nodes is constantly increasing. When using MPI-based parallelization only, each rank has to store MPI-related data structures, which entails an overhead compared to a shared-memory (OpenMP) parallelization. However, previous implementations of parallelized neuronal network construction did not scale well when using OpenMP. We find that this is caused by the massively parallel memory allocation during the wiring phase. Using memory allocators specialized for thread-parallel memory allocation (Evans, 2006; Ghemawat, 2007; Kukanov, 2007) makes thread-parallel wiring scalable again.
    Constructing neuronal networks in large compute-cluster and supercomputer scenarios shows suboptimal wiring performance as well. We find that most of the wiring time is spent idling over non-local target neurons. By refactoring the algorithms to iterate over local target neurons only, we achieve good wiring performance in these scenarios.
    With these optimizations in place, we gain scalable construction of neuronal networks from single compute nodes to supercomputers. On concrete network models we observed twenty times faster neuronal network construction. These performance enhancements will allow computational neuroscientists to perform significantly more comprehensive in silico experiments within the tight limits of available supercomputer resources. Studies on the relation between network structure and dynamics will benefit especially, since these typically require the randomized instantiation of large numbers of networks. Experiments scanning network parameter space will benefit equally. Finally, by exploiting energy-hungry supercomputer resources more efficiently, our work also helps to reduce the overall energy consumption and thus the carbon footprint of computational neuroscience.
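
    The effect of the refactored wiring loop can be illustrated with a small, self-contained C++ sketch; it is illustrative only and does not reproduce NEST's connection data structures or API. Target neurons are assigned to threads round-robin, each thread iterates only over the targets it owns instead of scanning all targets and skipping non-local ones, and connections are stored in per-thread containers so that wiring needs no locks.

        // Illustrative sketch only -- not NEST's wiring code.
        // Each thread owns the targets with gid % num_threads == tid and builds
        // connections only for those targets.
        #include <cstdio>
        #include <thread>
        #include <vector>

        struct Connection { int source; int target; };

        int main() {
          const int num_neurons = 100000;
          const int num_threads = std::thread::hardware_concurrency()
                                      ? std::thread::hardware_concurrency() : 4;

          // Per-thread connection storage: each thread allocates into its own
          // vector, keeping wiring lock-free and allocator contention low.
          std::vector<std::vector<Connection>> conns(num_threads);

          auto wire = [&](int tid) {
            // Iterate over locally owned targets only (round-robin ownership).
            for (int target = tid; target < num_neurons; target += num_threads) {
              // Toy connection rule: connect each target to its two predecessors.
              for (int offset = 1; offset <= 2; ++offset) {
                int source = (target - offset + num_neurons) % num_neurons;
                conns[tid].push_back({source, target});
              }
            }
          };

          std::vector<std::thread> workers;
          for (int tid = 0; tid < num_threads; ++tid)
            workers.emplace_back(wire, tid);
          for (auto& w : workers)
            w.join();

          std::size_t total = 0;
          for (const auto& c : conns)
            total += c.size();
          std::printf("threads: %d, connections created: %zu\n", num_threads, total);
          return 0;
        }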

    Constructing Neuronal Network Models in Massively Parallel Environments

    No full text
    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
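
    The allocator bottleneck can be made visible with a stand-alone C++ micro-benchmark, sketched below under simplifying assumptions; it is not the benchmark used in this work. Many threads perform many small heap allocations concurrently, mimicking the creation of connection objects during wiring. Running it once with the system's default malloc and once with a thread-optimized allocator such as jemalloc or tcmalloc (for example by preloading the allocator library on Linux) shows the difference in scaling without any code changes.

        // Sketch of a micro-benchmark for thread-parallel small allocations,
        // mimicking the allocation pattern of parallel network wiring.
        // Compare runs with the default malloc and with a thread-optimized
        // allocator (e.g. jemalloc or tcmalloc) substituted at load time.
        #include <chrono>
        #include <cstdio>
        #include <memory>
        #include <thread>
        #include <vector>

        struct Connection { int source; int target; double weight; double delay; };

        int main() {
          const int num_threads = 8;              // vary to study scaling
          const int allocs_per_thread = 1000000;  // small objects per thread

          auto work = [&]() {
            std::vector<std::unique_ptr<Connection>> local;
            local.reserve(allocs_per_thread);
            for (int i = 0; i < allocs_per_thread; ++i)
              local.push_back(std::make_unique<Connection>(Connection{i, i + 1, 1.0, 1.5}));
          };  // objects are freed here, again concurrently across threads

          const auto start = std::chrono::steady_clock::now();
          std::vector<std::thread> workers;
          for (int t = 0; t < num_threads; ++t)
            workers.emplace_back(work);
          for (auto& w : workers)
            w.join();
          const auto stop = std::chrono::steady_clock::now();

          const double seconds = std::chrono::duration<double>(stop - start).count();
          std::printf("%d threads, %d allocations each: %.2f s\n",
                      num_threads, allocs_per_thread, seconds);
          return 0;
        }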

    NESTML: a modeling language for spiking neurons

    Get PDF
    Biological nervous systems exhibit astonishing complexity. Neuroscientists aim to capture this complexity by modeling and simulation of biological processes. Often very complex models are necessary to depict the processes, which makes it difficult to create these models. Powerful tools are thus necessary, which enable neuroscientists to express models in a comprehensive and concise way and generate efficient code for digital simulations. Several modeling languages for computational neuroscience have been proposed [Gl10, Ra11]. However, as these languages seek simulator independence they typically only support a subset of the features desired by the modeler. In this article, we present the modular and extensible domain-specific language NESTML, which provides neuroscience domain concepts as first-class language constructs and supports domain experts in creating neuron models for the neural simulation tool NEST. NESTML and a set of example models are publicly available on GitHub.

    Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Get PDF
    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
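
    The idea of a two-tier, sparsity-aware connection infrastructure can be sketched conceptually in C++; the sketch below illustrates the principle only and is not NEST's actual implementation. The first tier holds an entry only for source neurons that have at least one target on the local compute node, and the second tier holds those local targets, so a spike from a source without local targets costs at most a single lookup here; with directed communication such a spike would not even be sent to this node.

        // Conceptual sketch of a sparsity-aware, two-tier connection table on one
        // compute node -- not NEST's actual data structures.
        #include <cstdio>
        #include <unordered_map>
        #include <vector>

        struct LocalTarget { int local_neuron; double weight; double delay; };

        class TwoTierTable {
         public:
          void add_connection(int source_gid, LocalTarget t) {
            targets_[source_gid].push_back(t);  // tier-1 entry created on demand
          }

          // Deliver a spike of a (possibly remote) source neuron to its local
          // targets; sources without local targets cost a single hash lookup.
          void deliver_spike(int source_gid) const {
            auto it = targets_.find(source_gid);
            if (it == targets_.end())
              return;                           // no local targets: nothing to do
            for (const LocalTarget& t : it->second)
              std::printf("  deliver to neuron %d (w=%.1f, d=%.1f)\n",
                          t.local_neuron, t.weight, t.delay);
          }

          std::size_t sources_with_local_targets() const { return targets_.size(); }

         private:
          // Tier 1: sparse map over source gids; tier 2: their local targets.
          std::unordered_map<int, std::vector<LocalTarget>> targets_;
        };

        int main() {
          TwoTierTable table;
          // Only two of the (conceptually many) source neurons project to this node.
          table.add_connection(7,  {0, 1.0, 1.5});
          table.add_connection(7,  {3, 0.5, 1.5});
          table.add_connection(42, {1, 2.0, 1.0});

          std::printf("sources with local targets: %zu\n",
                      table.sources_with_local_targets());
          table.deliver_spike(7);     // has local targets
          table.deliver_spike(1000);  // no local targets on this node
          return 0;
        }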