Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and increased reliability through use of the same
simulation code and the same network model specifications for both model
classes. While most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation.
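The delayed, time-continuous interactions described above can be sketched with a plain Euler scheme. The function below is an illustrative toy, not the simulator's actual implementation: the function name, the uniform delay, and the noise term are all assumptions for demonstration.

```python
import numpy as np

def simulate_rate_network(W, tau=10.0, d=5, dt=0.1, steps=1000, sigma=0.01, seed=0):
    """Euler integration of a linear rate network with one uniform
    transmission delay d (in steps):
        tau * dr/dt = -r(t) + W @ r(t - d) + noise.
    Toy sketch only; a real simulator handles per-connection delays and
    exchanges rates through its general communication infrastructure."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    r = np.zeros((steps + 1, n))          # full rate history
    r[0] = rng.uniform(0.0, 0.1, n)       # small random initial rates
    for t in range(steps):
        r_delayed = r[max(t - d, 0)]      # presynaptic rates at t - d
        drift = (-r[t] + W @ r_delayed) / tau
        r[t + 1] = r[t] + dt * drift + np.sqrt(dt) * sigma * rng.standard_normal(n)
    return r
```

With a connectivity matrix whose spectral radius is below one, the rates stay bounded, which is the regime in which mean-field comparisons with spiking simulations are typically made.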
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects to middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, MTC applications are by definition structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
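The task-graph structure that defines MTC applications can be illustrated with a minimal dependency scheduler. The function `schedule_tasks` and the example task names below are hypothetical, not part of any Blue Waters software; the sketch only shows how explicit input/output dependencies induce an execution order.

```python
from collections import deque

def schedule_tasks(deps):
    """Kahn's algorithm: given deps mapping task -> set of prerequisite
    tasks, return an execution order respecting all dependencies.
    Raises ValueError if the graph contains a cycle."""
    indeg = {t: len(p) for t, p in deps.items()}
    children = {t: [] for t in deps}
    for task, prereqs in deps.items():
        for p in prereqs:
            children[p].append(task)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order
```

A real MTC middleware layer would additionally dispatch ready tasks to compute nodes with minimal overhead and route data between them, which is exactly where the engineering constraints discussed above arise.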
Closed loop interactions between spiking neural network and robotic simulators based on MUSIC and ROS
In order to properly assess the function and computational properties of
simulated neural systems, it is necessary to account for the nature of the
stimuli that drive the system. However, providing stimuli that are rich and yet
both reproducible and amenable to experimental manipulations is technically
challenging, and even more so if a closed-loop scenario is required. In this
work, we present a novel approach to solve this problem, connecting robotics
and neural network simulators. We implement a middleware solution that bridges
the Robot Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC).
This enables any robotic and neural simulators that implement the corresponding
interfaces to be efficiently coupled, allowing real-time performance for a wide
range of configurations. This work extends the toolset available for
researchers in both neurorobotics and computational neuroscience, and creates
the opportunity to perform closed-loop experiments of arbitrary complexity to
address questions in multiple areas, including embodiment, agency, and
reinforcement learning.
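The closed-loop coupling described above can be caricatured in a few lines. The toy below stands in for the actual ROS-MUSIC bridge: the function name, the trivial proportional "network", and the one-dimensional plant are all illustrative assumptions, meant only to show the sense-encode-compute-act cycle that runs each tick.

```python
def closed_loop(steps=50, gain=0.5, dt=0.1, target=1.0):
    """Toy stand-in for a ROS<->MUSIC closed loop: each tick, a sensor
    value is read from the 'robot', encoded as a firing rate, passed
    through a trivial 'network', decoded into a motor command, and fed
    back into the plant."""
    state = 0.0                    # robot state (e.g. a joint position)
    trace = []
    for _ in range(steps):
        sensor = target - state    # sensed error (robotics side)
        rate = max(0.0, sensor)    # rate-coded stimulus (neural side)
        motor = gain * rate        # network output decoded to a command
        state += dt * motor        # plant integrates the command
        trace.append(state)
    return trace
```

Even in this trivial form, the loop shows why reproducibility matters: the stimulus seen by the "network" depends on its own past output, so the whole loop, not just the stimulus, must be controlled in experiments.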
Usage and Scaling of an Open-Source Spiking Multi-Area Model of Monkey Cortex
We are entering an age of 'big' computational neuroscience, in which neural
network models are increasing in size and in numbers of underlying data sets.
Consolidating the zoo of models into large-scale models simultaneously
consistent with a wide range of data is only possible through the effort of
large teams, which can be spread across multiple research institutions. To
ensure that computational neuroscientists can build on each other's work, it is
important to make models publicly available as well-documented code. This
chapter describes such an open-source model, which relates the connectivity
structure of all vision-related cortical areas of the macaque monkey with their
resting-state dynamics. We give a brief overview of how to use the executable
model specification, which employs NEST as its simulation engine, and show its
runtime scaling. The solutions found serve as an example for organizing the
workflow of future models from the raw experimental data to the visualization
of the results, expose the challenges, and give guidance for the construction
of ICT infrastructure for neuroscience.
GeNN: a code generation framework for accelerated brain simulations
Large-scale numerical simulations of detailed brain circuit models are important for formulating hypotheses about brain function and testing their consistency and plausibility. An ongoing challenge for simulating realistic models, however, is computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface that does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, though the speedup differs for other models.
GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials,
Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
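The code-generation idea behind a framework like GeNN can be sketched with a toy generator that emits a C-style per-neuron update loop from a small model description. This is an illustrative simplification written for this summary, not GeNN's real API or output format.

```python
def generate_update_kernel(model_name, state_vars, update_code):
    """Emit C-style source for a per-neuron update loop from a small
    model description -- a toy of the code-generation step a framework
    like GeNN performs when turning model definitions into GPU kernels."""
    params = ", ".join(f"float *{v}_buf" for v in state_vars)
    loads = "\n".join(f"    float {v} = {v}_buf[i];" for v in state_vars)
    stores = "\n".join(f"    {v}_buf[i] = {v};" for v in state_vars)
    return (
        f"void update_{model_name}(int n, {params}, float dt) {{\n"
        "  for (int i = 0; i < n; ++i) {\n"
        f"{loads}\n"
        f"    {update_code}\n"
        f"{stores}\n"
        "  }\n"
        "}\n"
    )

# Example: a leaky integrator with a single state variable V.
kernel_src = generate_update_kernel("lif", ["V"], "V += dt * (-V / 10.0f);")
```

Generating source from a declarative model description is what lets such a framework target GPUs without requiring CUDA expertise from the user.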
Linking brain structure, activity and cognitive function through computation
Understanding the human brain is a “Grand Challenge” for 21st century research. Computational approaches enable large and complex datasets to be addressed efficiently, supported by artificial neural networks, modeling and simulation. Dynamic generative multiscale models, which enable the investigation of causation across scales and are guided by principles and theories of brain function, are instrumental for linking brain structure and function. An example of a resource enabling such an integrated approach to neuroscientific discovery is the BigBrain, which spatially anchors tissue models and data across different scales and ensures that multiscale models are supported by the data, making the bridge to both basic neuroscience and medicine. Research at the intersection of neuroscience, computing and robotics has the potential to advance neuro-inspired technologies by taking advantage of a growing body of insights into perception, plasticity and learning. To render data, tools and methods, theories, basic principles and concepts interoperable, the Human Brain Project (HBP) has launched EBRAINS, a digital neuroscience research infrastructure, which brings together a transdisciplinary community of researchers united by the quest to understand the brain, with fascinating insights and perspectives for societal benefits.
Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure
Simulating the brain-body-environment trinity in closed loop is an attractive proposal
to investigate how perception, motor activity and interactions with the environment
shape brain activity, and vice versa. The relevance of this embodied approach, however,
hinges entirely on the modeled complexity of the various simulated phenomena. In this
article, we introduce a software framework that is capable of simulating large-scale,
biologically realistic networks of spiking neurons embodied in a biomechanically accurate
musculoskeletal system that interacts with a physically realistic virtual environment. We
deploy this framework on the high performance computing resources of the EBRAINS
research infrastructure and we investigate the scaling performance by distributing
computation across an increasing number of interconnected compute nodes. Our
architecture is based on requested compute nodes as well as persistent virtual machines;
this provides a high-performance simulation environment that is accessible to multi-domain
users without expert knowledge, with a view to enabling users to instantiate
and control simulations at custom scale via a web-based graphical user interface. Our
simulation environment, entirely open source, is based on the Neurorobotics Platform
developed in the context of the Human Brain Project, and the NEST simulator. We
characterize the capabilities of our parallelized architecture for large-scale embodied
brain simulations through two benchmark experiments, by investigating the effects of
scaling compute resources on performance defined in terms of experiment runtime, brain instantiation and simulation time. The first benchmark is based on a large-scale
balanced network, while the second one is a multi-region embodied brain
simulation consisting of more than a million neurons and a billion synapses. Both
benchmarks clearly show how scaling compute resources improves the aforementioned
performance metrics in a near-linear fashion. The second benchmark in particular is
indicative of both the potential and limitations of a highly distributed simulation in
terms of a trade-off between computation speed and resource cost. Our simulation
architecture is being prepared to be accessible for everyone as an EBRAINS service,
thereby offering a community-wide tool with a unique workflow that should provide
momentum to the investigation of closed-loop embodiment within the computational
neuroscience community.
Funding: European Union's Horizon 2020 Framework Programme (785907, 945539); European Union's Horizon 2020 (800858); MEXT (hp200139, hp210169); MEXT KAKENHI grant no. 17H06310.
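Near-linear scaling of the kind reported in this abstract is usually quantified as strong-scaling speedup and parallel efficiency. The helper and the node counts and runtimes below are illustrative numbers invented for this sketch, not measurements from the article.

```python
def scaling_metrics(nodes, runtimes):
    """Strong-scaling speedup and parallel efficiency relative to the
    smallest allocation: nodes[i] compute nodes took runtimes[i] seconds
    of wall clock for the same fixed-size problem."""
    base_nodes, base_time = nodes[0], runtimes[0]
    speedup = [base_time / t for t in runtimes]
    efficiency = [s * base_nodes / n for s, n in zip(speedup, nodes)]
    return speedup, efficiency

# Illustrative numbers only (not measurements from the article):
speedup, efficiency = scaling_metrics([1, 2, 4], [100.0, 52.0, 27.0])
```

Efficiency close to 1.0 across allocations is what "near-linear" means here; the trade-off the second benchmark exposes is that each additional node buys less runtime reduction while costing the same, so efficiency, not raw speedup, sets the sensible resource budget.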