Usage and Scaling of an Open-Source Spiking Multi-Area Model of Monkey Cortex
We are entering an age of 'big' computational neuroscience, in which neural
network models are increasing both in size and in the number of underlying data sets.
Consolidating the zoo of models into large-scale models simultaneously
consistent with a wide range of data is only possible through the effort of
large teams, which can be spread across multiple research institutions. To
ensure that computational neuroscientists can build on each other's work, it is
important to make models publicly available as well-documented code. This
chapter describes such an open-source model, which relates the connectivity
structure of all vision-related cortical areas of the macaque monkey with their
resting-state dynamics. We give a brief overview of how to use the executable
model specification, which employs NEST as simulation engine, and show its
runtime scaling. The solutions found serve as an example for organizing the
workflow of future models from the raw experimental data to the visualization
of the results, expose the challenges, and give guidance for the construction
of ICT infrastructure for neuroscience.
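An executable model specification of this kind is typically written in Python against the PyNEST interface. As a flavor of that style of specification, here is a minimal sketch of a toy two-population network; it is not the multi-area model itself, and all sizes, rates, weights, and delays are placeholders:

```python
import nest  # PyNEST interface; names below follow NEST 3.x

nest.ResetKernel()
nest.SetKernelStatus({'resolution': 0.1})  # integration step in ms

# Toy stand-in for a single area; the real model spans all 32
# vision-related areas of macaque cortex with realistic population sizes.
exc = nest.Create('iaf_psc_exp', 800)
inh = nest.Create('iaf_psc_exp', 200)
noise = nest.Create('poisson_generator', params={'rate': 8000.0})

# Random convergent connectivity; weights (pA) and delays (ms) are invented.
conn = {'rule': 'fixed_indegree', 'indegree': 100}
nest.Connect(exc, exc + inh, conn, {'weight': 20.0, 'delay': 1.5})
nest.Connect(inh, exc + inh, conn, {'weight': -100.0, 'delay': 1.5})
nest.Connect(noise, exc + inh, syn_spec={'weight': 10.0})

spikes = nest.Create('spike_recorder')  # 'spike_detector' in NEST 2.x
nest.Connect(exc, spikes)

nest.Simulate(1000.0)  # 1 s of resting-state-like activity
```

Runtime scaling of the full model is then probed by distributing such a specification over threads and MPI processes via NEST's kernel settings.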
Characterization and optimization of network traffic in cortical simulation
Considering the great variety of obstacles that Exascale systems will have to
face in the near future, this thesis devotes particular attention to the
interconnect and to power consumption.
The data movement challenge involves the whole hierarchical organization
of components in HPC systems — i.e. registers, cache, memory, disks.
Running scientific applications efficiently requires the most effective methods
of data transport among the levels of this hierarchy. On current petaflop systems,
memory access at all levels is the limiting factor in almost all applications.
This drives the requirement for an interconnect achieving adequate rates of
data transfer, or throughput, and reducing time delays, or latency, between
the levels.
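A standard way to reason about these two quantities together is the alpha-beta cost model, in which moving n bytes costs T(n) = alpha + n/beta for latency alpha and bandwidth beta. The sketch below uses invented parameter values purely to illustrate how latency dominates small transfers, the regime typical of spike exchange in cortical simulations:

```python
# Alpha-beta communication model: T(n) = alpha + n / beta.
# Both parameter values are assumptions for illustration, not measurements.
ALPHA = 1e-6   # per-message latency in seconds (assumed 1 us)
BETA = 10e9    # link bandwidth in bytes per second (assumed 10 GB/s)

def transfer_time(n_bytes: float) -> float:
    """Time to move n_bytes over one link under the alpha-beta model."""
    return ALPHA + n_bytes / BETA

for n in (100, 10_000, 1_000_000):
    t = transfer_time(n)
    print(f"{n:>9} B: {t * 1e6:8.2f} us, "
          f"effective bandwidth {n / t / 1e9:5.2f} GB/s")
```

For 100-byte messages the effective bandwidth here stays below 0.1 GB/s, two orders of magnitude under the link's nominal rate, which is why reducing latency matters as much as raising raw throughput.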
Power consumption is identified as the largest hardware research challenge.
The annual cost of powering an Exascale system built with current technology
would exceed $2.5 billion. Research into alternative power-efficient computing
devices is therefore mandatory for the procurement of future HPC systems.
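As a rough sanity check on that figure, the arithmetic below reproduces its order of magnitude; the assumed electricity price and power draw are illustrative, not numbers from the thesis:

```python
# Back-of-the-envelope annual power cost; both inputs are assumptions.
HOURS_PER_YEAR = 24 * 365      # 8760 hours
PRICE_PER_KWH = 0.10           # USD, assumed industrial electricity rate
POWER_MW = 2_900               # assumed sustained draw of an Exascale system
                               # built on current technology

annual_kwh = POWER_MW * 1_000 * HOURS_PER_YEAR
annual_cost_usd = annual_kwh * PRICE_PER_KWH
print(f"annual energy cost: ${annual_cost_usd / 1e9:.2f} billion")  # ~$2.54 B
```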
In this thesis, a preliminary approach to the critical process of co-design is
also offered. Co-design is defined as the simultaneous design of both hardware
and software to implement a desired function. This process both integrates
all components of the Exascale initiative and illuminates the trade-offs that
must be made within this complex undertaking.
Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure
Simulating the brain-body-environment trinity in closed loop is an attractive proposal
to investigate how perception, motor activity and interactions with the environment
shape brain activity, and vice versa. The relevance of this embodied approach, however,
hinges entirely on the modeled complexity of the various simulated phenomena. In this
article, we introduce a software framework that is capable of simulating large-scale,
biologically realistic networks of spiking neurons embodied in a biomechanically accurate
musculoskeletal system that interacts with a physically realistic virtual environment. We
deploy this framework on the high performance computing resources of the EBRAINS
research infrastructure and we investigate the scaling performance by distributing
computation across an increasing number of interconnected compute nodes. Our
architecture is based on requested compute nodes as well as persistent virtual machines;
this provides a high-performance simulation environment that is accessible to multi-domain
users without expert knowledge, with a view to enabling users to instantiate
and control simulations at custom scale via a web-based graphical user interface. Our
simulation environment, entirely open source, is based on the Neurorobotics Platform
developed in the context of the Human Brain Project, and the NEST simulator. We
characterize the capabilities of our parallelized architecture for large-scale embodied
brain simulations through two benchmark experiments, by investigating the effects of
scaling compute resources on performance, defined in terms of experiment runtime, brain instantiation time, and simulation time. The first benchmark is based on a large-scale
balanced network, while the second one is a multi-region embodied brain
simulation consisting of more than a million neurons and a billion synapses. Both
benchmarks clearly show how scaling compute resources improves the aforementioned
performance metrics in a near-linear fashion. The second benchmark in particular is
indicative of both the potential and limitations of a highly distributed simulation in
terms of a trade-off between computation speed and resource cost. Our simulation
architecture is being prepared to be accessible for everyone as an EBRAINS service,
thereby offering a community-wide tool with a unique workflow that should provide
momentum to the investigation of closed-loop embodiment within the computational
neuroscience community.
Funding: European Union's Horizon 2020 Framework Programme (grants 785907,
945539, 800858); MEXT (hp200139, hp210169); MEXT KAKENHI grant no. 17H06310.
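To make "near-linear" concrete: reducing measured runtimes to speedup and parallel efficiency is the usual way to read such benchmarks. The numbers below are placeholders, not the article's measurements:

```python
# Hypothetical benchmark runtimes (seconds) keyed by compute-node count.
runtimes = {1: 3600.0, 2: 1850.0, 4: 960.0, 8: 510.0}

baseline = runtimes[1]
for nodes, seconds in sorted(runtimes.items()):
    speedup = baseline / seconds   # ideal: equal to the node count
    efficiency = speedup / nodes   # ideal: 1.0; near-linear: close to 1
    print(f"{nodes:2d} nodes: speedup {speedup:5.2f}, efficiency {efficiency:4.2f}")
```

Falling efficiency at high node counts is exactly the computation-speed versus resource-cost trade-off that the second benchmark exposes.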
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and an increase in reliability by usage of the same
simulation code and the same network model specifications for both model
classes. While the most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation.
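The reference implementation described here was contributed to NEST; the sketch below shows how such rate units might be created and coupled. The model name lin_rate_ipn and the connection types rate_connection_instantaneous and rate_connection_delayed follow the naming this work introduced, but exact names and parameters should be checked against the NEST version in use:

```python
import nest

nest.ResetKernel()
# Waveform relaxation handles instantaneous (zero-delay) couplings,
# the same mechanism NEST uses for gap junctions.
nest.SetKernelStatus({'resolution': 0.1, 'use_wfr': True})

# A small population of linear rate units driven by Gaussian white noise.
units = nest.Create('lin_rate_ipn', 50,
                    {'tau': 10.0, 'mu': 2.0, 'sigma': 1.0})

# Delayed rate-based coupling; zero-delay coupling would instead use
# 'rate_connection_instantaneous', which carries no delay parameter.
nest.Connect(units, units,
             {'rule': 'fixed_indegree', 'indegree': 5},
             {'synapse_model': 'rate_connection_delayed',
              'weight': -0.2, 'delay': 1.0})

# The continuous rate variable is recorded with a multimeter,
# not a spike recorder.
mm = nest.Create('multimeter', params={'record_from': ['rate'],
                                       'interval': 0.1})
nest.Connect(mm, units)

nest.Simulate(500.0)
```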
Simulation Intelligence: Towards a New Generation of Scientific Methods
The original "Seven Motifs" set forth a roadmap of essential methods for the
field of scientific computing, where a motif is an algorithmic method that
captures a pattern of computation and data movement. We present the "Nine
Motifs of Simulation Intelligence", a roadmap for the development and
integration of the essential algorithms necessary for a merger of scientific
computing, scientific simulation, and artificial intelligence. We call this
merger simulation intelligence (SI), for short. We argue the motifs of
simulation intelligence are interconnected and interdependent, much like the
components within the layers of an operating system. Using this metaphor, we
explore the nature of each layer of the simulation intelligence operating
system stack (SI-stack) and the motifs therein: (1) Multi-physics and
multi-scale modeling; (2) Surrogate modeling and emulation; (3)
Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based
modeling; (6) Probabilistic programming; (7) Differentiable programming; (8)
Open-ended optimization; (9) Machine programming. We believe coordinated
efforts between motifs offer immense opportunity to accelerate scientific
discovery, from solving inverse problems in synthetic biology and climate
science, to directing nuclear energy experiments and predicting emergent
behavior in socioeconomic settings. We elaborate on each layer of the SI-stack,
detailing the state-of-the-art methods, presenting examples to highlight challenges
and opportunities, and advocating for specific ways to advance the motifs and
the synergies from their combinations. Advancing and integrating these
technologies can enable a robust and efficient hypothesis-simulation-analysis
type of scientific method, which we introduce with several use-cases for
human-machine teaming and automated science.
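As a toy illustration of a single motif, surrogate modeling and emulation (motif 2): the sketch below replaces an "expensive" simulator with a cheap polynomial emulator fitted to a handful of runs. The simulator function and polynomial degree are invented for illustration:

```python
import numpy as np

def simulator(x: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive simulator (a real one might take hours)."""
    return np.sin(3 * x) * np.exp(-0.5 * x)

# Run the expensive simulator at a few design points only.
x_train = np.linspace(0.0, 4.0, 12)
y_train = simulator(x_train)

# Fit a cheap polynomial surrogate to those runs.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=7))

# The surrogate can now be queried densely at negligible cost,
# e.g. inside an outer optimization or inference loop.
x_dense = np.linspace(0.0, 4.0, 400)
max_err = np.max(np.abs(surrogate(x_dense) - simulator(x_dense)))
print(f"max surrogate error on [0, 4]: {max_err:.3e}")
```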
A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture
Simulator interoperability and extensibility have become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsolete components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.
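To illustrate the kind of contract such an architecture imposes, the sketch below defines a hypothetical module interface with declared inputs and outputs; all names are invented for illustration and are not the CBI architecture's actual API:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class SimulatorModule(ABC):
    """Hypothetical stand-alone module in a layered, federated simulator.

    Each module documents its inputs and outputs (advantage 2 above),
    can be tested in isolation (advantage 4), and can be removed or
    replaced without touching the rest of the system (advantage 3).
    """

    # Declared interface: name -> data format, e.g. {'model': 'NeuroML'}.
    inputs: Dict[str, str] = {}
    outputs: Dict[str, str] = {}

    @abstractmethod
    def run(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """Consume the declared inputs and produce the declared outputs."""

class ModelTranslator(SimulatorModule):
    """High-level layer: biological concepts in, numerical values out."""
    inputs = {'model_description': 'NeuroML'}
    outputs = {'parameter_arrays': 'float64'}

    def run(self, data):
        # Translation from the declarative description to solver-ready
        # numbers would happen here; omitted in this sketch.
        raise NotImplementedError
```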
Brain-Inspired Computing
This open access book constitutes revised selected papers from the 4th International Workshop on Brain-Inspired Computing, BrainComp 2019, held in Cetraro, Italy, in July 2019. The 11 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They deal with research on brain atlasing, multi-scale models and simulation, HPC and data infrastructures for neuroscience, as well as artificial and natural neural architectures.
Bringing Anatomical Information into Neuronal Network Models
For constructing neuronal network models, computational neuroscientists have
access to wide-ranging anatomical data that nevertheless tend to cover only a
fraction of the parameters to be determined. Finding and interpreting the most
relevant data, estimating missing values, and combining the data and estimates
from various sources into a coherent whole is a daunting task. With this
chapter we aim to provide guidance to modelers by describing the main types of
anatomical data that may be useful for informing neuronal network models. We
further discuss aspects of the underlying experimental techniques relevant to
the interpretation of the data, list particularly comprehensive data sets, and
describe methods for filling in the gaps in the experimental data. Such methods
of `predictive connectomics' estimate connectivity where the data are lacking
based on statistical relationships with known quantities. It is instructive,
and in certain cases necessary, to use organizational principles that link the
plethora of data within a unifying framework where regularities of brain
structure can be exploited to inform computational models. In addition, we
touch upon the most prominent features of brain organization that are likely to
influence predicted neuronal network dynamics, with a focus on the mammalian
cerebral cortex. Given the continued need for modelers to navigate a complex
data landscape full of holes and stumbling blocks, it is vital that the field
of neuroanatomy keep moving toward increasingly systematic data collection,
representation, and publication.
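A concrete example of such predictive connectomics is the exponential distance rule, under which inter-areal connection density falls off exponentially with the distance separating two areas. The sketch below fits the rule to known connections and uses it to estimate a missing one; all data values are invented placeholders:

```python
import numpy as np

# Known (distance in mm, connection density) pairs from tracing data;
# the values here are invented, not published measurements.
dist = np.array([5.0, 10.0, 15.0, 25.0, 40.0])
density = np.array([0.30, 0.12, 0.05, 0.008, 0.0004])

# Exponential distance rule: density ~ c * exp(-lambda * distance).
# Fitting reduces to linear regression in log space.
slope, log_c = np.polyfit(dist, np.log(density), deg=1)

def predict(d_mm: float) -> float:
    """Predicted connection density between areas d_mm apart."""
    return float(np.exp(log_c + slope * d_mm))

print(f"decay constant: {-slope:.3f} per mm")
print(f"predicted density for an unmeasured 20 mm pair: {predict(20.0):.4f}")
```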