Closed loop interactions between spiking neural network and robotic simulators based on MUSIC and ROS
In order to properly assess the function and computational properties of
simulated neural systems, it is necessary to account for the nature of the
stimuli that drive the system. However, providing stimuli that are rich and yet
both reproducible and amenable to experimental manipulations is technically
challenging, and even more so if a closed-loop scenario is required. In this
work, we present a novel approach to solve this problem, connecting robotics
and neural network simulators. We implement a middleware solution that bridges
the Robot Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC).
This enables any robotic and neural simulators that implement the corresponding
interfaces to be efficiently coupled, allowing real-time performance for a wide
range of configurations. This work extends the toolset available for
researchers in both neurorobotics and computational neuroscience, and creates
the opportunity to perform closed-loop experiments of arbitrary complexity to
address questions in multiple areas, including embodiment, agency, and
reinforcement learning.
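The closed-loop pattern this abstract describes (robot → sensory encoding → spiking network → motor decoding → robot) can be illustrated with a deliberately minimal, self-contained sketch. This is not the MUSIC/ROS API: the threshold-accumulation encoder, the identity "network", and all names and parameters below are illustrative assumptions.

```python
def encode_rate(sensor_value, n_steps=10, gain=0.9):
    """Hypothetical encoder: turn a scalar in [0, 1] into a deterministic
    spike train (1 = spike) by threshold accumulation."""
    spikes, acc = [], 0.0
    for _ in range(n_steps):
        acc += gain * sensor_value
        if acc >= 1.0:
            spikes.append(1)
            acc -= 1.0
        else:
            spikes.append(0)
    return spikes

def decode_rate(spikes):
    """Decode the spike count back into a motor command in [0, 1]."""
    return sum(spikes) / len(spikes)

def closed_loop(initial_position=0.8, target=0.0, steps=50):
    """Toy closed loop: the 'robot' position is driven toward the target by a
    motor command derived from the spike-encoded error signal. The spiking
    network is replaced by an identity pass-through for illustration."""
    pos = initial_position
    for _ in range(steps):
        error = abs(pos - target)              # sensor reading
        spikes = encode_rate(min(error, 1.0))  # sensory encoding
        command = decode_rate(spikes)          # 'network' output (identity)
        pos -= 0.5 * command * (1 if pos > target else -1)
        pos = max(min(pos, 1.0), -1.0)
    return pos
```

The point of the sketch is the data flow, not the components: in the real middleware, each arrow of the loop crosses a ROS topic or a MUSIC port, and encoder, network, and decoder can live in different simulators running concurrently.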
Simulation of networks of spiking neurons: A review of tools and strategies
We review different aspects of the simulation of spiking neural networks. We
start by reviewing the different types of simulation strategies and algorithms
that are currently implemented. We next review the precision of those
simulation strategies, in particular in cases where plasticity depends on the
exact timing of the spikes. We overview different simulators and simulation
environments presently available (restricted to those freely available, open
source and documented). For each simulation tool, its advantages and pitfalls
are reviewed, with an aim to allow the reader to identify which simulator is
appropriate for a given task. Finally, we provide a series of benchmark
simulations of different types of networks of spiking neurons, including
Hodgkin-Huxley type, integrate-and-fire models, interacting with current-based
or conductance-based synapses, using clock-driven or event-driven integration
strategies. The same set of models is implemented on the different simulators,
and the codes are made available. The ultimate goal of this review is to
provide a resource to facilitate identifying the appropriate integration
strategy and simulation tool to use for a given modeling problem related to
spiking neural networks.
Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
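The clock-driven versus event-driven distinction reviewed above can be sketched for a single leaky integrate-and-fire neuron: the clock-driven variant advances the membrane on a fixed time grid, while the event-driven variant jumps from one input spike to the next using the closed-form exponential decay. The parameters (τ = 20 ms, threshold 1, weight 0.6) are arbitrary illustrations, not taken from the benchmarks.

```python
import math

TAU = 20.0   # membrane time constant (ms), illustrative
V_TH = 1.0   # spike threshold
W = 0.6      # synaptic weight per input spike

def clock_driven(input_times, t_end, dt=0.1):
    """Advance the membrane on a fixed grid; inputs are binned to grid points."""
    v, out = 0.0, []
    inputs = sorted(input_times)
    i = 0
    for k in range(int(round(t_end / dt))):
        t = (k + 1) * dt
        v *= math.exp(-dt / TAU)          # exact decay over one step
        while i < len(inputs) and inputs[i] <= t:
            v += W
            i += 1
        if v >= V_TH:
            out.append(t)
            v = 0.0                        # reset after spike
    return out

def event_driven(input_times, t_end):
    """Jump directly between input spikes using the closed-form decay."""
    v, t_last, out = 0.0, 0.0, []
    for t in sorted(input_times):
        if t > t_end:
            break
        v *= math.exp(-(t - t_last) / TAU)  # exact decay between events
        t_last = t
        v += W
        if v >= V_TH:
            out.append(t)
            v = 0.0
    return out
```

Because the clock-driven variant here also applies the exact exponential decay per step, the two schemes agree up to the grid resolution; with a forward-Euler update instead, the discrepancy would grow with dt, which is precisely the precision issue the review examines for spike-timing-dependent plasticity.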
Phenomenological modeling of diverse and heterogeneous synaptic dynamics at natural density
This chapter sheds light on the synaptic organization of the brain from the
perspective of computational neuroscience. It provides an introductory overview
on how to account for empirical data in mathematical models, implement them in
software, and perform simulations reflecting experiments. This path is
demonstrated with respect to four key aspects of synaptic signaling: the
connectivity of brain networks, synaptic transmission, synaptic plasticity, and
the heterogeneity across synapses. Each step and aspect of the modeling and
simulation workflow comes with its own challenges and pitfalls, which are
highlighted and addressed in detail.
Comment: 35 pages, 3 figures.
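As an example of the kind of phenomenological synapse model the chapter describes, short-term synaptic dynamics are often captured with a Tsodyks-Markram-style description, in which each presynaptic spike releases a fraction u (facilitation variable) of the remaining resources x (depression variable). The sketch and parameter values below are illustrative, not taken from the chapter.

```python
import math

def tsodyks_markram(spike_times, U=0.2, tau_rec=500.0, tau_fac=1000.0, A=1.0):
    """Per-spike synaptic efficacies under short-term facilitation (u) and
    depression (x). Times in ms; parameters are illustrative, not fitted."""
    u, x, t_prev = 0.0, 1.0, None
    amps = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            u *= math.exp(-dt / tau_fac)                    # facilitation decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resources recover
        u += U * (1.0 - u)        # each spike increments the release fraction
        amps.append(A * u * x)    # amplitude released by this spike
        x -= u * x                # resources consumed by the release
        t_prev = t
    return amps
```

With a slow facilitation time constant the early responses in a train grow (facilitating synapse); with a large U and fast-decaying u, depression dominates and successive responses shrink, illustrating how one phenomenological form covers the heterogeneity across synapse types.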
GPUs outperform current HPC and neuromorphic solutions in terms of speed and energy when simulating a highly-connected cortical model
While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimisation for specific types of models make them unwieldy tools for developing such models. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field and are now used in 50% of the Top 10 supercomputing sites worldwide. In this paper we use our GeNN code generator to re-implement two neocortex-inspired, circuit-scale, point-neuron network models on GPU hardware. We verify the correctness of our GPU simulations against prior results obtained with NEST running on traditional HPC hardware, and compare the performance with respect to speed and energy consumption against published data from CPU-based HPC and neuromorphic hardware. A full-scale model of a cortical column can be simulated at speeds approaching 0.5× real-time using a single NVIDIA Tesla V100 accelerator – faster than is currently possible using a CPU-based cluster or the SpiNNaker neuromorphic system. In addition, we find that, across a range of GPU systems, the energy to solution as well as the energy per synaptic event of the microcircuit simulation is as much as 14× lower than on either SpiNNaker or in CPU-based simulations. Besides the speed and energy consumption of the simulation itself, efficient initialisation of models is also a crucial concern, particularly in a research context where repeated runs and parameter-space exploration are required. We therefore also introduce some of the novel parallel initialisation methods implemented in the latest version of GeNN and demonstrate how they can enable further speed and energy advantages.
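The bookkeeping behind such speed and energy comparisons reduces to two simple ratios: the real-time factor (biological time per unit wall-clock time) and the energy per synaptic event (energy to solution divided by event count). The numbers in the sketch below are hypothetical placeholders, not the paper's measurements.

```python
def realtime_factor(biological_s, wall_clock_s):
    """Fraction of real time achieved: 1.0 is real-time, 0.5 is half speed."""
    return biological_s / wall_clock_s

def energy_per_synaptic_event(power_w, wall_clock_s, n_events):
    """Energy to solution (power x runtime) divided by synaptic events, in J."""
    return power_w * wall_clock_s / n_events

# Hypothetical illustration (NOT the paper's measurements):
# 10 s of biological time simulated in 20 s of wall-clock time,
# at 300 W average draw, generating 6e9 synaptic events.
rtf = realtime_factor(10.0, 20.0)                  # 0.5x real-time
e = energy_per_synaptic_event(300.0, 20.0, 6e9)    # 1 microjoule per event
```

Note that energy to solution rewards fast completion even at higher power draw, which is how a single GPU can undercut both a large CPU cluster and a low-power neuromorphic system on this metric.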
Neuronal assembly dynamics in supervised and unsupervised learning scenarios
The dynamic formation of groups of neurons, known as neuronal assemblies, is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike-pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network's synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network, and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
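The Kuramoto model of coupled phase oscillators mentioned in the abstract can be sketched directly: each oscillator's phase is pulled toward the population mean field, and the order parameter r in [0, 1] measures synchrony. The network size, coupling values, and frequency distribution below are illustrative assumptions, not the paper's settings.

```python
import math, cmath, random

def simulate_kuramoto(n=50, coupling=2.0, steps=2000, dt=0.01, seed=1):
    """Euler-integrate n all-to-all Kuramoto phase oscillators and return the
    final order parameter r (0 = incoherent, 1 = fully synchronized)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        # mean field: r * e^{i psi} = (1/n) * sum_j e^{i theta_j}
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        # d theta_i / dt = omega_i + K * r * sin(psi - theta_i)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    z = sum(cmath.exp(1j * t) for t in theta) / n
    return abs(z)
```

Sweeping the coupling strength reproduces the model's key property: below a critical coupling the oscillators drift incoherently (r near zero, up to finite-size fluctuations), while above it a synchronized cluster forms and r grows toward 1, which is the knob such a framework uses to tune assembly formation.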
Stochastic Representations of Ion Channel Kinetics and Exact Stochastic Simulation of Neuronal Dynamics
In this paper we provide two representations for stochastic ion channel
kinetics, and compare the performance of exact simulation with a commonly used
numerical approximation strategy. The first representation we present is a
random time change representation, popularized by Thomas Kurtz, with the second
being analogous to a "Gillespie" representation. Exact stochastic algorithms
are provided for the different representations, which are preferable to either
(a) fixed time step or (b) piecewise constant propensity algorithms, which
still appear in the literature. As examples, we provide versions of the exact
algorithms for the Morris-Lecar conductance based model, and detail the error
induced, both in a weak and a strong sense, by the use of approximate
algorithms on this model. We include ready-to-use implementations of the random
time change algorithm in both XPP and Matlab. Finally, through the
consideration of parametric sensitivity analysis, we show how the
representations presented here are useful in the development of further
computational methods. The general representations and simulation strategies
provided here are known in other parts of the sciences, but less so in the
present setting.
Comment: 39 pages, 6 figures, appendix with XPP and Matlab code.
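The "Gillespie" representation referred to above can be illustrated for the simplest case: a population of independent two-state (closed/open) channels with opening rate alpha and closing rate beta. The rates below are illustrative, not those of the Morris-Lecar model treated in the paper.

```python
import random

def gillespie_two_state(n_channels=100, alpha=0.5, beta=1.0,
                        t_end=200.0, seed=7):
    """Exact (Gillespie) simulation of N independent two-state ion channels:
    closed --alpha--> open, open --beta--> closed (rates in 1/ms, illustrative).
    Returns the time-averaged open fraction, which approaches the stationary
    value alpha / (alpha + beta)."""
    rng = random.Random(seed)
    n_open = 0
    t = 0.0
    open_time_integral = 0.0
    while t < t_end:
        a_open = alpha * (n_channels - n_open)   # total closed->open propensity
        a_close = beta * n_open                  # total open->closed propensity
        a_total = a_open + a_close
        tau = rng.expovariate(a_total)           # exact exponential waiting time
        tau = min(tau, t_end - t)                # do not integrate past t_end
        open_time_integral += n_open * tau
        t += tau
        if t >= t_end:
            break
        if rng.random() * a_total < a_open:      # pick which reaction fired
            n_open += 1
        else:
            n_open -= 1
    return open_time_integral / (t_end * n_channels)
```

Unlike a fixed-time-step or piecewise-constant-propensity scheme, the exponentially distributed waiting time makes every transition statistically exact, which is why the paper prefers this class of algorithm; the time-averaged open fraction here converges to alpha / (alpha + beta) = 1/3 for the chosen rates.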