The Brain on Low Power Architectures - Efficient Simulation of Cortical Slow Waves and Asynchronous States
Efficient brain simulation is a scientific grand challenge, a
parallel/distributed coding challenge and a source of requirements and
suggestions for future computing architectures. Indeed, the human brain
includes about 10^15 synapses and 10^11 neurons activated at a mean rate of
several Hz. Full brain simulation poses Exascale challenges even if simulated
at the highest abstraction level. The WaveScalES experiment in the Human Brain
Project (HBP) has the goal of matching experimental measures and simulations of
slow waves during deep sleep and anesthesia, and of the transition to other brain
states. The focus is the development of dedicated large-scale
parallel/distributed simulation technologies. The ExaNeSt project designs an
ARM-based, low-power HPC architecture scalable to millions of cores and
develops a dedicated scalable interconnect system; SWA/AW simulations are
included among its driving benchmarks. At the junction of the two projects is
the INFN
proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation
engine. DPSNN can be configured to stress either the networking or the
computation features available on the execution platforms. The simulation
stresses the networking component when the neural net, composed of a
relatively low number of neurons, each projecting thousands of synapses, is
distributed over a large number of hardware cores. As the number of neurons
per core grows, computation becomes the dominant component for short-range
connections. This paper reports preliminary performance
results obtained on an ARM-based HPC prototype developed in the framework of
the ExaNeSt project. Furthermore, we compare the instantaneous power, total
energy consumption, execution time, and energetic cost per synaptic event of
SWA/AW DPSNN simulations executed on either ARM- or Intel-based server
platforms.
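To make the networking-vs-computation trade-off concrete, the sketch below
gives a back-of-the-envelope model of per-core load under strong scaling. It
is a rough illustration only: the constants (FLOPs per synaptic event, bytes
per spike, all-to-all spike delivery) are our assumptions, not DPSNN
internals or ExaNeSt measurements.

    # Rough model: few neurons per core stresses the interconnect, many
    # neurons per core stresses local computation. All constants are
    # illustrative assumptions, not DPSNN parameters.

    def partition_profile(total_neurons, synapses_per_neuron, num_cores,
                          mean_rate_hz, flops_per_event=10.0,
                          bytes_per_spike=8):
        """Return (FLOP/s, bytes/s) each core must sustain."""
        neurons_per_core = total_neurons / num_cores
        # Computation: every incoming synaptic event costs a few FLOPs.
        events = neurons_per_core * synapses_per_neuron * mean_rate_hz
        compute_flops = events * flops_per_event
        # Communication: pessimistically, each local spike is sent to
        # every other core.
        spikes = neurons_per_core * mean_rate_hz
        comm_bytes = spikes * (num_cores - 1) * bytes_per_spike
        return compute_flops, comm_bytes

    # Spreading the same network over more cores shrinks per-core compute
    # while per-core traffic stays roughly flat, so communication dominates.
    for cores in (64, 1024, 16384):
        flops, traffic = partition_profile(1_000_000, 1_000, cores, 3.0)
        print(f"{cores:6d} cores: {flops:.2e} FLOP/s, {traffic:.2e} B/s")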
Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections
Cortical synapse organization supports a range of dynamic states on multiple
spatial and temporal scales, from synchronous slow wave activity (SWA),
characteristic of deep sleep or anesthesia, to fluctuating, asynchronous
activity during wakefulness (AW). Such dynamic diversity poses a challenge for
producing efficient large-scale simulations that embody realistic metaphors of
short- and long-range synaptic connectivity. In fact, during SWA and AW
different spatial extents of the cortical tissue are active in a given timespan
and at different firing rates, which implies widely varying local computation
and communication loads. A balanced evaluation of simulation performance
and robustness should therefore include tests of a variety of cortical dynamic
states. Here, we demonstrate performance scaling of our proprietary Distributed
and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and
AW for two-dimensional grids of neural populations, which reflect the modular
organization of the cortex. We explored networks of up to 192x192 modules, each
composed of 1250 integrate-and-fire neurons with spike-frequency adaptation,
and exponentially decaying inter-modular synaptic connectivity with varying
spatial decay constant. For the largest networks the total number of synapses
was over 70 billion. The execution platform included up to 64 dual-socket
nodes, each socket mounting an 8-core Intel Xeon Haswell processor clocked at
2.40 GHz. Network initialization time, memory usage, and execution time
showed good scaling performance from 1 to 1024 processes, implemented using
the standard Message Passing Interface (MPI) protocol. We achieved simulation
speeds between 2.3x10^9 and 4.1x10^9 synaptic events per second for both
cortical states in the explored range of inter-modular interconnections.
Comment: 22 pages, 9 figures, 4 tables
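As a reading aid, the sketch below spells out the two quantities this
abstract turns on: an inter-modular connection probability that decays
exponentially with grid distance, and the synaptic-events-per-second
throughput metric. The decay form and all numbers are assumptions for
illustration, not the paper's fitted values.

    import math

    def connection_prob(src, dst, decay_lambda, p0=1.0):
        """Probability of connecting two modules at grid coordinates src
        and dst, decaying exponentially with Euclidean distance."""
        d = math.hypot(src[0] - dst[0], src[1] - dst[1])
        return p0 * math.exp(-d / decay_lambda)

    def synaptic_events_per_second(total_synapses, mean_rate_hz,
                                   model_seconds, wall_clock_seconds):
        """Throughput metric: synaptic events processed per wall-clock
        second."""
        events = total_synapses * mean_rate_hz * model_seconds
        return events / wall_clock_seconds

    print(connection_prob((0, 0), (3, 4), decay_lambda=2.0))  # d = 5
    # Hypothetical run: 7e10 synapses at 3 Hz, 10 s of model time
    # simulated in 1000 s of wall-clock time -> ~2.1e9 events/s.
    print(f"{synaptic_events_per_second(7e10, 3.0, 10.0, 1000.0):.1e}")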
Real-time cortical simulations: energy and interconnect scaling on distributed systems
We profile the impact of computation and inter-processor communication on the
energy consumption and on the scaling of cortical simulations approaching the
real-time regime on distributed computing platforms. We also compare the speed
and energy consumption of processor architectures typical of standard HPC and
embedded platforms. We demonstrate the importance of the design of
low-latency interconnect for speed and energy consumption. The cost of cortical
simulations is quantified using the Joule per synaptic event metric on both
architectures. Achieving efficient real-time performance in large-scale
cortical simulations is of increasing relevance both for future bio-inspired
artificial intelligence applications and for understanding the cognitive
functions of the brain, a scientific quest that will require embedding
large-scale simulations into highly
complex virtual or real worlds. This work stands at the crossroads between the
WaveScalES experiment in the Human Brain Project (HBP), which includes the
objective of large scale thalamo-cortical simulations of brain states and their
transitions, and the ExaNeSt and EuroExa projects, which investigate the design
of an ARM-based, low-power High Performance Computing (HPC) architecture with a
dedicated interconnect scalable to millions of cores; simulations of deep-sleep
Slow Wave Activity (SWA) and Asynchronous aWake (AW) regimes expressed by
thalamo-cortical models are among their benchmarks.
Comment: 8 pages, 8 figures, 4 tables, submitted after final publication in
the PDP2019 proceedings, corrected final DOI. arXiv admin note: text overlap
with arXiv:1812.04974, arXiv:1804.0344
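A minimal sketch of the Joule-per-synaptic-event figure of merit used above
to compare platforms; the power draw, run time, and event count below are
placeholders, not measurements from either the ARM or the Intel systems.

    def joules_per_synaptic_event(mean_power_w, wall_clock_s,
                                  synaptic_events):
        """Total energy consumed divided by synaptic events processed."""
        total_energy_j = mean_power_w * wall_clock_s
        return total_energy_j / synaptic_events

    # Hypothetical run: a node drawing 35 W on average for 600 s while
    # processing 1.2e12 synaptic events.
    print(f"{joules_per_synaptic_event(35.0, 600.0, 1.2e12):.2e} J/event")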
A scalable multi-core architecture with heterogeneous memory structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs)
Neuromorphic computing systems comprise networks of neurons that use
asynchronous events for both computation and communication. This type of
representation offers several advantages in terms of bandwidth and power
consumption in neuromorphic electronic systems. However, managing the traffic
of asynchronous events in large scale systems is a daunting task, both in terms
of circuit complexity and memory requirements. Here we present a novel routing
methodology that employs both hierarchical and mesh routing strategies and
combines heterogeneous memory structures for minimizing both memory
requirements and latency, while maximizing programming flexibility to support a
wide range of event-based neural network architectures, through parameter
configuration. We validated the proposed scheme in a prototype multi-core
neuromorphic processor chip that employs hybrid analog/digital circuits for
emulating synapse and neuron dynamics together with asynchronous digital
circuits for managing the address-event traffic. We present a theoretical
analysis of the proposed connectivity scheme, describe the methods and circuits
used to implement such a scheme, and characterize the prototype chip. Finally, we
demonstrate the use of the neuromorphic processor with a convolutional neural
network for the real-time classification of visual symbols being flashed to a
dynamic vision sensor (DVS) at high speed.
Comment: 17 pages, 14 figures
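To make the two-stage routing idea concrete, here is a toy software model:
dimension-ordered mesh routing carries an event to the target chip, after
which a hierarchical broadcast plus per-core tag matching delivers it. The
field layout, port names, and delivery policy are our assumptions, not the
chip's specification.

    from dataclasses import dataclass

    @dataclass
    class AddressEvent:
        dx: int         # signed mesh hops remaining along x
        dy: int         # signed mesh hops remaining along y
        core_mask: int  # hierarchical stage: one bit per destination core
        tag: int        # source tag, matched by on-core memories

    def mesh_step(ev):
        """One dimension-ordered routing step; returns the output port,
        or 'local' once the event has reached its target chip."""
        if ev.dx > 0:
            ev.dx -= 1
            return "east"
        if ev.dx < 0:
            ev.dx += 1
            return "west"
        if ev.dy > 0:
            ev.dy -= 1
            return "north"
        if ev.dy < 0:
            ev.dy += 1
            return "south"
        return "local"

    def deliver_on_chip(ev, num_cores=4):
        """Hierarchical stage: broadcast to the cores flagged in
        core_mask; each core then matches ev.tag locally."""
        return [c for c in range(num_cores) if ev.core_mask & (1 << c)]

    ev = AddressEvent(dx=2, dy=-1, core_mask=0b0101, tag=42)
    hops = []
    while (port := mesh_step(ev)) != "local":
        hops.append(port)
    print(hops, "->", deliver_on_chip(ev))  # ['east', 'east', 'south'] -> [0, 2]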
Waveform detection by deep learning reveals multi-area spindles that are selectively modulated by memory load
Sleep is generally considered to be a state of large-scale synchrony across thalamus and neocortex; however, recent work has challenged this idea by reporting isolated sleep rhythms such as slow oscillations and spindles. What is the spatial scale of sleep rhythms? To answer this question, we adapted deep learning algorithms initially developed for detecting earthquakes and gravitational waves in high-noise settings to the analysis of neural recordings during sleep. We then studied sleep spindles in non-human primate electrocorticography (ECoG), human electroencephalogram (EEG), and clinical intracranial electroencephalogram (iEEG) recordings in humans. Within each recording type, we find that widespread spindles occur much more frequently than previously reported. We then analyzed the spatiotemporal patterns of these large-scale, multi-area spindles and, in the EEG recordings, how spindle patterns change following a visual memory task. Our results reveal a potential role for widespread, multi-area spindles in the consolidation of memories in networks widely distributed across primate cortex.
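The detection approach lends itself to a compact illustration: a small 1-D
convolutional network scoring fixed-length windows of a recording for the
presence of a spindle. The architecture below is our illustrative guess, not
the authors' published network, and would need training on labeled data.

    import torch
    import torch.nn as nn

    class SpindleDetector(nn.Module):
        """Toy 1-D CNN that emits one spindle-presence logit per window."""
        def __init__(self, in_channels=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 16, kernel_size=11, padding=5),
                nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=11, padding=5),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(32, 1),
            )

        def forward(self, x):  # x: (batch, channels, samples)
            return self.net(x)

    # 1 s windows at an assumed 500 Hz sampling rate, single channel:
    scores = SpindleDetector()(torch.randn(8, 1, 500))
    print(scores.shape)  # torch.Size([8, 1])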
Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges
In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles of human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.
Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes
There is mounting experimental evidence that brain-state specific neural
mechanisms supported by connectomic architectures serve to combine past and
contextual knowledge with current, incoming flow of evidence (e.g. from sensory
systems). Such mechanisms are distributed across multiple spatial and temporal
scales and require dedicated support at the levels of individual neurons and
synapses. A prominent feature in the neocortex is the structure of large, deep
pyramidal neurons, which show a marked separation between an apical dendritic
compartment and a basal dendritic/peri-somatic compartment, with distinctive
patterns of incoming connections and brain-state specific activation
mechanisms, namely apical-amplification, -isolation and -drive, associated
with wakefulness, deeper NREM sleep stages and REM sleep, respectively. The
cognitive roles of
apical mechanisms have been demonstrated in behaving animals. In contrast,
classical models of learning in spiking networks are based on
single-compartment neurons that lack mechanisms for combining apical and
basal/somatic information. This work aims to provide the computational
community with a two-compartment spiking neuron model which includes features
that are essential for supporting brain-state specific learning and with a
piece-wise linear transfer function (ThetaPlanes) at the highest abstraction
level, to be used in large-scale bio-inspired artificial intelligence systems. A
machine learning algorithm, constrained by a set of fitness functions, selected
the parameters defining neurons expressing the desired apical mechanisms.
Comment: 19 pages, 38 figures, paper
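To fix ideas, here is a minimal sketch of the two ingredients the abstract
describes: a piece-wise linear transfer function and brain-state specific
combination of basal and apical input. The break points, gains, and gating
rules are invented for illustration; they are not the fitted ThetaPlanes
parameters or the paper's exact equations.

    def piecewise_linear_rate(v_eff, v_theta=-50.0, gain=5.0, r_max=200.0):
        """ThetaPlanes-style transfer: zero below threshold, then linear,
        saturating at r_max (units arbitrary)."""
        return min(max(gain * (v_eff - v_theta), 0.0), r_max)

    def two_compartment_rate(v_basal, v_apical, state="wake"):
        """Combine basal/somatic drive with apical input under the three
        brain-state regimes named above."""
        if state == "wake":    # apical-amplification of coincident input
            v_eff = v_basal + 0.5 * max(v_apical - v_basal, 0.0)
        elif state == "nrem":  # apical-isolation: apical input ignored
            v_eff = v_basal
        elif state == "rem":   # apical-drive: apical input alone suffices
            v_eff = max(v_basal, v_apical)
        else:
            raise ValueError(state)
        return piecewise_linear_rate(v_eff)

    for s in ("wake", "nrem", "rem"):
        print(s, two_compartment_rate(v_basal=-48.0, v_apical=-40.0, state=s))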