Scalable event-driven modelling architectures for neuromimetic hardware
Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Dedicated hardware may thus be more suitable for executing them. Given that there is no clear consensus on the model of computation in the brain, model flexibility is at least as important a characteristic of neural hardware as performance acceleration. The SpiNNaker chip is an example of the emerging 'neuromimetic' architecture: a universal platform that specialises the hardware for neural networks but allows flexibility in model choice. It integrates four key attributes (native parallelism, event-driven processing, incoherent memory and incremental reconfiguration) in a system combining an array of general-purpose processors with a configurable asynchronous interconnect. Making such a device usable in practice requires an environment for instantiating neural models on the chip that allows the user to focus on model characteristics rather than on hardware details. The central part of this system is a library of predesigned, 'drop-in' event-driven neural components that define their implementation on SpiNNaker. Three exemplar models, two spiking networks and a multilayer perceptron network, illustrate techniques that provide a basis for the library and demonstrate a reference methodology that can be extended to support third-party library components, not only on SpiNNaker but on any configurable neuromimetic platform. Experiments demonstrate the capability of the library model to implement efficient on-chip neural networks, but also reveal important hardware limitations, particularly with respect to communications, that require careful design. The ultimate goal is the creation of a library-based development system that allows neural modellers to work in the high-level environment of their choice, using an automated tool chain to create the appropriate SpiNNaker instantiation.
Such a system would enable the use of the hardware to explore abstractions of biological neurodynamics that underpin a functional model of neural computation.
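The event-driven principle central to the SpiNNaker architecture can be illustrated with a toy simulation: neuron state is updated only when a spike event arrives, never on a global clock tick. The 3-neuron chain, weights and constants below are invented for illustration; this is not SpiNNaker's actual programming model.

```python
import heapq
import math

THRESH, TAU, DELAY = 1.0, 20.0, 1.0    # threshold, membrane tau (ms), axonal delay (ms)
fanout = {0: [1], 1: [2], 2: []}        # hypothetical feed-forward chain 0 -> 1 -> 2
v = {n: 0.0 for n in fanout}            # membrane potentials
last = {n: 0.0 for n in fanout}         # time each neuron was last updated
events = [(0.0, 0, 1.2)]                # (time, target, weight): suprathreshold external kick
spikes = []

while events:
    t, n, w = heapq.heappop(events)
    # Lazy exponential decay: integrate only over the silent interval since
    # the last event, rather than stepping every neuron on every tick.
    v[n] = v[n] * math.exp(-(t - last[n]) / TAU) + w
    last[n] = t
    if v[n] >= THRESH:
        spikes.append((t, n))
        v[n] = 0.0                      # reset after firing
        for m in fanout[n]:
            heapq.heappush(events, (t + DELAY, m, 1.2))
```

Because computation is driven entirely by the event queue, silent neurons cost nothing, which is the property that makes event-driven processing attractive for sparse spiking activity.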
Parallel computing for brain simulation
[Abstract] Background: The human brain is the most complex system in the known universe, and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities arise is still not understood.
Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the creation of the first simulations with a number of neurons similar to that of a human brain.
Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog and hybrid models. This review covers the current applications of these works as well as future trends. It focuses both on works that pursue progress in Neuroscience and on others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems
In this paper we present a methodological framework that meets novel
requirements emerging from upcoming types of accelerated and highly
configurable neuromorphic hardware systems. We describe in detail a device with
45 million programmable and dynamic synapses that is currently under
development, and we sketch the conceptual challenges that arise from taking
this platform into operation. More specifically, we aim to establish
this neuromorphic system as a flexible and neuroscientifically valuable
modeling tool that can be used by non-hardware experts. We consider various
functional aspects to be crucial for this purpose, and we introduce a
consistent workflow with detailed descriptions of all involved modules that
implement the suggested steps: The integration of the hardware interface into
the simulator-independent model description language PyNN; a fully automated
translation between the PyNN domain and appropriate hardware configurations; an
executable specification of the future neuromorphic system that can be
seamlessly integrated into this biology-to-hardware mapping process as a test
bench for all software layers and possible hardware design modifications; an
evaluation scheme that deploys models from a dedicated benchmark library,
compares the results generated by virtual or prototype hardware devices with
reference software simulations and analyzes the differences. The integration of
these components into one hardware-software workflow provides an ecosystem for
ongoing preparative studies that support the hardware design process and
represents the basis for the maturity of the model-to-hardware mapping
software. The functionality and flexibility of the latter are demonstrated with a
variety of experimental results.
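The automated translation between a biological model description and a hardware configuration, as described above, can be sketched in miniature. Everything here is an invented assumption for illustration (the parameter names, the supported range, the 4-bit weight resolution); it is not the paper's actual mapping software or the PyNN API.

```python
def to_hardware(bio_params, w_bits=4, w_max=0.004):
    """Map biological neuron parameters to discrete hardware settings.

    Hypothetical sketch: real mapping flows must also handle placement,
    routing and calibration, which are omitted here.
    """
    hw = {}
    # Membrane time constant: clip into the range the (hypothetical)
    # analog circuit supports, here 5-50 ms.
    hw["tau_m"] = min(max(bio_params["tau_m"], 5.0), 50.0)
    # Synaptic weight: quantize onto a 4-bit grid, as limited-resolution
    # hardware synapses would require.
    levels = 2 ** w_bits - 1
    w = min(max(bio_params["weight"], 0.0), w_max)
    hw["weight_digital"] = round(w / w_max * levels)
    return hw

cfg = to_hardware({"tau_m": 3.0, "weight": 0.002})
```

Comparing a software reference simulation against the clipped and quantized hardware configuration is exactly the kind of discrepancy analysis the benchmark-library evaluation scheme above is meant to systematize.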
Plasticity and Adaptation in Neuromorphic Biohybrid Systems
Neuromorphic systems take inspiration from the principles of biological information processing to form hardware platforms that enable the large-scale implementation of neural networks. The recent years have seen both advances in the theoretical aspects of spiking neural networks for their use in classification and control tasks and progress in electrophysiological methods that is pushing the frontiers of intelligent neural interfacing and signal processing technologies. At the forefront of these new technologies, artificial and biological neural networks are tightly coupled, offering a novel "biohybrid" experimental framework for engineers and neurophysiologists. Indeed, biohybrid systems can constitute a new class of neuroprostheses opening important perspectives in the treatment of neurological disorders. Moreover, the use of biologically plausible learning rules allows forming an overall fault-tolerant system of co-developing subsystems. To identify opportunities and challenges in neuromorphic biohybrid systems, we discuss the field from the perspectives of neurobiology, computational neuroscience, and neuromorphic engineering. © 2020 The Author(s).
Enhancing brain-computer interfacing through advanced independent component analysis techniques
A brain-computer interface (BCI) is a direct communication system between a brain
and an external device in which messages or commands sent by an individual do not
pass through the brain's normal output pathways but are instead detected through brain signals.
Some severe motor impairments, caused by Amyotrophic Lateral Sclerosis, head
trauma, spinal injuries and other diseases, may cause patients to lose muscle
control and become unable to communicate with the outside environment. Currently,
no effective cure or treatment has been found for these conditions, so using a
BCI system to rebuild the communication pathway becomes a possible alternative
solution. Among different types of BCIs, an electroencephalogram (EEG) based BCI
is becoming a popular system due to EEG’s fine temporal resolution, ease of use,
portability and low set-up cost. However, EEG's susceptibility to noise is a major
obstacle to developing a robust BCI. Signal processing techniques such as coherent
averaging, filtering, FFT and AR modelling, etc. are used to reduce the noise and
extract components of interest. However, these methods operate in the
observed mixture domain, in which components of interest and noise are combined. This
limitation means that the extracted EEG signals may still contain residual noise, or,
conversely, that the removed noise may still contain part of the EEG signal.
Independent Component Analysis (ICA), a Blind Source Separation (BSS)
technique, is able to extract relevant information within noisy signals and separate the
fundamental sources into independent components (ICs). The most common
assumption of the ICA method is that the source signals are unknown and statistically
independent. Under this assumption, ICA is able to recover the source signals.
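The separation principle can be sketched with a toy blind source separation: two synthetic independent sources are linearly mixed, then recovered by whitening followed by a symmetric FastICA-style iteration. All signals, the mixing matrix and the iteration count are invented for illustration; this is not the thesis's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)            # source 1: sinusoid
s2 = 2 * ((3 * t) % 1) - 1                # source 2: sawtooth
S = np.vstack([s1, s2])
A = np.array([[0.6, 0.4], [0.45, 0.55]])  # unknown mixing matrix
X = A @ S                                  # observed "channel" mixtures

# Centre and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

# Symmetric FastICA iteration with the tanh nonlinearity:
# W+ = E[g(WX) X^T] - diag(E[g'(WX)]) W, then decorrelate the rows.
W = rng.standard_normal((2, 2))
for _ in range(200):
    WX = np.tanh(W @ Xw)
    W1 = WX @ Xw.T / Xw.shape[1] - np.diag((1 - WX**2).mean(axis=1)) @ W
    dd, EE = np.linalg.eigh(W1 @ W1.T)
    W = EE @ np.diag(1.0 / np.sqrt(dd)) @ EE.T @ W1   # W <- (W W^T)^(-1/2) W

recovered = W @ Xw   # estimated sources, up to sign and permutation
```

The recovered components match the original sources only up to sign and ordering, which is the well-known indeterminacy of BSS; any downstream BCI classifier has to tolerate it.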
Since the ICA concepts appeared in the fields of neural networks and signal
processing in the 1980s, many ICA applications in telecommunications, biomedical
data analysis, feature extraction, speech separation, time-series analysis and data
mining have been reported in the literature. In this thesis, several ICA techniques are
proposed to address two major issues for BCI applications: reducing the recording
time needed in order to speed up signal processing, and reducing the number of
recording channels while improving the final classification performance, or at least
keeping it at its current level. These advances will make BCI a more
practical prospect for everyday use.
This thesis first defines BCI and the diverse BCI models based on different
control patterns. After the general idea of ICA is introduced along with some
modifications to ICA, several new ICA approaches are proposed. The practical work
in this thesis starts with the preliminary analyses on the Southampton BCI pilot
datasets starting with basic and then advanced signal processing techniques. The
proposed ICA techniques are then presented using a multi-channel event related
potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel
spontaneous activity based BCI. The final ICA approach aims to examine the
possibility of using ICA based on just one or a few channel recordings on an ERP
based BCI.
The novel ICA approaches for BCI systems presented in this thesis show that ICA
is able to accurately and repeatedly extract the relevant information buried within
noisy signals and the signal quality is enhanced so that even a simple classifier can
achieve good classification accuracy. In the ERP-based BCI application, data processed
with multi-channel ICA achieves 83.9% classification accuracy from only eight
averaged epochs, whereas data processed by coherent averaging reaches only 32.3%
accuracy. In the spontaneous activity based BCI, the multi-channel ICA algorithm
effectively extracts discriminatory information from two types of single-trial
EEG data; the classification accuracy is improved by about 25%, on average,
compared to the performance on the unpreprocessed data. The single-channel ICA
technique on the ERP-based BCI produces much better results than the
lowpass filter, and an appropriate number of averages improves the signal-to-noise
ratio of P300 activities, which helps to achieve better classification. These
advantages will lead to a reliable and practical BCI for use outside of the clinical
laboratory.
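The trade-off behind the epoch counts above, that coherent averaging suppresses noise roughly as the square root of the number of averaged trials, can be illustrated with a toy simulation. The P300-like template and the unit-variance noise are invented for illustration, not the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.8, 200)
# Hypothetical P300-like template: a positive deflection near 300 ms.
template = np.exp(-((t - 0.3) ** 2) / (2 * 0.03**2))
# 64 noisy single-trial epochs: template plus independent unit-variance noise.
epochs = template + rng.normal(0.0, 1.0, size=(64, t.size))

noise_1 = np.std(epochs[0] - template)                 # single trial
noise_8 = np.std(epochs[:8].mean(axis=0) - template)   # 8-epoch average
noise_64 = np.std(epochs.mean(axis=0) - template)      # 64-epoch average
# Residual noise falls roughly as 1/sqrt(N): ~1.0, ~0.35, ~0.125 here.
```

This sqrt(N) law is why reducing the number of epochs a BCI needs (as the ICA approaches above do) directly trades off against signal quality, and why beating coherent averaging at eight epochs is a meaningful result.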