21 research outputs found

    Parallel computing for brain simulation

    Get PDF
    [Abstract] Background: The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities are produced is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the creation of the first simulation with a number of neurons comparable to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog, and hybrid models. This review covers the current applications of these works as well as future trends. It focuses both on works that pursue advances in Neuroscience and on others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
    Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028

    A new spiking convolutional recurrent neural network (SCRNN) with applications to event-based hand gesture recognition

    Get PDF
    The combination of neuromorphic visual sensors and spiking neural networks offers a highly efficient bio-inspired solution to real-world applications. However, processing event-based sequences remains challenging because of their asynchronous and sparse nature. In this paper, a novel spiking convolutional recurrent neural network (SCRNN) architecture is presented that takes advantage of both convolution operations and recurrent connectivity to maintain the spatial and temporal relations in event-based sequence data. The use of a recurrent architecture enables the network to have a sampling window of arbitrary length, allowing it to exploit temporal correlations between event collections. Rather than relying on standard ANN-to-SNN conversion techniques, the network is trained with the supervised Spike Layer Error Reassignment (SLAYER) mechanism, which allows it to adapt to neuromorphic (event-based) data directly. The network structure is validated on the DVS Gesture dataset, achieving a 10-class gesture recognition accuracy of 96.59% and an 11-class accuracy of 90.28%.
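    The abstract above describes a network that consumes collections of asynchronous DVS events rather than fixed frames. As a rough illustration of how such event collections can be grouped into dense tensors that a convolutional-recurrent front end can process, the sketch below bins events by timestamp; the binning scheme, function name, and parameters are illustrative assumptions, not the authors' method.

    ```python
    import numpy as np

    def bin_events(events, sensor_shape=(128, 128), n_bins=5):
        """Accumulate DVS events, given as rows (t, x, y, polarity), into
        n_bins per-pixel count frames (one channel per polarity)."""
        t = events[:, 0]
        span = t.max() - t.min()
        # Map each timestamp to a bin index; clip the final event into the last bin.
        idx = ((t - t.min()) / (span + 1e-9) * n_bins).astype(int)
        idx = np.clip(idx, 0, n_bins - 1)
        frames = np.zeros((n_bins, 2, *sensor_shape), dtype=np.float32)
        for b, (_, x, y, p) in zip(idx, events):
            frames[b, int(p), int(y), int(x)] += 1.0
        return frames
    ```

    Each of the resulting frames could then be fed to one time step of a recurrent network, which is what makes an arbitrary-length sampling window possible.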

    Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario

    Full text link
    Spiking neural networks (SNNs) are largely inspired by biology and neuroscience and leverage their ideas and theories to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. The integrate-and-fire (I&F) models are often adopted, with the simple Leaky I&F (LIF) being the most used. The reason for adopting such models is their efficiency and/or biological plausibility. Nevertheless, a rigorous justification for adopting the LIF over other neuron models in artificial learning systems has not yet been established. This work considers various neuron models in the literature and selects computational neuron models that are single-variable, efficient, and display different types of complexity. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be directed by the task to be completed. The neuron models are tested within an SNN trained with Spike-Timing Dependent Plasticity (STDP) on a classification task on the N-MNIST and DVS Gestures datasets. Experimental results reveal that more complex neurons manifest the same ability as simpler ones to achieve high accuracy on a simple dataset (N-MNIST), albeit requiring comparably more hyper-parameter tuning. However, when the data possess richer spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that selecting the model according to the richness of the feature spectrum of the data could improve the whole system's performance. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
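    The three models compared above differ only in their subthreshold voltage dynamics: the LIF decays linearly toward rest, the QIF adds a quadratic nonlinearity, and the EIF adds an exponential spike-initiation term. A minimal Euler-integration sketch of the standard update rules follows; all parameter values are illustrative, not those used in the paper.

    ```python
    import numpy as np

    def step(v, i_in, model, dt=0.1, v_rest=-65.0, tau=10.0,
             v_c=-50.0, delta_t=2.0, v_t=-50.0):
        """One Euler step of the membrane potential for LIF, QIF, or EIF."""
        if model == "LIF":
            dv = (-(v - v_rest) + i_in) / tau
        elif model == "QIF":
            dv = ((v - v_rest) * (v - v_c) / 10.0 + i_in) / tau  # quadratic term
        elif model == "EIF":
            dv = (-(v - v_rest) + delta_t * np.exp((v - v_t) / delta_t) + i_in) / tau
        else:
            raise ValueError(model)
        return v + dt * dv

    def simulate(model, i_in=20.0, v_thresh=-50.0, v_reset=-70.0, steps=500):
        """Integrate the membrane, firing and resetting at threshold."""
        v, spikes = -65.0, 0
        for _ in range(steps):
            v = step(v, i_in, model)
            if v >= v_thresh:
                v, spikes = v_reset, spikes + 1
        return spikes
    ```

    With the same input current, the nonlinear terms change how quickly the membrane approaches threshold, which is one mechanistic reason the models can behave differently on data with richer temporal structure.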

    Cognitive and Brain-inspired Processing Using Parallel Algorithms and Heterogeneous Chip Multiprocessor Architectures

    Get PDF
    This thesis explores how neuromorphic engineering approaches can be used to speed up computations and reduce power consumption using neuromorphic hardware systems. These hardware designs are not well-suited to conventional algorithms, so new approaches must be used to take advantage of the parallel nature of these architectures. Background on probabilistic graphical models is presented along with brain-inspired ways to perform inference in Bayesian networks. A spiking neuron implementation is developed on two general-purpose parallel neuromorphic hardware devices, the SpiNNaker and the Parallella. Scalability results are shown along with speed improvements over mainstream processors in a desktop computer. General vector-matrix multiplication computations at various levels of precision are also explored using IBM's TrueNorth Neurosynaptic System. The TrueNorth contains highly configurable hardware neurons and axons connected via crossbar arrays and consumes very little power, but it is less flexible than a more general-purpose neuromorphic system such as the SpiNNaker. Nevertheless, techniques described here enable useful computations to be performed on such crossbar arrays with spiking neurons, including computing word similarities using trained word vector embeddings. Another technique shows how to perform computations using only one column of the crossbar array at a time, despite the fact that incoming spikes normally affect all columns of the array. A method for cognitive audio-visual beamforming is presented. Using two systems, each containing a spherical microphone array, sounds are localized using spherical harmonic beamforming. Combining the microphone arrays with 360-degree cameras makes it possible to overlay the sound localization on the visual data and create a combined audio-visual salience map.
Cognitive computations can be performed on the audio signals to localize specific sounds while ignoring others based on their spectral characteristics. Finally, an ARM Cortex M0 processor design is shown that will be used to bootstrap and coordinate other processing units on a chip developed in the lab for the DARPA Unconventional Processing of Signals for Intelligent Data Exploitation (UPSIDE) program. This design includes a bootloader that provides full programmability each time the chip is booted, and the processor interfaces with other hardware modules to access the Networks-on-Chip and main memory.
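One way a crossbar array can compute a vector-matrix product with spiking inputs is to rate-code each input value as a stochastic spike train and accumulate the weighted spikes on the output lines. The software sketch below illustrates only that general idea; the weights, rates, and function are made up for illustration and do not reproduce the thesis's actual precision scheme on TrueNorth.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_vmm(weights, x, n_steps=2000):
    """Approximate y = W @ x by driving each input line with a Bernoulli
    spike train whose rate encodes x[i] (assumed in [0, 1]) and averaging
    the weighted spike counts accumulated on each output column."""
    n_out, n_in = weights.shape
    acc = np.zeros(n_out)
    for _ in range(n_steps):
        spikes = (rng.random(n_in) < x).astype(float)  # rate-coded inputs
        acc += weights @ spikes                         # crossbar accumulation
    return acc / n_steps

W = np.array([[0.5, -0.2], [0.1, 0.8]])   # illustrative crossbar weights
x = np.array([0.6, 0.3])                  # input rates
approx = crossbar_vmm(W, x)
exact = W @ x
```

The approximation error shrinks with the number of time steps, which is the usual trade-off of rate coding: precision is bought with time and spikes.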
