
    SuperSpike: Supervised learning in multi-layer spiking neural networks

    The vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
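
    As an illustration of the surrogate gradient idea, here is a minimal sketch of a voltage-based three-factor update in the spirit of SuperSpike. The fast-sigmoid surrogate is the one commonly associated with this rule, but the constants (beta, theta), the toy shapes, and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def surrogate_grad(v, theta=1.0, beta=10.0):
    # Fast-sigmoid surrogate for the derivative of the (non-differentiable)
    # spike nonlinearity: 1 / (1 + beta * |v - theta|)^2 (assumed constants).
    return 1.0 / (1.0 + beta * np.abs(v - theta)) ** 2

def three_factor_update(w, error, pre_trace, v_post, lr=1e-3):
    # One step of a three-factor rule (sketch): a top-down error signal
    # (factor 1) x surrogate gradient of the postsynaptic voltage
    # (factor 2) x filtered presynaptic activity (factor 3).
    dw = lr * np.outer(error * surrogate_grad(v_post), pre_trace)
    return w + dw

# Toy usage: 3 output neurons, 5 presynaptic traces.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 5))
w = three_factor_update(w, error=rng.normal(size=3),
                        pre_trace=rng.random(5), v_post=rng.random(3))
```

    Swapping the error vector for uniform, fixed random, or symmetric feedback projections is what the credit-assignment comparison in the abstract varies.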

    Configuring spiking neural network training algorithms

    Spiking neural networks, based on biologically plausible neurons with temporal information coding, are provably more powerful than widely used artificial neural networks (ANNs) based on sigmoid neurons. However, training them is more challenging than training ANNs. Several methods have been proposed in the literature, each with its limitations, including SpikeProp, NSEBP, and ReSuMe, and setting the numerous parameters of spiking networks to obtain good accuracy has been largely ad hoc. In this work, we used automated algorithm configuration tools to determine optimal combinations of parameters for ANNs, for artificial neural networks with components simulating glial cells (astrocytes), and for spiking neural networks with the SpikeProp learning algorithm. This allowed us to achieve better accuracy on standard datasets (Iris and Wisconsin Breast Cancer) and showed that, even after optimization, augmenting an artificial neural network with glia improves performance. Guided by the experimental results, we have developed methods for determining the values of several parameters of spiking neural networks, in particular weight and output ranges. These methods have been incorporated into a SpikeProp implementation.
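
    The configuration step can be pictured as a search over a hyperparameter space. The paper uses dedicated automated algorithm configuration tools; as a stand-in, here is a minimal random-search sketch in which the search space, parameter names, and ranges are illustrative assumptions, not the tools or values actually used.

```python
import numpy as np

# Hypothetical SpikeProp-style search space; the names mirror the kinds of
# settings the paper tunes (weight and output ranges), not an actual
# configurator (e.g. SMAC/ParamILS) or its API.
SPACE = {
    "weight_min":   (0.0, 1.0),
    "weight_max":   (1.0, 10.0),
    "output_range": (10.0, 30.0),   # ms, length of the output coding interval
}

def sample_config(rng):
    cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}
    cfg["learning_rate"] = 10.0 ** rng.uniform(-3, -1)   # log-uniform
    return cfg

def random_search(evaluate, n_trials=50, seed=0):
    # evaluate(cfg) -> validation accuracy; return the best config found.
    rng = np.random.default_rng(seed)
    best_cfg, best_acc = None, -np.inf
    for _ in range(n_trials):
        cfg = sample_config(rng)
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```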

    Spike-Based Classification of UCI Datasets with Multi-Layer Resume-Like Tempotron

    Spiking neurons are a class of neuron models that represent information in timed sequences called "spikes." Though predominantly used in neuroscientific investigations, spiking neural networks (SNNs) can be applied to machine learning problems such as classification and regression. SNNs are computationally more powerful per neuron than traditional neural networks. Though training time is slow on general-purpose computers, spike-based hardware implementations are faster and have shown capability for ultra-low power consumption. Additionally, various SNN training algorithms have achieved performance comparable to the state of the art on the Fisher Iris dataset. Our main contribution is a software implementation of the multilayer ReSuMe algorithm using the Tempotron principle. The XOR problem is solved in only 13.73 epochs on average. However, training time on four different UCI datasets is slow and, although decent performance is seen, the accuracy of our SNN in most respects underperforms other SNN, SVM, and ANN experiments. Additionally, our results on the UCI datasets are only preliminary, necessitating further tuning.
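
    To make the Tempotron principle concrete, here is a minimal binary Tempotron step: the membrane voltage is a weighted sum of postsynaptic potentials (PSPs), and on a classification error each weight is nudged by its summed PSP evaluated at the time of the voltage maximum. Kernel constants and the toy usage are illustrative assumptions, not the thesis code.

```python
import numpy as np

def psp_kernel(t, tau_m=10.0, tau_s=2.5):
    # Causal double-exponential PSP kernel; zero before the spike arrives.
    t = np.maximum(t, 0.0)
    return np.exp(-t / tau_m) - np.exp(-t / tau_s)

def tempotron_update(w, spike_times, label, t_grid, theta=1.0, lr=1e-2):
    # spike_times: list of arrays, one array of input spike times per synapse.
    psps = np.stack([psp_kernel(t_grid[None, :] - s[:, None]).sum(0)
                     if len(s) else np.zeros_like(t_grid)
                     for s in spike_times])          # (n_syn, n_t)
    v = w @ psps                                     # voltage trace
    t_max = np.argmax(v)
    fired = v[t_max] >= theta
    if fired != bool(label):                         # error: move toward label
        w = w + lr * (1 if label else -1) * psps[:, t_max]
    return w

# Toy usage: 4 synapses, random spikes in [0, 50] ms.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 50.0, 501)
spikes = [np.sort(rng.uniform(0, 50, rng.integers(1, 4))) for _ in range(4)]
w = tempotron_update(rng.normal(scale=0.1, size=4), spikes, label=1, t_grid=t_grid)
```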

    A supervised learning algorithm for learning precise timing of multiple spikes in multilayer spiking neural networks

    There is biological evidence that information in the brain is coded through the precise timing of spikes. However, training a population of spiking neurons in a multilayer network to fire at multiple precise times remains a challenging task. Delay learning and the effect of a delay on weight learning in a spiking neural network (SNN) have not been investigated thoroughly. This paper proposes a novel biologically plausible supervised learning algorithm for learning precisely timed multiple spikes in multilayer SNNs. Based on the spike-timing-dependent plasticity learning rule, the proposed learning method trains an SNN through the synergy between weight and delay learning. The weights of the hidden and output neurons are adjusted in parallel. The proposed learning method captures the contribution of synaptic delays to the learning of synaptic weights. Interaction between different layers of the network is realized through biofeedback signals sent by the output neurons. The trained SNN is used for the classification of spatiotemporal input patterns. The proposed learning method also trains the spiking network not to fire spikes at undesired times that contribute to misclassification. Experimental evaluation on benchmark datasets from the UCI machine learning repository shows that the proposed method achieves results comparable to classical rate-based methods such as deep belief networks and autoencoder models. Moreover, the proposed method can achieve higher classification accuracies than single-layer SNNs and a similar multilayer SNN.
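
    As a rough illustration of joint weight and delay learning driven by spike-time errors, the sketch below adjusts a synaptic weight with STDP-like exponential windows around the desired and actual output spikes, and shifts the delay toward the desired firing time. The window shapes and learning rates are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def stdp_weight_delay_step(w, d, t_pre, t_post_actual, t_post_desired,
                           tau=10.0, lr_w=0.01, lr_d=0.1):
    # One supervised step: strengthen the synapse in proportion to how close
    # its delayed arrival (t_pre + d) is to the desired spike, weaken it in
    # proportion to its closeness to the erroneous actual spike.
    def window(dt):
        return np.exp(-abs(dt) / tau)      # symmetric exponential window
    dw = lr_w * (window(t_post_desired - (t_pre + d))
                 - window(t_post_actual - (t_pre + d)))
    # Shift the delay so the arrival moves toward the desired firing time.
    dd = lr_d * np.sign(t_post_desired - t_post_actual)
    return w + dw, max(d + dd, 0.0)        # keep delays non-negative

# Toy usage: one synapse, output fired at 14 ms but should fire at 18 ms.
w, d = stdp_weight_delay_step(w=0.5, d=2.0, t_pre=10.0,
                              t_post_actual=14.0, t_post_desired=18.0)
```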

    Training multi-layer spiking neural networks with plastic synaptic weights and delays

    Spiking neural networks are usually considered the third generation of neural networks; they hold the potential for ultra-low power consumption on corresponding hardware platforms and are very suitable for temporal information processing. However, how to efficiently train spiking neural networks remains an open question, and most existing learning methods only consider the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multi-layer spiking neural networks based on the typical SpikeProp method. In the proposed method, both the synaptic weights and delays are treated as adjustable parameters to improve both the biological plausibility and the learning performance. In addition, the proposed method inherits the advantages of SpikeProp, which can make full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate that it achieves a competitive learning performance compared with existing related works. Finally, the differences between the proposed method and the existing mainstream multi-layer training algorithms are discussed.
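
    SpikeProp linearizes the output spike time around the threshold crossing, giving dt/dx = -(∂u/∂x)/(∂u/∂t) for any parameter x. Treating a synaptic delay like an extra shift of the input spike time yields an analogous delay gradient. The sketch below follows that derivation with SpikeProp's usual alpha-shaped PSP kernel; it mirrors the idea, not the paper's implementation.

```python
import numpy as np

def eps(t, tau=7.0):
    # Alpha-shaped PSP kernel used in SpikeProp: (t/tau) * exp(1 - t/tau), t > 0.
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def deps(t, tau=7.0):
    # Time derivative of the PSP kernel.
    return np.where(t > 0, np.exp(1.0 - t / tau) * (1.0 - t / tau) / tau, 0.0)

def spikeprop_wd_grads(w, d, t_pre, t_out):
    # Gradients of the output spike time t_out w.r.t. weights AND delays,
    # via dt/dx = -(du/dx) / (du/dt) at the threshold crossing, where
    # u(t) = sum_i w_i * eps(t - t_pre_i - d_i).
    s = t_out - t_pre - d              # kernel argument per synapse
    du_dt = np.sum(w * deps(s))        # membrane slope at the spike time
    dt_dw = -eps(s) / du_dt            # classic SpikeProp weight gradient
    dt_dd = w * deps(s) / du_dt        # a later arrival delays the crossing
    return dt_dw, dt_dd

# Toy usage: 4 input spikes at t = 0 with different delays, output at 9 ms.
w = np.array([1.0, 0.8, 1.2, 0.5])
d = np.array([1.0, 2.0, 3.0, 4.0])
dt_dw, dt_dd = spikeprop_wd_grads(w, d, t_pre=np.zeros(4), t_out=9.0)
```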

    A review of learning in biologically plausible spiking neural networks

    Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has significantly progressed in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in learning of spiking neurons is presented in this paper. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented. Next, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is presented. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.

    Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking

    The role of axonal synaptic delays in the efficacy and performance of artificial neural networks has been largely unexplored. In step-based analog-valued neural network models (ANNs), the concept is almost absent. In their spiking, neuroscience-inspired counterparts, there is hardly a systematic account of their effects on model performance in terms of accuracy and number of synaptic operations. This paper proposes a methodology for accounting for axonal delays in the training loop of deep Spiking Neural Networks (SNNs), intending to efficiently solve machine learning tasks on data with rich temporal dependencies. We then conduct an empirical study of the effects of axonal delays on model performance during inference for the Adding task, a benchmark for sequential regression, and for the Spiking Heidelberg Digits (SHD) dataset, commonly used for evaluating event-driven models. Quantitative results on the SHD show that SNNs incorporating axonal delays instead of explicit recurrent synapses achieve state-of-the-art performance, over 90% test accuracy, while needing fewer than half the trainable synapses. Additionally, we estimate the memory, in terms of total parameters, and the energy consumption required to accommodate such delay-trained models on a modern neuromorphic accelerator. These estimates are based on the number of synaptic operations and the reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a reduced parameterization which incorporates axonal delays leads to approximately 90% energy and memory reduction in digital hardware implementations for a similar performance in the aforementioned task.
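
    One simple way to realize per-synapse axonal delays at inference time is a circular spike buffer, as sketched below. This is an illustrative mechanism with assumed names and integer-step delays, not necessarily how the paper or the referenced accelerator implements it.

```python
import numpy as np

class DelayLine:
    # Per-synapse integer axonal delays via a circular spike buffer.
    # delays[i, j]: number of time steps from input j to output neuron i.
    def __init__(self, delays, n_inputs):
        self.delays = delays.astype(int)
        self.max_d = int(delays.max()) + 1
        self.buf = np.zeros((self.max_d, n_inputs))
        self.t = 0

    def step(self, spikes_in):
        # Push the current input spikes, then return the delayed view that
        # each postsynaptic neuron sees at this time step.
        self.buf[self.t % self.max_d] = spikes_in
        rows = (self.t - self.delays) % self.max_d       # (n_out, n_in)
        delayed = self.buf[rows, np.arange(self.buf.shape[1])]
        self.t += 1
        return delayed

# Toy usage: 2 output neurons, 3 inputs, random delays of up to 4 steps.
rng = np.random.default_rng(1)
dl = DelayLine(rng.integers(0, 5, size=(2, 3)), n_inputs=3)
for _ in range(6):
    out = dl.step(rng.integers(0, 2, size=3))   # delayed spikes, shape (2, 3)
```

    A buffer like this replaces explicit recurrent synapses with a read offset, which is why a delay parameterization can cut trainable synapses and memory.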

    Spiking Neural Networks


    A Delay Learning Algorithm Based on Spike Train Kernels for Spiking Neurons

    Neuroscience research confirms that synaptic delays are not constant but can be modulated. This paper proposes a supervised delay learning algorithm for spiking neurons with temporal encoding, in which both the weight and the delay of a synaptic connection can be adjusted to enhance the learning performance. The proposed algorithm first defines spike train kernels that transform discrete spike trains into continuous analog signals during the learning phase, so that common mathematical operations can be performed on them, and then derives supervised learning rules for synaptic weights and delays by the gradient descent method. The proposed algorithm is successfully applied to various spike train learning tasks, and the effects of the synaptic delay parameters are analyzed in detail. Experimental results show that the network with dynamic delays achieves higher learning accuracy in fewer learning epochs than the network with static delays. The delay learning algorithm is further validated on a practical example of an image classification problem. The results again show that it can achieve good classification performance with a proper receptive field. Therefore, synaptic delay learning is significant for both practical applications and theoretical research on spiking neural networks.
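
    The kernel idea is easy to sketch: convolving each spike with a smooth kernel turns a discrete train into a continuous signal, and an L2 distance between the smoothed actual and desired trains gives a differentiable loss that gradient descent on weights and delays can minimize. The Gaussian kernel and constants below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def smooth(spikes, t_grid, sigma=2.0):
    # Place a Gaussian kernel on each spike time to turn a discrete spike
    # train into a continuous analog signal (the spike train kernel idea).
    if len(spikes) == 0:
        return np.zeros_like(t_grid)
    return np.exp(-((t_grid[None, :] - spikes[:, None]) ** 2)
                  / (2 * sigma ** 2)).sum(axis=0)

def spike_train_error(actual, desired, t_grid, sigma=2.0):
    # L2 distance between the smoothed actual and desired spike trains;
    # the learning rules would descend the gradient of this quantity.
    diff = smooth(actual, t_grid, sigma) - smooth(desired, t_grid, sigma)
    dt = t_grid[1] - t_grid[0]
    return 0.5 * np.sum(diff ** 2) * dt

# Toy usage: actual spikes at 10 and 30 ms vs. desired spikes at 12 and 28 ms.
t = np.linspace(0.0, 50.0, 501)
err = spike_train_error(np.array([10.0, 30.0]), np.array([12.0, 28.0]), t)
```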