
    Learning First-to-Spike Policies for Neuromorphic Control Using Policy Gradients

    Artificial Neural Networks (ANNs) are currently used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, emulating the communication mechanism of biological neurons. Owing to their low energy consumption, SNNs are considered important candidates for co-processors in mobile devices. In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons. A policy gradient-based algorithm is derived using a Generalized Linear Model (GLM) for spiking neurons. Experimental results demonstrate the capability of online-trained SNNs, used as stochastic policies, to gracefully trade off energy consumption, as measured by the number of spikes, against control performance. Significant gains are shown compared to the standard approach of converting an offline-trained ANN into an SNN. Comment: Submitted for conference publication.
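    As a concrete illustration of the first-to-spike rule described above, the sketch below (Python) draws per-step output spikes until one neuron fires and returns the corresponding action and its latency. The function name, signature, and the stand-in probability model are our own assumptions; in the paper, the spiking probabilities come from a GLM neuron model conditioned on the encoded state.

        import numpy as np

        rng = np.random.default_rng(0)

        def first_to_spike_action(spike_prob_fn, n_actions, max_steps=50):
            # spike_prob_fn(t) gives each output neuron's probability of
            # spiking at time step t (hypothetical stand-in for the GLM model).
            for t in range(max_steps):
                spikes = rng.random(n_actions) < spike_prob_fn(t)
                if spikes.any():
                    # The first neuron to fire selects the action; ties
                    # within a step are broken by neuron index.
                    return int(np.argmax(spikes)), t
            return None, max_steps  # no spike within the horizon

        # Toy usage: two actions with constant firing probabilities.
        action, latency = first_to_spike_action(lambda t: np.array([0.1, 0.3]), 2)

    Note that lower firing probabilities mean fewer spikes (less energy) but longer decision latency, which is the energy/performance trade-off the abstract measures in spike counts.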

    CDNA-SNN: A New Spiking Neural Network for Pattern Classification using Neuronal Assemblies

    Spiking neural networks (SNNs) mimic their biological counterparts more closely than their predecessors and are considered the third generation of artificial neural networks. It has been proven that networks of spiking neurons have a higher computational capacity and lower power requirements than sigmoidal neural networks. This paper introduces a new type of spiking neural network that draws inspiration and incorporates concepts from neuronal assemblies in the human brain. The proposed network, termed CDNA-SNN, assigns each neuron learnable values known as Class-Dependent Neuronal Activations (CDNAs), which indicate the neuron's average relative spiking activity in response to samples from different classes. A new learning algorithm that categorizes the neurons into different class assemblies based on their CDNAs is also presented. These neuronal assemblies are trained via a novel training method based on Spike-Timing Dependent Plasticity (STDP) to have high activity for their associated class and a low firing rate for other classes. In addition, CDNAs are used to define a new type of STDP that controls the amount of plasticity based on the assemblies of the pre- and post-synaptic neurons. The performance of CDNA-SNN is evaluated on five datasets from the UCI machine learning repository, as well as MNIST and Fashion MNIST, using nested cross-validation for hyperparameter optimization. Our results show that CDNA-SNN significantly outperforms SWAT (p<0.0005) and SpikeProp (p<0.05) on 3/5 and SRESN (p<0.05) on 2/5 UCI datasets, while using a significantly lower number of trainable parameters. Furthermore, compared to other supervised, fully connected SNNs, the proposed SNN reaches the best performance on Fashion MNIST and comparable performance on MNIST and N-MNIST, while using far fewer (1-35%) parameters.
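    The sketch below (Python) shows one way the CDNAs could be estimated empirically from per-class spike counts; the row normalization is our reading of "average relative spiking activity", and all names are assumptions rather than the paper's notation.

        import numpy as np

        def estimate_cdnas(spike_counts, labels, n_classes):
            # spike_counts: (n_samples, n_neurons) spike counts per sample.
            # Returns (n_neurons, n_classes): entry [i, c] is neuron i's
            # mean activity on class c, normalized across classes.
            n_neurons = spike_counts.shape[1]
            cdna = np.zeros((n_neurons, n_classes))
            for c in range(n_classes):
                cdna[:, c] = spike_counts[labels == c].mean(axis=0)
            return cdna / (cdna.sum(axis=1, keepdims=True) + 1e-12)

        # Assembly assignment: each neuron joins the class it is most
        # active for, mirroring the categorization step in the abstract.
        # assemblies = estimate_cdnas(counts, y, k).argmax(axis=1)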

    Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection

    Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications. As we try to solve more advanced problems, increasing demands for computing and power resources have become inevitable. Spiking neural networks (SNNs) have attracted widespread interest as the third generation of neural networks due to their event-driven and low-power nature. SNNs, however, are difficult to train, mainly owing to their complex neuron dynamics and non-differentiable spike operations. Furthermore, their applications have been limited to relatively simple tasks such as image classification. In this study, we investigate the performance degradation of SNNs in a more challenging regression problem (i.e., object detection). Through our in-depth analysis, we introduce two novel methods: channel-wise normalization and signed neuron with imbalanced threshold, both of which provide fast and accurate information transmission for deep SNNs. Consequently, we present the first spike-based object detection model, called Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable results, comparable (up to 98%) to those of Tiny YOLO on the non-trivial PASCAL VOC and MS COCO datasets. Furthermore, Spiking-YOLO on a neuromorphic chip consumes approximately 280 times less energy than Tiny YOLO and converges 2.3 to 4 times faster than previous SNN conversion methods. Comment: Accepted to AAAI 2020.
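    The channel-wise normalization idea can be sketched as follows: when converting ANN weights for an SNN, each output channel is rescaled by its own maximum activation rather than by a single layer-wide maximum, so channels with small activations are not starved of spikes. The sketch below (Python) assumes the per-channel maxima were collected on training data beforehand; the function and variable names are ours, not the paper's.

        import numpy as np

        def channel_wise_normalize(w, prev_max, cur_max):
            # w: (out_ch, in_ch, kh, kw) convolution kernel.
            # prev_max: (in_ch,) per-channel max activations of the previous layer.
            # cur_max: (out_ch,) per-channel max activations of this layer.
            w = w * prev_max[None, :, None, None]    # undo the input rescaling
            return w / cur_max[:, None, None, None]  # map outputs into [0, 1]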

    SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes

    Event-based neuromorphic systems promise to reduce the energy consumption of deep learning tasks by replacing expensive floating-point operations on dense matrices with low-power, sparse, asynchronous operations on spike events. While these systems can be trained increasingly well using approximations of the back-propagation algorithm, such implementations usually require high-precision error signals for training and are therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. To accelerate our simulation, we show that a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, which lets us speed up training considerably compared to an explicit simulation of all spike events. In this way we demonstrate that, even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation achieves equivalent or better accuracy on the MNIST and CIFAR10 datasets than comparable state-of-the-art spiking neural networks trained with full-precision gradients. The algorithm, which we call SpikeGrad, is based on accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capabilities.
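    The accumulate-and-compare principle behind spike-based backpropagation can be illustrated with an integrate-and-fire accumulator that turns a stream of gradient contributions into signed unit spikes; this is a simplified Python sketch under our own naming, not the authors' implementation.

        def spike_discretize(grad_stream, threshold=1.0):
            # Integrate incoming gradient values; emit a +1/-1 spike and
            # subtract the threshold each time the residual crosses it.
            residual, spikes = 0.0, []
            for g in grad_stream:
                residual += g
                while residual >= threshold:
                    spikes.append(+1)
                    residual -= threshold
                while residual <= -threshold:
                    spikes.append(-1)
                    residual += threshold
            return spikes, residual  # residual holds the quantization error

        # sum(spikes) * threshold + residual recovers the exact gradient sum,
        # so a properly rescaled threshold controls the discretization error.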