7 research outputs found

    A Deep Unsupervised Feature Learning Spiking Neural Network with Binarized Classification Layers for the EMNIST Classification

    End-user AI is trained on large server farms with data collected from the users. With the ever-increasing demand for IoT devices, there is a need for deep learning approaches that can be implemented at the edge in an energy-efficient manner. In this work we approach this using spiking neural networks. The unsupervised learning technique of spike-timing-dependent plasticity (STDP) and binary activations are used to extract features from spiking input data. Gradient descent (backpropagation) is used only on the output layer to perform training for classification. The accuracies obtained for the balanced EMNIST data set compare favorably with other approaches. The effect of the stochastic gradient descent (SGD) approximations on the learning capabilities of our network is also explored.
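As a rough illustration of the STDP feature-learning rule mentioned in this abstract, a pair-based update can be sketched as follows. This is a generic textbook formulation, not the paper's exact rule, and all parameter values are illustrative:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate the synapse when the pre-synaptic
    spike precedes the post-synaptic spike, depress it otherwise.
    Times are in ms; parameters are illustrative, not from the paper."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # pre before post -> LTP
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # post before pre -> LTD
    return float(np.clip(w + dw, 0.0, 1.0))    # keep weight in [0, 1]
```

Applied over many input spike pairs, updates of this shape strengthen synapses whose inputs consistently predict the neuron's firing, which is what lets the feature-extraction layers learn without labels.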

    Deep Spiking Neural Networks: Study on the MNIST and N-MNIST Data Sets

    Deep learning, i.e., the use of deep convolutional neural networks (DCNNs), is a powerful tool for pattern recognition (image classification) and natural language (speech) processing. Deep convolutional networks use multiple convolution layers to learn from the input data. They have been used to classify the large ImageNet dataset with an accuracy of 96.6%. In this work deep spiking networks are considered. This is a new paradigm for implementing artificial neural networks using mechanisms that incorporate spike-timing-dependent plasticity, a learning rule discovered by neuroscientists. Advances in deep learning have opened up a multitude of new avenues that were once limited to science fiction. The promise of spiking networks is that they are less computationally intensive and much more energy efficient, as the spiking algorithms can be implemented on a neuromorphic chip such as Intel’s Loihi chip (which operates at low power because it runs asynchronously using spikes). Our work is based on the work of Masquelier and Thorpe, and Kheradpisheh et al. In particular, a study is done of how such networks classify MNIST image data and N-MNIST spiking data. The networks used consist of multiple convolution/pooling layers of spiking neurons trained using spike-timing-dependent plasticity (STDP), with a final classification layer implemented using a support vector machine (SVM).
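The Kheradpisheh et al. line of work cited here uses a simplified, timing-sign-only STDP rule in its convolutional layers: the weight change depends only on whether the pre-synaptic spike preceded the post-synaptic spike, with a multiplicative soft bound. A minimal sketch of that rule (learning-rate values are illustrative):

```python
def simplified_stdp(w, pre_before_post, a_plus=0.004, a_minus=0.003):
    """Simplified STDP in the style of Kheradpisheh et al.:
    dw = +a_plus  * w * (1 - w)  if pre fired before post (LTP),
    dw = -a_minus * w * (1 - w)  otherwise (LTD).
    The w*(1-w) factor keeps weights softly bounded in (0, 1)."""
    a = a_plus if pre_before_post else -a_minus
    return w + a * w * (1.0 - w)
```

Because only the sign of the spike-timing difference matters, the rule is cheap to evaluate, which suits the paper's goal of energy-efficient, neuromorphic-friendly learning.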

    Learning Behavior of Memristor-Based Neuromorphic Circuits in the Presence of Radiation

    In this paper, a feed-forward spiking neural network with memristive synapses is designed to learn a spatio-temporal pattern representing the 25-pixel character ‘B’ by separating correlated and uncorrelated afferents. The network uses spike-timing-dependent plasticity (STDP) learning behavior, which is implemented using biphasic neuron spikes. A TiO2 memristor non-linear drift model is used to simulate synaptic behavior in the neuromorphic circuit. The network uses a many-to-one topology with 25 pre-synaptic neurons (afferents), each connected to a memristive synapse, and one post-synaptic neuron. The memristor model is modified to include the experimentally observed effect of state-altering radiation. During the learning process, irradiation of the memristors alters their conductance state, and the effect on circuit learning behavior is determined. Radiation is observed to generally increase the synaptic weight of the memristive devices, making the network connections more conductive and less stable. However, the network appears to relearn the pattern when radiation ceases, though it takes longer to resolve the correlation and pattern. Network recovery time is proportional to the flux, intensity, and duration of the radiation. Further, at lower but continuous radiation exposure (flux of 1×10^10 cm⁻² s⁻¹ and below), the circuit resolves the pattern successfully for up to 100 s.
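The non-linear drift memristor model referenced in this abstract is commonly written as a state equation for the normalized doped-region width x, with a window function suppressing drift at the device boundaries. The sketch below uses the standard linear-ion-drift equations with a Joglekar-style window; the parameter values and the window choice are assumptions for illustration, and the paper's radiation-modified model is not reproduced here:

```python
def memristance(x, R_on=100.0, R_off=16e3):
    """Resistance as a mix of fully-doped (R_on) and undoped (R_off)
    regions, weighted by the normalized state variable x in [0, 1]."""
    return R_on * x + R_off * (1.0 - x)

def memristor_step(x, v, dt=1e-6, R_on=100.0, R_off=16e3,
                   mu_v=1e-14, D=1e-8, p=2):
    """One Euler step of dx/dt = (mu_v * R_on / D^2) * i(t) * f(x),
    with Joglekar window f(x) = 1 - (2x - 1)^(2p). Parameter values
    are illustrative, not taken from the paper."""
    i = v / memristance(x, R_on, R_off)          # Ohm's law for drive current
    window = 1.0 - (2.0 * x - 1.0) ** (2 * p)    # slows drift near x = 0 or 1
    dx = (mu_v * R_on / D ** 2) * i * window * dt
    return min(max(x + dx, 0.0), 1.0)
```

In this picture, the radiation effect the paper reports (weights becoming more conductive) corresponds to an externally driven increase in x on top of the voltage-driven drift.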

    Deep Convolutional Spiking Neural Networks for Image Classification

    Spiking neural networks are biologically plausible counterparts of artificial neural networks. Artificial neural networks are usually trained with stochastic gradient descent (SGD), while spiking neural networks are trained with bio-inspired spike-timing-dependent plasticity (STDP). Spiking networks could potentially help in reducing power usage owing to their binary activations. In this work, we use unsupervised STDP in the feature extraction layers of a neural network with instantaneous neurons to extract meaningful features. The extracted binary feature vectors are then classified using classification layers containing neurons with binary activations. Gradient descent (backpropagation) is used only on the output layer to perform training for classification. Surrogate gradients are proposed to perform backpropagation with binary gradients. The accuracies obtained for MNIST and the balanced EMNIST data set compare favorably with other approaches. The effect of the stochastic gradient descent (SGD) approximations on the learning capabilities of our network is also explored. We also studied catastrophic forgetting and its effect on spiking neural networks (SNNs). For the experiments regarding catastrophic forgetting, in the classification sections of the network we use a modified synaptic intelligence, which we refer to as the cost-per-synapse metric, as a regularizer to immunize the network against catastrophic forgetting in a Single-Incremental-Task (SIT) scenario. In the catastrophic forgetting experiments, we use the MNIST and EMNIST handwritten digits datasets, divided into five and ten incremental subtasks respectively. We also examine the behavior of the spiking neural network and empirically study the effect of various hyperparameters on its learning capabilities using the software tool SPYKEFLOW that we developed. We employ the MNIST, EMNIST and N-MNIST data sets to produce our results.
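The surrogate-gradient idea mentioned here addresses a basic obstacle: a binary (step) activation has a derivative of zero almost everywhere, so plain backpropagation learns nothing. The standard workaround is to use the step function in the forward pass and a smooth or boxcar approximation of its derivative in the backward pass. The boxcar shape below is one common choice and an assumption on our part, not necessarily the exact surrogate used in the paper:

```python
import numpy as np

def binary_forward(z):
    """Forward pass: Heaviside step, i.e. spike if the membrane
    potential z crosses the threshold (here 0)."""
    return (z > 0).astype(float)

def surrogate_grad(z, width=1.0):
    """Backward pass: the step's true derivative is 0 a.e., so
    backprop substitutes a boxcar of area 1 centered on the
    threshold; gradients flow only for z near the threshold."""
    return (np.abs(z) < width).astype(float) / (2.0 * width)
```

During training, the chain rule uses `surrogate_grad(z)` wherever the true (zero) derivative of `binary_forward` would appear, letting error signals reach the weights feeding the binary units.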

    Continuous Learning in a Single-Incremental-Task Scenario with Spike Features

    Deep neural networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning, that is, when a DNN is trained on a first task and then trained on the next task, it forgets the first task. This phenomenon of forgetting previous tasks is also referred to as catastrophic forgetting. On the other hand, the mammalian brain outperforms DNNs in terms of energy efficiency and the ability to learn sequentially without catastrophically forgetting. Here, we use bio-inspired spike-timing-dependent plasticity (STDP) in the feature extraction layers of the network with instantaneous neurons to extract meaningful features. In the classification sections of the network we use a modified synaptic intelligence, which we refer to as the cost-per-synapse metric, as a regularizer to immunize the network against catastrophic forgetting in a Single-Incremental-Task (SIT) scenario. In this study, we use the MNIST handwritten digits dataset, divided into five sub-tasks.
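Synaptic-intelligence-style regularization, which the cost-per-synapse metric modifies, works by penalizing movement of parameters that were important for earlier tasks. A minimal sketch of the quadratic surrogate loss (how the importance weights omega are accumulated is the part the paper modifies, and is not reproduced here):

```python
import numpy as np

def si_loss(task_loss, theta, theta_star, omega, c=0.1):
    """Synaptic-intelligence-style total loss:
        L = L_task + c * sum_k omega_k * (theta_k - theta_star_k)^2
    theta_star holds the parameter values at the end of the previous
    task; omega_k estimates how important parameter k was to that
    task, so important parameters are anchored while unimportant
    ones stay free to learn the new task. c is illustrative."""
    return task_loss + c * float(np.sum(omega * (theta - theta_star) ** 2))
```

With omega set to zero the penalty vanishes and training reduces to ordinary single-task learning; nonzero omega trades new-task loss against drift from the old solution, which is the mechanism that mitigates catastrophic forgetting in the SIT setting.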