
    Optimizing the energy consumption of spiking neural networks for neuromorphic applications

    In the last few years, spiking neural networks (SNNs) have been demonstrated to perform on par with regular convolutional neural networks (CNNs). Several works have proposed methods to convert a pre-trained CNN to a spiking CNN without a significant sacrifice of performance. We first demonstrate that quantization-aware training of CNNs leads to better accuracy in the converted SNNs. One of the benefits of converting CNNs to spiking CNNs is the ability to leverage the sparse computation of SNNs and consequently perform equivalent computation at lower energy consumption. Here we propose an efficient optimization strategy to train spiking networks at lower energy consumption while maintaining similar accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
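
    The two ingredients the abstract names, quantization-aware training of the CNN before conversion and an energy-aware training objective, can be sketched in a few lines. The PyTorch sketch below is illustrative only: the layer sizes, the 16-level activation quantizer, and the penalty weight are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantAwareCNN(nn.Module):
    """Toy CNN with fake-quantized activations and an activity penalty.
    Layer sizes, the 16-level quantizer, and the penalty weight are
    illustrative assumptions, not the authors' settings."""
    def __init__(self, levels=16):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, 10)
        self.levels = levels

    def quant_relu(self, x):
        # Clamp to [0, 1] and round to a fixed number of levels, so the
        # CNN's activations match the discrete firing rates an SNN can
        # represent after conversion.
        x = torch.clamp(F.relu(x), max=1.0)
        q = torch.round(x * self.levels) / self.levels
        return x + (q - x).detach()  # straight-through estimator

    def forward(self, x):
        a = self.quant_relu(self.conv(x))
        return self.fc(a.flatten(1)), a

model = QuantAwareCNN()
x, y = torch.randn(4, 1, 28, 28), torch.randint(0, 10, (4,))
logits, acts = model(x)
# Cross-entropy plus a mean-activation penalty: lower average firing
# rates mean fewer synaptic events, hence lower energy after conversion.
loss = F.cross_entropy(logits, y) + 1e-4 * acts.mean()
loss.backward()
```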

    Sentiment analysis on movie reviews by recurrent neural networks and long short-term memory

    Sentiment analysis has become an important tool for analysing reviews of any product or service. The same goes for movies: audiences are free to write their own reviews of the movies they watch, and those reviews can be positive or negative depending on their satisfaction. Automated sentiment analysis is important for producing accurate results in less time. Deep learning is a strong basis for building automated sentiment analysis because its multi-layer structure can classify data with fine-grained sensitivity. Upgrading the sentiment analysis with Recurrent Neural Networks (RNNs), adding Long Short-Term Memory (LSTM), and modifying the number of layers with the appropriate mathematical calculation can improve the analysis accuracy. The dataset of movie reviews is collected from the IMDB movie reviews database.
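
    As a rough illustration of the RNN-plus-LSTM classifier the abstract describes, here is a minimal PyTorch sketch; the vocabulary size, embedding width, hidden size, and single-layer design are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    """Minimal LSTM sentiment classifier of the kind the abstract
    describes. Vocabulary size, embedding width, and hidden size are
    illustrative assumptions."""
    def __init__(self, vocab=20000, embed=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one positive/negative logit

    def forward(self, tokens):               # tokens: (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(tokens))
        return self.head(h[-1]).squeeze(-1)  # read out final hidden state

model = SentimentLSTM()
batch = torch.randint(0, 20000, (2, 50))     # two fake tokenized reviews
probs = torch.sigmoid(model(batch))          # per-review positivity score
```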

    Temporal Convolution in Spiking Neural Networks: a Bio-mimetic Paradigm

    Recent spectacular advances in Artificial Intelligence (AI) can, in large part, be attributed to developments in Deep Learning (DL). In essence, DL is not a new concept. In many respects, DL shares characteristics with “traditional” types of Neural Network (NN). The main distinguishing feature is that it uses many more layers in order to learn increasingly complex features. Each layer convolves over the previous one, simplifying it and applying a function to a subsection of that layer. Deep Learning’s fantastic success can be attributed to dedicated researchers experimenting with many different groundbreaking techniques, but some of its triumph can also be attributed to fortune: it was the right technique at the right time. To function effectively, DL mainly requires two things: (a) vast amounts of training data and (b) a very specific type of computational capacity. These two requirements have been amply met by the growth of the internet and the rapid development of GPUs. As such, DL is an almost perfect fit for today’s technologies. However, DL is only a very rough approximation of how the brain works. More recently, Spiking Neural Networks (SNNs) have tried to simulate biological phenomena in a more realistic way. In SNNs, information is transmitted as discrete spikes of data rather than as a continuous weight or a differentiable activation function. In practical terms, this means that far more nuanced interactions can occur between neurons and that the network can run far more efficiently (e.g. in terms of the calculations needed and therefore the overall power requirements). Nevertheless, the big problem with SNNs is that, unlike DL, they do not “fit” well with existing technologies. Worse still, no one has yet come up with a definitive way to make SNNs function at a “deep” level. The difficulty is that, in essence, “deep” and “spiking” refer to fundamentally different characteristics of a neural network: “spiking” focuses on the activation of individual neurons, whereas “deep” concerns itself with the network architecture itself [1]. However, these two methods are in fact not contradictory; they have so far been developed in isolation from each other because of the prevailing technology driving each technique and the fundamental conceptual distance between the two biological paradigms. If advances in AI are to continue at the present rate, new technologies are going to have to be developed, and the contradictory aspects of DL and SNNs are going to have to be reconciled. Very recently, there have been a handful of attempts to amalgamate DL and SNNs in a variety of ways [2], one of the most exciting being the creation of a specific hierarchical learning paradigm in Recurrent SNNs (RSNNs) called e-prop [3]. However, this paper posits that such attempts have been hampered because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if it is included in a new model then the union between DL and RSNNs can be made in a more harmonious manner. The missing piece of the jigsaw is, in fact, the glial cell and the unacknowledged role it plays in neural processing. In this context, this paper examines how DL and SNNs can be combined, and how glial dynamics can not only address outstanding issues with the existing individual paradigms, for example the “weight transport” problem, but also act as the “glue” (pun intended) between the two paradigms.
    This idea has a direct parallel with the idea of convolution in DL, but with the added dimension of time: in this new paradigm it matters not only where events happen but also when they occur. The synergy between these two powerful paradigms hints at the direction and potential of what could be an important part of the next wave of development in AI.
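
    The central contrast the abstract draws, continuous DL activations versus discrete spikes whose timing carries information, can be made concrete with the standard leaky integrate-and-fire neuron. The sketch below is a generic textbook model rather than anything from the paper, and every parameter value is an illustrative assumption.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, the basic spiking unit the
    abstract contrasts with DL's continuous activations. Output is a
    list of discrete spike times; parameter values are illustrative."""
    v, spike_times = 0.0, []
    for step, i_t in enumerate(input_current):
        v += (dt / tau) * (i_t - v)      # leaky integration of the input
        if v >= v_thresh:                # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                  # membrane potential resets
    return spike_times

# A constant supra-threshold drive yields a regular spike train: the
# signal is carried by *when* the spikes occur, not by a real value.
print(lif_neuron(np.full(200, 1.5)))
```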

    Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions from a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3%/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models.
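
    The reason reservoir models can learn from so few labeled videos is that the recurrent weights stay fixed and only a linear readout on the reservoir's state is trained. The Python sketch below shows that idea in rate-based form; it stands in for the paper's spiking reservoir, and the reservoir size, leak rate, spectral radius, and 64-dimensional frame encoding are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir, shared across all videos; only a linear
# readout on its states would ever be trained. Sizes, leak rate, and
# spectral radius are illustrative assumptions, and this rate-based
# sketch stands in for the paper's spiking reservoir.
N_RES, N_IN, LEAK = 300, 64, 0.3
W_IN = rng.normal(0.0, 0.5, (N_RES, N_IN))
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep dynamics stable

def reservoir_signature(frames):
    """Drive the reservoir with a per-frame encoding (e.g. spike counts)
    and return its final state as the video's feature vector."""
    x = np.zeros(N_RES)
    for u in frames:                       # one update per video frame
        x = (1 - LEAK) * x + LEAK * np.tanh(W @ x + W_IN @ u)
    return x

# 40 frames of a 64-dim encoding; a linear classifier (e.g. logistic
# regression) trained on such signatures needs only a few examples per class.
features = reservoir_signature(rng.normal(size=(40, N_IN)))
```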