13,654 research outputs found

    Learning Mechanisms in Networks of Spiking Neurons

    Get PDF

    Deep Spiking Neural Networks: Study on the MNIST and N-MNIST Data Sets

    Get PDF
    Deep learning, i.e., the use of deep convolutional neural networks (DCNNs), is a powerful tool for pattern recognition (image classification) and natural language (speech) processing. Deep convolutional networks use multiple convolution layers to learn from the input data. They have been used to classify the large ImageNet dataset with an accuracy of 96.6%. In this work, deep spiking networks are considered. This is a new paradigm for implementing artificial neural networks, using mechanisms that incorporate spike-timing-dependent plasticity, a learning rule discovered by neuroscientists. Advances in deep learning have opened up a multitude of new avenues that were once limited to science fiction. The promise of spiking networks is that they are less computationally intensive and much more energy efficient, since the spiking algorithms can be implemented on a neuromorphic chip such as Intel's Loihi (which operates at low power because it runs asynchronously using spikes). Our work is based on the work of Masquelier and Thorpe, and of Kheradpisheh et al. In particular, a study is made of how such networks classify MNIST image data and N-MNIST spiking data. The networks used consist of multiple convolution/pooling layers of spiking neurons trained using spike-timing-dependent plasticity (STDP), with a final classification layer implemented by a support vector machine (SVM).
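
    As context for the learning rule mentioned above, the following is a minimal sketch, in the spirit of the simplified STDP rule described by Kheradpisheh et al., of how such a weight update might look. The learning rates a_plus and a_minus and the weight-dependence term are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def stdp_update(weights, pre_times, post_time, a_plus=0.004, a_minus=0.003):
    """Sketch of a simplified STDP update (after Kheradpisheh et al.).

    weights   : synaptic weights in [0, 1], shape (n_pre,)
    pre_times : spike time of each presynaptic neuron (np.inf if silent)
    post_time : spike time of the postsynaptic neuron
    a_plus, a_minus : illustrative learning rates (assumed values)
    """
    # Potentiate synapses whose presynaptic spike preceded the postsynaptic
    # spike; depress the rest. The w * (1 - w) term keeps weights in [0, 1].
    causal = pre_times <= post_time
    dw = np.where(causal, a_plus, -a_minus) * weights * (1.0 - weights)
    return np.clip(weights + dw, 0.0, 1.0)
```

    Because the update depends only on the sign of the spike-time difference, not its magnitude, rules of this family need no exponential time window, which is part of what makes them attractive for hardware.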

    Memory and information processing in neuromorphic systems

    Full text link
    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial, clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. (Comment: submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.)
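
    To make the contrast with clocked, von Neumann-style processing concrete, here is a toy sketch of an event-driven update loop in which each neuron's state lives alongside its synapses and is touched only when a spike arrives, rather than on every global clock tick. All structures and parameter values are illustrative assumptions, not a model of any specific chip.

```python
import heapq

def run_event_driven(weights, spikes, threshold=1.0, decay=0.9):
    """Toy asynchronous (event-driven) spiking update loop.

    weights : dict mapping source neuron -> list of (target, weight)
    spikes  : list of (time, neuron) input events
    threshold, decay : illustrative assumptions
    """
    potential = {}    # membrane potential, stored "next to" the synapses
    last_update = {}  # per-neuron time of last event (for lazy decay)
    queue = list(spikes)
    heapq.heapify(queue)
    out = []
    while queue:
        t, src = heapq.heappop(queue)
        for tgt, w in weights.get(src, []):
            # Lazy exponential decay: applied only when an event arrives,
            # so silent neurons cost nothing.
            dt = t - last_update.get(tgt, t)
            potential[tgt] = potential.get(tgt, 0.0) * (decay ** dt) + w
            last_update[tgt] = t
            if potential[tgt] >= threshold:
                potential[tgt] = 0.0
                out.append((t + 1, tgt))
                heapq.heappush(queue, (t + 1, tgt))  # unit delay (assumed)
    return out
```

    For example, run_event_driven({0: [(1, 0.6)]}, [(0.0, 0), (1.0, 0)]) drives neuron 1 over threshold on the second input spike; no computation happens between the two events.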

    Aspects of learning within networks of spiking neurons

    Get PDF
    Spiking neural networks have, in recent years, become a popular tool for investigating the properties and computational performance of large, massively connected networks of neurons. Equally interesting is the investigation of the potential computational power of individual spiking neurons. An overview is provided of current and relevant research into the Liquid State Machine, biologically inspired artificial STDP learning mechanisms, and aspects of the computational power of artificial, recurrent networks of spiking neurons. First, it is shown that, using simple structures of spiking Leaky Integrate-and-Fire (LIF) neurons, a network n(P) can be built to perform any program P that can be performed by a general parallel programming language. Next, a form of STDP learning with normalisation is developed, referred to as STDP+N learning. The effects of applying this STDP+N learning within recurrently connected networks of neurons are then investigated. It is shown experimentally that, in very specific circumstances, Anti-Hebbian and Hebbian STDP learning may be considered approximately equivalent processes. A metric is then developed that can be used to measure the distance between any two spike trains. The metric is then used, along with STDP+N learning, in an experiment examining the capacity of a single spiking neuron that receives multiple input spike trains to simultaneously learn many temporally precise input/output spike train associations. The STDP+N learning is further modified for use in recurrent networks of spiking neurons, giving the STDP+N Type 2 learning methodology. An experiment is devised which demonstrates that the Type 2 method of applying learning to the synapses of a recurrent network (effectively a randomly shifting locality of learning) can enable the network to learn firing patterns that the typical application of learning is unable to learn. The resulting networks could, in theory, be used to create the simple structures discussed in the first chapter of this work.
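
    The thesis develops its own spike-train metric; as an illustration of what a distance between two spike trains can look like, here is a sketch of the standard van Rossum (2001) metric, which convolves each train with an exponential kernel and takes an L2 distance. This is a well-known example offered for context, not the metric developed in the thesis, and the time constant tau is an assumed value.

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=10.0):
    """van Rossum distance between two spike trains, in closed form.

    train_a, train_b : arrays of spike times
    tau              : kernel time constant (assumed units: ms)
    """
    def corr(x, y):
        # Sum of exp(-|t_i - s_j| / tau) over all spike pairs.
        return np.exp(-np.abs(x[:, None] - y[None, :]) / tau).sum()
    a = np.asarray(train_a, dtype=float)
    b = np.asarray(train_b, dtype=float)
    d2 = 0.5 * (corr(a, a) + corr(b, b) - 2.0 * corr(a, b))
    return np.sqrt(max(d2, 0.0))
```

    The distance is zero for identical trains and grows smoothly as spikes are jittered, which is what makes kernel-based metrics of this kind usable as learning objectives for temporally precise spike train associations.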

    Multi-layered Spiking Neural Network with Target Timestamp Threshold Adaptation and STDP

    Full text link
    Spiking neural networks (SNNs) are good candidates for producing ultra-energy-efficient hardware. However, the performance of these models currently lags behind traditional methods. Introducing multi-layered SNNs is a promising way to reduce this gap. In this paper we propose a new threshold adaptation system that uses a target timestamp at which neurons should fire. We show that our method leads to state-of-the-art classification rates on the MNIST dataset (98.60%) and the Faces/Motorbikes dataset (99.46%) with an unsupervised SNN followed by a linear SVM. We also investigate the sparsity level of the network by testing different inhibition policies and STDP rules.
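
    The paper's adaptation mechanism drives each neuron to fire at a target timestamp; the following is a hedged sketch of that general idea, where the linear update and the learning rate lr are assumptions for illustration rather than the authors' exact rule.

```python
def adapt_threshold(threshold, fire_time, target_time, lr=0.01):
    """Sketch of timestamp-driven threshold adaptation (assumed rule).

    If the neuron fired earlier than the target timestamp, raise the
    threshold (it is too eager); if it fired later, lower the threshold
    so it fires sooner next time.
    """
    error = target_time - fire_time  # > 0: fired too early
    return max(threshold + lr * error, 0.0)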