
    Assembly-based STDP: A New Learning Rule for Spiking Neural Networks Inspired by Biological Assemblies

    Spiking Neural Networks (SNNs), an alternative to sigmoidal neural networks, incorporate time into their operations using discrete signals called spikes. Employing spikes enables SNNs to mimic any feedforward sigmoidal neural network with lower power consumption. Recently, a new type of SNN has been introduced for classification problems, known as the Degree of Belonging SNN (DoB-SNN). DoB-SNN is a two-layer spiking neural network that shows significant potential as an alternative SNN architecture and learning algorithm. This paper introduces a new variant of Spike-Timing Dependent Plasticity (STDP), which is based on assemblies of neurons and extends the DoB-SNN's training algorithm to multilayer architectures. The new learning rule, known as assembly-based STDP, employs the trained DoBs in each layer to train the next layer, building strong connections between neurons from the same assembly and inhibitory connections between neurons from different assemblies in two consecutive layers. The performance of the multilayer DoB-SNN is evaluated on five datasets from the UCI machine learning repository. Detailed comparisons on these datasets with other supervised learning algorithms show that the multilayer DoB-SNN achieves better performance on 4/5 datasets and comparable performance on the fifth, compared with multilayer algorithms that employ considerably more trainable parameters.
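
    The abstract does not give the update equations; the following is a minimal Python sketch of the stated idea only, strengthening weights between neurons assigned to the same assembly in consecutive layers and pushing cross-assembly weights toward inhibition. The assembly labels, learning rates, and function name are illustrative assumptions, not the authors' rule.

```python
import numpy as np

# Illustrative sketch only: potentiate synapses between same-assembly neurons
# in consecutive layers and depress synapses between different assemblies.
# Assembly labels and learning rates below are assumed for the example.

def assembly_based_update(weights, assembly_pre, assembly_post,
                          lr_exc=0.01, lr_inh=0.005):
    """weights[i, j]: synapse from pre-neuron j to post-neuron i."""
    same = assembly_post[:, None] == assembly_pre[None, :]  # same-assembly mask
    weights = weights + lr_exc * same       # strengthen within-assembly links
    weights = weights - lr_inh * (~same)    # build inhibitory cross-assembly links
    return weights

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 6))        # 6 pre-neurons -> 4 post-neurons
pre_assembly = np.array([0, 0, 1, 1, 2, 2])  # class assembly of each pre-neuron
post_assembly = np.array([0, 1, 2, 0])       # class assembly of each post-neuron
w = assembly_based_update(w, pre_assembly, post_assembly)
print(w.round(3))
```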

    CDNA-SNN: A New Spiking Neural Network for Pattern Classification using Neuronal Assemblies

    Spiking neural networks (SNNs) mimic their biological counterparts more closely than their predecessors and are considered the third generation of artificial neural networks. It has been shown that networks of spiking neurons have a higher computational capacity and lower power requirements than sigmoidal neural networks. This paper introduces a new type of spiking neural network that draws inspiration and incorporates concepts from neuronal assemblies in the human brain. The proposed network, termed CDNA-SNN, assigns each neuron learnable values known as Class-Dependent Neuronal Activations (CDNAs), which indicate the neuron's average relative spiking activity in response to samples from different classes. A new learning algorithm that categorizes the neurons into different class assemblies based on their CDNAs is also presented. These neuronal assemblies are trained via a novel training method based on Spike-Timing Dependent Plasticity (STDP) to have high activity for their associated class and a low firing rate for other classes. In addition, using CDNAs, a new type of STDP that controls the amount of plasticity based on the assemblies of the pre- and post-synaptic neurons is proposed. The performance of CDNA-SNN is evaluated on five datasets from the UCI machine learning repository, as well as MNIST and Fashion MNIST, using nested cross-validation for hyperparameter optimization. Our results show that CDNA-SNN significantly outperforms SWAT (p<0.0005) and SpikeProp (p<0.05) on 3/5 UCI datasets and SRESN (p<0.05) on 2/5, while using a significantly lower number of trainable parameters. Furthermore, compared to other supervised, fully connected SNNs, the proposed SNN reaches the best performance on Fashion MNIST and comparable performance on MNIST and N-MNIST, while using far fewer (1-35%) parameters.
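
    The abstract describes CDNAs only as a neuron's average relative spiking activity per class, used to assign neurons to class assemblies. A minimal sketch of that idea follows; the normalization and the argmax assembly assignment are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Sketch under assumptions: estimate each neuron's mean spike count per class,
# normalize it across classes to get a relative activation (the CDNA), and
# assign each neuron to the assembly of its highest-CDNA class.

def compute_cdnas(spike_counts, labels, n_classes):
    """spike_counts: (n_samples, n_neurons) spike counts per presented sample."""
    n_neurons = spike_counts.shape[1]
    mean_activity = np.zeros((n_classes, n_neurons))
    for c in range(n_classes):
        mean_activity[c] = spike_counts[labels == c].mean(axis=0)
    # relative activity of each neuron across classes (assumed normalization)
    return mean_activity / (mean_activity.sum(axis=0, keepdims=True) + 1e-12)

rng = np.random.default_rng(1)
counts = rng.poisson(lam=3.0, size=(300, 10))   # toy spike counts
y = rng.integers(0, 3, size=300)                # toy class labels
cdnas = compute_cdnas(counts, y, n_classes=3)
assemblies = cdnas.argmax(axis=0)               # class assembly per neuron
print(assemblies)
```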

    Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

    In recent years, the field of neuromorphic low-power systems, which consume orders of magnitude less power than conventional architectures, has gained significant momentum. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences or variable-length inputs remains difficult. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a train-and-constrain methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. This "train-and-constrain" method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of the artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the recurrent layer of the network onto IBM's Neurosynaptic System "TrueNorth", a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights and neural activities to 16 levels, and to limit fan-in to 64 inputs. We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ~17 µW.
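
    The "constrain" step the abstract describes, quantizing trained weights to 16 levels and capping fan-in at 64 inputs per neuron, can be sketched as below. Keeping the largest-magnitude weights when pruning and the uniform quantization grid are assumptions for illustration; the paper's exact discretization may differ.

```python
import numpy as np

# Sketch of the constrain step: keep at most 64 inputs per neuron and
# quantize the remaining weights to 16 uniformly spaced levels.

def constrain_weights(w, n_levels=16, max_fan_in=64):
    """w[i, j]: weight from input j to neuron i."""
    if w.shape[1] > max_fan_in:
        # threshold = magnitude of the 64th-largest input weight per neuron
        cut = np.partition(np.abs(w), -max_fan_in, axis=1)[:, -max_fan_in]
        w = np.where(np.abs(w) >= cut[:, None], w, 0.0)
    w_max = np.abs(w).max()
    step = 2 * w_max / (n_levels - 1)
    q = np.clip(np.round(w / step), -(n_levels // 2), n_levels // 2 - 1)
    return q * step                      # exactly n_levels possible values

rng = np.random.default_rng(2)
trained = rng.normal(0.0, 0.5, size=(32, 200))   # toy trained recurrent weights
constrained = constrain_weights(trained)
print(len(np.unique(constrained)), "distinct weight values")
print((constrained != 0).sum(axis=1).max(), "max fan-in")
```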

    Echo State Queueing Network: a new reservoir computing learning tool

    In the last decade, a new computational paradigm was introduced in the field of Machine Learning under the name of Reservoir Computing (RC). RC models are neural networks with a recurrent part (the reservoir) that does not participate in the learning process, while the rest of the system contains no recurrence (no neural circuit). This approach has grown rapidly due to its success in solving learning tasks and other computational applications. Some success was also observed with another recently proposed neural network designed using Queueing Theory, the Random Neural Network (RandNN). Both approaches have good properties as well as known drawbacks. In this paper, we propose a new RC model called the Echo State Queueing Network (ESQN), in which we use ideas from RandNNs to design the reservoir. ESQNs are Echo State Networks (ESNs) in which the reservoir has new dynamics inspired by recurrent RandNNs. The paper positions ESQNs within the broader Machine Learning field and provides examples of their use and performance. We show on widely used benchmarks that ESQNs are very accurate tools, and we illustrate how they compare with standard ESNs. Comment: Proceedings of the 10th IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, USA, 201
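
    The abstract does not give the ESQN state equations, so the sketch below shows only the standard ESN baseline it builds on: a fixed random reservoir with tanh dynamics and a linear readout fitted by ridge regression. In an ESQN the reservoir update would be replaced by the RandNN-inspired queueing dynamics, which are not reproduced here; all sizes and scalings are illustrative assumptions.

```python
import numpy as np

# Minimal echo state network: only the linear readout is trained.
rng = np.random.default_rng(3)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)   # fixed, untrained dynamics
        states.append(x.copy())
    return np.array(states)

# toy one-step-ahead prediction of a sine wave
u = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = run_reservoir(u[:-1])
y = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # readout fit
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```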

    A parallel Fortran framework for neural networks and deep learning

    This paper describes neural-fortran, a parallel Fortran framework for neural networks and deep learning. It features a simple interface for constructing feed-forward neural networks of arbitrary structure and size, several activation functions, and stochastic gradient descent as the default optimization algorithm. Neural-fortran also leverages the Fortran 2018 standard collective subroutines to achieve data-based parallelism on shared- or distributed-memory machines. First, I describe the implementation of neural networks with Fortran derived types, whole-array arithmetic, and collective sum and broadcast operations to achieve parallelism. Second, I demonstrate the use of neural-fortran in an example of recognizing hand-written digits from images. Finally, I evaluate the computational performance in both serial and parallel modes. Ease of use and computational performance are similar to an existing popular machine learning framework, making neural-fortran a viable candidate for further development and use in production. Comment: Submitted to ACM SIGPLAN Fortran Forum. Reviewed by Arjen Markus and Izaak Beekma
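
    The data-parallel pattern the abstract describes, where each worker (a Fortran "image") computes gradients on its shard of the data and a collective sum combines them so every worker applies the same update, can be sketched as follows. This is not neural-fortran's API: the linear model, the in-process co_sum stand-in, and all names are assumptions used only to illustrate the pattern.

```python
import numpy as np

# Illustrative data-parallel SGD: shard the batch across workers, sum the
# per-worker gradients with a collective, apply one identical update.

def loss_grad(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def co_sum(arrays):
    """In-process stand-in for the Fortran 2018 co_sum collective."""
    return np.sum(arrays, axis=0)

rng = np.random.default_rng(4)
X = rng.normal(size=(512, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=512)

n_images = 4                                  # number of parallel workers
w = np.zeros(8)
shards = np.array_split(np.arange(512), n_images)
for step in range(200):
    local_grads = [loss_grad(w, X[idx], y[idx]) for idx in shards]
    g = co_sum(local_grads) / n_images        # average gradient across workers
    w -= 0.1 * g                              # identical update on every worker
print("error:", np.linalg.norm(w - w_true))
```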