
    Perceptual Modeling of Tinnitus Pitch and Loudness

    Tinnitus is the phantom perception of sound, experienced by 10-15% of the global population. Computational models have been used to investigate the mechanisms underlying the generation of tinnitus-related activity. However, existing computational models have rarely benchmarked the modelled perception of a phantom sound against recorded data relating to a person's perception of tinnitus characteristics, such as pitch or loudness. This paper details the development of two perceptual models of tinnitus. The models are validated using empirical data from people with tinnitus, and their performance is compared with existing perceptual models of tinnitus pitch. The first model extends existing perceptual models of tinnitus, while the second model utilises an entirely novel approach to modelling tinnitus perception using a Linear Mixed Effects (LME) model. The LME model is also used to model the perceived loudness of the phantom sound, which has not been considered in previous models. The LME model produces an accurate model of tinnitus pitch and loudness and shows that both tinnitus-related activity and individual perception of sound are factors in the perception of the phantom sound that characterizes tinnitus.
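
    As a rough illustration of the LME approach described above, the sketch below fits a linear mixed-effects model of pitch-likeness ratings with statsmodels, using tinnitus-related activity as a fixed effect and a per-subject random intercept for individual perception. The column names (subject, likeness, activity, audiogram_loss) and the data file are hypothetical placeholders, not the authors' dataset or code.

    # Minimal sketch, assuming one rating per subject x test frequency.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("tinnitus_ratings.csv")  # hypothetical data file

    # Fixed effects: modelled tinnitus-related activity and hearing loss at the
    # test frequency; the per-subject random intercept captures individual
    # perception of sound.
    model = smf.mixedlm("likeness ~ activity + audiogram_loss",
                        data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())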

    Assembly-based STDP: A New Learning Rule for Spiking Neural Networks Inspired by Biological Assemblies

    Spiking Neural Networks (SNNs), an alternative to sigmoidal neural networks, incorporate time into their operations using discrete signals called spikes. Employing spikes enables SNNs to mimic any feedforward sigmoidal neural network with lower power consumption. Recently, a new type of SNN has been introduced for classification problems, known as the Degree of Belonging SNN (DoB-SNN). The DoB-SNN is a two-layer spiking neural network that shows significant potential as an alternative SNN architecture and learning algorithm. This paper introduces a new variant of Spike-Timing Dependent Plasticity (STDP), which is based on assemblies of neurons and extends the DoB-SNN's training algorithm to multilayer architectures. The new learning rule, known as assembly-based STDP, employs the trained DoBs in each layer to train the next layer, building strong connections between neurons from the same assembly while creating inhibitory connections between neurons from different assemblies in two consecutive layers. The performance of the multilayer DoB-SNN is evaluated on five datasets from the UCI machine learning repository. Detailed comparisons on these datasets with other supervised learning algorithms show that the multilayer DoB-SNN achieves better performance on 4/5 datasets and comparable performance on the 5th, when compared to multilayer algorithms that employ considerably more trainable parameters.
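
    The sketch below illustrates the general idea of an assembly-gated STDP update: a standard pair-based STDP change is applied within an assembly, while connections between neurons from different assemblies are pushed towards inhibition. This is a minimal sketch under assumed parameter names (a_plus, a_minus, tau) and assembly labels; it is not the exact rule from the paper.

    import numpy as np

    def assembly_stdp(w, t_pre, t_post, assembly_pre, assembly_post,
                      a_plus=0.05, a_minus=0.04, tau=10.0):
        """Return an updated weight for one pre/post spike pair (illustrative only)."""
        dt = t_post - t_pre
        if dt >= 0:   # pre fires before post: potentiation window
            dw = a_plus * np.exp(-dt / tau)
        else:         # post fires before pre: depression window
            dw = -a_minus * np.exp(dt / tau)
        if assembly_pre == assembly_post:
            return w + dw                  # strengthen within-assembly connection
        return min(w - abs(dw), 0.0)       # drive cross-assembly connection inhibitory

    # Example: pre neuron (assembly 0) fires at 12 ms, post neuron (assembly 0) at 15 ms
    print(assembly_stdp(0.2, 12.0, 15.0, assembly_pre=0, assembly_post=0))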

    CDNA-SNN: A New Spiking Neural Network for Pattern Classification using Neuronal Assemblies

    Spiking neural networks (SNNs) mimic their biological counterparts more closely than their predecessors and are considered the third generation of artificial neural networks. It has been proven that networks of spiking neurons have a higher computational capacity and lower power requirements than sigmoidal neural networks. This paper introduces a new type of spiking neural network that draws inspiration from, and incorporates concepts of, neuronal assemblies in the human brain. The proposed network, termed CDNA-SNN, assigns each neuron learnable values known as Class-Dependent Neuronal Activations (CDNAs), which indicate the neuron's average relative spiking activity in response to samples from different classes. A new learning algorithm that categorizes the neurons into different class assemblies based on their CDNAs is also presented. These neuronal assemblies are trained via a novel training method based on Spike-Timing Dependent Plasticity (STDP) to have high activity for their associated class and a low firing rate for other classes. In addition, using CDNAs, a new type of STDP that controls the amount of plasticity based on the assemblies of the pre- and post-synaptic neurons is proposed. The performance of CDNA-SNN is evaluated on five datasets from the UCI machine learning repository, as well as MNIST and Fashion-MNIST, using nested cross-validation for hyperparameter optimization. Our results show that CDNA-SNN significantly outperforms SWAT (p<0.0005) and SpikeProp (p<0.05) on 3/5 and SRESN (p<0.05) on 2/5 UCI datasets while using a significantly lower number of trainable parameters. Furthermore, compared to other supervised, fully connected SNNs, the proposed SNN achieves the best performance on Fashion-MNIST and comparable performance on MNIST and N-MNIST, while also using far fewer (1-35%) parameters.
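
    To make the CDNA idea concrete, the sketch below estimates each neuron's average relative spiking activity per class from recorded spike counts and then groups neurons into class assemblies by their strongest activation. The array shapes, names, and toy data are assumptions for illustration; the paper learns CDNAs as part of training rather than computing them this way.

    import numpy as np

    def compute_cdnas(spike_counts, labels, n_classes):
        """spike_counts: (n_samples, n_neurons) counts; labels: (n_samples,) class ids."""
        n_neurons = spike_counts.shape[1]
        cdna = np.zeros((n_neurons, n_classes))
        for c in range(n_classes):
            cdna[:, c] = spike_counts[labels == c].mean(axis=0)
        # Normalise per neuron so CDNAs express relative activity across classes.
        cdna /= cdna.sum(axis=1, keepdims=True) + 1e-12
        return cdna

    counts = np.random.poisson(3.0, size=(100, 8))   # toy spike counts
    labels = np.random.randint(0, 2, size=100)       # toy class labels
    cdnas = compute_cdnas(counts, labels, n_classes=2)
    assemblies = cdnas.argmax(axis=1)                # assign each neuron to a class assembly
    print(cdnas.round(2), assemblies)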