7 research outputs found

    The Enhancement of Evolving Spiking Neural Network with Firefly Algorithm

    This study presents the integration of the Evolving Spiking Neural Network (ESNN) with the Firefly Algorithm (FA) to optimize the parameters of the ESNN model. Since ESNN lacks the ability to automatically select its optimum parameters, the Firefly Algorithm, one of the nature-inspired metaheuristic algorithms, is used as a new parameter optimizer for ESNN. The proposed method, ESNN-FA, determines the optimum values of the ESNN parameters: the modulation factor (Mod), similarity factor (Sim) and threshold factor (C). Five standard datasets from the UCI Machine Learning Repository are used to measure the effectiveness of the proposed work. The classification results show an increase in accuracy over standard ESNN for all datasets except the iris dataset, where the proposed method achieves 84% compared with 89.33% for standard ESNN. The remaining datasets achieve higher classification accuracy than standard ESNN: breast cancer with 92.12% versus 66.18%, diabetes with 68.25% versus 38.46%, heart with 78.15% versus 66.3%, and wine with 78.66% versus 44.45%.
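
    The abstract does not include an implementation, so the following is a minimal sketch of how a Firefly Algorithm could search the three ESNN parameters (Mod, Sim, C). The evaluate_esnn fitness function, the parameter bounds and the FA coefficients are illustrative assumptions, not the authors' code; in the paper's setting the fitness would be the classification accuracy of an ESNN trained with the candidate parameters.

        import numpy as np

        # Hypothetical placeholder: in the paper's setting this would train an ESNN
        # with the candidate (Mod, Sim, C) and return its classification accuracy.
        def evaluate_esnn(params):
            mod, sim, c = params
            return -((mod - 0.9) ** 2 + (sim - 0.6) ** 2 + (c - 0.7) ** 2)  # toy fitness

        def firefly_optimize(n_fireflies=20, n_iter=50, beta0=1.0, gamma=1.0, alpha=0.2):
            bounds = np.array([[0.0, 1.0]] * 3)                 # assumed ranges for Mod, Sim, C
            pop = np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_fireflies, 3))
            fit = np.array([evaluate_esnn(p) for p in pop])
            for _ in range(n_iter):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if fit[j] > fit[i]:                     # j is brighter, so i moves toward j
                            r2 = np.sum((pop[i] - pop[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                            pop[i] += beta * (pop[j] - pop[i]) + alpha * (np.random.rand(3) - 0.5)
                            pop[i] = np.clip(pop[i], bounds[:, 0], bounds[:, 1])
                            fit[i] = evaluate_esnn(pop[i])
            best = np.argmax(fit)
            return pop[best], fit[best]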

    Multi-modal association learning using spike-timing dependent plasticity (STDP)

    We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. Through this approach, the rules of learning involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor-choice pairs. Prior to the learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the Spike-Timing-Dependent Plasticity (STDP) algorithm, which depends on the timing and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. For response-group association, we follow reward-modulated STDP in the RL setting, wherein the firing rate of the response groups determines the reward that is given. We perform a number of experiments using existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning using real data, that is, an experiment conducted on a sample of face–speech data collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance in combining heterogeneous data (face–speech). This finding opens possibilities for expanding RL in the field of biometric authentication.
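
    As a concrete illustration of the pair-based STDP rule the abstract refers to, the sketch below updates a synaptic weight from the relative timing of pre- and post-synaptic spikes. The time constants, learning rates and the face/speech usage example are assumptions for illustration, not the paper's reported settings.

        import numpy as np

        # Pair-based STDP: potentiate when the presynaptic spike precedes the
        # postsynaptic one, depress otherwise, with exponential time windows.
        def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            dt = t_post - t_pre                      # spike-time difference in ms
            if dt >= 0:
                return a_plus * np.exp(-dt / tau_plus)
            return -a_minus * np.exp(dt / tau_minus)

        # Toy usage: strengthen a face-to-speech association weight when the
        # "face" neuron fires 5 ms before the "speech" neuron.
        w = 0.5
        w = np.clip(w + stdp_dw(t_pre=100.0, t_post=105.0), 0.0, 1.0)

    In the reward-modulated variant the abstract describes, this weight change would additionally be scaled by a reward signal derived from the firing rate of the response groups.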

    Parameter optimization of evolving spiking neural networks using improved firefly algorithm for classification tasks

    Evolving Spiking Neural Network (ESNN) is a third-generation artificial neural network that has been widely used in numerous studies in recent years. However, there are issues with ESNN that need to be improved; one of them is that its parameters, namely the modulation factor (Mod), similarity factor (Sim) and threshold factor (C), have to be manually tuned to find optimal values suitable for a particular problem. The objective of the proposed work is to automatically determine the optimum values of the ESNN parameters for various datasets by integrating the Firefly Algorithm (FA) optimizer into the ESNN training phase and adaptively searching for the best parameter values. In this study, FA has been modified and improved, and applied to improve the ESNN structure and its classification accuracy. Five benchmark datasets from the University of California, Irvine (UCI) Machine Learning Repository have been used to measure the effectiveness of the integrated model. Performance analysis of the proposed work was conducted by calculating classification accuracy and comparing it with other parameter optimisation methods. The experimental results show that the proposed algorithm attains the optimal parameter values for ESNN.
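
    For context on where the three parameters act, the sketch below follows the commonly described one-pass ESNN training rule (rank-order / Thorpe neuron model): Mod sets the connection weights from firing order, C scales the firing threshold, and Sim decides whether a new output neuron is merged with an existing one. The spike-encoding step, class bookkeeping and the dictionary-based repository are simplifications for illustration, not the authors' implementation.

        import numpy as np

        # One training sample: `order` holds the firing order (0, 1, 2, ...) of the
        # encoded input neurons; the encoding step itself is omitted here.
        def train_sample(order, repository, mod, sim, c):
            w = mod ** order                          # weights decay with later firing order
            psp_max = np.sum(w * (mod ** order))      # maximum postsynaptic potential
            theta = c * psp_max                       # firing threshold scaled by C

            for neuron in repository:                 # merge with a sufficiently similar neuron
                if np.linalg.norm(neuron["w"] - w) <= sim:
                    n = neuron["merged"]
                    neuron["w"] = (neuron["w"] * n + w) / (n + 1)
                    neuron["theta"] = (neuron["theta"] * n + theta) / (n + 1)
                    neuron["merged"] = n + 1
                    return repository

            repository.append({"w": w, "theta": theta, "merged": 1})
            return repository

        # Toy usage with five encoded inputs firing in some order:
        repo = train_sample(np.array([0, 3, 1, 4, 2]), [], mod=0.9, sim=0.1, c=0.7)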

    Parameter optimization of evolving spiking neural network with dynamic population particle swarm optimization

    Evolving Spiking Neural Network (ESNN) is widely used in classification problems. However, ESNN, like other neural networks, is unable to find the optimum values of its own parameters, which are crucial for classification accuracy. Thus, in this study, ESNN is integrated with an improved Particle Swarm Optimization (PSO) known as Dynamic Population Particle Swarm Optimization (DPPSO) to optimize the ESNN parameters: the modulation factor (Mod), similarity factor (Sim) and threshold factor (C). To find the optimum ESNN parameter values, DPPSO uses a dynamic population that removes the lowest-fitness particle at every pre-defined interval of iterations. The integration of ESNN-DPPSO facilitates the search for optimal ESNN parameters during the training stage. Performance is measured by classification accuracy and compared with the existing method. Five datasets obtained from the University of California, Irvine (UCI) Machine Learning Repository are used in this study. The experimental results show better accuracy compared with the existing technique, thus improving the ESNN method's ability to optimise its parameter values.
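
    A minimal sketch of the dynamic-population idea described above: standard global-best PSO over (Mod, Sim, C) that deletes the worst particle at a fixed interval of iterations. The fitness function, parameter bounds, inertia and acceleration coefficients are illustrative assumptions rather than the paper's settings.

        import numpy as np

        # Hypothetical fitness: in the paper's setting this would be the classification
        # accuracy of an ESNN trained with the candidate (Mod, Sim, C).
        def fitness(p):
            return -np.sum((p - np.array([0.9, 0.6, 0.7])) ** 2)   # toy surrogate

        def dppso(n_particles=20, n_iter=100, drop_every=10, w=0.7, c1=1.5, c2=1.5):
            dim, lo, hi = 3, 0.0, 1.0                               # assumed bounds for Mod, Sim, C
            x = np.random.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
            gbest = pbest[np.argmax(pbest_f)].copy()

            for it in range(1, n_iter + 1):
                r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([fitness(p) for p in x])
                better = f > pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[np.argmax(pbest_f)].copy()

                # Dynamic population: remove the worst particle every `drop_every` iterations.
                if it % drop_every == 0 and len(x) > 2:
                    worst = np.argmin(pbest_f)
                    x, v = np.delete(x, worst, 0), np.delete(v, worst, 0)
                    pbest, pbest_f = np.delete(pbest, worst, 0), np.delete(pbest_f, worst, 0)

            return gbest, np.max(pbest_f)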
