8 research outputs found

    Improved storage capacity of Hebbian learning attractor neural network with bump formations

    The final publication is available at Springer via http://dx.doi.org/10.1007/11840817_25. Proceedings of the 16th International Conference on Artificial Neural Networks, Athens, Greece, September 10-14, 2006, Part I.
    Recently, bump formations in attractor neural networks with distance-dependent connectivities have become of increasing interest in biological and computational neuroscience. Although distance-dependent connectivity is common in biological networks, a common fault of these networks is the sharp drop in the number of patterns p that can be remembered when the activity changes from global to bump-like, which effectively makes these networks far less efficient. In this paper we present a bump-based recursive network specially designed to increase its capacity, which becomes comparable with that of a randomly connected sparse network. To this aim, we have tested a selection of 700 natural images on a network with N = 64K neurons and connectivity per neuron C. We have shown that the capacity of the network is of the order of C, in accordance with the capacity of a highly diluted network. When the number of connections per neuron is preserved, a non-trivial dependence of the capacity on the radius of the connectivity is observed. Our results show that the drop in capacity of the bumpy network can be avoided.
    The authors acknowledge the financial support from the Spanish Grants DGI.M. CyT. FIS2005-1729, Plan de Promoción de la Investigación UNED and TIN 2004–07676-G01-01. We also thank David Dominguez for the fruitful discussion of the manuscript.
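    The following is a minimal NumPy sketch of the kind of model this abstract describes: a Hebbian (covariance-rule) attractor network whose connections are restricted to a local neighbourhood on a ring, so retrieval activity can form a bump. The network size, coding level, radius and retrieval loop are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P, a, radius = 1000, 20, 0.1, 100   # neurons, patterns, coding level, connection radius (illustrative)

    # Sparse binary patterns with mean activity a
    xi = (rng.random((P, N)) < a).astype(float)

    # Distance-dependent connectivity mask on a ring: connect only neurons within `radius`
    idx = np.arange(N)
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]), N - np.abs(idx[:, None] - idx[None, :]))
    C_mask = (dist <= radius) & (dist > 0)

    # Hebbian (covariance) learning rule, restricted to the local connections
    J = ((xi - a).T @ (xi - a)) / (a * (1 - a) * N)
    J *= C_mask

    def retrieve(state, steps=30, theta=0.0):
        """Iterate the network dynamics from a cue and return the final state."""
        for _ in range(steps):
            state = (J @ state - theta > 0).astype(float)
        return state

    # Cue the network with a noisy version of the first pattern and measure the overlap
    cue = xi[0].copy()
    flip = rng.random(N) < 0.05
    cue[flip] = 1 - cue[flip]
    out = retrieve(cue)
    overlap = ((out - a) @ (xi[0] - a)) / (a * (1 - a) * N)
    print(f"overlap with stored pattern: {overlap:.2f}")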

    The Enhancement of Evolving Spiking Neural Network with Firefly Algorithm

    This study presents the integration of the Evolving Spiking Neural Network (ESNN) with the Firefly Algorithm (FA) for parameter optimization of the ESNN model. Since ESNN lacks the ability to automatically select optimum parameters, the Firefly Algorithm (FA), a nature-inspired metaheuristic algorithm, is used as a new parameter optimizer for ESNN. The proposed method, ESNN-FA, is used to determine the optimum values of the ESNN parameters, namely the modulation factor (Mod), the similarity factor (Sim) and the threshold factor (C). Five standard datasets from the UCI Machine Learning Repository are used to measure the effectiveness of the proposed work. The classification results show an increase in accuracy over the standard ESNN for all datasets except the iris dataset, whose accuracy of 84% is lower than the standard ESNN's 89.33%. The remaining datasets achieved higher classification accuracy than the standard ESNN: breast cancer with 92.12% versus 66.18%, diabetes with 68.25% versus 38.46%, heart with 78.15% versus 66.3%, and wine with 78.66% versus 44.45%.
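    For context, below is a simplified sketch of the standard one-pass ESNN training rule (rank-order model), showing where the three parameters enter: Mod sets the rank-order weights, C scales the firing threshold, and Sim decides whether a new output neuron is merged with an existing one. The function and variable names are assumptions for illustration; class-wise neuron repositories and the spike-propagation step are omitted.

    import numpy as np

    def esnn_train_one_pass(encoded_samples, mod, sim, c):
        """Illustrative sketch of one-pass ESNN training (rank-order model).

        encoded_samples: list of arrays, one per training sample, where
        order[j] is the firing rank of pre-synaptic neuron j.
        mod, sim, c correspond to the modulation, similarity and threshold
        factors named in the abstract (the code itself is an assumption).
        """
        repository = []   # list of (weights, threshold) pairs
        for order in encoded_samples:
            w = mod ** order                      # weight from firing rank
            psp_max = np.sum(w * (mod ** order))  # maximal post-synaptic potential
            theta = c * psp_max                   # firing threshold
            # Merge with an existing neuron if the weight vectors are similar
            # enough; otherwise add a new neuron to the repository.
            merged = False
            for k, (w_k, th_k) in enumerate(repository):
                if np.linalg.norm(w - w_k) < sim:
                    repository[k] = ((w + w_k) / 2, (theta + th_k) / 2)
                    merged = True
                    break
            if not merged:
                repository.append((w, theta))
        return repository

    # Example: three samples encoded into firing ranks of 5 pre-synaptic neurons.
    orders = [np.array([0, 1, 2, 3, 4]), np.array([4, 3, 2, 1, 0]), np.array([0, 1, 2, 4, 3])]
    neurons = esnn_train_one_pass(orders, mod=0.9, sim=0.3, c=0.7)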

    Parameter optimization of evolving spiking neural networks using improved firefly algorithm for classification tasks

    The Evolving Spiking Neural Network (ESNN) is a third-generation artificial neural network that has been widely used in numerous studies in recent years. However, there are issues with ESNN that need to be improved; one of them is that its parameters, namely the modulation factor (Mod), the similarity factor (Sim) and the threshold factor (C), have to be manually tuned to optimal values suitable for each particular problem. The objective of the proposed work is to automatically determine the optimum values of the ESNN parameters for various datasets by integrating the Firefly Algorithm (FA) optimizer into the ESNN training phase and adaptively searching for the best parameter values. In this study, FA has been modified and improved, and was applied to improve the ESNN structure and the classification accuracy. Five benchmark datasets from the University of California, Irvine (UCI) Machine Learning Repository have been used to measure the effectiveness of the integrated model. Performance analysis of the proposed work was conducted by calculating classification accuracy and comparing it with other parameter optimization methods. The experimental results show that the proposed algorithm attains optimal parameter values for ESNN.
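    A minimal sketch of a firefly-style search over the three ESNN parameters is shown below. It assumes a user-supplied evaluate() function that trains an ESNN with a candidate (Mod, Sim, C) vector and returns a validation accuracy; the constants, bounds and the dummy objective in the usage lines are illustrative, not the paper's modified FA.

    import numpy as np

    def firefly_search(evaluate, bounds, n_fireflies=10, n_iter=30,
                       alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        """Firefly-style search for ESNN parameters (Mod, Sim, C).

        evaluate(params) is a placeholder assumed to train an ESNN with the
        given parameter vector and return validation accuracy (higher is better).
        bounds: array of shape (3, 2) with (low, high) per parameter.
        """
        rng = np.random.default_rng(seed)
        low, high = bounds[:, 0], bounds[:, 1]
        dim = len(low)
        pos = rng.uniform(low, high, size=(n_fireflies, dim))
        fit = np.array([evaluate(p) for p in pos])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if fit[j] > fit[i]:                       # j is brighter: move i toward j
                        r2 = np.sum((pos[i] - pos[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                        pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
                        pos[i] = np.clip(pos[i], low, high)
                        fit[i] = evaluate(pos[i])
        best = np.argmax(fit)
        return pos[best], fit[best]

    # Usage with a dummy objective standing in for ESNN validation accuracy.
    bounds = np.array([[0.01, 0.99]] * 3)   # Mod, Sim, C search ranges (assumed)
    params, acc = firefly_search(lambda p: -np.sum((p - 0.5) ** 2), bounds)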

    Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework

    The Path-Dependent Neural Jump ODE (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations. In particular, the method learns optimal forecasts given irregularly sampled time series of incomplete past observations. So far, the process itself and the coordinate-wise observation times were assumed to be independent, and observations were assumed to be noiseless. In this work we discuss two extensions to lift these restrictions, and provide theoretical guarantees as well as empirical examples for them.
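    Below is a highly simplified conceptual sketch of the jump-ODE mechanism the abstract refers to: a latent state that evolves continuously between observations (here with a plain Euler step and random stand-in networks) and is updated by a discrete jump whenever a new, possibly irregular observation arrives. It is not the PD-NJ-ODE implementation from the paper, and all dimensions, networks and names are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d_x, d_h = 2, 8          # observation and latent dimensions (illustrative)

    # Stand-in "networks": single tanh layers in place of the learned drift,
    # jump and readout networks of a real (PD-)NJ-ODE model.
    W_drift = rng.normal(scale=0.3, size=(d_h, d_h))
    W_jump = rng.normal(scale=0.3, size=(d_h, d_h + d_x))
    W_out = rng.normal(scale=0.3, size=(d_x, d_h))

    def drift(h):
        return np.tanh(W_drift @ h)

    def jump(h, x_obs):
        return np.tanh(W_jump @ np.concatenate([h, x_obs]))

    def readout(h):
        return W_out @ h

    def predict_path(obs_times, obs_values, t_grid, dt=0.01):
        """Evolve the latent state with Euler steps between observations and
        apply a jump update whenever a new observation arrives."""
        h = np.zeros(d_h)
        preds, obs_iter = [], iter(zip(obs_times, obs_values))
        next_obs = next(obs_iter, None)
        t = 0.0
        for t_target in t_grid:
            while t < t_target:
                if next_obs is not None and next_obs[0] <= t + dt:
                    t = next_obs[0]
                    h = jump(h, next_obs[1])      # discrete jump at the observation
                    next_obs = next(obs_iter, None)
                else:
                    h = h + dt * drift(h)         # continuous evolution between observations
                    t += dt
            preds.append(readout(h))
        return np.array(preds)

    # Irregularly observed 2-d path, predicted on a regular grid.
    obs_t = np.array([0.1, 0.35, 0.8])
    obs_x = rng.normal(size=(3, d_x))
    print(predict_path(obs_t, obs_x, np.linspace(0.0, 1.0, 11)).shape)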