77 research outputs found

    String pattern recognition using evolving spiking neural networks and quantum inspired particle swarm optimization

    This paper proposes a novel method for string pattern recognition using an Evolving Spiking Neural Network (ESNN) with Quantum-inspired Particle Swarm Optimization (QiPSO). The study explores an interesting aspect of QiPSO: representing information as binary structures. The mechanism optimizes the ESNN parameters and the relevant features simultaneously using the wrapper approach. An N-gram kernel is used to map Reuters string datasets into a high-dimensional feature matrix, which acts as the input to the proposed method. The results show promising string classification performance as well as satisfactory QiPSO performance in finding the best combination of ESNN parameters and in identifying the most relevant features.
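
    To make the n-gram mapping step concrete, the snippet below is a minimal sketch (not the paper's implementation) of how raw strings might be turned into a high-dimensional character n-gram count matrix of the kind that could feed an ESNN classifier; the sample documents, the 2-3-gram range, and the use of scikit-learn's CountVectorizer are all assumptions for illustration.

```python
# Minimal sketch (assumption): mapping strings to a character n-gram
# feature matrix, analogous to the n-gram mapping described above.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "wheat prices rose sharply in chicago trading",    # hypothetical sample text
    "the central bank raised interest rates again",
]

# Character n-grams (here 2- and 3-grams) produce a high-dimensional,
# sparse feature matrix; each column counts occurrences of one n-gram.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vectorizer.fit_transform(docs)

print(X.shape)                              # (2, number_of_distinct_ngrams)
print(vectorizer.get_feature_names_out()[:10])
```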

    Quantum-Inspired Particle Swarm Optimization for Feature Selection and Parameter Optimization in Evolving Spiking Neural Networks for Classification Tasks

    Introduction: Particle Swarm Optimization (PSO) was introduced in 1995 by Russell Eberhart and James Kennedy (Eberhart & Kennedy, 1995). PSO is a biologically inspired technique based on the study of collective behaviour in decentralized, self-organized animal societies. Such systems are typically made up of a population of candidates (particles) interacting with one another within their environment (swarm) to solve a given problem. Because of its efficiency and simplicity, PSO has been successfully applied as an optimizer in many applications, such as function optimization, artificial neural network training, and fuzzy system control. However, despite recent research and development, there is still room for more effective methods for parameter optimization and feature selection tasks. This chapter deals with the problem of feature (variable) and parameter optimization for neural network models, utilising a proposed Quantum-inspired PSO (QiPSO) method. In this method the features of the model are represented probabilistically as a quantum bit (qubit) vector and the model parameter values as real numbers. The principles of quantum superposition and quantum probability are used to accelerate the search for an optimal set of features which, combined through co-evolution with a set of optimised parameter values, results in a more accurate computational neural network model. The method has been applied to the problem of feature and parameter optimization in Evolving Spiking Neural Networks (ESNN) for classification. A swarm of particles is used to find the most accurate classification model for a given classification task. The QiPSO is integrated within the ESNN so that features and parameters are optimized simultaneously and more efficiently. A hybrid particle structure is required to hold the qubit and real-number data types. In addition, an improved search strategy has been introduced to find the most relevant features and eliminate the irrelevant ones on a synthetic dataset. The method is tested on a benchmark classification problem. The proposed method results in the design of faster and more accurate neural network classification models than those optimised through standard evolutionary optimization algorithms. This chapter is organized as follows. Section 2 introduces PSO with quantum information principles and an improved feature search strategy used later in the developed method. Section 3 is an overview of ESNN, while Section 4 gives details of the integrated structure and the experimental results. Finally, Section 5 concludes the chapter.
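
    The hybrid particle idea (probabilistic qubits for feature selection plus real numbers for model parameters) can be sketched in a few lines. The code below is a hypothetical, simplified illustration in that spirit, not the chapter's QiPSO: the fitness function, the rotation step of 0.1, and the probability clipping bounds are all placeholder assumptions.

```python
# Minimal sketch (assumption): a hybrid "quantum-inspired" particle --
# features held as qubit-like probabilities, parameters as real numbers.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_params = 8, 3

def fitness(mask, params):
    # Placeholder objective: prefers few selected features and parameters near 0.5.
    return -mask.sum() - np.abs(params - 0.5).sum()

prob = np.full(n_features, 0.5)          # qubit part: P(feature selected), superposition
params = rng.random(n_params)            # real-valued part: model parameters in [0, 1]
velocity = np.zeros(n_params)

best_mask, best_params, best_fit = None, None, -np.inf
for it in range(50):
    mask = (rng.random(n_features) < prob).astype(int)    # "collapse" the qubits
    fit = fitness(mask, params)
    if fit > best_fit:
        best_mask, best_params, best_fit = mask, params.copy(), fit
    # Rotate each qubit's probability toward the best-known feature mask.
    prob = np.clip(prob + 0.1 * (best_mask - prob), 0.05, 0.95)
    # Standard PSO-style update for the real-valued parameters.
    velocity = 0.7 * velocity + 1.5 * rng.random(n_params) * (best_params - params)
    params = np.clip(params + velocity, 0.0, 1.0)

print(best_mask, best_params, round(best_fit, 3))
```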

    Parameter optimization of evolving spiking neural networks using improved firefly algorithm for classification tasks

    The Evolving Spiking Neural Network (ESNN) is a third-generation artificial neural network that has been widely used in numerous studies in recent years. However, some aspects of ESNN still need to be improved; one of these is its parameters, namely the modulation factor (Mod), similarity factor (Sim) and threshold factor (C), which have to be manually tuned to values suitable for each particular problem. The objective of the proposed work is to automatically determine the optimum values of the ESNN parameters for various datasets by integrating the Firefly Algorithm (FA) optimizer into the ESNN training phase and adaptively searching for the best parameter values. In this study, FA was modified and improved, and applied to improve the ESNN structure and its classification accuracy. Five benchmark datasets from the University of California, Irvine (UCI) Machine Learning Repository were used to measure the effectiveness of the integrated model. Performance analysis of the proposed work was conducted by calculating classification accuracy and comparing it with other parameter optimisation methods. The experimental results show that the proposed algorithm attains optimal parameter values for ESNN.
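
    For orientation, the snippet below is a minimal sketch of a standard firefly algorithm searching over three parameters in [0, 1], standing in for (Mod, Sim, C); it is not the paper's improved FA. The evaluate() placeholder, which would normally train an ESNN and return its classification accuracy, and all constants are assumptions.

```python
# Minimal sketch (assumption): a plain firefly algorithm over three
# parameters in [0, 1]; evaluate() is a stand-in for ESNN accuracy.
import numpy as np

rng = np.random.default_rng(1)

def evaluate(p):
    # Placeholder "accuracy": peaks at a made-up target (Mod, Sim, C).
    target = np.array([0.9, 0.6, 0.7])
    return 1.0 - np.linalg.norm(p - target)

n_fireflies, dim, beta0, gamma, alpha = 15, 3, 1.0, 1.0, 0.05
pop = rng.random((n_fireflies, dim))
light = np.array([evaluate(p) for p in pop])

for it in range(100):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] > light[i]:                     # j is brighter: move i toward j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)      # attractiveness decays with distance
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
                pop[i] = np.clip(pop[i], 0.0, 1.0)
                light[i] = evaluate(pop[i])

best = pop[np.argmax(light)]
print("best (Mod, Sim, C):", best, "score:", round(light.max(), 3))
```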

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future work to cope with the demands of the present information-processing era.
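
    One direction the review covers, metaheuristic weight optimization as an alternative to backpropagation, can be illustrated with a toy example. The sketch below trains a tiny 2-4-1 feedforward network on XOR with a simple (1+lambda) evolution strategy; the network size, mutation scale, and task are arbitrary assumptions, not methods from the article.

```python
# Minimal sketch (assumption): optimizing FNN weights with a (1+lambda)
# evolution strategy instead of gradient descent, on the XOR toy problem.
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # 2-4-1 network; w is a flat vector sliced into weights and biases.
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

w = rng.normal(0, 1, 17)
for gen in range(500):
    # 20 mutated offspring per generation; keep the best if it improves on the parent.
    offspring = w + rng.normal(0, 0.2, (20, 17))
    losses = np.array([loss(o) for o in offspring])
    if losses.min() < loss(w):
        w = offspring[np.argmin(losses)]

print("predictions:", np.round(forward(w, X), 2), "loss:", round(loss(w), 4))
```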

    Dynamically reconfigurable bio-inspired hardware

    During the last several years, reconfigurable computing devices have experienced an impressive development in their resource availability, speed, and configurability. Currently, commercial FPGAs offer the possibility of self-reconfiguring by partially modifying their configuration bitstream, providing high architectural flexibility while guaranteeing high performance. These configurability features have received special interest from computer architects: one can find several reconfigurable coprocessor architectures for cryptographic algorithms, image processing, automotive applications, and different general-purpose functions. On the other hand, we have bio-inspired hardware, a large research field taking inspiration from living beings in order to design hardware systems, which includes diverse topics: evolvable hardware, neural hardware, cellular automata, and fuzzy hardware, among others. Living beings are well known for their high adaptability to environmental changes, featuring very flexible adaptations at several levels. Bio-inspired hardware systems require such flexibility to be provided by the hardware platform on which the system is implemented. In general, bio-inspired hardware has been implemented on both custom and commercial hardware platforms. Custom platforms are specifically designed to support bio-inspired hardware systems, typically featuring special cellular architectures and enhanced reconfigurability capabilities, for example partial and dynamic reconfigurability. These aspects are well appreciated for providing the performance and high architectural flexibility required by bio-inspired systems. However, the limited availability and very high cost of such custom devices make them accessible to only a few research groups. Even though some commercial FPGAs provide enhanced reconfigurability features such as partial and dynamic reconfiguration, their use is still in its early stages and is not well supported by FPGA vendors, which makes them difficult to incorporate into existing bio-inspired systems. In this thesis, I present a set of architectures, techniques, and methodologies for benefiting from the configurability advantages of current commercial FPGAs in the design of bio-inspired hardware systems. Among the presented architectures are neural networks, spiking neuron models, fuzzy systems, cellular automata, and random Boolean networks. For these architectures, I propose several techniques for parametric and topological adaptation, such as Hebbian learning, evolutionary and co-evolutionary algorithms, and particle swarm optimization. Finally, as a case study I consider the implementation of bio-inspired hardware systems on two platforms: YaMoR (Yet another Modular Robot) and ROPES (Reconfigurable Object for Pervasive Systems), the development of both platforms having been co-supervised in the framework of this thesis.
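
    As a purely software-level illustration of one of the architectures listed above, the sketch below simulates a small random Boolean network; the network size, connectivity K=2, and random lookup tables are arbitrary assumptions, and the thesis's FPGA realisation would of course differ (each node's Boolean function would map naturally onto a lookup table in configurable logic).

```python
# Minimal software sketch (assumption): a small random Boolean network.
# Each node reads K=2 inputs and applies a fixed random Boolean function.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, k = 8, 2

inputs = rng.integers(0, n_nodes, size=(n_nodes, k))        # wiring of each node
truth_tables = rng.integers(0, 2, size=(n_nodes, 2 ** k))    # one LUT per node
state = rng.integers(0, 2, size=n_nodes)

def step(state):
    # Synchronous update: each node looks up its next value from its LUT.
    idx = state[inputs[:, 0]] * 2 + state[inputs[:, 1]]
    return truth_tables[np.arange(n_nodes), idx]

for t in range(10):
    print(t, state)
    state = step(state)
```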

    Spiking neurons in 3D growing self-organising maps

    In Kohonen's Self-Organising Map (SOM) learning, preserving the map topology so that it reflects the actual input features is a significant part of the process. Misinterpretation of the training samples can lead to failure in identifying the important features that affect the outcomes generated by the SOM model. This is a challenging task, as most real problems involve complex and insufficient data. The Spiking Neural Network (SNN) is the third generation of Artificial Neural Network (ANN), in which information is transferred from one neuron to another as spikes, processed, and used to trigger a response as output. This study therefore embedded spiking neurons in SOM learning in order to enhance the learning process. The proposed method was divided into five main phases. Phase 1 investigated issues related to the SOM learning algorithm, while in Phase 2 datasets were collected for the analyses carried out in Phase 3, wherein a neural coding scheme for data representation was implemented in the classification task. Next, in Phase 4, the spiking SOM model was designed, developed, and evaluated using classification accuracy and quantisation error. The outcomes showed that the proposed model attained a high classification accuracy rate with low quantisation error, preserving the quality of the generated map with respect to the original input data. Lastly, in the final phase, a Spiking 3D Growing SOM is proposed to address the surface reconstruction issue by enhancing the spiking SOM with a 3D map structure and a growing grid mechanism. Applying spiking neurons to enhance the performance of the SOM is relevant in this study because of their ability to spike and send a reaction when notable features are identified during learning of the presented datasets. The study outcomes contribute to enhancing SOM learning of dataset patterns, as well as proposing a better tool for data analysis.
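
    To make the SOM terminology concrete, the snippet below is a minimal sketch of a plain (non-spiking, 2D) Kohonen SOM update and the quantisation error measure mentioned above; the grid size, learning-rate schedule, and random toy data are assumptions, and the thesis's spiking/3D growing variant is not reproduced here.

```python
# Minimal sketch (assumption): a plain Kohonen SOM and its quantisation error.
import numpy as np

rng = np.random.default_rng(4)
data = rng.random((200, 3))                  # toy 3-dimensional input vectors
grid_h, grid_w = 5, 5
weights = rng.random((grid_h, grid_w, 3))
coords = np.indices((grid_h, grid_w)).transpose(1, 2, 0).astype(float)

for t in range(1000):
    lr = 0.5 * (1 - t / 1000)                # decaying learning rate
    sigma = 2.0 * (1 - t / 1000) + 0.5       # decaying neighbourhood radius
    x = data[rng.integers(len(data))]
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    # Gaussian neighbourhood pulls units near the BMU toward the sample.
    d_grid = np.linalg.norm(coords - np.array(bmu, dtype=float), axis=2)
    h = np.exp(-(d_grid ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)

# Quantisation error: mean distance from each sample to its best-matching unit.
qe = np.mean([np.min(np.linalg.norm(weights - x, axis=2)) for x in data])
print("quantisation error:", round(qe, 4))
```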

    Efficient Learning Machines

    Computer science

    Computational aspects of cellular intelligence and their role in artificial intelligence.

    The work presented in this thesis is concerned with an exploration of the computational aspects of the primitive intelligence associated with single-celled organisms. The main aim is to explore this Cellular Intelligence and its role within Artificial Intelligence. The findings of an extensive literature search into the biological characteristics, properties and mechanisms associated with Cellular Intelligence, its underlying machinery (Cell Signalling Networks), and the existing computational methods used to capture it are reported. The results of this search are then used to fashion the development of a versatile new connectionist representation, termed the Artificial Reaction Network (ARN). The ARN belongs to the branch of Artificial Life known as Artificial Chemistry and has properties in common with both Artificial Intelligence and Systems Biology techniques, including Artificial Neural Networks, Artificial Biochemical Networks, Gene Regulatory Networks, Random Boolean Networks, Petri Nets, and S-Systems. The thesis outlines the following original work. The ARN is used to model the chemotaxis pathway of Escherichia coli and is shown to capture emergent characteristics associated with this organism and with Cellular Intelligence more generally. The computational properties of the ARN and its applications in robotic control are explored by combining functional motifs found in biochemical networks to create temporally changing waveforms which control the gaits of limbed robots. This system is then extended into a complete control system by combining pattern recognition with limb control in a single ARN. The results show that the ARN can offer increased flexibility over existing methods. Multiple distributed cell-like ARN-based agents, termed Cytobots, are created. These are first used to simulate aggregating cells based on the slime mould Dictyostelium discoideum, and the Cytobots are shown to capture emergent behaviour arising from multiple stigmergic interactions. Applications of Cytobots within swarm robotics are investigated by applying them to benchmark search problems and to the task of cleaning up a simulated oil spill. The results are compared to those of established optimization algorithms using similar cell-inspired strategies, and to other robotic agent strategies. Consideration is given to the advantages and disadvantages of the technique, and suggestions are made for future work in the area. The thesis concludes that the Artificial Reaction Network is a versatile and powerful technique which has application both in the simulation of chemical systems and in robotic control, where it can offer a higher degree of flexibility and computational efficiency than benchmark alternatives. Furthermore, it provides a tool which may throw further light on the origins and limitations of the primitive intelligence associated with cells.
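
    The ARN is the thesis's own formalism, so the snippet below is only a generic, hypothetical illustration of the underlying idea of reaction-network dynamics producing a temporally changing waveform: it simulates the classic Brusselator, a two-species chemical oscillator, with simple Euler integration. The model choice, rate constants, and step size are assumptions and are not the ARN's definition.

```python
# Generic illustration (assumption, not the thesis's ARN): the Brusselator,
# a two-species reaction network whose limit cycle yields a self-sustaining
# oscillation -- the kind of waveform that could serve as a rhythmic control signal.
A, B = 1.0, 3.0             # with B > 1 + A**2 the fixed point is unstable: limit cycle
x, y = 1.2, 3.0             # initial "concentrations"
dt, steps = 0.01, 5000
trace = []
for _ in range(steps):
    dx = A + x * x * y - (B + 1.0) * x
    dy = B * x - x * x * y
    x, y = x + dt * dx, y + dt * dy     # explicit Euler step
    trace.append(x)

print("waveform min/max:", round(min(trace), 3), round(max(trace), 3))
```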