    Radial basis function network based on time variant multi-objective particle swarm optimization for medical diseases diagnosis

    This paper proposes an adaptive evolutionary radial basis function (RBF) network algorithm that evolves the accuracy and connections (centers and weights) of RBF networks simultaneously. The problem of hybrid learning of RBF networks is addressed with multi-objective optimization methods to improve classification accuracy for medical disease diagnosis. In this paper, we introduce a time variant multi-objective particle swarm optimization (TVMOPSO) of the RBF network for diagnosing medical diseases. This study applied RBF network training to determine whether RBF networks can be developed using TVMOPSO, and the performance is validated based on accuracy and complexity. Our approach is tested on three standard data sets from the UCI machine learning repository. The results show that our approach is a viable alternative and provides an effective means of solving the multi-objective RBF network design problem for medical disease diagnosis. It outperforms RBF networks based on MOPSO and NSGA-II, and is also competitive with other methods in the literature.
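The model being evolved above is a standard RBF network: a hidden layer of Gaussian basis functions (whose centers and widths are among the evolved connections) followed by a linear output layer. A minimal sketch of that forward pass, with all function and parameter names chosen here for illustration rather than taken from the paper:

```python
import numpy as np

def rbf_predict(X, centers, widths, weights):
    """Forward pass of a Gaussian RBF network.

    X:       (n_samples, n_features) inputs
    centers: (n_hidden, n_features) RBF centers
    widths:  (n_hidden,) Gaussian widths
    weights: (n_hidden,) linear output weights
    """
    # Squared distances from every sample to every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # Gaussian hidden-layer activations
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    # Linear output layer
    return phi @ weights
```

In the multi-objective setting described, a particle would encode `centers`, `widths`, and `weights` as its position, and the swarm would trade off classification accuracy against network complexity (e.g. the number of hidden units).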

    Engine Data Classification with Simultaneous Recurrent Network using a Hybrid PSO-EA Algorithm

    We applied an architecture that automates the design of the simultaneous recurrent network (SRN) using a new evolutionary learning algorithm based on a hybrid of particle swarm optimization (PSO) and an evolutionary algorithm (EA). By combining the search abilities of these two global optimization methods, the evolution of individuals is no longer restricted to the same generation, and better-performing individuals may produce offspring to replace those with poor performance. The novel algorithm is then applied to the simultaneous recurrent network for engine data classification. The experimental results show that our approach gives solid performance in categorizing the nonlinear car engine data.
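One way to read the hybrid described above is a PSO loop in which, each iteration, offspring bred from the best individuals replace the worst particle. The sketch below is an assumed, generic interpretation of such a PSO-EA hybrid (the recombination scheme, coefficients, and function names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_pso_ea(fitness, dim, n_particles=20, iters=50,
                  w=0.7, c1=1.5, c2=1.5):
    """Minimize `fitness` with a PSO loop plus an EA replacement step."""
    X = rng.uniform(-1.0, 1.0, (n_particles, dim))
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    g = pbest[pbest_f.argmin()].copy()          # global best position

    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # Standard PSO velocity and position update
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        f = np.array([fitness(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[pbest_f.argmin()].copy()

        # EA step: a mutated offspring of the two best personal bests
        # replaces the worst particle, so good genes cross generations
        order = pbest_f.argsort()
        child = 0.5 * (pbest[order[0]] + pbest[order[1]]) \
                + 0.1 * rng.standard_normal(dim)
        X[order[-1]], V[order[-1]] = child, 0.0

    return g, fitness(g)
```

For the engine-data task, `fitness` would be the SRN's classification error as a function of its weight vector.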

    Quantum-Inspired Particle Swarm Optimization for Feature Selection and Parameter Optimization in Evolving Spiking Neural Networks for Classification Tasks

    Introduction: Particle Swarm Optimization (PSO) was introduced in 1995 by Russell Eberhart and James Kennedy (Eberhart & Kennedy, 1995). PSO is a biologically inspired technique based on the study of collective behaviour in decentralized, self-organized animal societies. Such systems are typically made up of a population of candidates (particles) interacting with one another within their environment (the swarm) to solve a given problem. Because of its efficiency and simplicity, PSO has been successfully applied as an optimizer in many applications, such as function optimization, artificial neural network training, and fuzzy system control. However, despite recent research and development, there is still an opportunity to find more effective methods for parameter optimization and feature selection tasks. This chapter deals with the problem of feature (variable) and parameter optimization for neural network models, utilising a proposed Quantum-inspired PSO (QiPSO) method. In this method the features of the model are represented probabilistically as a quantum bit (qubit) vector and the model parameter values as real numbers. The principles of quantum superposition and quantum probability are used to accelerate the search for an optimal set of features that, combined through co-evolution with a set of optimised parameter values, will result in a more accurate computational neural network model. The method has been applied to the problem of feature and parameter optimization in Evolving Spiking Neural Networks (ESNN) for classification. A swarm of particles is used to find the most accurate classification model for a given classification task. The QiPSO is integrated within the ESNN so that features and parameters are simultaneously and more efficiently optimized. A hybrid particle structure is required to handle the qubit and real-number data types.
In addition, an improved search strategy has been introduced to find the most relevant features and eliminate the irrelevant ones on a synthetic dataset. The method is tested on a benchmark classification problem. The proposed method results in the design of faster and more accurate neural network classification models than those optimised through the use of standard evolutionary optimization algorithms. This chapter is organized as follows. Section 2 introduces PSO with quantum information principles and an improved feature search strategy used later in the developed method. Section 3 is an overview of ESNN, while Section 4 gives details of the integrated structure and the experimental results. Finally, Section 5 concludes this chapter.
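In quantum-inspired PSO variants of this kind, a feature is typically encoded as a qubit whose angle determines the probability of the feature being selected; observing (collapsing) the qubit vector yields a concrete binary feature mask, and a rotation operator nudges each qubit toward the best mask found so far. The following is a stylized sketch of that mechanism, with the angle encoding and rotation step being this sketch's assumptions rather than the chapter's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def collapse(theta):
    """Observe the qubit vector: each feature is selected with
    probability sin^2(theta), yielding a binary feature mask."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def rotate(theta, best_mask, step=0.05 * np.pi):
    """Rotate each qubit toward the best-known mask:
    selected features drift toward pi/2 (prob. 1), others toward 0."""
    target = np.where(best_mask == 1, np.pi / 2, 0.0)
    return theta + np.clip(target - theta, -step, step)
```

Starting from `theta = pi/4` (a 50/50 superposition for every feature), repeated rotation toward a fixed best mask drives the selection probabilities to 0 or 1, which is how the search concentrates on the relevant features.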

    Designing Artificial Neural Network Using Particle Swarm Optimization: A Survey

    Neural network modeling has become of special interest to many engineers and scientists, utilized for different types of data such as time series, regression, and classification, and has been used to solve complicated practical problems in various areas, such as medicine, engineering, manufacturing, the military, and business. To utilize a prediction model based upon an artificial neural network (ANN), some challenges should be addressed, of which the optimal design and training of the ANN are major ones. ANN design can be framed as an optimization task because the network has many hyperparameters and weights that can be optimized. Metaheuristic algorithms such as swarm intelligence-based methods are a category of optimization methods that aim to find an optimal structure for the ANN and to train the network by optimizing its weights. One of the most commonly used swarm intelligence-based algorithms is particle swarm optimization (PSO), which can be used for optimizing ANNs. In this study, we review the research conducted on optimizing ANNs using PSO. All studies are reviewed from two different perspectives: optimization of weights, and optimization of structure and hyperparameters.

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs; their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged out of FNN optimization practice, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.