    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: optimization of the weights, the network architecture, the activation nodes, the learning parameters, the learning environment, etc. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, owing to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, covering both conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it poses interesting challenges for future research to cope with the present information-processing era.
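
    A recurring pattern across the surveyed metaheuristic approaches is to flatten all of the FNN's weights and biases into one real-valued vector and let a population-based search minimize the training loss, with no gradients involved. A minimal Python sketch of that encoding follows; the network size, the toy data, and the random-perturbation search standing in for a real metaheuristic are all illustrative assumptions, not taken from any paper in this list.

        import numpy as np

        def decode(vector, n_in, n_hidden, n_out):
            """Unpack a flat parameter vector into layer weights and biases."""
            i = 0
            W1 = vector[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
            b1 = vector[i:i + n_hidden]; i += n_hidden
            W2 = vector[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
            b2 = vector[i:i + n_out]
            return W1, b1, W2, b2

        def fnn_loss(vector, X, y, n_in, n_hidden, n_out):
            """Mean-squared training error of the FNN encoded by `vector`."""
            W1, b1, W2, b2 = decode(vector, n_in, n_hidden, n_out)
            hidden = np.tanh(X @ W1 + b1)
            output = hidden @ W2 + b2
            return np.mean((output - y) ** 2)

        # Any population-based metaheuristic can now minimize fnn_loss over
        # flat vectors; a trivial random-perturbation loop stands in here.
        rng = np.random.default_rng(0)
        n_in, n_hidden, n_out = 4, 6, 1
        dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
        X = rng.normal(size=(100, n_in))                 # toy data (assumption)
        y = np.sin(X.sum(axis=1, keepdims=True))
        best = rng.normal(size=dim)
        best_loss = fnn_loss(best, X, y, n_in, n_hidden, n_out)
        for _ in range(2000):
            cand = best + 0.1 * rng.normal(size=dim)     # mutation step
            loss = fnn_loss(cand, X, y, n_in, n_hidden, n_out)
            if loss < best_loss:
                best, best_loss = cand, loss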

    Glowworm swarm optimisation for training multi-layer perceptrons

    A hybrid advanced PSO-neural network system

    © 2019 IEEE. In this paper, Advanced Particle Swarm Optimization (APSO) and a neural network are combined to compensate for the drawbacks of each technique and to exploit their strong attributes, forming a hybrid system called the Hybrid Advanced Particle Swarm Optimization-Neural Network System (HAPSONNS). APSO is used to train the neural network. In the initial phases of the search, PSO converges swiftly toward the global optimum, but it later suffers from slow convergence around the global optimum position. By contrast, the gradient method converges quickly in the neighborhood of the global optimum and therefore attains better final accuracy. This paper elucidates how APSO, applied to a feedforward neural network, improves the network's classification accuracy and decreases its training time.
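
    The abstract does not reproduce the APSO update rules, so the following Python sketch only illustrates the two-stage idea it describes: a standard PSO pass supplies fast global exploration, and a plain (finite-difference) gradient descent then sharpens convergence around the point PSO found. The objective, coefficients and step sizes are illustrative assumptions.

        import numpy as np

        def loss(v):
            # Stand-in objective; in HAPSONNS this would be the network's
            # training error over its flattened weight vector (assumption).
            return np.sum(v ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * v))

        rng = np.random.default_rng(1)
        dim, n_particles = 10, 30
        pos = rng.uniform(-5, 5, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.apply_along_axis(loss, 1, pos)
        gbest = pbest[pbest_val.argmin()].copy()

        # Stage 1: PSO for swift global exploration.
        for _ in range(200):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            vals = np.apply_along_axis(loss, 1, pos)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        # Stage 2: gradient descent (finite differences) to sharpen
        # convergence around the optimum PSO found.
        x, eps, lr = gbest.copy(), 1e-5, 0.01
        for _ in range(500):
            grad = np.array([(loss(x + eps * np.eye(dim)[i]) - loss(x - eps * np.eye(dim)[i]))
                             / (2 * eps) for i in range(dim)])
            x -= lr * grad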

    A hybrid framework for evaluating the performance of port container terminal operations: Moroccan case study

    This work integrates artificial neural networks (ANN) and data envelopment analysis (DEA) in a single framework to evaluate the performance of container terminal operations. The proposed framework comprises three steps. In the first step, the performance-measurement objectives and the indicators affecting the system are identified. In the second step, the efficiency scores of the system are computed using the input-oriented Charnes, Cooper and Rhodes (CCR) model. In the last step, the Moth Search Algorithm (MSA) is employed as a new method for training a Feedforward Neural Network (FNN) to determine the efficiency scores. To demonstrate the efficacy of the proposed framework, the container terminals of Tangier and Casablanca are used as case studies.
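
    For reference, the input-oriented CCR score of an evaluated unit $o$, with inputs $x_{io}$ and outputs $y_{ro}$ among units $j = 1, \dots, n$, is the optimum of the standard multiplier-form linear program (this is the textbook formulation, not reproduced from the paper itself):

        \theta_o^{*} \;=\; \max_{u,v} \sum_{r} u_r\, y_{ro}
        \quad \text{s.t.} \quad \sum_{i} v_i\, x_{io} = 1, \qquad
        \sum_{r} u_r\, y_{rj} \;-\; \sum_{i} v_i\, x_{ij} \;\le\; 0 \quad (j = 1, \dots, n), \qquad
        u_r \ge 0, \; v_i \ge 0.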

    Grey Wolf Cuckoo Search Algorithm for Training Feedforward Neural Network and Logic Gates Design

    This paper presents a new hybrid Swarm Intelligence (SI) algorithm, called the Grey Wolf Cuckoo Search (GWCS) algorithm, based on the Cuckoo Search Algorithm (CSA) and the Grey Wolf Optimizer (GWO). The GWCS algorithm extracts and combines CSA and GWO features for efficient optimization. For comprehensive validation, the developed algorithm is applied alongside its counterparts in three different scenarios: it is first validated on standard optimization benchmark problems, then used to train feedforward neural networks, and finally applied to the design of logic gates. The comprehensive results show that the proposed GWCS algorithm performs better than the state-of-the-art.
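
    The abstract does not specify how the CSA and GWO ingredients are interleaved, so the Python sketch below shows one plausible hybridization, not necessarily the paper's: GWO's leader-guided move as the main update, a CSA-style Levy-flight kick toward the best wolf, and CSA's worst-nest abandonment. The objective and all coefficients are illustrative.

        import numpy as np

        def levy(rng, dim, beta=1.5):
            # Mantegna's algorithm for Levy-distributed steps (CSA ingredient).
            from math import gamma, sin, pi
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

        def sphere(v):
            return np.sum(v ** 2)  # benchmark objective (assumption)

        rng = np.random.default_rng(2)
        dim, n, iters, pa = 10, 30, 300, 0.25
        X = rng.uniform(-10, 10, (n, dim))
        for t in range(iters):
            fit = np.apply_along_axis(sphere, 1, X)
            order = fit.argsort()
            alpha, beta_w, delta = X[order[0]], X[order[1]], X[order[2]]
            a = 2 - 2 * t / iters            # GWO's linearly decreasing coefficient
            for i in range(n):
                moves = []
                for leader in (alpha, beta_w, delta):
                    A = a * (2 * rng.random(dim) - 1)
                    C = 2 * rng.random(dim)
                    moves.append(leader - A * np.abs(C * leader - X[i]))
                cand = np.mean(moves, axis=0)                    # GWO move
                cand += 0.01 * levy(rng, dim) * (cand - alpha)   # CSA Levy kick
                if sphere(cand) < fit[i]:
                    X[i] = cand
            # CSA-style abandonment: re-seed a fraction pa of the worst wolves.
            worst = order[-int(pa * n):]
            X[worst] = rng.uniform(-10, 10, (len(worst), dim))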

    Research on dendritic neuron computation and differential evolution algorithms

    University of Toyama, doctoral dissertation no. 富理工博甲第118号, Chen Wei (陳瑋), conferred 2017/03/23.

    Optimizing Weights And Biases in MLP Using Whale Optimization Algorithm

    Artificial neural networks are intelligent, non-parametric mathematical models inspired by the human nervous system. They have been widely studied and applied to classification, pattern recognition and forecasting problems. The main challenge in training an artificial neural network lies in its learning process: its nonlinear nature and the unknown best set of main controlling parameters (weights and biases). When artificial neural networks are trained with conventional training algorithms, they suffer from local-optima stagnation and slow convergence, which makes stochastic optimization algorithms a compelling alternative for alleviating these drawbacks. This thesis proposes a trainer based on the recently proposed Whale Optimization Algorithm (WOA), which has been shown to solve a wide range of optimization problems and to outperform existing algorithms. Its successful application motivated our attempt to benchmark its performance in training feed-forward neural networks. We took a set of 20 datasets of varying difficulty and tested the proposed WOA-MLP trainer. The results were verified by comparing WOA-MLP with the backpropagation algorithm and six evolutionary techniques, and they show that the proposed trainer outperforms the current algorithms on the majority of datasets in terms of local-optima avoidance and convergence speed.
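
    A compact Python sketch of the WOA-MLP idea: each whale is a flat weight-and-bias vector, fitness is the misclassification rate, and whales move by WOA's encircling, spiral and random-search rules (coefficients as in Mirjalili and Lewis's 2016 formulation). The dataset, network size and hyperparameters below are toy assumptions, not those of the thesis.

        import numpy as np

        def mlp_error(vector, X, y, n_in, n_hid):
            """Misclassification rate of a 1-hidden-layer MLP encoded by `vector`."""
            i = n_in * n_hid
            W1 = vector[:i].reshape(n_in, n_hid)
            b1 = vector[i:i + n_hid]
            W2 = vector[i + n_hid:i + 2 * n_hid]
            b2 = vector[-1]
            logits = np.tanh(X @ W1 + b1) @ W2 + b2
            return np.mean((logits > 0).astype(int) != y)

        rng = np.random.default_rng(3)
        n_in, n_hid = 2, 5
        dim = n_in * n_hid + n_hid + n_hid + 1
        X = rng.normal(size=(200, n_in))                  # toy 2-class data
        y = (X[:, 0] * X[:, 1] > 0).astype(int)           # XOR-like labels (assumption)

        n_whales, iters = 30, 300
        W = rng.uniform(-1, 1, (n_whales, dim))
        fit = np.array([mlp_error(w, X, y, n_in, n_hid) for w in W])
        best = W[fit.argmin()].copy(); best_fit = fit.min()
        for t in range(iters):
            a = 2 - 2 * t / iters                         # decreases from 2 to 0
            for i in range(n_whales):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                if rng.random() < 0.5:
                    if np.all(np.abs(A) < 1):             # encircle the best whale
                        W[i] = best - A * np.abs(C * best - W[i])
                    else:                                  # explore around a random whale
                        other = W[rng.integers(n_whales)]
                        W[i] = other - A * np.abs(C * other - W[i])
                else:                                      # log-spiral bubble-net move
                    D, l = np.abs(best - W[i]), rng.uniform(-1, 1)
                    W[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
                f = mlp_error(W[i], X, y, n_in, n_hid)
                if f < best_fit:
                    best, best_fit = W[i].copy(), f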