
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, optimization of the feedforward neural network (FNN) has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: optimization of the weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, owing to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article summarizes a broad spectrum of FNN optimization methodologies, covering both conventional and metaheuristic approaches. It also connects the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it identifies research challenges for future work to cope with the present information-processing era.
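
    The common thread in the metaheuristic approaches surveyed here is the black-box view of training: all network weights are flattened into a single vector and the loss is minimized without gradient information. The sketch below illustrates that view on a toy regression task, with a simple (1+1)-style hill climber standing in for any metaheuristic; the data, network size, and optimizer are assumptions made for illustration and are not taken from the article.

```python
# Minimal sketch (not from the article): the black-box view of FNN training,
# where all weights are flattened into one vector and the loss is treated as
# a function to be minimized without gradients.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, assumed for illustration only.
X = rng.uniform(-1.0, 1.0, size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

n_in, n_hidden = 2, 8
n_weights = n_in * n_hidden + n_hidden + n_hidden + 1  # W1, b1, W2, b2

def unpack(w):
    """Split a flat parameter vector into layer matrices."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden]; i += n_hidden
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    """Mean squared error of a one-hidden-layer FNN with tanh activations."""
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# (1+1)-style hill climbing as a stand-in for any metaheuristic optimizer.
w_best = rng.normal(0.0, 0.5, n_weights)
f_best = loss(w_best)
for _ in range(5000):
    cand = w_best + rng.normal(0.0, 0.1, n_weights)
    f_cand = loss(cand)
    if f_cand < f_best:
        w_best, f_best = cand, f_cand
print(f"final MSE: {f_best:.4f}")
```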

    Optimization of ANN Structure Using Adaptive PSO & GA and Performance Analysis Based on Boolean Identities

    In this paper, a novel heuristic structure-optimization technique using adaptive PSO and GA on Boolean identities is proposed to improve the performance of artificial neural networks (ANNs). The number of hidden layers and nodes has a significant impact on the performance of a neural network, yet it is typically decided in an ad hoc manner. Optimizing both the architecture and the weights of a neural network is a complex task. In this regard, evolutionary techniques based on adaptive particle swarm optimization (APSO) and an adaptive genetic algorithm (AGA) are used to select an optimal number of hidden layers and nodes for the neural controller, yielding better performance and lower training errors on the Boolean identities. The hidden nodes are adapted over the generations until they reach the optimal number. Boolean operators such as AND, OR, and XOR are used for the performance analysis of this technique.
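
    As a rough, hedged illustration of the idea (all details assumed, not taken from the paper), the sketch below encodes the hidden-layer layout as a small integer genome, scores each candidate on the XOR truth table after a short stand-in training run, and evolves the population with a plain generational GA; the paper's adaptive PSO and GA machinery is replaced here by simple mutation for brevity.

```python
# Hedged sketch (details assumed): evolving the hidden-layer layout of a
# network with a simple GA, scored on the XOR truth table.
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def forward(x, weights):
    """Run x through a stack of (W, b) layers with tanh hidden units."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return (h @ W + b).ravel()

def fitness(layers, iters=300):
    """XOR error after a short random-search training run, plus a size penalty."""
    sizes = [2] + list(layers) + [1]
    best = [(rng.normal(0, 1, (a, b)), rng.normal(0, 1, b))
            for a, b in zip(sizes[:-1], sizes[1:])]
    best_err = np.mean((forward(X, best) - y) ** 2)
    for _ in range(iters):
        cand = [(W + rng.normal(0, 0.3, W.shape), b + rng.normal(0, 0.3, b.shape))
                for W, b in best]
        err = np.mean((forward(X, cand) - y) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best_err + 0.01 * sum(layers)   # small penalty on extra nodes

def mutate(layers):
    """Change one layer size, or add/remove a layer (1 or 2 layers, 1-6 nodes)."""
    layers = list(layers)
    r = rng.random()
    if r < 0.2 and len(layers) < 2:
        layers.append(int(rng.integers(1, 7)))
    elif r < 0.4 and len(layers) > 1:
        layers.pop()
    else:
        i = rng.integers(len(layers))
        layers[i] = int(np.clip(layers[i] + rng.choice([-1, 1]), 1, 6))
    return layers

# Plain generational GA over the architecture genome.
pop = [[int(rng.integers(1, 7))] for _ in range(8)]
for gen in range(10):
    scored = sorted(pop, key=fitness)
    pop = scored[:4] + [mutate(p) for p in scored[:4]]
print("best hidden-layer layout:", sorted(pop, key=fitness)[0])
```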

    Learning Functions Generated by Randomly Initialized MLPs and SRNs

    In this paper, nonlinear functions generated by randomly initialized multilayer perceptrons (MLPs) and simultaneous recurrent neural networks (SRNs), together with two benchmark functions, are learned by MLPs and SRNs. Training SRNs is a challenging task, and a new learning algorithm, PSO-QI, is introduced. PSO-QI is a standard particle swarm optimization (PSO) algorithm with the addition of a quantum step that exploits the probability-density property of a quantum particle. The results from PSO-QI are compared with the standard backpropagation (BP) and PSO algorithms. It is further verified that functions generated by SRNs are harder to learn than those generated by MLPs, but that PSO-QI enables MLPs and SRNs to learn these functions better than BP and PSO do.
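
    The abstract does not spell out the quantum step, so the following is only a hedged sketch modeled on quantum-behaved PSO: a standard PSO loop on a benchmark objective, with an occasional candidate sampled from a distribution centered between a particle's personal best and the swarm best. The objective, swarm size, and sampling rule are assumptions for illustration and may differ from the PSO-QI update used in the paper.

```python
# Rough sketch only: standard PSO plus an assumed quantum-inspired step.
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Benchmark objective (stand-in for the MLP/SRN training loss)."""
    return np.sum(x ** 2, axis=-1)

dim, n_particles = 10, 20
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = sphere(pbest)
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2 = 0.72, 1.49, 1.49
for it in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel

    # Hedged quantum-inspired step: sample one candidate around a point
    # between a random particle's personal best and the global best.
    i = rng.integers(n_particles)
    attractor = 0.5 * (pbest[i] + gbest)
    L = np.abs(gbest - pbest[i]) + 1e-9
    cand = attractor + L * np.log(1.0 / rng.random(dim)) * rng.choice([-1, 1], dim)
    if sphere(cand) < sphere(pos[i]):
        pos[i] = cand

    f = sphere(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"best value after 200 iterations: {pbest_f.min():.6f}")
```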

    Ensemble Models in Forecasting Financial Markets


    A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks

    Custode, L. L., Tecce, C. L., Bakurov, I., Castelli, M., Cioppa, A. D., & Vanneschi, L. (2020). A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks. In P. A. Castillo, J. L. Jiménez Laredo, & F. Fernández de Vega (Eds.), Applications of Evolutionary Computation - 23rd European Conference, EvoApplications 2020, Held as Part of EvoStar 2020, Proceedings (pp. 513-529). Lecture Notes in Computer Science, Vol. 12104. Springer. https://doi.org/10.1007/978-3-030-43722-0_33

    In recent years neuroevolution has become a dynamic and rapidly growing research field. Interest in this discipline is motivated by the need to create ad hoc networks whose topology and parameters are optimized for the particular problem at hand. Although neuroevolution-based techniques can contribute fundamentally to improving the performance of artificial neural networks (ANNs), they have a drawback: the massive amount of computational resources they require. This paper proposes a novel population-based framework aimed at finding the optimal set of synaptic weights for ANNs. The proposed method partitions the weights of a given network and, using an optimization heuristic, trains one layer at each step while "freezing" the remaining weights. In the experimental study, particle swarm optimization (PSO) was used as the underlying optimizer within the framework, and its performance was compared against standard training of the network with PSO (i.e., training that considers the whole set of weights) and against the backward propagation of errors (backpropagation). Results show that the sequential training of sub-spaces reduces training time, achieves better generalizability, and exhibits smaller variance in the architectural aspects of the network.
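
    A minimal sketch of the layer-wise idea follows, with random search standing in for the PSO optimizer used in the paper and a toy dataset assumed for illustration: only the parameters of one layer are perturbed at a time while the rest stay frozen, and the layers are swept greedily in sequence.

```python
# Illustrative sketch (optimizer simplified): training one layer at a time
# while "freezing" the rest; random search stands in for PSO.
import numpy as np

rng = np.random.default_rng(3)

# Toy data assumed for illustration.
X = rng.uniform(-1, 1, (128, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))

layers = {
    "W1": rng.normal(0, 0.5, (3, 10)), "b1": np.zeros(10),
    "W2": rng.normal(0, 0.5, (10, 1)), "b2": np.zeros(1),
}

def loss(p):
    h = np.tanh(X @ p["W1"] + p["b1"])
    return np.mean(((h @ p["W2"] + p["b2"]).ravel() - y) ** 2)

def optimize_subset(p, keys, iters=1500, step=0.05):
    """Improve only the parameters named in `keys`; all others stay frozen."""
    best, best_f = {k: v.copy() for k, v in p.items()}, loss(p)
    for _ in range(iters):
        cand = {k: v.copy() for k, v in best.items()}
        for k in keys:                      # perturb the unfrozen layer only
            cand[k] = cand[k] + rng.normal(0, step, cand[k].shape)
        f = loss(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# Greedy sweeps: layer 1, then layer 2, repeated.
for sweep in range(3):
    layers, f1 = optimize_subset(layers, ["W1", "b1"])
    layers, f2 = optimize_subset(layers, ["W2", "b2"])
    print(f"sweep {sweep}: MSE after layer 1 = {f1:.4f}, after layer 2 = {f2:.4f}")
```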

    Comparison of Particle Swarm Optimization and Backpropagation as Training Algorithms for Neural Networks

    Particle swarm optimization (PSO), motivated by the social behavior of organisms, is a step up from existing evolutionary algorithms for the optimization of continuous nonlinear functions. Backpropagation (BP) is generally used for neural network training, and choosing a proper algorithm for training a neural network is very important. In this paper, a comparative study is made of the computational requirements of PSO and BP as training algorithms for neural networks. Results are presented for a feedforward neural network learning a nonlinear function, and they show that the network weights converge faster with PSO than with the BP algorithm.
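
    A minimal comparison harness in the same spirit (the dataset, network size, and hyperparameters are assumptions, not the paper's experimental setup): the same one-hidden-layer network is fit to a nonlinear target once with hand-coded backpropagation and once with a basic PSO over the flattened weight vector.

```python
# Assumed setup, not the paper's experiment: BP vs. PSO on the same network.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X[:, 0])

H = 10
def predict(W1, b1, W2, b2):
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(W1, b1, W2, b2):
    return np.mean((predict(W1, b1, W2, b2) - y) ** 2)

# --- Backpropagation (plain gradient descent) ---
W1, b1 = rng.normal(0, 0.5, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.5, H), 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    out = h @ W2 + b2
    d_out = 2 * (out - y) / len(y)            # dL/d(out)
    dW2, db2 = h.T @ d_out, d_out.sum()
    d_h = np.outer(d_out, W2) * (1 - h ** 2)  # back through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
print(f"BP final MSE:  {mse(W1, b1, W2, b2):.5f}")

# --- PSO over the flattened weight vector ---
D = 1 * H + H + H + 1
def unpack(v):
    return v[:H].reshape(1, H), v[H:2*H], v[2*H:3*H], v[3*H]

pos = rng.normal(0, 0.5, (30, D)); vel = np.zeros_like(pos)
pb = pos.copy(); pb_f = np.array([mse(*unpack(p)) for p in pos])
gb = pb[np.argmin(pb_f)].copy()
for _ in range(300):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.72 * vel + 1.49 * r1 * (pb - pos) + 1.49 * r2 * (gb - pos)
    pos += vel
    f = np.array([mse(*unpack(p)) for p in pos])
    imp = f < pb_f
    pb[imp], pb_f[imp] = pos[imp], f[imp]
    gb = pb[np.argmin(pb_f)].copy()
print(f"PSO final MSE: {pb_f.min():.5f}")
```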

    Use of backpropagation and differential evolution algorithms to training MLPs

    Artificial neural networks (ANNs) are often trained to find a general solution in problems where a pattern needs to be extracted, such as data classification. The feedforward neural network (FFNN) is one of the ANN architectures, and the multilayer perceptron (MLP) is a type of FFNN. Based on gradient descent, backpropagation (BP) is one of the most widely used algorithms for MLP training. Evolutionary algorithms can also be used to train MLPs, including the differential evolution (DE) algorithm. In this paper, BP and DE are used to train MLPs and are compared across four different approaches: (a) backpropagation, (b) DE with fixed parameter values, (c) DE with adaptive parameter values, and (d) a hybrid alternative using both DE and BP. © 2013 IEEE
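
    The sketch below corresponds roughly to approach (b), DE with fixed parameter values, applied to the flattened weights of a small MLP on an assumed toy classification task; the adaptive variant (c) would adjust F and CR during the run, and the hybrid (d) would refine the DE result with BP. None of the specifics below are taken from the paper.

```python
# Hedged sketch: DE/rand/1/bin with fixed F and CR training MLP weights.
import numpy as np

rng = np.random.default_rng(5)

# Toy two-class problem, assumed for illustration.
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # XOR-like quadrant labels

H = 6
D = 2 * H + H + H + 1                        # W1, b1, W2, b2 flattened

def loss(v):
    W1 = v[:2 * H].reshape(2, H); b1 = v[2 * H:3 * H]
    W2 = v[3 * H:4 * H];          b2 = v[4 * H]
    p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
    return np.mean((p - y) ** 2)

# DE with fixed parameter values.
NP, F, CR = 30, 0.6, 0.9
pop = rng.normal(0, 1, (NP, D))
fit = np.array([loss(v) for v in pop])
for gen in range(400):
    for i in range(NP):
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True        # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = loss(trial)
        if f_trial <= fit[i]:                # greedy selection
            pop[i], fit[i] = trial, f_trial
print(f"best training MSE after DE: {fit.min():.4f}")
```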