
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopt these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that have emerged out of FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
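As a concrete illustration of the metaheuristic alternative to backpropagation that this abstract describes, the sketch below trains a tiny FNN on XOR with a (1+1) evolution strategy instead of gradients. The network size, mutation scale, and iteration budget are arbitrary choices for the example, not values from the article.

```python
import math
import random

random.seed(0)

# XOR: a small benchmark a 2-4-1 feedforward network can solve.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4                      # hidden units (arbitrary for this sketch)
N_W = 2 * H + H + H + 1    # input weights + hidden biases + output weights + output bias

def forward(w, x):
    """Evaluate the 2-H-1 network encoded in the flat weight vector w."""
    hidden = [math.tanh(w[2 * j] * x[0] + w[2 * j + 1] * x[1] + w[2 * H + j])
              for j in range(H)]
    out = sum(w[3 * H + j] * hidden[j] for j in range(H)) + w[4 * H]
    return 1.0 / (1.0 + math.exp(-out))  # sigmoid output

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

# (1+1) evolution strategy: mutate all weights, keep the child only if it improves.
w = [random.gauss(0.0, 1.0) for _ in range(N_W)]
best = init = loss(w)
for _ in range(3000):
    child = [wi + random.gauss(0.0, 0.2) for wi in w]
    child_loss = loss(child)
    if child_loss < best:
        w, best = child, child_loss
```

Because the loop only accepts improving mutations, it needs no gradient of the loss, which is exactly what makes such methods applicable where gradient-based training struggles.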

    Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction

    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while keeping them genetically isolated. Problem decomposition is an important aspect of using CC for neuroevolution. CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents, and different decomposition methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration, and competition are needed in CC, as multiple subpopulations are used to represent the problem, so it is important to add collaboration and competition to CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improvement in the performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.
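The subcomponent idea behind CC can be sketched in a few lines. The toy below decomposes an 8-parameter vector into four subpopulations and evolves each against a shared context built from the best parts found so far; the separable sphere function stands in for the recurrent-network training loss, and all population sizes and mutation scales are assumptions made for illustration only.

```python
import random

random.seed(1)

DIM, SUBS = 8, 4        # 8 parameters split into 4 subcomponents of 2
SIZE = DIM // SUBS
POP = 10

def sphere(x):
    # Stand-in objective; the CC papers above use a network training loss here.
    return sum(v * v for v in x)

# One subpopulation per subcomponent; the context vector holds the best parts so far.
pops = [[[random.uniform(-5.0, 5.0) for _ in range(SIZE)] for _ in range(POP)]
        for _ in range(SUBS)]
context = [p[0][:] for p in pops]
init = sphere([v for p in context for v in p])

def fitness(i, part):
    # Evaluate a subcomponent by splicing it into the shared context vector.
    trial = [v for p in context for v in p]
    trial[i * SIZE:(i + 1) * SIZE] = part
    return sphere(trial)

for _ in range(200):    # round-robin coevolution of the genetically isolated subcomponents
    for i in range(SUBS):
        for k in range(POP):
            child = [v + random.gauss(0.0, 0.3) for v in pops[i][k]]
            if fitness(i, child) < fitness(i, pops[i][k]):
                pops[i][k] = child
        # Collaboration step: publish this subpopulation's best member to the context.
        context[i] = min(pops[i], key=lambda m: fitness(i, m))[:]

result = sphere([v for p in context for v in p])
```

Each subpopulation never sees the others' genes directly; they interact only through the shared context, which is the essence of the decomposition approach described above.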

    Pareto multi-objective non-linear regression modelling to aid CAPM analogous forecasting

    Copyright © 2002 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    2002 International Joint Conference on Neural Networks (IJCNN '02), Honolulu, Hawaii, 12-17 May 2002.
    Recent studies confront the problem of multiple error terms through summation; however, this implicitly assumes prior knowledge of the problem's error surface. This study constructs a population of Pareto-optimal neural network regression models to describe a market generation process in relation to the forecasting of its risk and return.
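A population of Pareto-optimal models avoids summing error terms by keeping every model that no other model beats on all objectives simultaneously. A minimal dominance filter, using made-up two-objective error pairs rather than anything from the paper, looks like this:

```python
def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (risk error, return error) pairs for five candidate regression models.
models = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]

# The Pareto front: models not dominated by any other model.
front = [m for m in models if not any(dominates(q, m) for q in models)]
# front -> [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0)]
```

The front preserves the trade-off between the two error terms instead of collapsing them into one weighted sum, which is the motivation stated in the abstract.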

    Bibliometric Mapping of the Computational Intelligence Field

    In this paper, a bibliometric study of the computational intelligence field is presented. Bibliometric maps showing the associations between the main concepts in the field are provided for the periods 1996–2000 and 2001–2005. Both the current structure of the field and the evolution of the field over the last decade are analyzed. In addition, a number of emerging areas in the field are identified. It turns out that computational intelligence can best be seen as a field that is structured around four important types of problems, namely control problems, classification problems, regression problems, and optimization problems. Within the computational intelligence field, the neural networks and fuzzy systems subfields are fairly intertwined, whereas the evolutionary computation subfield has a relatively independent position.
    Keywords: neural networks; bibliometric mapping; fuzzy systems; bibliometrics; computational intelligence; evolutionary computation

    Competitive two-island cooperative co-evolution for training feedforward neural networks for pattern classification problems

    In the application of cooperative coevolution to neuro-evolution, problem decomposition methods rely on architectural properties of the neural network to divide it into subcomponents. During every stage of the evolutionary process, different problem decomposition methods yield unique characteristics that may be useful in an environment that enables solution sharing. In this paper, we implement a two-island competition environment in cooperative-coevolution-based neuro-evolution of feedforward neural networks for pattern classification problems. In particular, we combine three problem decomposition methods based on architectural properties, corresponding to neural-level, network-level, and layer-level decomposition. The experimental results show that the performance of the competition method is better than that of the standalone problem decomposition cooperative neuro-evolution methods.
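Island-based competition of the kind described can be mimicked with two independent (1+1) hill-climbers that periodically hand the better solution to the loser. Here a simple quadratic stands in for classification error, the differing mutation scales are a loose proxy for differing decomposition methods, and the migration interval and all other settings are illustrative assumptions, not the paper's configuration.

```python
import random

random.seed(2)

DIM = 6

def objective(x):
    # Stand-in for a network's training error; minimum at x = (1, ..., 1).
    return sum((v - 1.0) ** 2 for v in x)

# Two islands, each exploring at a different mutation scale.
islands = [[random.uniform(-5.0, 5.0) for _ in range(DIM)] for _ in range(2)]
sigmas = [0.5, 0.1]
init = min(objective(s) for s in islands)

for step in range(500):
    for i in range(2):       # independent (1+1) hill-climbing on each island
        child = [v + random.gauss(0.0, sigmas[i]) for v in islands[i]]
        if objective(child) < objective(islands[i]):
            islands[i] = child
    if step % 50 == 49:      # competition phase: the winner's solution replaces the loser's
        winner = min(islands, key=objective)
        islands = [winner[:], winner[:]]

result = min(objective(s) for s in islands)
```

Between migrations the islands evolve in isolation, so the stronger search strategy at any given stage pulls the weaker one along, which is the intuition behind the competition environment in the abstract.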