
    Multi-objective cooperative neuro-evolution of recurrent neural networks for time series prediction

    Cooperative coevolution is an evolutionary computation method that solves a problem by decomposing it into smaller subcomponents. Multi-objective optimization deals with conflicting objectives and produces multiple optimal solutions instead of a single global optimal solution. In previous work, a multi-objective cooperative co-evolutionary method was introduced for training feedforward neural networks on time series problems. In this paper, the same method is used for training recurrent neural networks. The proposed approach is tested on time series problems in which the different time-lags represent the different objectives. Multiple pre-processed datasets distinguished by their time-lags are used for training and testing. This results in the discovery of a single neural network that can correctly give predictions for data pre-processed using different time-lags. The method is tested on several benchmark time series problems, on which it gives competitive performance compared with methods in the literature.
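    As a rough illustration of the multi-objective setup described above, the sketch below (illustrative Python, not the authors' code; `predict` and the lag-indexed `datasets` mapping are assumed names) scores one candidate network against several time-lag pre-processings at once and compares candidates by Pareto dominance.

```python
# Minimal sketch of per-time-lag objectives, assuming `datasets` maps each
# time-lag to a pre-processed (X, y) pair and `predict` wraps a candidate network.
import numpy as np

def objective_vector(predict, datasets):
    """One RMSE per pre-processed dataset: the objective vector of a candidate."""
    return np.array([np.sqrt(np.mean((predict(X) - y) ** 2))
                     for X, y in datasets.values()])

def dominates(a, b):
    """Pareto dominance for minimisation: no worse on every objective, better on at least one."""
    return bool(np.all(a <= b) and np.any(a < b))
```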

    Memetic cooperative coevolution of Elman recurrent neural networks

    Cooperative coevolution decomposes an optimisation problem into subcomponents and collectively solves them using evolutionary algorithms. Memetic algorithms provide enhancement to evolutionary algorithms through local search. Recently, the incorporation of local search into a memetic cooperative coevolution method has been shown to be efficient for training feedforward networks on pattern classification problems. This paper applies the memetic cooperative coevolution method to training recurrent neural networks on grammatical inference problems. The results show that the proposed method achieves better performance in terms of optimisation time and robustness.
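    A minimal sketch of the memetic idea above, under assumed interfaces (the `fitness` and `loss` callables and the list-of-arrays layout of the sub-populations are placeholders, not the paper's implementation): the best member of each sub-population is concatenated, refined by a cheap local search at a chosen interval, and the refined pieces are written back.

```python
# Sketch only: stochastic hill climbing stands in for the local refinement (meme);
# `fitness(ind, i, subpops)` and `loss(w)` are assumed evaluation callables.
import numpy as np

rng = np.random.default_rng(0)

def local_search(weights, loss, steps=20, sigma=0.05):
    """Cheap stochastic hill climbing on the full weight vector."""
    best, best_loss = weights.copy(), loss(weights)
    for _ in range(steps):
        cand = best + rng.normal(0.0, sigma, size=best.shape)
        c = loss(cand)
        if c < best_loss:
            best, best_loss = cand, c
    return best

def memetic_round(subpops, fitness, loss, generation, ls_interval=5):
    """One cooperative-coevolution round with an occasional local-search phase."""
    # ... evolve each sub-population with its own evolutionary algorithm here ...
    best_parts = [sp[int(np.argmin([fitness(ind, i, subpops) for ind in sp]))]
                  for i, sp in enumerate(subpops)]
    if generation % ls_interval == 0:                  # apply the meme only every few generations
        refined, offset = local_search(np.concatenate(best_parts), loss), 0
        for i, part in enumerate(best_parts):          # copy the refined pieces back
            subpops[i][0] = refined[offset: offset + len(part)]
            offset += len(part)
    return subpops
```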

    Competitive two-island cooperative co-evolution for training feedforward neural networks for pattern classification problems

    In the application of cooperative coevolution to neuro-evolution, problem decomposition methods rely on architectural properties of the neural network to divide it into subcomponents. During every stage of the evolutionary process, different problem decomposition methods yield unique characteristics that may be useful in an environment that enables solution sharing. In this paper, we implement a two-island competition environment in cooperative coevolution based neuro-evolution of feedforward neural networks for pattern classification problems. In particular, we use combinations of three problem decomposition methods based on architectural properties, namely neuron-level, network-level and layer-level decomposition. The experimental results show that the performance of the competition method is better than that of the standalone problem decomposition cooperative neuro-evolution methods.
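    To make the decomposition choices concrete, here is an illustrative sketch (an assumed one-hidden-layer layout and hypothetical function names, not the paper's code) of neuron-level and layer-level groupings of a flat weight vector, together with the solution-sharing step after a round of island competition.

```python
# Sketch of architectural decompositions for a network with one hidden layer;
# each index group partitions a flat weight vector (incoming weights plus a bias per neuron).
import numpy as np

def neuron_level(n_in, n_hid, n_out):
    """One subcomponent per hidden/output neuron: its incoming weights plus bias."""
    groups, idx = [], 0
    for _ in range(n_hid):
        groups.append(list(range(idx, idx + n_in + 1))); idx += n_in + 1
    for _ in range(n_out):
        groups.append(list(range(idx, idx + n_hid + 1))); idx += n_hid + 1
    return groups

def layer_level(n_in, n_hid, n_out):
    """One subcomponent per weight layer; network-level would be a single group with everything."""
    first, second = (n_in + 1) * n_hid, (n_hid + 1) * n_out
    return [list(range(first)), list(range(first, first + second))]

def share_best(best_solutions, best_errors):
    """After a competition cycle, every island adopts the current winner's network,
    while each island keeps its own decomposition when evolution resumes."""
    winner = int(np.argmin(best_errors))
    return [best_solutions[winner].copy() for _ in best_solutions]
```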

    Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction

    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while keeping them genetically isolated. Problem decomposition is an important aspect of using CC for neuroevolution. CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents. Different problem decomposition methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration, and competition are needed in CC, as multiple subpopulations are used to represent the problem. It is therefore important to add collaboration and competition to CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improvement in the performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.
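    Whatever decomposition an island uses, each assembled candidate must be decoded and evaluated as an Elman network; the sketch below (an assumed weight layout and hypothetical helper names, not the paper's code) shows that evaluation for one-step-ahead chaotic time-series prediction.

```python
# Sketch: decode a flat candidate vector into Elman recurrent network weights
# (the layout is an assumption) and score it by one-step-ahead RMSE on a series.
import numpy as np

def elman_forward(w, series, n_in=1, n_hid=5, n_out=1):
    i = 0
    W_ih = w[i:i + n_in * n_hid].reshape(n_in, n_hid);   i += n_in * n_hid
    W_hh = w[i:i + n_hid * n_hid].reshape(n_hid, n_hid); i += n_hid * n_hid
    b_h = w[i:i + n_hid];                                i += n_hid
    W_ho = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b_o = w[i:i + n_out]
    h, preds = np.zeros(n_hid), []
    for x in series[:-1]:
        h = np.tanh(np.atleast_1d(x) @ W_ih + h @ W_hh + b_h)   # context units feed back
        preds.append(h @ W_ho + b_o)
    return np.squeeze(np.array(preds))

def rmse_fitness(w, series):
    """Fitness of one assembled candidate: one-step-ahead prediction error."""
    return np.sqrt(np.mean((elman_forward(w, series) - series[1:]) ** 2))
```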

    Enhancing competitive island cooperative neuro-evolution through backpropagation for pattern classification

    Cooperative coevolution is a promising method for training neural networks, an application also known as cooperative neuro-evolution. Cooperative neuro-evolution has been used for pattern classification, time series prediction and global optimisation problems. In the past, competitive island-based cooperative coevolution has been proposed, employing different instances of problem decomposition methods for competition. Although neuro-evolution methods are known as global search methods, they have limitations in terms of training time. The backpropagation algorithm employs gradient descent, which provides the faster convergence that neuro-evolution needs. Backpropagation, however, suffers from premature convergence, and its combination with neuro-evolution can help eliminate the weaknesses of both approaches. In this paper, we propose a competitive island cooperative neuro-evolutionary method that takes advantage of the strengths of gradient descent and neuro-evolution. We use feedforward neural networks on benchmark pattern classification problems to evaluate the performance of the proposed algorithm. The results show improved performance when compared to related methods.
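    The gradient-descent ingredient can be pictured as below: a short, self-contained sketch (an assumed one-hidden-layer network with sigmoid units and squared error, not the paper's exact configuration) of the backpropagation phase that would periodically refine the best assembled network before it is reinserted into the islands.

```python
# Sketch of the backpropagation refinement step: a few epochs of plain gradient
# descent on a one-hidden-layer classifier (sigmoid activations, squared error).
import numpy as np

def backprop_refine(W1, b1, W2, b2, X, y, epochs=10, lr=0.1):
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                     # forward pass, hidden layer
        O = sig(H @ W2 + b2)                     # forward pass, output layer
        dO = (O - y) * O * (1 - O)               # output error signal
        dH = (dO @ W2.T) * H * (1 - H)           # error back-propagated to the hidden layer
        W2 -= lr * (H.T @ dO); b2 -= lr * dO.sum(axis=0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
    return W1, b1, W2, b2
```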

    Problem Decomposition and Adaptation in Cooperative Neuro-Evolution

    One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimisation problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalised to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution. We present a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimisation process changes during evolution in terms of diversity and interacting variables. To address this, we examine the adaptation of the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimisation time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods for training recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.
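    One way to picture the adaptation of the problem decomposition during evolution is the illustrative heuristic below (not the thesis' actual scheme; names and the threshold are assumptions): start with many small subcomponents and merge them into coarser groups once the improvement per cycle stalls.

```python
# Sketch of adapting the decomposition during the run: `groups` is a list of index
# groups and `history` a list of best-fitness values (minimised) per cycle.
def adapt_decomposition(groups, history, merge_threshold=1e-4):
    """Merge neighbouring subcomponents when recent gains fall below a threshold,
    moving gradually from fine-grained towards network-level decomposition."""
    if len(history) >= 2 and len(groups) > 1:
        if history[-2] - history[-1] < merge_threshold:       # progress has stalled
            merged = [groups[i] + groups[i + 1] for i in range(0, len(groups) - 1, 2)]
            if len(groups) % 2:                               # carry an odd trailing group
                merged[-1] = merged[-1] + groups[-1]
            return merged
    return groups
```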

    Cooperative neuro-evolution of Elman recurrent networks for tropical cyclone wind-intensity prediction in the South Pacific region

    Climate change issues are continuously on the rise, and the need to build models and software systems for the management of natural disasters such as cyclones is increasing. Cyclone wind-intensity prediction seeks efficient models to forecast wind intensification in tropical cyclones, which can be used as a means of taking precautionary measures. If the wind-intensity is determined with high precision a few hours ahead, evacuation and further precautionary measures can take place. Neural networks have become popular as efficient tools for forecasting. Recent work in neuro-evolution of Elman recurrent neural networks showed promising performance on benchmark problems. This paper employs a cooperative coevolution method for training Elman recurrent neural networks for cyclone wind-intensity prediction in the South Pacific region. The results show very promising prediction performance using different parameters in time series data reconstruction.
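    The different parameters in time series data reconstruction refer to how the wind-intensity series is embedded before training; a minimal sketch is given below (hypothetical function and parameter names, with a synthetic placeholder series rather than real cyclone data).

```python
# Sketch: min-max scale a wind-intensity series and embed it with dimension D and
# time-lag T into (window, next value) training pairs for the recurrent network.
import numpy as np

def reconstruct(intensity, D=5, T=2):
    s = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    span = (D - 1) * T
    X = np.array([s[t - span: t + 1: T] for t in range(span, len(s) - 1)])
    y = s[span + 1:]
    return X, y

# Usage with a synthetic placeholder series (not real cyclone data):
winds = 60.0 + 30.0 * np.sin(np.linspace(0.0, 3.0, 24))
X, y = reconstruct(winds, D=4, T=2)
print(X.shape, y.shape)      # (17, 4) (17,)
```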

    Darwinian Domain-Generality: The Role of Evolutionary Psychology in the Modularity Debate

    Evolutionary Psychology (EP) tends to be associated with a Massively Modular (MM) cognitive architecture. I argue that EP favors a non-MM cognitive architecture. The main point of dispute is whether central cognition, such as abstract reasoning, exhibits domain-general properties. Partisans of EP argue that domain-specific modules govern central cognition, for it is unclear how the cognitive mind could have evolved domain-generality. In response, I defend a distinction between exogenous and endogenous selection pressures, according to which exogenous pressures tend to select for domain-specificity, whereas endogenous pressures select in favor of domain-generality. I draw on models from brain network theory to motivate this distinction, and also to establish that a domain-general, non-MM cognitive architecture is the more parsimonious adaptive solution to endogenous pressures.