892 research outputs found

    Multi-objective cooperative neuro-evolution of recurrent neural networks for time series prediction

    Cooperative coevolution is an evolutionary computation method that solves a problem by decomposing it into smaller subcomponents. Multi-objective optimization deals with conflicting objectives and produces multiple optimal solutions instead of a single global optimum. In previous work, a multi-objective cooperative coevolutionary method was introduced for training feedforward neural networks on time series problems. In this paper, the same method is used to train recurrent neural networks. The proposed approach is tested on time series problems in which the different time-lags represent the different objectives. Multiple pre-processed datasets, distinguished by their time-lags, are used for training and testing. This results in the discovery of a single neural network that can give correct predictions for data pre-processed using different time-lags. The method is tested on several benchmark time series problems, on which it gives competitive performance compared with methods in the literature.
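    As a reading aid only, not the authors' code: the sketch below illustrates the general technique the abstract describes, under assumed details. A flat genotype encoding a tiny Elman-style recurrent predictor is split into subcomponents, each subcomponent is evolved in round-robin against the current best context vector, and candidates are scored on several time-lag variants of the same series, with the per-lag errors summed as a simple scalarisation of the multiple objectives. All identifiers (`embed`, `rnn_predict`, `cc_evolve`) and the toy sine series are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(series, lag):
    """Split a 1-D series into length-`lag` input windows and next-value targets."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

def rnn_predict(w, X, hidden=4):
    """One-step prediction with a tiny Elman-style cell parameterised by the flat vector w."""
    n = hidden
    Wx = w[:n]                              # input-to-hidden weights
    Wh = w[n:n + n * n].reshape(n, n)       # recurrent weights
    Wo = w[n + n * n:n + n * n + n]         # hidden-to-output weights
    preds = []
    for window in X:
        h = np.zeros(n)
        for x_t in window:                  # unroll the recurrence over the window
            h = np.tanh(Wx * x_t + Wh @ h)
        preds.append(Wo @ h)
    return np.array(preds)

def fitness(w, datasets):
    """Each time-lag dataset is one objective; summed here for brevity."""
    return sum(np.mean((rnn_predict(w, X) - y) ** 2) for X, y in datasets)

def cc_evolve(dim, datasets, n_sub=3, pop=10, gens=20, sigma=0.1):
    """Round-robin cooperative coevolution over `n_sub` genotype subcomponents."""
    best = rng.normal(0.0, 0.5, dim)
    best_fit = fitness(best, datasets)
    slices = np.array_split(np.arange(dim), n_sub)
    for _ in range(gens):
        for idx in slices:                  # evolve one subcomponent at a time
            pool = best[idx] + sigma * rng.normal(size=(pop, len(idx)))
            for cand in pool:               # evaluate in the context of the best
                trial = best.copy()
                trial[idx] = cand
                f = fitness(trial, datasets)
                if f < best_fit:
                    best, best_fit = trial, f
    return best, best_fit

# Toy usage: one sine series, three time-lag variants as the objectives.
series = np.sin(0.3 * np.arange(200))
datasets = [embed(series, lag) for lag in (2, 3, 5)]
hidden = 4
dim = hidden + hidden * hidden + hidden     # Wx + Wh + Wo
best, err = cc_evolve(dim, datasets)
print("summed training error:", round(err, 4))
```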

    Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction

    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them in genetic isolation. Problem decomposition is an important aspect of using CC for neuroevolution: CC employs different problem decomposition methods to break the neural network training problem into subcomponents, and different decomposition methods have features that are helpful at different stages of the evolutionary process. Adaptation, collaboration, and competition are all needed in CC, as multiple subpopulations are used to represent the problem, so it is important to build both collaboration and competition into CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improved performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.
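    A hedged sketch of the island mechanic described above, not the paper's implementation: two cooperative-coevolution islands share one genotype encoding but use different decompositions (coarse versus fine), and every few rounds the islands compete, with the winner's best solution seeding the other island. A sphere function stands in for the recurrent-network training error, and all identifiers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 12

def error(w):
    """Stand-in objective; a real run would use recurrent-network training error."""
    return float(np.sum(w ** 2))

def cc_step(best, best_fit, slices, pop=8, sigma=0.2):
    """One cooperative-coevolution round over the given decomposition."""
    for idx in slices:
        pool = best[idx] + sigma * rng.normal(size=(pop, len(idx)))
        for cand in pool:
            trial = best.copy()
            trial[idx] = cand
            f = error(trial)
            if f < best_fit:
                best, best_fit = trial, f
    return best, best_fit

def new_island():
    w = rng.normal(size=DIM)
    return w, error(w)

# Two islands that differ only in decomposition: coarse (3 subcomponents)
# versus fine (one subcomponent per weight).
decomps = [np.array_split(np.arange(DIM), 3), np.array_split(np.arange(DIM), DIM)]
islands = [new_island() for _ in decomps]

for round_no in range(30):
    islands = [cc_step(b, f, d) for (b, f), d in zip(islands, decomps)]
    if round_no % 5 == 4:                     # competition: the winning island's
        winner = min(islands, key=lambda bf: bf[1])  # best seeds the other island
        islands = [winner if f > winner[1] else (b, f) for b, f in islands]

print("best error across islands:", min(f for _, f in islands))
```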

    Identification of minimal timespan problem for recurrent neural networks with application to cyclone wind-intensity prediction

    Time series prediction relies on past data points to make robust predictions. The span of past data points is important for some applications, since prediction is not possible unless a minimal timespan of data points is available. This is a problem for cyclone wind-intensity prediction, where predictions must be made as soon as a cyclone is identified. This paper presents an empirical study of the minimal timespan required for robust prediction using Elman recurrent neural networks. Two training methods are evaluated for the Elman recurrent network: cooperative coevolution and backpropagation through time. They are applied to predicting the wind intensity of cyclones that took place in the South Pacific over the past few decades. The results show that the minimal timespan is an important factor for robust prediction performance, and that appropriate strategies should be adopted for cases where the minimal timespan of data is not yet available.
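    A minimal illustration (an assumption-laden sketch, not the paper's experiment) of a minimal-timespan sweep: build input windows of increasing span, fit a predictor per span, and report the smallest span whose test error stays close to the best observed. A linear autoregressive model stands in here for the Elman network trained by cooperative coevolution or backpropagation through time; the series, split ratio, and 10% tolerance are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400)
series = np.sin(0.25 * t) + 0.05 * rng.normal(size=t.size)  # toy "intensity" series

def windows(series, span):
    """Length-`span` input windows paired with next-value targets."""
    X = np.array([series[i:i + span] for i in range(len(series) - span)])
    return X, series[span:]

results = {}
for span in range(1, 11):
    X, y = windows(series, span)
    cut = int(0.7 * len(y))                    # chronological train/test split
    w, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    rmse = np.sqrt(np.mean((X[cut:] @ w - y[cut:]) ** 2))
    results[span] = rmse

best = min(results.values())
minimal_span = min(s for s, r in results.items() if r <= 1.1 * best)
print({s: round(r, 4) for s, r in results.items()})
print("minimal timespan (within 10% of best):", minimal_span)
```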

    Learning Bayesian networks using evolutionary computation and its application in classification.

    by Lee Shing-yan. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 126-133). Abstracts in English and Chinese.

    Chapter 1: Introduction (p.1)
        1.1 Problem Statement (p.4)
        1.2 Contributions (p.4)
        1.3 Thesis Organization (p.5)
    Chapter 2: Background (p.7)
        2.1 Bayesian Networks (p.7)
            2.1.1 A Simple Example [42] (p.8)
            2.1.2 Formal Description and Notations (p.9)
            2.1.3 Learning Bayesian Network from Data (p.14)
            2.1.4 Inference on Bayesian Networks (p.18)
            2.1.5 Applications of Bayesian Networks (p.19)
        2.2 Bayesian Network Classifiers (p.20)
            2.2.1 The Classification Problem in General (p.20)
            2.2.2 Bayesian Classifiers (p.21)
            2.2.3 Bayesian Network Classifiers (p.22)
        2.3 Evolutionary Computation (p.28)
            2.3.1 Four Kinds of Evolutionary Computation (p.29)
            2.3.2 Cooperative Coevolution (p.31)
    Chapter 3: Bayesian Network Learning Algorithms (p.33)
        3.1 Related Work (p.34)
            3.1.1 Using GA (p.34)
            3.1.2 Using EP (p.36)
            3.1.3 Criticism of the Previous Approaches (p.37)
        3.2 Two New Strategies (p.38)
            3.2.1 A Hybrid Framework (p.38)
            3.2.2 A New Operator (p.39)
        3.3 CCGA (p.44)
            3.3.1 The Algorithm (p.45)
            3.3.2 CI Test Phase (p.46)
            3.3.3 Cooperative Coevolution Search Phase (p.47)
        3.4 HEP (p.52)
            3.4.1 A Novel Realization of the Hybrid Framework (p.54)
            3.4.2 Merging in HEP (p.55)
            3.4.3 Prevention of Cycle Formation (p.55)
        3.5 Summary (p.56)
    Chapter 4: Evaluation of Proposed Learning Algorithms (p.57)
        4.1 Experimental Methodology (p.57)
        4.2 Comparing the Learning Algorithms (p.61)
            4.2.1 Comparing CCGA with MDLEP (p.63)
            4.2.2 Comparing HEP with MDLEP (p.65)
            4.2.3 Comparing CCGA with HEP (p.68)
        4.3 Performance Analysis of CCGA (p.70)
            4.3.1 Effect of Different α (p.70)
            4.3.2 Effect of Different Population Sizes (p.72)
            4.3.3 Effect of Varying Crossover and Mutation Probabilities (p.73)
            4.3.4 Effect of Varying Belief Factor (p.76)
        4.4 Performance Analysis of HEP (p.77)
            4.4.1 The Hybrid Framework and the Merge Operator (p.77)
            4.4.2 Effect of Different Population Sizes (p.80)
            4.4.3 Effect of Different Δα (p.81)
            4.4.4 Efficiency of the Merge Operator (p.84)
        4.5 Summary (p.85)
    Chapter 5: Learning Bayesian Network Classifiers (p.87)
        5.1 Issues in Learning Bayesian Network Classifiers (p.88)
        5.2 The Multinet Classifier (p.89)
        5.3 The Augmented Bayesian Network Classifier (p.91)
        5.4 Experimental Methodology (p.94)
        5.5 Experimental Results (p.97)
        5.6 Discussion (p.103)
        5.7 Application in Direct Marketing (p.106)
            5.7.1 The Direct Marketing Problem (p.106)
            5.7.2 Response Models (p.108)
            5.7.3 Experiment (p.109)
        5.8 Summary (p.115)
    Chapter 6: Conclusion (p.116)
        6.1 Summary (p.116)
        6.2 Future Work (p.118)
    Appendix A: A Supplementary Parameter Study (p.120)
        A.1 Study on CCGA (p.120)
            A.1.1 Effect of Different α (p.120)
            A.1.2 Effect of Different Population Sizes (p.121)
            A.1.3 Effect of Varying Crossover and Mutation Probabilities (p.121)
            A.1.4 Effect of Varying Belief Factor (p.122)
        A.2 Study on HEP (p.123)
            A.2.1 The Hybrid Framework and the Merge Operator (p.123)
            A.2.2 Effect of Different Population Sizes (p.124)
            A.2.3 Effect of Different Δα (p.124)
            A.2.4 Efficiency of the Merge Operator (p.12)
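    One concrete detail the outline names, Section 3.4.3 "Prevention of Cycle Formation", reflects a standard requirement for any Bayesian-network structure search: candidate graphs must remain directed acyclic graphs. The sketch below is a generic reachability check of that kind, not the thesis's CCGA or HEP code; all identifiers are hypothetical.

```python
from collections import defaultdict

def reaches(adj, src, dst):
    """Depth-first search: is dst reachable from src in the digraph adj?"""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node])
    return False

def try_add_edge(adj, u, v):
    """Add u -> v only if it keeps the graph a DAG; report success."""
    if reaches(adj, v, u):          # v already reaches u, so u -> v closes a cycle
        return False
    adj[u].add(v)
    return True

# Toy usage on a three-node structure:
adj = defaultdict(set)
assert try_add_edge(adj, "A", "B")
assert try_add_edge(adj, "B", "C")
assert not try_add_edge(adj, "C", "A")   # rejected: would form A -> B -> C -> A
```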