
    On the relationship of degree of separability with depth of evolution in decomposition for cooperative coevolution

    Problem decomposition determines how subcomponents are created, and it plays a vital role in the performance of cooperative coevolution. Cooperative coevolution naturally appeals to fully separable problems, where there is little interaction amongst subcomponents; the extent of this interaction is defined by the degree of separability. Typically, in cooperative coevolution, each subcomponent is implemented as a sub-population that is evolved in a round-robin fashion for a specified depth of evolution. This paper examines the relationship between the depth of evolution and the degree of separability for different types of global optimisation problems. The results show that the depth of evolution is an important attribute that affects the performance of cooperative coevolution, and that it can be used to ascertain the nature of the problem in terms of its degree of separability.
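    The round-robin scheme described above is easy to make concrete. The sketch below is not the paper's implementation, just a minimal illustration in Python: `depth` controls how many generations each sub-population evolves before handing control to the next, and all function and parameter names are our own.

```python
import random

def sphere(x):
    # fully separable benchmark: no interaction between variables
    return sum(xi * xi for xi in x)

def cooperative_coevolution(f, dim, n_subcomps=4, pop_size=10,
                            depth=5, cycles=50, seed=0):
    """Illustrative CC loop: each sub-population evolves for `depth`
    generations per round-robin cycle, evaluated against a shared
    context vector built from the other sub-populations' best parts."""
    rng = random.Random(seed)
    # static decomposition: interleaved index blocks form subcomponents
    blocks = [list(range(i, dim, n_subcomps)) for i in range(n_subcomps)]
    context = [rng.uniform(-5, 5) for _ in range(dim)]
    pops = [[[rng.uniform(-5, 5) for _ in b] for _ in range(pop_size)]
            for b in blocks]

    def eval_sub(indiv, block):
        trial = context[:]                       # cooperate via context
        for j, idx in enumerate(block):
            trial[idx] = indiv[j]
        return f(trial)

    for _ in range(cycles):
        for pop, block in zip(pops, blocks):     # round-robin over subcomponents
            for _ in range(depth):               # depth of evolution
                for i in range(len(pop)):
                    child = [g + rng.gauss(0, 0.1) for g in pop[i]]
                    if eval_sub(child, block) < eval_sub(pop[i], block):
                        pop[i] = child
            best = min(pop, key=lambda ind: eval_sub(ind, block))
            for j, idx in enumerate(block):      # commit best to context
                context[idx] = best[j]
    return context, f(context)

solution, fitness = cooperative_coevolution(sphere, dim=8)
print(f"best fitness found: {fitness:.6f}")
```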

    Competitive two-island cooperative co-evolution for training feedforward neural networks for pattern classification problems

    In the application of cooperative coevolution for neuro-evolution, problem decomposition methods rely on architectural properties of the neural network to divide it into subcomponents. During every stage of the evolutionary process, different problem decomposition methods yield unique characteristics that may be useful in an environment that enables solution sharing. In this paper, we implement a two-island competition environment in cooperative coevolution based neuro-evolution for feedforward neural networks for pattern classification problems. In particular, we combine three problem decomposition methods that are based on architectural properties, referring to neural level, network level, and layer level decomposition. The experimental results show that the performance of the competition method is better than that of the standalone problem decomposition cooperative neuro-evolution methods.
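    As a hedged sketch of the island idea (not the paper's algorithm), the snippet below keeps two candidate solutions evolving independently and lets the fitter island seed the other after each epoch. For brevity, the islands here differ only in mutation strength, standing in for the paper's differing problem decomposition methods; all names are illustrative.

```python
import random

def train_two_islands(evaluate, dim, epochs=20, seed=0):
    """Illustrative two-island competition: each island keeps its own
    candidate weight vector; after every epoch the fitter island's
    solution is copied into the other (the competition step)."""
    rng = random.Random(seed)
    islands = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(2)]
    steps = [0.5, 0.1]   # islands differ (here: mutation strength, as a
                         # stand-in for different decomposition methods)
    for _ in range(epochs):
        for k in range(2):                       # evolve each island
            for _ in range(25):
                child = [w + rng.gauss(0, steps[k]) for w in islands[k]]
                if evaluate(child) < evaluate(islands[k]):
                    islands[k] = child
        # competition: the loser restarts from a copy of the winner
        winner = min(range(2), key=lambda k: evaluate(islands[k]))
        islands[1 - winner] = islands[winner][:]
    return min(islands, key=evaluate)

# toy stand-in for network training: distance to a hidden target vector
target = [0.3, -1.2, 0.8, 2.0]
loss = lambda w: sum((a - b) ** 2 for a, b in zip(w, target))
print(train_two_islands(loss, dim=4))
```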

    Problem Decomposition and Adaptation in Cooperative Neuro-Evolution

    One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalized to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution. We present a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimisation process changes during evolution in terms of diversity and interacting variables. To address this, we examine the adaptation of the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods for training recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.
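    One common reading of "grouping interacting variables" in neuro-evolution is neuron-level decomposition: all incoming weights (plus the bias) of a neuron form one subcomponent, and the scheme extends naturally to networks with several hidden layers. The helper below is our own illustrative sketch of that indexing, not the thesis code.

```python
def neuron_level_decomposition(layer_sizes):
    """Group each hidden/output neuron's incoming weights and bias into
    one subcomponent (a common 'interacting variables' heuristic;
    illustrative only). Weights are indexed as a flat genome: layer by
    layer, each neuron's fan-in weights followed by its bias."""
    groups, offset = [], 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        for _ in range(n_out):
            size = n_in + 1                  # fan-in weights + bias
            groups.append(list(range(offset, offset + size)))
            offset += size
    return groups

# a 3-4-2 feedforward network: 4 hidden + 2 output subcomponents
for g in neuron_level_decomposition([3, 4, 2]):
    print(g)
```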

    Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction

    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them while keeping them genetically isolated. Problem decomposition is an important aspect of using CC for neuroevolution: CC employs different problem decomposition methods to decompose the neural network training problem into subcomponents, and different methods have features that are helpful at different stages of the evolutionary process. Since multiple subpopulations are used to represent the problem, adaptation, collaboration, and competition are all needed in CC. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two different instances of the competitive method are proposed that employ different problem decomposition methods to enforce island-based competition. The results show improvement in the performance of the proposed methods in most cases when compared with standalone CC and other methods from the literature.
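    To make clear what the sub-populations are actually evolving in this setting, here is a minimal Elman recurrent step in Python: the hidden state feeds back as context, which is what lets the network model time series. The weights are random placeholders, and in CC-based training they would be split across sub-populations by the decomposition method; the code is illustrative, not the paper's network configuration.

```python
import math, random

def elman_step(x_t, h_prev, W_in, W_rec, W_out, b_h, b_o):
    """One step of an Elman RNN: hidden state feeds back as context.
    In CC-based neuroevolution, these weight matrices are the evolved
    parameters (illustrative sketch)."""
    h_t = [math.tanh(sum(w * x for w, x in zip(W_in[i], x_t))
                     + sum(w * hp for w, hp in zip(W_rec[i], h_prev))
                     + b_h[i])
           for i in range(len(b_h))]
    y_t = [sum(w * hh for w, hh in zip(W_out[j], h_t)) + b_o[j]
           for j in range(len(b_o))]
    return y_t, h_t

# random placeholder weights for a 1-3-1 network
rng = random.Random(1)
n_in, n_h, n_out = 1, 3, 1
W_in  = [[rng.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_h)]
W_rec = [[rng.gauss(0, 0.5) for _ in range(n_h)] for _ in range(n_h)]
W_out = [[rng.gauss(0, 0.5) for _ in range(n_h)] for _ in range(n_out)]
b_h, b_o = [0.0] * n_h, [0.0] * n_out

h = [0.0] * n_h
for t in range(5):                  # feed a short sine series
    y, h = elman_step([math.sin(0.3 * t)], h, W_in, W_rec, W_out, b_h, b_o)
print("one-step prediction:", y)
```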

    Cooperative co-evolution for feature selection in big data with random feature grouping

    A massive amount of data is generated with the evolution of modern technologies. This high-throughput data generation results in Big Data, which consist of many features (attributes). However, irrelevant features may degrade the classification performance of machine learning (ML) algorithms. Feature selection (FS) is a technique used to select a subset of relevant features that represent the dataset. Evolutionary algorithms (EAs) are widely used search strategies in this domain. A variant of EAs, called cooperative co-evolution (CC), which uses a divide-and-conquer approach, is a good choice for optimization problems. Existing solutions perform poorly because of limitations such as not considering feature interactions, dealing only with an even number of features, and decomposing the dataset statically. In this paper, a novel random feature grouping (RFG) technique is introduced, with three variants, to dynamically decompose Big Data datasets and to increase the probability of grouping interacting features into the same subcomponent. RFG can be used in CC-based FS processes, hence the name Cooperative Co-Evolutionary-Based Feature Selection with Random Feature Grouping (CCFSRFG). Experimental analysis was performed using six widely used ML classifiers on seven datasets from the UCI ML repository and the Princeton University Genomics repository, with and without FS. The results indicate that in most cases [i.e., with naïve Bayes (NB), support vector machine (SVM), k-Nearest Neighbor (k-NN), J48, and random forest (RF)] the proposed CCFSRFG-1 outperforms both an existing CC-based FS solution (CCEAFS) and CCFSRFG-2, as well as classification using all features, in terms of accuracy, sensitivity, and specificity.
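    The core of random feature grouping is simple to sketch: shuffle the feature indices and deal them into subcomponents, re-shuffling whenever a fresh decomposition is wanted, so interacting features at least have a chance of landing in the same group. The snippet below is a minimal illustration (not one of the paper's three RFG variants specifically) and, unlike the static schemes the paper criticizes, it handles feature counts that do not divide evenly.

```python
import random

def random_feature_grouping(n_features, n_groups, seed=None):
    """Illustrative RFG: shuffle feature indices and deal them into
    subcomponents; calling it again yields a fresh dynamic
    decomposition. Handles uneven splits (n_features % n_groups != 0)."""
    rng = random.Random(seed)
    idx = list(range(n_features))
    rng.shuffle(idx)
    base, extra = divmod(n_features, n_groups)
    groups, start = [], 0
    for g in range(n_groups):
        size = base + (1 if g < extra else 0)
        groups.append(sorted(idx[start:start + size]))
        start += size
    return groups

# three groups covering all 10 features, sizes 4/3/3
print(random_feature_grouping(10, 3, seed=42))
```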

    A review of population-based metaheuristics for large-scale black-box global optimization: Part A

    Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, or combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird’s-eye view of the field and learn about its major trends and state-of-the-art algorithms. Part A of the series covers two major algorithmic approaches to large-scale global optimization: problem decomposition and memetic algorithms. Part B of the series covers a range of other algorithmic approaches, describes a wide range of problem areas, and finally touches upon the pitfalls and challenges of current research and identifies several potential areas for future research.

    Multiple Relevant Feature Ensemble Selection Based on Multilayer Co-Evolutionary Consensus MapReduce

    Although feature selection for large data has been intensively investigated in data mining, machine learning, and pattern recognition, the challenge is not just to invent new algorithms to handle noisy and uncertain large data in applications, but rather to link multiple relevant feature sources, structured or unstructured, to develop an effective feature reduction method. In this paper, we propose a multiple relevant feature ensemble selection (MRFES) algorithm based on multilayer co-evolutionary consensus MapReduce (MCCM). We construct an effective MCCM model to handle feature ensemble selection of large-scale datasets with multiple relevant feature sources, and explore the unified consistency aggregation between the local solutions and global dominance solutions achieved by the co-evolutionary memeplexes, which participate in the cooperative feature ensemble selection process. This model attempts to reach a mutual decision agreement among co-evolutionary memeplexes, which calls for mechanisms to detect non-cooperative co-evolutionary behaviors and achieve better Nash equilibrium resolutions. Extensive experimental comparative studies substantiate the effectiveness of MRFES in solving large-scale dataset problems with complex noise and multiple relevant feature sources on some well-known benchmark datasets. The algorithm can greatly facilitate the selection of relevant feature subsets from the original feature space with better accuracy, efficiency, and interpretability. Moreover, we apply MRFES to human cerebral cortex-based classification prediction. Such successful applications are expected to significantly scale up classification prediction for large-scale and complex brain data in terms of efficiency and feasibility.
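    The full MCCM model (MapReduce layers, Nash-equilibrium checks, detection of non-cooperative behaviors) is well beyond a snippet, but the consensus-aggregation step it is built around can be caricatured as voting over locally selected feature subsets. The sketch below is our own simplification under that assumption; all names are hypothetical.

```python
from collections import Counter

def consensus_select(local_selections, n_features, quorum=0.5):
    """Toy consensus aggregation: keep a feature if at least `quorum`
    of the co-evolving populations ('memeplexes') selected it locally.
    A deliberate simplification of the paper's aggregation step."""
    votes = Counter(f for sel in local_selections for f in sel)
    need = quorum * len(local_selections)
    return sorted(f for f in range(n_features) if votes[f] >= need)

# three memeplexes propose overlapping subsets of 8 features -> [0, 2, 5]
print(consensus_select([{0, 2, 5}, {0, 3, 5, 7}, {0, 2, 5, 6}], 8))
```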

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Evolutionary algorithms are general-purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, the rapid advances in science and technology have witnessed the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. When the available computational budget is limited, the dimensionality of the search space is one of the main contributors to a problem's difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems: the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques when dealing with large-scale problems. Finally, assuming uniformity of the initial population to be a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given the practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance, and the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
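    The second part's idea of contribution-based budget allocation can be sketched as greedy credit assignment: spend a slice of evaluations on the subproblem whose previous round improved the overall objective the most. The code below is an illustrative toy, not the thesis framework; the unequal weights mimic subproblems that contribute very differently to the objective.

```python
import random

def allocate_budget(subproblems, total_evals, slice_evals=10, seed=0):
    """Greedy contribution-based allocation (illustrative): repeatedly
    spend a slice of evaluations on the subproblem whose last round
    yielded the largest improvement. Each subproblem callable takes
    (evals, rng) and returns the improvement it achieved."""
    rng = random.Random(seed)
    contrib = [float("inf")] * len(subproblems)   # optimistic init: try each once
    spent = 0
    while spent + slice_evals <= total_evals:
        k = max(range(len(subproblems)), key=lambda i: contrib[i])
        contrib[k] = subproblems[k](slice_evals, rng)
        spent += slice_evals
    return contrib

# toy imbalanced problem: one variable per subproblem, wildly
# different weights on the overall objective
state = [5.0, 5.0, 5.0]
weights = [100.0, 1.0, 0.01]

def make_subproblem(i):
    def run(evals, rng):
        before = weights[i] * state[i] ** 2
        for _ in range(evals):                   # simple hill-climb
            cand = state[i] + rng.gauss(0, 0.5)
            if weights[i] * cand ** 2 < weights[i] * state[i] ** 2:
                state[i] = cand
        return before - weights[i] * state[i] ** 2   # improvement
    return run

allocate_budget([make_subproblem(i) for i in range(3)], total_evals=300)
print("final state:", [round(x, 3) for x in state])
```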