
    Efficient Resource Allocation in Cooperative Co-Evolution for Large-Scale Global Optimization

    Cooperative co-evolution (CC) is an explicit means of problem decomposition in multipopulation evolutionary algorithms for solving large-scale optimization problems. In CC, subpopulations representing subcomponents of a large-scale optimization problem co-evolve, and are likely to contribute differently to the improvement of the best overall solution to the problem. Hence, it makes sense to allocate more computational resources to the subpopulations with greater contributions. In this paper, we study how to allocate computational resources in this context and subsequently propose a new CC framework named CCFR that efficiently allocates computational resources among the subpopulations according to their dynamic contributions to the improvement of the objective value of the best overall solution. Our experimental results suggest that CCFR makes efficient use of computational resources and is highly competitive for solving large-scale optimization problems.
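
    The allocation idea reads naturally as a loop that credits each subpopulation with the objective improvement observed while it evolves and keeps spending the budget on the current top contributor. Below is a minimal, self-contained sketch of that loop; the sphere objective, the fixed-size decomposition, and the crude mutation standing in for a subpopulation are all illustrative assumptions, not CCFR itself.

        # Contribution-based budget allocation in CC, in the spirit of CCFR
        # but not the authors' implementation: objective, decomposition, and
        # the (1+1)-style mutation step are simplifying assumptions.
        import random

        def sphere(x):
            return sum(v * v for v in x)

        dim, group_size = 20, 5
        groups = [list(range(s, s + group_size)) for s in range(0, dim, group_size)]
        context = [random.uniform(-5, 5) for _ in range(dim)]  # best overall solution
        best = sphere(context)
        contrib = [float('inf')] * len(groups)  # optimistic initial credit per group

        for _ in range(2000):
            g = contrib.index(max(contrib))       # spend the next step on the group
            old = best                            # with the largest recorded contribution
            trial = context[:]
            for i in groups[g]:                   # crude mutation standing in for one
                trial[i] += random.gauss(0, 0.1)  # generation of the subpopulation
            val = sphere(trial)
            if val < best:
                best, context = val, trial
            contrib[g] = old - best               # improvement credited to group g

        print(best)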

    Enhancing Cooperative Coevolution for Large Scale Optimization by Adaptively Constructing Surrogate Models

    It has been shown that cooperative coevolution (CC) can effectively deal with large-scale optimization problems (LSOPs) through a divide-and-conquer strategy. However, its performance is severely restricted by the current context-vector-based sub-solution evaluation method, which needs to access the original high-dimensional simulation model when evaluating each sub-solution and thus requires substantial computational resources. To alleviate this issue, this study proposes an adaptive surrogate-model-assisted CC framework. This framework adaptively constructs surrogate models for different sub-problems by fully considering their characteristics. For the one-dimensional sub-problems obtained through decomposition, sufficiently accurate surrogate models can be built and used to find the optimal solutions of the corresponding sub-problems directly. For the nonseparable sub-problems, the surrogate models are employed to evaluate the corresponding sub-solutions, and the original simulation model is only adopted to re-evaluate a few good sub-solutions selected by the surrogate models. In this way, the computational cost can be greatly reduced without significantly sacrificing evaluation quality. Empirical studies on the IEEE CEC 2010 benchmark functions show that a concrete algorithm based on this framework finds much better solutions than conventional CC algorithms and a non-CC algorithm, even with far fewer computational resources.
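
    The pre-screening step for nonseparable sub-problems is the easiest part to illustrate: a cheap surrogate fitted on already-evaluated points ranks new candidate sub-solutions, and only the apparently best few are re-evaluated with the expensive model. The sketch below assumes an RBF surrogate and a toy "simulation"; the paper's actual model construction is adaptive and more elaborate.

        # Hedged sketch of surrogate pre-screening for a nonseparable
        # sub-problem; the RBF surrogate, archive size, and toy objective
        # are illustrative assumptions, not the paper's adaptive models.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def expensive_eval(x):                  # stands in for the simulation model
            return float(np.sum(x ** 2))

        rng = np.random.default_rng(0)
        X = rng.uniform(-5, 5, size=(40, 3))    # archive of already-evaluated points
        y = np.array([expensive_eval(x) for x in X])
        surrogate = RBFInterpolator(X, y)       # cheap model fitted on the archive

        candidates = rng.uniform(-5, 5, size=(200, 3))  # offspring sub-solutions
        scores = surrogate(candidates)                  # surrogate screening
        top = candidates[np.argsort(scores)[:5]]        # keep the apparent best few
        true_vals = [expensive_eval(x) for x in top]    # expensive model used sparingly
        print(min(true_vals))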

    Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations

    Given the ubiquity of non-separable optimization problems in the real world, in this paper we analyze and extend the large-scale version of the well-known cooperative coevolution (CC), a divide-and-conquer optimization framework, on non-separable functions. First, we reveal empirical reasons why decomposition-based methods are or are not preferred in practice on some non-separable large-scale problems, which have not been clearly pointed out in many previous CC papers. Then, we formalize CC as a continuous game model via simplification, but without losing its essential property. Unlike previous evolutionary game theory for CC, our new model provides a much simpler but useful viewpoint for analyzing its convergence, since only the pure Nash equilibrium concept is needed and more general fitness landscapes can be explicitly considered. Based on the convergence analyses, we propose a hierarchical decomposition strategy for better generalization, since for any decomposition there is a risk of getting trapped in a suboptimal Nash equilibrium. Finally, we use distributed computing to accelerate it under the multi-level learning framework, which combines the fine-tuning ability of decomposition with the invariance property of CMA-ES. Experiments on a set of high-dimensional functions validate both its search performance and its scalability (w.r.t. CPU cores) on a cluster computing platform with 400 CPU cores.
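
    The game-model viewpoint can be made concrete with a toy loop in which each index block acts as a player best-responding while the other blocks stay fixed; the loop stops at a pure Nash equilibrium, which on multimodal landscapes need not be the global optimum (the motivation for the hierarchical decomposition above). The grid best-response and the test function below are illustrative assumptions.

        # Toy rendering of the game view of CC: each coordinate is a player
        # best-responding while the others stay fixed; the loop stops at a
        # pure Nash equilibrium.
        import numpy as np

        def f(x):                               # nonseparable: the blocks interact
            return (x[0] - x[1]) ** 2 + (x[1] + x[2] - 1) ** 2 + x[0] ** 2

        x = np.array([2.0, -2.0, 2.0])
        grid = np.linspace(-3, 3, 601)
        for _ in range(50):
            moved = False
            for i in range(len(x)):             # player i best-responds on a grid
                trials = np.repeat(x[None, :], len(grid), axis=0)
                trials[:, i] = grid
                best = grid[np.argmin([f(t) for t in trials])]
                if abs(best - x[i]) > 1e-9:
                    x[i], moved = best, True
            if not moved:                       # no player can improve alone: a pure
                break                           # Nash equilibrium; on multimodal
        print(x, f(x))                          # landscapes it may be suboptimal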

    Hierarchical Multi-Agent Optimization for Resource Allocation in Cloud Computing

    In cloud computing, an important concern is to allocate the available resources of service nodes to the requested tasks on demand and to optimize an objective such as resource utilization, payoff, or available bandwidth. This paper proposes a hierarchical multi-agent optimization (HMAO) algorithm to maximize resource utilization and minimize bandwidth cost in cloud computing. The proposed HMAO algorithm is a combination of a genetic algorithm (GA) and a multi-agent optimization (MAO) algorithm. To maximize resource utilization, an improved GA is implemented to find a set of service nodes on which to deploy the requested tasks. A decentralized MAO algorithm is then presented to minimize the bandwidth cost. We study the effect of key parameters of the HMAO algorithm using the Taguchi method and evaluate the performance results. Compared with the genetic algorithm (GA) and the fast elitist non-dominated sorting genetic algorithm (NSGA-II), the simulation results demonstrate that the HMAO algorithm is more effective than these existing solutions for resource allocation with a large number of requested tasks. Furthermore, we compare the HMAO algorithm with the first-fit greedy approach in online resource allocation.
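
    The two-level structure can be caricatured in a few lines: a consolidation pass places tasks on as few nodes as possible to raise utilization, then tasks migrate locally whenever a move lowers a bandwidth-cost proxy. Everything in this sketch (first-fit placement, the communication-pair cost model, random migration) is a stand-in for the paper's improved GA and decentralized MAO components.

        # Rough two-level sketch in the shape of HMAO; all models here are
        # illustrative stand-ins for the paper's GA and multi-agent parts.
        import random

        random.seed(1)
        n_tasks, cap = 20, 5
        demand = [random.randint(1, 3) for _ in range(n_tasks)]
        # random communication pairs; cross-node pairs incur bandwidth cost
        pairs = [tuple(random.sample(range(n_tasks), 2)) for _ in range(30)]

        # Level 1: consolidate tasks onto few nodes to raise utilization
        # (first-fit decreasing here, an improved GA in the paper).
        order = sorted(range(n_tasks), key=lambda t: -demand[t])
        node_of, load = {}, []
        for t in order:
            for n, l in enumerate(load):
                if l + demand[t] <= cap:
                    node_of[t], load[n] = n, l + demand[t]
                    break
            else:
                node_of[t] = len(load)
                load.append(demand[t])

        def bw_cost():
            return sum(node_of[a] != node_of[b] for a, b in pairs)

        # Level 2: tasks migrate when a feasible move cuts bandwidth cost
        # (a decentralized local search standing in for the MAO algorithm).
        for _ in range(500):
            t, n = random.randrange(n_tasks), random.randrange(len(load))
            if n != node_of[t] and load[n] + demand[t] <= cap:
                old, before = node_of[t], bw_cost()
                load[old] -= demand[t]; load[n] += demand[t]; node_of[t] = n
                if bw_cost() > before:          # revert non-improving moves
                    load[n] -= demand[t]; load[old] += demand[t]; node_of[t] = old

        print(len(load), "nodes, bandwidth cost", bw_cost())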

    A Species-based Particle Swarm Optimization with Adaptive Population Size and Deactivation of Species for Dynamic Optimization Problems

    Population clustering methods, which consider the position and fitness of individuals to form sub-populations in multi-population algorithms, have shown high efficiency in tracking the moving global optimum in dynamic optimization problems. However, most of these methods use a fixed population size, making them inflexible and inefficient when the number of promising regions is unknown. The lack of a functional relationship between the population size and the number of promising regions significantly degrades performance and limits an algorithm's agility in responding to dynamic changes. To address this issue, we propose a new species-based particle swarm optimization with adaptive population size and number of sub-populations for solving dynamic optimization problems. The proposed algorithm also benefits from a novel systematic adaptive deactivation component that, unlike previous deactivation components, adapts computational resource allocation to the sub-populations by considering various characteristics of both the problem and the sub-populations. We evaluate the performance of our proposed algorithm on the Generalized Moving Peaks Benchmark and compare the results with several peer approaches. The results indicate the superiority of the proposed method.
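
    The speciation and deactivation mechanics can be sketched compactly: particles are sorted by fitness, each joins the first fitter seed within a radius or becomes a new seed, and species whose members have collapsed together are deactivated so their budget goes to the remaining sub-populations. The radius rule below is the classic speciation heuristic; the convergence test is a simplified stand-in for the adaptive component proposed here.

        # Species formation plus a naive deactivation test; both are
        # illustrative simplifications of the paper's adaptive components.
        import numpy as np

        def form_species(positions, fitness, radius):
            order = np.argsort(fitness)         # minimization: best particles first
            seeds, members = [], {}
            for i in order:
                for s in seeds:
                    if np.linalg.norm(positions[i] - positions[s]) < radius:
                        members[s].append(i)    # join the first fitter seed in range
                        break
                else:
                    seeds.append(i)             # no seed nearby: start a new species
                    members[i] = [i]
            return members

        rng = np.random.default_rng(0)
        pos = rng.uniform(-5, 5, size=(50, 2))
        fit = (pos ** 2).sum(axis=1)
        species = form_species(pos, fit, radius=2.0)

        # Deactivate species whose members have collapsed onto their seed,
        # freeing the evaluation budget for the remaining sub-populations.
        active = {s: m for s, m in species.items()
                  if np.ptp(pos[m], axis=0).max() > 1e-3}
        print(len(species), "species,", len(active), "active")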

    Multi-Guide Particle Swarm Optimization for Large-Scale Multi-Objective Optimization Problems

    Multi-guide particle swarm optimization (MGPSO) is a novel metaheuristic for multi-objective optimization based on particle swarm optimization (PSO). MGPSO has been shown to be competitive with other state-of-the-art multi-objective optimization algorithms on low-dimensional problems. However, to the best of the author's knowledge, the suitability of MGPSO for high-dimensional multi-objective optimization problems has not been studied. One goal of this thesis is to provide a scalability study of MGPSO in order to evaluate its efficacy for high-dimensional multi-objective optimization problems. It is observed that while MGPSO performs comparably to state-of-the-art multi-objective optimization algorithms, its performance drops as the problem dimensionality increases. Therefore, a main contribution of this work is a new scalable MGPSO-based algorithm, termed cooperative co-evolutionary multi-guide particle swarm optimization (CCMGPSO), that incorporates ideas from cooperative PSOs. A detailed empirical study on well-known benchmark problems compares the proposed approach with various state-of-the-art multi-objective optimization algorithms. Results show that the proposed CCMGPSO is highly competitive on high-dimensional problems.
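
    For orientation, MGPSO's distinguishing feature is a third guide drawn from a bounded archive of non-dominated solutions, blended with the usual personal and social guides through a per-particle exploitation coefficient. The sketch below shows one velocity-and-position update in that style; the parameter values and the uniform-random archive pick (the literature describes tournament selection from the archive) are simplifying assumptions.

        # One MGPSO-style velocity update with a third, archive-based guide;
        # parameters and the archive-selection rule are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        d = 10
        w, c1, c2, c3 = 0.729, 1.49, 1.49, 1.49
        lam = rng.uniform()                     # per-particle exploitation trade-off

        x = rng.uniform(-1, 1, d)               # particle position
        v = np.zeros(d)
        pbest = rng.uniform(-1, 1, d)           # personal best
        gbest = rng.uniform(-1, 1, d)           # social best for this objective
        archive = rng.uniform(-1, 1, (20, d))   # bounded non-dominated archive
        guide = archive[rng.integers(len(archive))]

        r1, r2, r3 = (rng.uniform(size=d) for _ in range(3))
        v = (w * v + c1 * r1 * (pbest - x)
             + lam * c2 * r2 * (gbest - x)
             + (1 - lam) * c3 * r3 * (guide - x))
        x = x + v
        print(x[:3])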

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Evolutionary algorithms are general-purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, the rapid advances in science and technology have witnessed the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. When the available computational budget is limited, the dimensionality of the search space is one of the main contributors to a problem's difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of computational budget in evolutionary algorithms when solving large-scale black-box optimization problems: the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions.

    In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary algorithms when dealing with large-scale problems. Finally, taking uniformity of the initial population as a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given practical restrictions on computational resources.

    In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
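
    The uniformity question from the first part can be probed directly: compare the discrepancy of a pseudo-random initial population with that of a low-discrepancy one as the dimension grows. A small experiment in that spirit, with arbitrary sample sizes and dimensions chosen for illustration, might look as follows.

        # Centered L2 discrepancy (lower is more uniform) of pseudo-random
        # vs. Sobol initial populations as the dimension grows.
        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(0)
        n = 128
        for d in (2, 10, 50):
            random_pop = rng.uniform(size=(n, d))
            sobol_pop = qmc.Sobol(d, seed=0).random(n)
            print(d, qmc.discrepancy(random_pop), qmc.discrepancy(sobol_pop))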