
    Bandit-based cooperative coevolution for tackling contribution imbalance in large-scale optimization problems

    This paper addresses the issue of computational resource allocation within the context of cooperative coevolution. Cooperative coevolution typically works by breaking a problem down into smaller subproblems (or components) and coevolving them in a round-robin fashion, resulting in a uniform resource allocation among its components. Despite its success on a wide range of problems, cooperative coevolution struggles to perform efficiently when its components do not contribute equally to the overall objective value. This is of crucial importance on large-scale optimization problems, where such differences are further magnified. To resolve this imbalance problem, we extend standard cooperative coevolution to a new generic framework capable of learning the contribution of each component using multi-armed bandit techniques. The new framework allocates computational resources to each component in proportion to its contribution towards improving the overall objective value. This approach results in a more economical use of the limited computational resources. We study different aspects of the proposed framework through extensive experiments. Our empirical results confirm that even a simple bandit-based credit assignment scheme can significantly improve the performance of cooperative coevolution on large-scale continuous problems, leading to performance competitive with state-of-the-art algorithms.
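
    To make the bandit view concrete, the sketch below shows how a UCB-style credit assignment could schedule evolution rounds among CC components, crediting each component with the objective improvement it produces. The callbacks `evolve_component` and `evaluate` are hypothetical placeholders; this is a minimal illustration, not the paper's actual framework.

```python
import math

def bandit_cc(components, evolve_component, evaluate, budget):
    """UCB-style credit assignment over CC components (illustrative sketch).

    components:       the subproblems obtained by decomposition
    evolve_component: hypothetical callback; spends one round of evolution
                      on a component and returns the new best overall
                      objective value (minimization assumed)
    evaluate:         returns the current best overall objective value
    budget:           total number of component rounds to spend
    """
    n = len(components)
    counts = [0] * n                 # rounds allocated to each component
    mean_reward = [0.0] * n          # average improvement credited per round
    best = evaluate()

    for t in range(1, budget + 1):
        if t <= n:
            i = t - 1                # play every arm (component) once first
        else:
            i = max(range(n), key=lambda k: mean_reward[k]
                    + math.sqrt(2.0 * math.log(t) / counts[k]))
        new_best = evolve_component(components[i])
        improvement = max(0.0, best - new_best)   # credit = objective gain
        counts[i] += 1
        mean_reward[i] += (improvement - mean_reward[i]) / counts[i]
        best = min(best, new_best)
    return best
```

    Components that keep improving the overall solution accumulate credit and receive more rounds, while the exploration term keeps occasionally revisiting the others.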

    Towards a more efficient use of computational budget in large-scale black-box optimization

    Evolutionary algorithms are general-purpose optimizers that have been shown effective in solving a variety of challenging optimization problems. In contrast to mathematical programming models, evolutionary algorithms do not require derivative information and remain effective when the algebraic formula of the given problem is unavailable. Nevertheless, the rapid advances in science and technology have witnessed the emergence of more complex optimization problems than ever, which pose significant challenges to traditional optimization methods. The dimensionality of the search space of an optimization problem, when the available computational budget is limited, is one of the main contributors to its difficulty and complexity. This so-called curse of dimensionality can significantly affect the efficiency and effectiveness of optimization methods, including evolutionary algorithms. This research studies two topics related to a more efficient use of the computational budget in evolutionary algorithms when solving large-scale black-box optimization problems. More specifically, we study the role of population initializers in saving computational resources, and computational budget allocation in cooperative coevolutionary algorithms. Consequently, this dissertation consists of two major parts, each of which relates to one of these research directions. In the first part, we review several population initialization techniques that have been used in evolutionary algorithms and categorize them from different perspectives. The contribution of each category to improving evolutionary algorithms in solving large-scale problems is measured. We also study the mutual effect of population size and initialization technique on the performance of evolutionary techniques when dealing with large-scale problems. Finally, assuming uniformity of the initial population is a key contributor to saving a significant part of the computational budget, we investigate whether achieving a high level of uniformity in high-dimensional spaces is feasible given practical restrictions on computational resources. In the second part of the thesis, we study large-scale imbalanced problems. In many real-world applications, a large problem may consist of subproblems with different degrees of difficulty and importance. In addition, the solution to each subproblem may contribute differently to the overall objective value of the final solution. When the computational budget is restricted, which is the case in many practical problems, investing the same portion of resources in optimizing each of these imbalanced subproblems is not the most efficient strategy. Therefore, we examine several ways to learn the contribution of each subproblem and then dynamically allocate the limited computational resources to each of them according to its contribution to the overall objective value of the final solution. To demonstrate the effectiveness of the proposed framework, we design a new set of 40 large-scale imbalanced problems and study the performance of some possible instances of the framework.
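
    As a rough illustration of the uniformity question raised in the first part, the sketch below compares a plain uniform-random initial population with a simple Latin hypercube sample, using the minimum pairwise distance (a maximin criterion) as a proxy for uniformity. Both the proxy and the population sizes are assumptions for illustration, not the dissertation's actual methodology.

```python
import numpy as np

def latin_hypercube(n_points, dim, rng):
    """Simple Latin hypercube sample in the unit cube [0, 1]^dim."""
    # One stratified coordinate per dimension, then shuffle each dimension.
    strata = (np.arange(n_points) + rng.random((dim, n_points))) / n_points
    return np.array([rng.permutation(row) for row in strata]).T

def min_pairwise_distance(pop):
    """Maximin uniformity proxy: a larger minimum pairwise distance
    indicates a more evenly spread population."""
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # ignore self-distances
    return float(d.min())

rng = np.random.default_rng(0)
for dim in (10, 100, 1000):
    uniform = rng.random((100, dim))        # plain uniform-random initialization
    lhs = latin_hypercube(100, dim, rng)    # Latin hypercube initialization
    print(dim, min_pairwise_distance(uniform), min_pairwise_distance(lhs))
```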

    Efficient Resource Allocation in Cooperative Co-Evolution for Large-Scale Global Optimization

    Cooperative co-evolution (CC) is an explicit means of problem decomposition in multipopulation evolutionary algorithms for solving large-scale optimization problems. In CC, subpopulations representing subcomponents of a large-scale optimization problem co-evolve, and are likely to contribute differently to the improvement of the best overall solution to the problem. Hence, it makes sense to allocate more computational resources to the subpopulations with greater contributions. In this paper, we study how to allocate computational resources in this context and subsequently propose a new CC framework named CCFR that efficiently allocates computational resources among the subpopulations according to their dynamic contributions to the improvement of the objective value of the best overall solution. Our experimental results suggest that CCFR makes efficient use of computational resources and is a highly competitive framework for solving large-scale optimization problems.
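
    A minimal sketch of contribution-driven scheduling in this spirit is given below. The greedy "largest recent contribution" rule and the `evolve` callback are simplifying assumptions and do not reproduce CCFR's actual contribution updates or stagnation handling.

```python
def contribution_based_scheduler(subpops, evolve, n_cycles):
    """Greedy contribution-based scheduling sketch (not CCFR's exact rules).

    subpops: list of subpopulation objects, one per subcomponent
    evolve:  hypothetical callback; evolve(sp) runs one evolutionary cycle
             on subpopulation sp and returns the improvement it made to the
             objective value of the best overall solution
    """
    # Warm-up: one cycle per subpopulation to obtain initial contributions.
    contribution = [evolve(sp) for sp in subpops]
    for _ in range(n_cycles):
        # Spend the next cycle on the subpopulation whose previous cycle
        # contributed most to improving the best overall solution.
        i = max(range(len(subpops)), key=lambda k: contribution[k])
        contribution[i] = evolve(subpops[i])
```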

    A review of population-based metaheuristics for large-scale black-box global optimization: Part B

    This paper is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely decomposition methods and hybridization methods such as memetic algorithms and local search. In this part, we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas in relation to large-scale global optimization, such as multi-objective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The paper also includes a discussion of the pitfalls and challenges of current research and identifies several potential areas of future research.

    Benchmarking Continuous Dynamic Optimization: Survey and Generalized Test Suite

    Dynamic changes are an important and inescapable aspect of many real-world optimization problems. Designing algorithms that find and track desirable solutions while facing the challenges of dynamic optimization problems is an active research topic in the field of swarm and evolutionary computation. To evaluate and compare the performance of algorithms, it is imperative to use a suitable benchmark that generates problem instances with different controllable characteristics. In this paper, we give a comprehensive review of existing benchmarks and investigate their shortcomings in capturing different problem features. We then propose a highly configurable benchmark suite, the generalized moving peaks benchmark, capable of generating problem instances whose components have a variety of properties, such as different levels of ill-conditioning, variable interactions, shape, and complexity. Moreover, components generated by the proposed benchmark can be highly dynamic with respect to the gradients, heights, optimum locations, condition numbers, shapes, complexities, and variable interactions. Finally, several well-known optimizers and dynamic optimization algorithms are chosen to solve problems generated by the proposed benchmark. The experimental results show the poor performance of existing methods when facing the new challenges posed by the addition of these properties.
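
    For orientation, the sketch below implements a tiny classic moving-peaks-style landscape with cone-shaped peaks and a random change step. It is only a toy reference point: the generalized benchmark described in the paper adds controllable ill-conditioning, rotation-induced variable interaction, and configurable peak shapes that this version deliberately omits.

```python
import numpy as np

class MovingPeaks:
    """Tiny moving-peaks-style dynamic landscape with cone-shaped peaks."""

    def __init__(self, dim=5, n_peaks=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.centers = self.rng.uniform(0.0, 100.0, (n_peaks, dim))
        self.heights = self.rng.uniform(30.0, 70.0, n_peaks)
        self.widths = self.rng.uniform(1.0, 12.0, n_peaks)

    def __call__(self, x):
        # Fitness is the highest peak value at x: height minus
        # width-scaled distance to the peak centre (maximization).
        dist = np.linalg.norm(self.centers - np.asarray(x), axis=1)
        return float(np.max(self.heights - self.widths * dist))

    def change(self, shift=1.0):
        # Environment change: random drift of peak centres and heights.
        self.centers += shift * self.rng.normal(size=self.centers.shape)
        self.heights += self.rng.normal(scale=7.0, size=self.heights.shape)
```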

    Evolutionary computation for expensive optimization: a survey

    Expensive optimization problems (EOPs) widely exist in various significant real-world applications. However, an EOP requires expensive or even unaffordable costs for evaluating candidate solutions, which makes it difficult for an algorithm to find a satisfactory solution. Moreover, due to fast-growing application demands in the economy and society, such as the emergence of smart cities, the internet of things, and the big data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches. Among various optimization approaches, evolutionary computation (EC) is a promising global optimization tool that has been widely used for solving EOPs efficiently over the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review these advancements in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper aims to provide a comprehensive survey showing why and how EC can solve EOPs efficiently. To this end, the paper first analyzes the total optimization cost of EC in solving EOPs. Then, based on this analysis, three promising research directions are pointed out for solving EOPs: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. To the best of our knowledge, this paper is the first to outline possible directions for efficiently solving EOPs by analyzing the total expensive cost. On this basis, existing works are reviewed comprehensively via a taxonomy with four parts, covering the above three research directions and real-world applications. Moreover, some future research directions are also discussed. It is believed that such a survey can attract attention, encourage discussions, and stimulate new EC research ideas for solving EOPs and related real-world applications more efficiently.
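
    As a toy illustration of the "problem approximation and substitution" direction, the sketch below pre-screens candidate solutions with a cheap k-nearest-neighbour surrogate built from previously evaluated points, so that only the most promising candidates receive true expensive evaluations. The function name, arguments, and the surrogate choice are assumptions for illustration, not an API from the surveyed literature.

```python
import numpy as np

def surrogate_prescreen(archive_x, archive_f, candidates, expensive_f,
                        k=3, n_true_evals=2):
    """Pre-screen candidates with a cheap k-nearest-neighbour surrogate and
    spend true (expensive) evaluations only on the most promising ones.

    archive_x, archive_f: arrays of previously evaluated points and their
                          true objective values
    candidates:           offspring proposed by the evolutionary operators
    expensive_f:          the expensive objective function (minimization)
    """
    predictions = []
    for c in candidates:
        dist = np.linalg.norm(archive_x - c, axis=1)
        nearest = np.argsort(dist)[:k]
        predictions.append(archive_f[nearest].mean())   # surrogate estimate
    promising = np.argsort(predictions)[:n_true_evals]  # best predicted first
    chosen = [candidates[i] for i in promising]
    true_values = [expensive_f(c) for c in chosen]      # expensive evaluations
    return chosen, true_values
```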

    Complementary Layered Learning

    Layered learning is a machine learning paradigm used to develop autonomous robotic agents by decomposing a complex task into simpler subtasks and learning each sequentially. Although the paradigm continues to have success in multiple domains, performance can be unexpectedly unsatisfactory. Using Boolean-logic problems and autonomous agent navigation, we show that poor performance is due to the learner forgetting how to perform earlier learned subtasks too quickly (favoring plasticity) or having difficulty learning new things (favoring stability). We demonstrate that this imbalance can hinder learning so that task performance is no better than that of a suboptimal learning technique, monolithic learning, which does not use decomposition. Through the resulting analyses, we have identified factors that can lead to imbalance and their negative effects, providing a deeper understanding of stability and plasticity in decomposition-based approaches such as layered learning. To combat the negative effects of the imbalance, a complementary learning system is applied to layered learning. The new technique augments the original learning approach with dual storage region policies to preserve useful information from being removed from an agent's policy prematurely. Through multi-agent experiments, a 28% task performance increase is obtained with the proposed augmentations over the original technique.
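
    The dual-store idea can be pictured with the toy policy below, which keeps a quickly updated plastic store for new subtask learning and a slowly consolidated stable store that protects earlier learned behaviour from being overwritten. This is an assumed illustration of a complementary-memory scheme, not the authors' implementation.

```python
import random

class DualStorePolicy:
    """Toy dual-memory policy: a plastic store for new subtask learning and
    a stable store that protects behaviour learned on earlier layers."""

    def __init__(self, consolidation_rate=0.1):
        self.plastic = {}              # quickly overwritten state -> action map
        self.stable = {}               # slowly consolidated state -> action map
        self.rate = consolidation_rate

    def act(self, state, actions):
        # Prefer recent learning, fall back to consolidated knowledge,
        # otherwise explore.
        if state in self.plastic:
            return self.plastic[state]
        if state in self.stable:
            return self.stable[state]
        return random.choice(actions)

    def update(self, state, action):
        # New subtask learning only touches the plastic store.
        self.plastic[state] = action

    def consolidate(self):
        # Occasionally copy plastic entries into the stable store so that
        # useful earlier behaviour is not discarded prematurely.
        for state, action in list(self.plastic.items()):
            if random.random() < self.rate:
                self.stable[state] = action
```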

    Automated Machine Learning for Multi-Label Classification


    Adaptive operator search for the capacitated arc routing problem

    Evolutionary Computation approaches for Combinatorial Optimization have been successfully proposed for a plethora of NP-hard problems. This research area has achieved notable results and made remarkable progress, ultimately establishing itself as one of the most studied in AI. Yet, predicting the approximation ability of Evolutionary Algorithms (EAs) on novel problem instances remains a difficult task. As a consequence, their application in real-world optimization contexts is limited, as EAs are often considered not reliable or mature enough to be adopted in an industrial scenario. This thesis proposes new approaches to endow such meta-heuristics with a mechanism that allows them to extract information during the search and to use this information adaptively in order to modify their behaviour and ultimately improve their performance. We consider the case study of the Capacitated Arc Routing Problem (CARP) to demonstrate the effectiveness of adaptive search techniques on a complex problem deeply connected with real-world scenarios. In particular, the main contributions of this thesis are: 1. an investigation of a Parameter Tuning mechanism to adaptively choose the crossover operator used during the search; 2. the study of a novel Adaptive Operator Selection technique based on Fitness Landscape Analysis and Online Learning; 3. a novel approach based on Knowledge Incorporation, focusing on the reuse of information learned from the execution of a meta-heuristic on past instances to improve performance on newly encountered instances.
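
    To illustrate the flavour of Adaptive Operator Selection discussed above, the sketch below chooses among crossover operators by probability matching on decayed credit. The `apply_op` callback and the parameter values are assumptions for illustration; this is a generic AOS scheme, not the thesis' actual technique.

```python
import random

def probability_matching_aos(operators, apply_op, n_steps,
                             p_min=0.05, decay=0.8):
    """Adaptive Operator Selection by probability matching (sketch).

    operators: list of crossover operators to choose from
    apply_op:  hypothetical callback; apply_op(op) applies the operator to
               the current CARP population and returns the improvement in
               the best solution cost that it produced
    """
    n = len(operators)
    quality = [1.0] * n                      # decayed credit per operator
    for _ in range(n_steps):
        total = sum(quality)
        # Each operator keeps a minimum selection probability p_min so it
        # can be rediscovered if the search landscape changes.
        probs = [p_min + (1.0 - n * p_min) * q / total for q in quality]
        i = random.choices(range(n), weights=probs)[0]
        reward = max(0.0, apply_op(operators[i]))
        quality[i] = decay * quality[i] + (1.0 - decay) * reward
```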