
    A review of population-based metaheuristics for large-scale black-box global optimization: Part B

    This paper is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely decomposition methods and hybridization methods such as memetic algorithms and local search. In this part, we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas related to large-scale global optimization, such as multi-objective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The paper concludes with a discussion of the pitfalls and challenges of current research and identifies several potential areas of future research.

    A review of population-based metaheuristics for large-scale black-box global optimization: Part A

    Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, and combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird’s-eye view of the field and learn about its major trends and state-of-the-art algorithms. Part A of the series covers two major algorithmic approaches to large-scale global optimization: problem decomposition and memetic algorithms. Part B covers a range of other algorithmic approaches, describes a wide range of problem areas, and finally touches upon the pitfalls and challenges of current research and identifies several potential areas for future research.
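
    To make the decomposition idea covered in Part A concrete, the sketch below shows a minimal cooperative co-evolution loop in Python: the decision variables are split into groups, and each group is optimized in turn while the remaining variables are frozen in a shared context vector. This is an illustrative toy under stated assumptions (simple random sampling per component, hypothetical function names), not the survey's own algorithm.

    import numpy as np

    def cooperative_coevolution(f, groups, lower, upper, cycles=50, pop_size=20, rng=None):
        # Minimal cooperative co-evolution sketch: each variable group is optimized
        # in turn by simple random sampling while the rest of the solution is held
        # fixed in a shared "context vector". Illustrative only; real CC frameworks
        # use a full metaheuristic (e.g., DE or PSO) per component.
        rng = np.random.default_rng(0) if rng is None else rng
        context = rng.uniform(lower, upper)          # best-known full solution
        best = f(context)
        for _ in range(cycles):
            for group in groups:                     # round-robin over components
                # Sample candidate values only for this group's variables.
                cands = rng.uniform(lower[group], upper[group], size=(pop_size, len(group)))
                for cand in cands:
                    trial = context.copy()
                    trial[group] = cand
                    val = f(trial)
                    if val < best:                   # greedy update of the context vector
                        best, context = val, trial
        return context, best

    # Usage: a separable sphere function decomposed into two components.
    sphere = lambda x: float(np.sum(x ** 2))
    lo, hi = -5.0 * np.ones(4), 5.0 * np.ones(4)
    x_best, f_best = cooperative_coevolution(sphere, groups=[[0, 1], [2, 3]], lower=lo, upper=hi)
    print(f_best)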

    DG2: A Faster and More Accurate Differential Grouping for Large-Scale Black-Box Optimization

    Identification of variable interaction is essential for an efficient implementation of a divide-and-conquer algorithm for large-scale black-box optimization. In this paper, we propose an improved variant of the differential grouping (DG) algorithm with better efficiency and grouping accuracy. The proposed algorithm, DG2, finds a reliable threshold value by estimating the magnitude of roundoff errors. With respect to efficiency, DG2 reuses the sample points generated for detecting interactions and saves up to half of the computational resources on fully separable functions. We mathematically show that the new sampling technique achieves the lower bound on the number of function evaluations. Unlike its predecessor, DG2 checks all possible pairs of variables for interactions and has the capacity to identify overlapping components of an objective function. On the accuracy side, DG2 outperforms state-of-the-art decomposition methods on the latest large-scale continuous optimization benchmark suites. DG2 also performs reliably in the presence of imbalance among the contributions of components to an objective function. Another major advantage of DG2 is the automatic calculation of its threshold parameter (ε), which makes it parameter-free. Finally, the experimental results show that when DG2 is used within a cooperative co-evolutionary framework, it generates results competitive with several state-of-the-art algorithms.
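
    The core idea behind differential grouping can be illustrated in a few lines of Python: two variables are declared interacting when the effect of perturbing one of them changes as the other moves. The sketch below is a simplified illustration of that check; DG2's actual contributions (the roundoff-based threshold, sample reuse across all pairs, overlapping-component detection) are not reproduced, and the function names and the fixed eps are assumptions.

    import numpy as np

    def interacts(f, lower, upper, i, j, eps=1e-9):
        # Differential-grouping-style interaction check between variables i and j.
        # Compare the effect of perturbing x_i at two different settings of x_j:
        # if the two deltas differ by more than eps, the function is non-separable
        # with respect to (i, j). DG2 additionally derives eps from an estimate of
        # floating-point roundoff error and reuses these sample points across all
        # pairs, which is not reproduced in this sketch.
        a = lower.astype(float).copy()                     # base point at the lower bound
        b = a.copy(); b[i] = upper[i]                      # perturb x_i
        delta1 = f(b) - f(a)

        c = a.copy(); c[j] = (lower[j] + upper[j]) / 2.0   # move x_j to the midpoint
        d = c.copy(); d[i] = upper[i]                      # perturb x_i again
        delta2 = f(d) - f(c)
        return abs(delta1 - delta2) > eps

    # Usage: x0 and x1 interact through the product term, while x2 is separable from x0.
    f = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + 0.5 * x[0] * x[1]
    lo, hi = np.zeros(3), 10.0 * np.ones(3)
    print(interacts(f, lo, hi, 0, 1))   # True  (non-separable pair)
    print(interacts(f, lo, hi, 0, 2))   # False (separable pair)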

    Particle swarm optimization for dynamically changing environments with particular focus on scalability and switching cost

    Change is an inescapable aspect of natural and artificial systems, and adaptation is central to their resilience. Optimization problems are no exception to this maxim. Indeed, the viability of businesses depends heavily on their effectiveness in responding to change in the myriad of optimization problems they entail. Changes in optimization problems usually result from changes in the objective function, the number of variables, and/or the constraints. Such problems are known as dynamic optimization problems (DOPs) in the literature. Despite the large body of literature on DOPs and algorithms in this domain, there are still noticeable gaps between real-world DOPs and academic research. The first objective of this thesis is to investigate DOPs and identify classes of DOPs, or DOP characteristics, that are common in practical situations but have not yet been studied. Two important gaps are identified, namely switching cost in DOPs and large-scale DOPs. Both are common in many real-world dynamic problems, but little research has investigated them in the past. In an attempt to bridge these gaps, this thesis makes the following contributions. First, it considers the cost of changing solutions after environmental changes. Changing solutions in real-world problems is costly, and larger changes incur higher costs and require more resources such as time, human resources, and energy. Thus, the lack of switching-cost consideration in most previous algorithms makes them unsuitable for many real-world DOPs. In this thesis, different scenarios of DOPs with switching cost are investigated, their challenges are identified, and the performance of state-of-the-art methods in solving them is evaluated. Contributions include a novel robust optimization over time (ROOT) framework, a novel adaptive method for maximizing efficiency by changing or keeping solutions after environmental changes, and a novel multi-objective, time-linkage-based method for minimizing switching cost. Second, the thesis investigates large-scale DOPs. Up to now, little attention has been given to the scalability of DOP algorithms; indeed, the dimensionality of typical DOPs studied in the literature hardly exceeds twenty. The challenges of large-scale DOPs are studied, and the efficiency of current methods in solving them is investigated. Moreover, the thesis proposes a novel cooperative co-evolution algorithm based on a multi-population approach, which benefits from a new resource allocation method for DOPs with high-dimensional search spaces. All the proposed methods use particle swarm optimization as the core optimizer embedded in a multi-population framework. Their performance is compared with state-of-the-art methods on a wide range of problem instances generated by state-of-the-art and newly proposed DOP benchmarks, and the results indicate the superiority of the proposed methods.
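
    For readers unfamiliar with the core optimizer the thesis builds on, the sketch below shows a textbook global-best particle swarm optimization update (velocity and position rules) in Python. It is a generic illustration only, not the thesis's multi-population, switching-cost-aware variants; all names and parameter values are assumptions.

    import numpy as np

    def pso(f, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        # Textbook global-best PSO, shown only to illustrate the core optimizer;
        # the thesis embeds PSO inside multi-population frameworks with additional
        # machinery (change detection, resource allocation, switching-cost handling)
        # that is not reproduced here.
        rng = np.random.default_rng(seed)
        dim = len(lower)
        x = rng.uniform(lower, upper, size=(n_particles, dim))    # positions
        v = np.zeros_like(x)                                      # velocities
        pbest, pbest_val = x.copy(), np.array([f(p) for p in x])  # personal bests
        g = pbest[np.argmin(pbest_val)].copy()                    # global best

        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x) # velocity update
            x = np.clip(x + v, lower, upper)                      # move and keep in bounds
            vals = np.array([f(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Usage: minimize a 10-dimensional sphere function.
    sphere = lambda p: float(np.sum(p ** 2))
    best_x, best_f = pso(sphere, lower=-5.0 * np.ones(10), upper=5.0 * np.ones(10))
    print(best_f)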