    Benchmarking for Metaheuristic Black-Box Optimization: Perspectives and Open Challenges

    Research on new optimization algorithms is often funded on the grounds that such algorithms might improve our ability to deal with real-world, industrially relevant optimization challenges. Alongside a huge variety of evolutionary and metaheuristic optimization algorithms, a large number of test problems and benchmark suites have also been developed and used for comparative assessments of algorithms in the context of global, continuous, and black-box optimization. For many of the commonly used synthetic benchmark problems and artificial fitness landscapes, however, no methods are available to relate the resulting algorithm performance assessments to technologically relevant real-world optimization problems, or vice versa. From a theoretical perspective, too, many of the commonly used benchmark problems and approaches have little to no generalization value. Based on a mini-review of publications with critical comments, advice, and new approaches, this communication aims to give a constructive perspective on several open challenges and prospective research directions related to systematic and generalizable benchmarking for black-box optimization.

    The Anglerfish algorithm: A derivation of randomized incremental construction technique for solving the traveling salesman problem

    Combinatorial optimization focuses on arriving at a globally optimal solution under constraints, incomplete information, and limited computational resources. The space of possible solutions is vast and often overwhelms the available computational power; smart algorithms have been developed to address this issue, each offering a more efficient way of traversing the search landscape. Critics have called for a realignment of the bio-inspired metaheuristics field. We propose an algorithm that reduces the search operation to randomized population initialization following the randomized incremental construction technique, which essentially compartmentalizes optimization into smaller sub-units. This removes the need for the complex operators normally imposed on the current metaheuristics pool, making the algorithm more generic and adaptable to arbitrary optimization problems. Benchmarking is conducted on the traveling salesman problem, and the results are comparable with those of advanced metaheuristic algorithms, suggesting that arbitrary exploration is practicable as an operator for solving optimization problems.
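
    The construction idea is easy to illustrate. The Python sketch below shows randomized incremental construction for the TSP under our own assumptions: cities are inserted in random order at the cheapest position of a growing partial tour, many such tours are sampled independently, and the best one is kept. This is a generic reading of the technique, not the published Anglerfish procedure; all names and parameters are hypothetical.

```python
import random
import math

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def randomized_incremental_tour(dist):
    """Build one tour by inserting cities in random order, each at the
    cheapest position of the current partial tour."""
    cities = list(range(len(dist)))
    random.shuffle(cities)
    tour = cities[:3]                      # any three cities form the initial cycle
    for c in cities[3:]:
        # find the edge (i, j) whose replacement by (i, c, j) costs least
        best_pos, best_delta = 0, math.inf
        for i in range(len(tour)):
            j = (i + 1) % len(tour)
            delta = dist[tour[i]][c] + dist[c][tour[j]] - dist[tour[i]][tour[j]]
            if delta < best_delta:
                best_pos, best_delta = j, delta
        tour.insert(best_pos, c)
    return tour

def construction_only_search(dist, population_size=100):
    """Pure construction-based search: sample many independent randomized
    constructions and keep the best; no mutation or crossover operators."""
    tours = (randomized_incremental_tour(dist) for _ in range(population_size))
    return min(tours, key=lambda t: tour_length(t, dist))
```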

    Frequency Fitness Assignment: Optimization without Bias for Good Solutions can be Efficient

    A fitness assignment process transforms the features (such as the objective value) of a candidate solution into a scalar fitness, which is then the basis for selection. Under Frequency Fitness Assignment (FFA), the fitness corresponding to an objective value is its encounter frequency in selection steps, and this frequency is subject to minimization. FFA creates algorithms that are not biased towards better solutions and that are invariant under all injective transformations of the objective function value. We investigate the impact of FFA on the performance of two theory-inspired, state-of-the-art EAs, the Greedy (2+1) GA and the Self-Adjusting (1+(λ,λ)) GA. FFA improves their performance significantly on some problems that are hard for them. In our experiments, one FFA-based algorithm exhibited mean runtimes that appear to be polynomial on the theory-based benchmark problems in our study, including traps, jumps, and plateaus. We propose two hybrid approaches that use both direct and FFA-based optimization and find that they perform well. All FFA-based algorithms also perform better on satisfiability problems than any of the pure algorithm variants.
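
    As a concrete reading of the mechanism described above, here is a minimal Python sketch of FFA inside a (1+1) EA on OneMax: selection keeps whichever of parent and offspring currently has the rarer objective value, not the better one. The FFA selection rule follows the abstract's description; the algorithm shell, budget, and mutation rate are illustrative assumptions, and the EAs actually studied in the paper are more involved.

```python
import random
from collections import defaultdict

def onemax(x):
    """Number of one-bits; to be maximized."""
    return sum(x)

def one_plus_one_ffa(n, f=onemax, budget=100_000):
    """(1+1) EA with Frequency Fitness Assignment: selection prefers the
    objective value encountered less often so far."""
    H = defaultdict(int)          # encounter frequency of each objective value
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    best, best_f = list(x), fx
    for _ in range(budget):
        # standard bit-flip mutation with rate 1/n
        y = [b ^ (random.random() < 1 / n) for b in x]
        fy = f(y)
        H[fx] += 1                # both encountered values get their
        H[fy] += 1                # frequencies updated before selection
        if H[fy] <= H[fx]:        # minimize frequency, ignore f itself
            x, fx = y, fy
        if fy > best_f:           # best-so-far is tracked separately
            best, best_f = list(y), fy
    return best, best_f
```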

    Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the Mutation Rate of an Evolutionary Algorithm

    It is well known that evolutionary algorithms (EAs) achieve peak performance only when their parameters are suitably tuned to the given problem. Moreover, the best parameter values can change during the optimization process. Parameter control mechanisms are techniques developed to identify and track these values. Recently, a series of rigorous theoretical works confirmed the superiority of several parameter control techniques over EAs with the best possible static parameters. Among these results are examples for controlling the mutation rate of the (1+λ) EA when optimizing the OneMax problem. However, it was shown in [Rodionova et al., GECCO'19] that the quality of these techniques strongly depends on the offspring population size λ. In this work we introduce a new hybrid parameter control technique, which combines the well-known one-fifth success rule with Q-learning. We demonstrate that our HQL mechanism achieves performance equal or superior to all techniques tested in [Rodionova et al., GECCO'19], and, in contrast to previous parameter control methods, it does so simultaneously for all offspring population sizes λ. We also show that the promising performance of HQL is not restricted to OneMax but extends to several other benchmark problems. To appear in the Proceedings of Parallel Problem Solving from Nature (PPSN 2020).
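
    For context, the classical ingredient of HQL is simple to state: after each generation of a (1+λ) EA, increase the mutation strength on success and decrease it otherwise, with the decrease tuned so the strength is stationary at a 1/5 success rate. The Python sketch below implements only this one-fifth success rule; the Q-learning layer that HQL adds on top (learning which update action to take from observed rewards) is omitted, and the update factor, bounds, and budget are illustrative assumptions.

```python
import random

def one_plus_lambda_one_fifth(f, n, lam=8, budget=50_000):
    """(1+lambda) EA whose mutation strength r (expected number of flipped
    bits) follows the 1/5-th success rule: multiply by A on success, divide
    by A**(1/4) on failure, so r is stationary at a 1/5 success rate."""
    A = 2.0 ** 0.25               # update strength, an illustrative choice
    r = 1.0                       # expected number of flipped bits
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = f(x), 1
    while evals + lam <= budget:
        # create lambda offspring with bit-flip rate r/n
        offspring = [[b ^ (random.random() < r / n) for b in x]
                     for _ in range(lam)]
        evals += lam
        y = max(offspring, key=f)
        fy = f(y)
        if fy > fx:               # success: search more aggressively
            x, fx = y, fy
            r = min(r * A, n / 4)
        else:                     # failure: be more conservative
            r = max(r / A ** 0.25, 0.5)
    return x, fx
```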

    A survey of multi-population optimization algorithms for tracking the moving optimum in dynamic environments

    The solution spaces of many real-world optimization problems change over time. Such problems are called dynamic optimization problems (DOPs), and they pose unique challenges that require adaptive strategies from optimization algorithms to maintain performance and responsiveness to environmental changes. Tracking the moving optimum (TMO) is an important class of DOPs where the goal is to identify and deploy the best-found solution in each environment. Multi-population dynamic optimization algorithms are particularly effective at solving TMOs due to their flexible structures and potential for adaptability. These algorithms are usually complex methods built by assembling multiple components, each responsible for addressing a specific challenge or improving tracking performance in response to changes. This survey provides an in-depth review of multi-population dynamic optimization algorithms, focusing on describing these algorithms as sets of cooperating components, the synergy between these components, and their collective effectiveness and efficiency in addressing the challenges of TMOs. Additionally, the survey reviews benchmarking practices within this domain and outlines promising directions for future research.
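
    To make the component view concrete, the Python sketch below assembles three components such surveys typically catalogue: change detection (here via a re-evaluated sentinel solution), per-subpopulation local search, and exclusion to keep subpopulations on distinct regions. Every design choice and constant is an illustrative assumption of ours, not a method taken from the survey.

```python
import random

def multi_population_tracker(f, dim, n_pops=5, pop_size=10,
                             exclusion_radius=0.1, steps=1000):
    """Toy multi-population tracker on [0, 1]^dim, built from cooperating
    components: change detection, local search, and exclusion."""
    def new_pop():
        return [[random.random() for _ in range(dim)] for _ in range(pop_size)]

    pops = [new_pop() for _ in range(n_pops)]
    sentinel = [0.5] * dim
    sentinel_f = f(sentinel)
    for _ in range(steps):
        # change-detection component: re-evaluate a sentinel solution
        if f(sentinel) != sentinel_f:
            sentinel_f = f(sentinel)
            pops = [new_pop() for _ in range(n_pops)]   # diversity response
        # local-search component: Gaussian sampling around each best
        for pop in pops:
            best = max(pop, key=f)
            pop[:] = [[min(1.0, max(0.0, b + random.gauss(0, 0.02)))
                       for b in best] for _ in range(pop_size)]
        # exclusion component: reinitialize overlapping subpopulations
        bests = [max(pop, key=f) for pop in pops]
        for i in range(n_pops):
            for j in range(i + 1, n_pops):
                if max(abs(a - b) for a, b in zip(bests[i], bests[j])) < exclusion_radius:
                    pops[j] = new_pop()
    return max((max(pop, key=f) for pop in pops), key=f)
```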

    Benchmarking Continuous Dynamic Optimization: Survey and Generalized Test Suite

    Dynamic changes are an important and inescapable aspect of many real-world optimization problems. Designing algorithms to find and track desirable solutions while facing the challenges of dynamic optimization problems is an active research topic in the field of swarm and evolutionary computation. To evaluate and compare the performance of algorithms, it is imperative to use a suitable benchmark that generates problem instances with different controllable characteristics. In this paper, we give a comprehensive review of existing benchmarks and investigate their shortcomings in capturing different problem features. We then propose a highly configurable benchmark suite, the generalized moving peaks benchmark, capable of generating problem instances whose components have a variety of properties, such as different levels of ill-conditioning, variable interactions, shape, and complexity. Moreover, components generated by the proposed benchmark can be highly dynamic with respect to gradients, heights, optimum locations, condition numbers, shapes, complexities, and variable interactions. Finally, several well-known optimizers and dynamic optimization algorithms are chosen to solve problems generated by the proposed benchmark. The experimental results show the poor performance of existing methods when facing the new challenges posed by the added properties.
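
    The Python sketch below gives a minimal sense of what such a generator does: the fitness is the envelope of several cone-shaped components whose centers, heights, and widths are perturbed at each environmental change. The generalized suite described above additionally controls ill-conditioning, variable interactions, and component shape; none of that is reproduced here, and all ranges are illustrative assumptions.

```python
import random

class MovingPeaks:
    """Minimal moving-peaks landscape: the max-envelope of cone components
    that drift when change() is called."""
    def __init__(self, dim=2, n_peaks=5):
        self.dim = dim
        self.centers = [[random.uniform(0, 100) for _ in range(dim)]
                        for _ in range(n_peaks)]
        self.heights = [random.uniform(30, 70) for _ in range(n_peaks)]
        self.widths = [random.uniform(1, 12) for _ in range(n_peaks)]

    def __call__(self, x):
        # fitness = envelope (max) over cone-shaped components
        return max(h - w * sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5
                   for h, w, c in zip(self.heights, self.widths, self.centers))

    def change(self, shift=1.0, height_sev=7.0, width_sev=1.0):
        """Environment change: shift centers, perturb heights and widths."""
        for i, c in enumerate(self.centers):
            self.centers[i] = [ci + random.uniform(-shift, shift) for ci in c]
            self.heights[i] += random.gauss(0, height_sev)
            self.widths[i] = max(0.5, self.widths[i] + random.gauss(0, width_sev))
```

    A dynamic optimizer, such as the multi-population sketch above, would then be run against an instance, with change() invoked after a fixed number of evaluations to simulate the environment moving.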

    Improving Time and Memory Efficiency of Genetic Algorithms by Storing Populations as Minimum Spanning Trees of Patches

    In many applications of evolutionary algorithms, the computational cost of applying operators and storing populations is comparable to the cost of fitness evaluation. Furthermore, by knowing what exactly an operator has changed in an individual, it is possible to recompute the fitness value much more efficiently than from scratch. The associated time and memory improvements have been available for simple evolutionary algorithms, a few specific genetic algorithms, and in the context of gray-box optimization, but not for all algorithms; the main reason is that they are difficult to achieve in algorithms using large, arbitrarily structured populations. This paper makes a first step towards improving this situation. We show that storing the population as a minimum spanning tree, where vertices correspond to individuals but only contain meta-information about them, and edges store structural differences, or patches, between individuals, is a viable alternative to the straightforward implementation. Our experiments suggest that significant, even asymptotic, improvements (including execution of crossover operators) can be achieved in terms of both memory usage and computational costs. Accepted to the GECCO'23 conference, EvoSoft workshop.
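
    The data structure is straightforward to prototype for bit strings. In the Python sketch below, one root individual is materialized, every other individual is reachable through patches (lists of flipped positions) stored on the edges of a spanning tree chosen to minimize total patch size, and an individual is rebuilt by applying the patches on its root path. This is our own minimal reading of the storage idea; the paper's handling of operators, fitness recomputation, and per-vertex meta-information is not reproduced.

```python
def hamming_patch(a, b):
    """Positions where bit strings a and b differ."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

def build_patch_tree(population):
    """Prim-style minimum spanning tree over Hamming distances: each
    individual is linked to an already-stored one via the smallest patch,
    so only patches (plus one root string) need to be kept."""
    n = len(population)
    in_tree = {0}                  # individual 0 is the materialized root
    edges = {}                     # child index -> (parent index, patch)
    while len(in_tree) < n:
        u, v, patch = min(
            ((u, v, hamming_patch(population[u], population[v]))
             for u in in_tree for v in range(n) if v not in in_tree),
            key=lambda e: len(e[2]))
        edges[v] = (u, patch)
        in_tree.add(v)
    return edges

def materialize(idx, root, edges):
    """Rebuild individual idx by flipping, starting from the root string,
    all positions recorded on the tree path from idx up to the root."""
    x = list(root)
    while idx in edges:
        idx, patch = edges[idx]
        for i in patch:
            x[i] ^= 1              # patches are bit flips, order-independent
    return x
```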

    Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multi-Objective Continuous Optimization Problems

    Many recent works have demonstrated the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize, in particular, single-objective continuous optimization problems. These numerical features provide the input for all kinds of machine learning tasks on continuous optimization problems, ranging, among others, from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems would be impossible. Yet, despite their undisputed usefulness, ELA features suffer from several drawbacks, in particular (1) strong correlations between multiple features and (2) very limited applicability to multi-objective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA, using, e.g., point-cloud transformers to characterize an optimization problem's fitness landscape. However, these approaches require a large amount of labeled training data. In this work, we propose a hybrid approach, Deep-ELA, which combines the benefits of deep learning and ELA features. Specifically, we pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multi-objective optimization problems. Our proposed framework can either be used out of the box for analyzing single- and multi-objective continuous optimization problems, or fine-tuned to various tasks focusing on algorithm behavior and problem understanding.
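
    To illustrate the kind of input such pipelines consume, the Python sketch below computes a few hand-crafted, ELA-style summary statistics from a random sample of (x, f(x)) pairs: distribution features of the objective values and the fit quality of a linear meta-model. These are loosely inspired by classical ELA feature groups, but the specific feature choices are our own illustrative assumptions, not the Deep-ELA model or an established ELA feature set.

```python
import numpy as np

def sample_landscape_features(f, dim, n=256, seed=0):
    """Summarize a black-box function f: R^dim -> R by a few cheap
    landscape statistics computed from a uniform random sample."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))        # design points in [-5, 5]^dim
    y = np.apply_along_axis(f, 1, X)
    # distribution features of the objective values
    feats = {
        "y_skewness": float(((y - y.mean()) ** 3).mean() / y.std() ** 3),
        "y_kurtosis": float(((y - y.mean()) ** 4).mean() / y.std() ** 4),
    }
    # meta-model feature: R^2 of a linear least-squares fit
    A = np.column_stack([X, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    feats["lin_model_r2"] = float(1 - resid.var() / y.var())
    return feats

# example: a separable quadratic should yield a high linear-model residual
print(sample_landscape_features(lambda x: float(np.sum(x ** 2)), dim=5))
```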