Comparing parameter tuning methods for evolutionary algorithms
Abstract — Tuning the parameters of an evolutionary algorithm (EA) to the problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are used widely in evolutionary computing. This paper is meant to be a stepping stone towards better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper concludes with suggestions for future research, hopefully inspiring fellow researchers to further work. Index Terms — evolutionary algorithms, parameter tuning. I. BACKGROUND AND OBJECTIVES Evolutionary Algorithms (EA) form a rich class of stochastic …
A self-parametrization framework for meta-heuristics
Even though the scientific community has shown great interest in the analysis of meta-heuristics, the analysis of their parameterization has received little attention. It is the parameterization that adapts a meta-heuristic to a problem, yet it is still performed, mostly, empirically. Multiple parameterization techniques exist; however, they are time-consuming, require considerable computational effort, and do not take advantage of the meta-heuristics that they parameterize. To approach the parameterization of meta-heuristics, this paper proposes a self-parameterization framework. It automates parameterization by treating it as an optimization problem, sparing the user from spending too much time on parameterization. The model automates parameterization through two meta-heuristics: one over the solution space and one over the parameter space. To analyze the performance of the framework, a self-parameterization prototype was implemented and evaluated on a scheduling problem (SP) and on the traveling salesman problem (TSP). In the SP, the prototype found better solutions than those of the manually parameterized meta-heuristics, although the differences were not statistically significant. In the TSP, the self-parameterization prototype was more effective than the manually parameterized meta-heuristics, this time with statistically significant differences. This work was supported by national funds through the FCT - Fundação para a Ciência e Tecnologia through the R&D Units Project Scopes: UIDB/00319/2020 and EXPL/EME-SIS/1224/2021.
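The two-level scheme this abstract describes (one meta-heuristic searching the parameter space, another searching the solution space) can be sketched as a nested loop. The concrete algorithms below (random search outside, a hill climber inside), the single `step_size` parameter, its range, and the toy objective are illustrative assumptions, not the paper's implementation:

```python
import random

random.seed(42)  # reproducible sketch

def inner_metaheuristic(objective, params, iters=200):
    """Hill climber over the solution space, configured by `params`."""
    step = params["step_size"]
    x = random.uniform(-10, 10)
    best = objective(x)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        f = objective(cand)
        if f < best:            # keep only improving moves
            x, best = cand, f
    return best

def outer_metaheuristic(objective, trials=30):
    """Random search over the inner meta-heuristic's parameter space."""
    best_params, best_score = None, float("inf")
    for _ in range(trials):
        params = {"step_size": random.uniform(0.01, 5.0)}  # hypothetical parameter
        score = inner_metaheuristic(objective, params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Tune the hill climber on a toy objective with its minimum at x = 3.
params, score = outer_metaheuristic(lambda x: (x - 3) ** 2)
```

The outer loop spends one full inner run per parameter trial, which mirrors why such parameterization is computationally expensive and why the framework automates it.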
Comparison of Different Approaches for Adapting Mutation Probabilities in Genetic Algorithms
Traditionally in Genetic Algorithms, the mutation probability parameter keeps a constant value during the search. However, an important difficulty is determining a priori which probability value is best suited to a given problem. In this paper we compare three adaptive algorithms that include strategies to modify the mutation probability without external control. One adaptive strategy uses the genetic diversity present in the population to update the mutation probability; another is based on ideas from reinforcement learning; and the last varies the mutation probability depending on the fitness values of the solutions. All these strategies eliminate a very expensive computational phase related to the pre-tuning of the algorithmic parameters. The empirical comparisons show that a genetic algorithm using genetic diversity as the strategy for adapting the mutation probability outperforms the other two strategies. XVII Workshop Agentes y Sistemas Inteligentes (WASI). Red de Universidades con Carreras en Informática (RedUNCI).
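As a concrete illustration of the diversity-driven idea, here is one plausible update rule. The diversity measure (mean per-gene standard deviation), the threshold, and the probability bounds are assumptions for the sketch; the paper's exact rule may differ:

```python
import random
from statistics import pstdev

def adapt_mutation_prob(population, pm_min=0.01, pm_max=0.3, threshold=0.5):
    """Return a mutation probability based on genetic diversity:
    low diversity -> high mutation (explore more),
    high diversity -> low mutation (exploit)."""
    n_genes = len(population[0])
    # diversity: mean per-gene standard deviation across the population
    diversity = sum(
        pstdev(ind[g] for ind in population) for g in range(n_genes)
    ) / n_genes
    return pm_max if diversity < threshold else pm_min

random.seed(1)
converged = [[1.0, 2.0, 3.0] for _ in range(10)]                 # zero diversity
diverse = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
```

Calling `adapt_mutation_prob(converged)` yields the high rate, while the diverse population keeps mutation low, so no mutation rate needs to be pre-tuned.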
Quality Measures of Parameter Tuning for Aggregated Multi-Objective Temporal Planning
Parameter tuning is recognized today as a crucial ingredient when tackling an
optimization problem. Several meta-optimization methods have been proposed to
find the best parameter set for a given optimization algorithm and (set of)
problem instances. When the objective of the optimization is some scalar
quality of the solution given by the target algorithm, this quality is also
used as the basis for the quality of parameter sets. But in the case of
multi-objective optimization by aggregation, the set of solutions is given by
several single-objective runs with different weights on the objectives, and it
turns out that the hypervolume of the final population of each single-objective
run might be a better indicator of the global performance of the aggregation
method than the best fitness in its population. This paper discusses this issue
on a case study in multi-objective temporal planning using the evolutionary
planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show how ParamILS distinguishes between the two approaches, and demonstrate that, in this context, using the hypervolume indicator as the ParamILS target is indeed the best choice. Other issues pertaining to parameter tuning in the proposed context are also discussed. Comment: arXiv admin note: substantial text overlap with arXiv:1305.116
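For reference, the hypervolume indicator this abstract advocates as a tuning target can be computed exactly in the two-objective minimization case by a simple sweep over the front. This is a generic sketch of the indicator itself, not DaE-YAHSP or ParamILS code:

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a 2-objective minimization front,
    measured against reference point `ref` (worse in both objectives).
    Larger is better."""
    front = sorted(set(map(tuple, points)))   # ascending in objective 1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        if f2 < prev_f2:                      # point is non-dominated
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Three mutually non-dominated points against reference (4, 4):
hv = hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4))   # → 6.0
```

Dominated points contribute nothing to the sum, which is why the indicator rewards the whole front rather than a single best-fitness individual.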
Sequential and distributed metaheuristics: parameter adaptation and execution environments
The contribution of this line of research is to propose adaptive mechanisms that can guide the change of certain parameters of evolutionary algorithms, both sequential and parallel, using information about the state of the search process. Another important aspect concerns the execution environment of these algorithms; to that end, the impact of using heterogeneous platforms is analyzed, both on the quality of the final results and on execution times. Track: Agentes y Sistemas Inteligentes. Red de Universidades con Carreras en Informática (RedUNCI).
A new strategy for adapting the mutation probability in genetic algorithms
Traditionally in Genetic Algorithms, the mutation probability parameter keeps a constant value during the search. However, an important difficulty is determining a priori which probability value is best suited to a given problem. Moreover, there is a growing demand for up-to-date optimization software, applicable by a non-specialist within an industrial development environment. These issues encourage us to propose an adaptive evolutionary algorithm that includes a mechanism to modify the mutation probability without external control. This process of dynamic adaptation happens while the algorithm is searching for the problem solution, eliminating a very expensive computational phase related to the pre-tuning of the algorithmic parameters. We compare the performance of our adaptive proposal numerically against traditional genetic algorithms with fixed parameter values. The empirical comparisons, over a range of NK-landscape instances, show that a genetic algorithm incorporating a strategy for adapting the mutation probability outperforms the same algorithm using fixed mutation rates. Track: Workshop Agentes y Sistemas Inteligentes (WASI). Red de Universidades con Carreras en Informática (RedUNCI).
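The dynamic adaptation described above can be sketched as a generation loop in which the mutation probability is re-derived from the current population rather than fixed in advance. Everything below (the OneMax fitness, the gene-fixation diversity measure, the linear update rule, and all constants) is an illustrative assumption, not the paper's algorithm:

```python
import random

def adaptive_ga_onemax(n_bits=30, pop_size=20, generations=60):
    """Minimal GA sketch: mutation probability is recomputed from
    population diversity each generation instead of being pre-tuned."""
    random.seed(7)  # reproducible run
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)          # OneMax: count the 1-bits
    for _ in range(generations):
        # diversity = fraction of genes not yet fixed across the population
        unfixed = sum(1 for g in range(n_bits)
                      if 0 < sum(ind[g] for ind in pop) < pop_size)
        pm = 0.02 + 0.2 * (1 - unfixed / n_bits)  # less diversity -> more mutation
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)          # binary tournament selection
            parent = a if fitness(a) >= fitness(b) else b
            child = [1 - g if random.random() < pm else g for g in parent]
            new_pop.append(child)
        pop = new_pop
    return max(fitness(ind) for ind in pop)

best = adaptive_ga_onemax()
```

The only hand-set quantities left are the bounds of the update rule; the effective mutation rate during the run is decided by the search state itself.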
A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems
Hyper-heuristics are search methodologies that aim to provide high-quality solutions across a wide variety of problem domains, rather than developing tailor-made methodologies for each problem instance/domain. A traditional hyper-heuristic framework has two levels, namely, the high-level strategy (heuristic selection mechanism and acceptance criterion) and the low-level heuristics (a set of problem-specific heuristics). Due to the different landscape structures of different problem instances, the high-level strategy plays an important role in the design of a hyper-heuristic framework. In this paper, we propose a new high-level strategy for a hyper-heuristic framework. The proposed high-level strategy utilizes a dynamic multiarmed bandit-extreme value-based reward as an online heuristic selection mechanism to select the appropriate heuristic to apply at each iteration. In addition, we propose a gene expression programming framework to automatically generate the acceptance criterion for each problem instance, instead of using human-designed criteria. Two well-known, and very different, combinatorial optimization problems, one static (exam timetabling) and one dynamic (dynamic vehicle routing), are used to demonstrate the generality of the proposed framework. Compared with state-of-the-art hyper-heuristics and other bespoke methods, empirical results demonstrate that the proposed framework generalizes well across both domains. We obtain competitive, if not better, results when compared to the best known results obtained by other methods presented in the scientific literature. We also compare our approach against the recently released hyper-heuristic competition test suite, again demonstrating the generality of our approach when compared against other methods that have utilized the same six benchmark datasets from this test suite.
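The heuristic-selection half of such a high-level strategy can be approximated by a standard UCB multi-armed bandit. The dynamic bandit variant, the extreme-value reward, and the gene-expression-programmed acceptance criterion from the paper are not reproduced here; the three simulated reward rates stand in for low-level heuristics of different usefulness:

```python
import math
import random

def ucb_select(counts, rewards, t, scale=2.0):
    """UCB-style choice among low-level heuristics: balance the observed
    mean reward of each heuristic against an exploration bonus."""
    for h in range(len(counts)):
        if counts[h] == 0:
            return h                      # try every heuristic once first
    return max(
        range(len(counts)),
        key=lambda h: rewards[h] / counts[h]
        + scale * math.sqrt(math.log(t) / counts[h]),
    )

random.seed(3)
means = [0.1, 0.5, 0.9]                   # hypothetical success rates per heuristic
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 301):
    h = ucb_select(counts, rewards, t)
    counts[h] += 1
    rewards[h] += 1.0 if random.random() < means[h] else 0.0
```

Over the run, the selector concentrates its calls on the most rewarding heuristic while still occasionally re-testing the others, which is what lets the high-level strategy adapt to each problem instance's landscape.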