Diversification and Intensification in Hybrid Metaheuristics for Constraint Satisfaction Problems
Metaheuristics are used to find feasible solutions to hard Combinatorial Optimization Problems (COPs). Constraint Satisfaction Problems (CSPs) may be formulated as COPs, where the objective is to reduce the number of violated constraints to zero. The popular puzzle Sudoku is an NP-complete problem that has been used to study the effectiveness of metaheuristics in solving CSPs. Applying the Simulated Annealing (SA) metaheuristic to Sudoku has been shown to be a successful method of solving CSPs. However, the "easy-hard-easy" phase-transition behavior frequently attributed to a certain class of CSPs makes finding a solution extremely difficult in the hard phase because of the vast search space, the small number of solutions, and a fitness landscape marked by many plateaus and local minima. Two key mechanisms that metaheuristics employ for searching are diversification and intensification. Diversification is the method of identifying diverse promising regions of the search space and is achieved through the process of heating/reheating. Intensification is the method of finding a solution in one of these promising regions and is achieved through the process of cooling. The hard-phase search terrain makes traversal without becoming trapped very challenging. When the best available method, a Constraint Propagation/Depth-First Search algorithm, is run against 30,000 benchmark problem instances, 20,240 remain unsolved after ten runs at one minute per run; we classify these instances as very hard. This dissertation studies the delicate balance between diversification and intensification in the search process and offers a hybrid SA algorithm to solve very hard instances.
The algorithm presents (a) a heating/reheating strategy that incorporates the lowest solution cost for diversification; (b) a more complex two-stage cooling schedule for faster intensification; (c) Constraint Programming (CP) hybridization to reduce the search space and to escape local minima; (d) a three-way-swap secondary neighborhood operator as a low-cost method of diversification. These techniques are tested individually and in hybrid combinations for a total of 11 strategies, and the effectiveness of each is evaluated by percentage solved and average best run-time to solution. In the final analysis, all strategies are an improvement on current methods, but the most remarkable results come from the application of the "Quick Reset" technique between cooling stages.
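The interplay described above, cooling to intensify and reheating to diversify when the search stalls, can be sketched as a generic simulated-annealing loop. This is a minimal illustration on a toy permutation CSP, not the dissertation's algorithm; the function names, parameter values, and toy cost function are all assumptions made for the example.

```python
import math
import random

def anneal(cost, neighbor, start, t0=2.0, cooling=0.95,
           reheat_factor=1.5, stall_limit=50, max_iters=5000, seed=0):
    """Simulated annealing with a simple reheating rule: geometric cooling
    (intensification) until no improvement is seen for `stall_limit` steps,
    then the temperature is raised (diversification)."""
    rng = random.Random(seed)
    current, best = start, start
    c_cur = c_best = cost(start)
    t = t0
    stall = 0
    for _ in range(max_iters):
        cand = neighbor(current, rng)
        c_cand = cost(cand)
        delta = c_cand - c_cur
        # Accept improvements and sideways moves; accept uphill moves
        # with the Metropolis probability exp(-delta / t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, c_cur = cand, c_cand
        if c_cur < c_best:
            best, c_best = current, c_cur
            stall = 0
        else:
            stall += 1
        if c_best == 0:            # all constraints satisfied
            break
        if stall >= stall_limit:   # trapped on a plateau: reheat
            t *= reheat_factor
            stall = 0
        else:
            t *= cooling           # otherwise keep cooling
    return best, c_best

# Toy CSP: permute 0..7 so no two adjacent values share a residue mod 3.
def violations(perm):
    return sum(1 for a, b in zip(perm, perm[1:]) if a % 3 == b % 3)

def swap_two(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    out = list(perm)
    out[i], out[j] = out[j], out[i]
    return out

# Deliberately bad start: values grouped by residue class (5 violations).
solution, c = anneal(violations, swap_two, [0, 3, 6, 1, 4, 7, 2, 5])
```

The "Quick Reset" and CP hybridization of the dissertation are not modeled here; the sketch only shows how a stall counter can trigger the diversification/intensification switch.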
Artificial Darwinism: an overview
Genetic algorithms, genetic programming, evolution strategies, and what are now collectively called evolutionary algorithms are stochastic optimisation techniques inspired by Darwin's theory of evolution. We present here an overview of these techniques, stressing the extreme versatility of the artificial-evolution concept. Their field of application is very large and is not limited to pure optimisation. Artificial-evolution implementations are, however, computationally expensive: efficient tuning of the components and parameters of these algorithms should be based on a clear understanding of the underlying evolutionary mechanisms. Moreover, it is notable that the killer applications of the domain are for the most part based on hybridisation with other optimisation techniques. As a consequence, evolutionary algorithms should be considered not as competitors of, but rather as complements to, "classical" optimisation techniques.
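As a concrete instance of the artificial-evolution idea this overview surveys, here is a minimal (1+1) evolution strategy: one parent, one Gaussian-mutated child, and survival of the better of the two. The objective (the sphere function), the fixed step size, and the iteration budget are illustrative assumptions, not taken from the paper.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=0):
    """(1+1) evolution strategy: mutate the single parent with Gaussian
    noise of fixed standard deviation `sigma`, and keep whichever of
    parent and child has the lower objective value (minimization)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:          # elitist replacement
            x, fx = child, fc
    return x, fx

# Toy objective: the sphere function, minimized at the origin.
def sphere(v):
    return sum(xi * xi for xi in v)

x, fx = one_plus_one_es(sphere, [3.0, -2.0, 1.5])
```

A practical evolution strategy would also adapt `sigma` during the run (e.g. the 1/5th success rule); the fixed step size here keeps the sketch short.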
Genetic Algorithms using Grammatical Evolution
This thesis proposes a new representation for genetic algorithms, based on the idea of a genotype to phenotype mapping process. It allows the explicit encoding of the position and value of all the variables composing a problem, therefore disassociating each variable from its genotypic location. The GAuGE system (Genetic Algorithms using Grammatical Evolution) is developed using this mapping process. In a manner similar to Grammatical Evolution, it ensures that there is no under- nor over-specification of phenotypic variables, therefore always producing syntactically valid solutions. The process is simple to implement and independent of the search engine used; in this work, a genetic algorithm is employed. The formal definition of the mapping process, used in this work, provides a base for analysis of the system, at different levels. The system is applied to a series of benchmark problems, defining its main features and potential problem domains. A thorough analysis of its main characteristics is then presented, including its interaction with genetic operators, the effects of degeneracy, and the evolution of representation. This in-depth analysis highlights the system's aptitude for relative ordering problems, where not only the value of each variable is to be discovered, but also their correct permutation. Finally, the system is applied to the real-world problem of solving Sudoku puzzles, which are shown to be similar to instances of planning and scheduling problems, illustrating the class of problems for which GAuGE can prove to be a useful approach. The results obtained show a substantial improvement in performance, when compared to a standard genetic algorithm, and pave the way to new applications to problems exhibiting similar characteristics.
Science Foundation Ireland
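The genotype-to-phenotype mapping idea, encoding a position together with a value for each variable so that every phenotypic variable is specified exactly once, can be sketched as follows. This is a simplified illustration of the concept, not the exact GAuGE mapping; the codon layout and modulo rules below are assumptions made for the example.

```python
def map_genotype(genome, n_vars, n_values):
    """Position/value mapping in the spirit of GAuGE: each codon pair
    picks a position among those still unfilled (first codon, modulo the
    number of remaining positions) and a value (second codon, modulo the
    value range). Because a position is removed once assigned, every
    variable is specified exactly once -- no under- or over-specification,
    and no repair is needed."""
    remaining = list(range(n_vars))      # positions not yet assigned
    phenotype = [None] * n_vars
    codons = iter(genome)
    for _ in range(n_vars):
        pos_codon = next(codons)
        val_codon = next(codons)
        pos = remaining.pop(pos_codon % len(remaining))
        phenotype[pos] = val_codon % n_values
    return phenotype

# Example: 8 codons -> 4 binary variables.
phenotype = map_genotype([7, 1, 0, 3, 5, 0, 2, 1], n_vars=4, n_values=2)
```

Note how the same value codon can map to different positions depending on the codons before it; this is the disassociation of a variable from its genotypic location that the abstract describes.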
Genetic Algorithms Using Grammatical Evolution
Genetic Programming: 5th European Conference (EuroGP), Kinsale, Co. Cork, Ireland, 3-5 April 2002. This paper describes the GAUGE system, Genetic Algorithms Using Grammatical Evolution. GAUGE is a position-independent genetic algorithm that uses Grammatical Evolution with an attribute grammar to dictate which position a gene codes for. GAUGE suffers from neither under-specification nor over-specification, is guaranteed to produce syntactically correct individuals, and does not require any repair after the application of genetic operators. GAUGE is applied to the standard onemax problem, with results showing that its genotype-to-phenotype mapping and position-independent nature do not affect its performance as a normal genetic algorithm. A new problem is also presented, a deceptive version of the Mastermind game, and we show that GAUGE possesses the position-independence characteristics it claims, outperforming several genetic algorithms, including the competent genetic algorithm messyGA.
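For context, the onemax benchmark used above (maximize the number of ones in a fixed-length bit string) is the kind of problem a standard generational GA baseline handles directly. The sketch below is such a generic baseline with assumed parameter values; it is not the GAUGE system or the paper's experimental setup.

```python
import random

def onemax_ga(length=30, pop_size=40, generations=60, p_mut=0.02, seed=1):
    """Minimal generational GA on onemax: binary tournament selection,
    one-point crossover, per-bit mutation, and one-individual elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]

    def fit(ind):
        return sum(ind)          # onemax fitness: count of ones

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fit(a) >= fit(b) else b

    for _ in range(generations):
        nxt = [max(pop, key=fit)]                  # carry the elite over
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fit)

best = onemax_ga()
```

Onemax has no deceptive structure, which is why the paper pairs it with a deceptive Mastermind variant to expose differences that onemax alone cannot.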