7 research outputs found

    Grammar-based generation of variable-selection heuristics for constraint satisfaction problems

    We propose a grammar-based genetic programming framework that generates variable-selection heuristics for solving constraint satisfaction problems. This approach can be considered a generation hyper-heuristic. A grammar for expressing heuristics is extracted from successful human-designed variable-selection heuristics. The search is performed on the derivation sequences of this grammar using a strongly typed genetic programming framework. The approach brings two innovations to grammar-based hyper-heuristics in this domain: the incorporation of if-then-else rules into the function set, and the implementation of overloaded functions capable of handling different input dimensionalities. Moreover, the heuristic search space is explored not only with evolutionary search but also with two simpler alternative strategies, namely iterated local search and parallel hill climbing. We tested our approach on synthetic and real-world instances. The newly generated heuristics show improved performance compared with human-designed heuristics. Our results suggest that the constrained search space imposed by the proposed grammar is the main factor in the generation of good heuristics. However, the composition of the training set and the search methodology also played an important role in generating more general heuristics. We found that increasing the variability of the training set improved the generality of the evolved heuristics, and the evolutionary search strategy produced slightly better results.
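
    To make the derivation idea concrete, here is a minimal sketch, assuming a toy grammar, a feature set (domain size and degree), and a CSP state that are all my own illustrations rather than the paper's: a random derivation of the grammar yields an expression, including an if-then-else rule, that scores unassigned variables.

```python
# Minimal illustrative sketch (not the authors' implementation): a tiny grammar
# whose derivations build variable-selection heuristics from per-variable
# features, including an if-then-else rule in the function set.
import random

# Hypothetical grammar: each non-terminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<feat>"],
               ["(", "<expr>", "+", "<expr>", ")"],
               ["(", "<expr>", "*", "<expr>", ")"],
               ["ite(", "<expr>", "<", "<expr>", ",", "<expr>", ",", "<expr>", ")"]],
    "<feat>": [["dom"], ["deg"], ["dom/deg"]],
}

def derive(symbol="<expr>", depth=0, max_depth=4, rng=random):
    """Randomly expand a non-terminal into a flat list of tokens."""
    if symbol not in GRAMMAR:
        return [symbol]
    options = GRAMMAR[symbol]
    if depth >= max_depth:                     # force terminals near the depth limit
        options = [options[0]]
    tokens = []
    for tok in rng.choice(options):
        tokens.extend(derive(tok, depth + 1, max_depth, rng))
    return tokens

def ite(cond, then_val, else_val):
    """Helper backing the if-then-else grammar rule."""
    return then_val if cond else else_val

def evaluate(tokens, dom, deg):
    """Score one unassigned variable with the derived heuristic expression."""
    expr = " ".join(tokens).replace("dom/deg", "(dom / max(deg, 1))")
    return eval(expr, {"ite": ite, "max": max}, {"dom": dom, "deg": deg})

# Toy CSP state: (domain size, degree) for each unassigned variable.
variables = {"x1": (3, 4), "x2": (2, 2), "x3": (5, 1)}
heuristic = derive(rng=random.Random(0))
# Smaller score first mimics a "most constrained variable" style ordering.
chosen = min(variables, key=lambda v: evaluate(heuristic, *variables[v]))
print(" ".join(heuristic), "->", chosen)
```

    In the paper the derivation sequence itself is what the strongly typed genetic programming search manipulates; here the derivation is simply drawn at random to show the shape of the generated heuristics.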

    A Classification of Hyper-heuristic Approaches

    The current state of the art in hyper-heuristic research comprises a set of approaches that share the common goal of automating the design and adaptation of heuristic methods to solve hard computational search problems. The main goal is to produce more generally applicable search methodologies. In this chapter we present an overview of previous categorisations of hyper-heuristics and provide a unified classification and definition which captures the work being undertaken in this field. We distinguish between two main hyper-heuristic categories: heuristic selection and heuristic generation. Some representative examples of each category are discussed in detail. Our goal is both to clarify the main features of existing techniques and to suggest new directions for hyper-heuristic research.
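
    A toy sketch of the distinction the classification draws, using a hypothetical one-dimensional bin-packing setting of my own (not from the chapter): a selection hyper-heuristic picks among existing low-level heuristics, while a generation hyper-heuristic assembles a new heuristic from components.

```python
# Illustrative sketch only: the structural difference between heuristic
# selection and heuristic generation. The bin-packing model and all names
# are assumptions, not the chapter's code.
import random

def pack(items, score):
    """Online bin packing: place each item in the feasible bin maximising `score`."""
    bins = []
    for item in items:
        feasible = [b for b in bins if sum(b) + item <= 1.0]
        if feasible:
            max(feasible, key=lambda b: score(sum(b), item)).append(item)
        else:
            bins.append([item])
    return bins

# Two human-designed low-level scoring heuristics.
first_fit = lambda fill, item: 0.0    # indifferent: first feasible bin wins the tie
best_fit = lambda fill, item: fill    # prefer the fullest feasible bin

def selection_hyper_heuristic(items, rng):
    """Heuristic selection: choose one of the existing heuristics."""
    chosen = rng.choice([first_fit, best_fit])
    return pack(items, chosen)

def generation_hyper_heuristic(items, rng):
    """Heuristic generation: build a brand-new scoring rule from components."""
    a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    generated = lambda fill, item: a * fill + b * item
    return pack(items, generated)

items = [0.42, 0.70, 0.31, 0.55, 0.18, 0.60, 0.25]
rng = random.Random(1)
print(len(selection_hyper_heuristic(items, rng)), "bins with a selected human heuristic")
print(len(generation_hyper_heuristic(items, rng)), "bins with a generated heuristic")
```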

    A hyper-heuristic approach to automated generation of mutation operators for evolutionary programming

    Evolutionary programming can solve black-box function optimisation problems by evolving a population of numerical vectors. The variation component in the evolutionary process is supplied by a mutation operator, which is typically a Gaussian, Cauchy, or Lévy probability distribution. In this paper, we use genetic programming to automatically generate mutation operators for an evolutionary programming system, testing the proposed approach over a set of function classes, each of which represents a source of functions. The empirical results over a set of benchmark function classes illustrate that genetic programming can evolve mutation operators which generalise well from the training set to the test set on each function class. The proposed method outperforms existing human-designed mutation operators with statistical significance in most cases, with competitive results observed for the rest.
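
    A rough sketch of the idea under stated assumptions: the sphere benchmark, step size, and the fixed example expression below are mine, and in the paper the expression would itself be evolved by genetic programming rather than hand-written. It shows a mutation operator built from Gaussian and Cauchy deviates plugged into a bare-bones evolutionary programming loop.

```python
# Illustrative sketch, not the paper's system: a mutation operator expressed as a
# small GP-style expression over random deviates, used inside a minimal
# evolutionary programming loop on a sphere function.
import math
import random

rng = random.Random(42)

def gauss():
    return rng.gauss(0.0, 1.0)

def cauchy():
    return math.tan(math.pi * (rng.random() - 0.5))   # standard Cauchy deviate

# A "generated" operator: a fixed expression such as 0.5*N(0,1) + 0.5*Cauchy.
# In the paper this expression would be found by genetic programming.
def generated_mutation(step_size):
    return step_size * (0.5 * gauss() + 0.5 * cauchy())

def sphere(x):
    return sum(xi * xi for xi in x)

def evolve(mutate, dim=10, pop_size=20, generations=200):
    """Minimal (mu + mu) evolutionary programming loop using `mutate`."""
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [[xi + mutate(0.1) for xi in parent] for parent in pop]
        pop = sorted(pop + offspring, key=sphere)[:pop_size]
    return sphere(pop[0])

print("best fitness:", evolve(generated_mutation))
```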

    Hyper-heuristic decision tree induction

    A hyper-heuristic is any algorithm that searches or operates in the space of heuristics, as opposed to the space of solutions. Hyper-heuristics are increasingly used in function and combinatorial optimization. Rather than attempting to solve a problem using a fixed heuristic, a hyper-heuristic approach attempts to find a combination of heuristics that solves a problem (and may in turn be directly suitable for a class of problem instances). Hyper-heuristics have been little explored in data mining. This work presents novel hyper-heuristic approaches to data mining by searching a space of attribute-selection criteria for a decision-tree building algorithm. The search is conducted by a genetic algorithm. The result of the hyper-heuristic search in this case is a strategy for selecting attributes while building decision trees. Most hyper-heuristics work by trying to adapt the heuristic to the state of the problem being solved, and ours is no different: it employs a strategy for adapting the heuristic used to build decision-tree nodes according to a set of features of the training set it is working on. We introduce, explore and evaluate five different ways in which this problem state can be represented for a hyper-heuristic that operates within a decision-tree building algorithm. In each case, the hyper-heuristic is guided by a rule set that tries to map features of the data set to be split by the decision-tree building algorithm to a heuristic to be used for splitting that data set. We also explore and evaluate three different sets of low-level heuristics that could be employed by such a hyper-heuristic. This work also makes a distinction between specialist hyper-heuristics and generalist hyper-heuristics. The main difference between the two is the number of training sets used by the hyper-heuristic genetic algorithm. Specialist hyper-heuristics are created using a single data set from a particular domain to evolve the hyper-heuristic rule set; such algorithms are expected to outperform standard algorithms on the kind of data set used by the hyper-heuristic genetic algorithm. Generalist hyper-heuristics are trained on multiple data sets from different domains and are expected to deliver robust and competitive performance over these data sets when compared to standard algorithms. We evaluate both approaches for each kind of hyper-heuristic presented in this thesis, using both real and synthetic data sets. Our results suggest that none of the hyper-heuristics presented in this work are suited for specialization: in most cases, the hyper-heuristic's performance on the data set it was specialized for was not significantly better than that of the best-performing standard algorithm. On the other hand, the generalist hyper-heuristics delivered results that were very competitive with the best standard methods, and in some cases achieved a significantly better overall performance than all of the standard methods.
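
    The following sketch illustrates the per-node adaptation described above, with a hand-written rule set, impurity measures, and toy data standing in for the evolved rule sets and low-level heuristic sets of the thesis; every name here is an assumption for illustration only.

```python
# Sketch under my own assumptions (not the thesis code): a rule set maps simple
# features of the data reaching a node to a split criterion, which is the
# per-node decision the hyper-heuristic adapts while the tree is built.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# Low-level heuristics: impurity measures a node splitter could minimise.
LOW_LEVEL = {"entropy": entropy, "gini": gini}

# Hypothetical rule set: problem-state features -> low-level heuristic.
def choose_heuristic(n_rows, n_classes):
    if n_rows < 50:
        return LOW_LEVEL["gini"]       # small nodes: use the cheaper measure
    if n_classes > 2:
        return LOW_LEVEL["entropy"]    # multi-class nodes: use entropy
    return LOW_LEVEL["gini"]

def best_split(rows, labels, feature_index):
    """Score candidate thresholds on one feature with the selected heuristic."""
    heuristic = choose_heuristic(len(rows), len(set(labels)))
    best = None
    for threshold in sorted({r[feature_index] for r in rows}):
        left = [y for r, y in zip(rows, labels) if r[feature_index] <= threshold]
        right = [y for r, y in zip(rows, labels) if r[feature_index] > threshold]
        if not left or not right:
            continue
        score = (len(left) * heuristic(left) + len(right) * heuristic(right)) / len(rows)
        if best is None or score < best[0]:
            best = (score, threshold)
    return best

rows = [(2.1,), (1.3,), (3.7,), (0.8,), (2.9,), (3.1,)]
labels = ["a", "a", "b", "a", "b", "b"]
print(best_split(rows, labels, feature_index=0))
```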

    A study of evolutionary perturbative hyper-heuristics for the nurse rostering problem.

    Master of Science in Computer Science, University of KwaZulu-Natal, Pietermaritzburg, 2017. Hyper-heuristics are an emerging field of study for combinatorial optimization. The aim of a hyper-heuristic is to produce good results across a set of problems rather than the best result for a single problem. There has been little investigation of hyper-heuristics for the nurse rostering problem. The majority of hyper-heuristics for the nurse rostering problem fit into a single type of hyper-heuristic, the selection perturbative hyper-heuristic. There is no work employing evolutionary algorithms as selection perturbative hyper-heuristics for the nurse rostering problem, and no work using the generative perturbative type of hyper-heuristic for this problem. The first objective of this dissertation is to investigate the selection perturbative hyper-heuristic for the nurse rostering problem and the effectiveness of employing an evolutionary algorithm (SPHH). The second objective is to investigate a generative perturbative hyper-heuristic that evolves perturbation heuristics for the nurse rostering problem using genetic programming (GPHH). The third objective is to compare the performance of SPHH and GPHH. SPHH and GPHH were evaluated using the INRC2010 benchmark data set, and the results obtained were compared to available results from the literature. The INRC2010 benchmark set comprises sprint, medium and long instance types. SPHH and GPHH produced good results on the INRC2010 benchmark data set and were found to have different strengths and weaknesses: SPHH found better results than GPHH on the medium instances, while GPHH found better results than SPHH on the long instances. SPHH produced better average results, whereas GPHH produced results that were closer to the best known results. These results suggest that future research should investigate combining SPHH and GPHH to benefit from the strengths of both perturbative hyper-heuristics.
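
    A minimal sketch of the selection perturbative loop, assuming a toy roster model, cost function, and low-level heuristics of my own invention rather than the dissertation's INRC2010 setup: a low-level perturbation heuristic is selected at each step, applied to the current roster, and the move is kept if it does not worsen the penalty.

```python
# Toy sketch only (not the dissertation's SPHH or GPHH code): the basic loop of a
# selection perturbative hyper-heuristic. A roster is a nurse-by-day grid of
# shifts; low-level heuristics perturb it and non-worsening moves are accepted.
import random

rng = random.Random(7)
NURSES, DAYS, SHIFTS = 4, 7, ["D", "N", "-"]          # day, night, off

def random_roster():
    return [[rng.choice(SHIFTS) for _ in range(DAYS)] for _ in range(NURSES)]

def cost(roster):
    """Toy objective: each day should have exactly one D and one N shift."""
    penalty = 0
    for day in range(DAYS):
        column = [roster[n][day] for n in range(NURSES)]
        penalty += abs(column.count("D") - 1) + abs(column.count("N") - 1)
    return penalty

# Low-level perturbation heuristics (hypothetical names).
def reassign_shift(roster):
    n, d = rng.randrange(NURSES), rng.randrange(DAYS)
    roster[n][d] = rng.choice(SHIFTS)

def swap_days(roster):
    n = rng.randrange(NURSES)
    d1, d2 = rng.sample(range(DAYS), 2)
    roster[n][d1], roster[n][d2] = roster[n][d2], roster[n][d1]

def sphh(iterations=2000):
    """Selection perturbative loop: pick a heuristic, apply it, keep if not worse."""
    heuristics = [reassign_shift, swap_days]
    roster = random_roster()
    best = cost(roster)
    for _ in range(iterations):
        candidate = [row[:] for row in roster]
        rng.choice(heuristics)(candidate)             # heuristic selection step
        if cost(candidate) <= best:                   # simple acceptance criterion
            roster, best = candidate, cost(candidate)
    return best

print("final penalty:", sphh())
```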

    Field Guide to Genetic Programming
