75 research outputs found

    Hyperheuristics for explicit resource partitioning in simultaneous multithreaded processors

    Get PDF

    Hyper-heuristic decision tree induction

    Get PDF
    A hyper-heuristic is any algorithm that searches or operates in the space of heuristics as opposed to the space of solutions. Hyper-heuristics are increasingly used in function and combinatorial optimization. Rather than attempt to solve a problem using a fixed heuristic, a hyper-heuristic approach attempts to find a combination of heuristics that solves a problem (and in turn may be directly suitable for a class of problem instances). Hyper-heuristics have been little explored in data mining. This work presents novel hyper-heuristic approaches to data mining, by searching a space of attribute selection criteria for a decision tree building algorithm. The search is conducted by a genetic algorithm. The result of the hyper-heuristic search in this case is a strategy for selecting attributes while building decision trees. Most hyper-heuristics work by trying to adapt the heuristic to the state of the problem being solved. Our hyper-heuristic is no different. It employs a strategy for adapting the heuristic used to build decision tree nodes according to some set of features of the training set it is working on. We introduce, explore and evaluate five different ways in which this problem state can be represented for a hyper-heuristic that operates within a decision tree building algorithm. In each case, the hyper-heuristic is guided by a rule set that tries to map features of the data set to be split by the decision tree building algorithm to a heuristic to be used for splitting the same data set. We also explore and evaluate three different sets of low-level heuristics that could be employed by such a hyper-heuristic. This work also makes a distinction between specialist hyper-heuristics and generalist hyper-heuristics. The main difference between these two hyper-heuristics is the number of training sets used by the hyper-heuristic genetic algorithm. Specialist hyper-heuristics are created using a single data set from a particular domain for evolving the hyper-heuristic rule set. Such algorithms are expected to outperform standard algorithms on the kind of data set used by the hyper-heuristic genetic algorithm. Generalist hyper-heuristics are trained on multiple data sets from different domains and are expected to deliver a robust and competitive performance over these data sets when compared to standard algorithms. We evaluate both approaches for each kind of hyper-heuristic presented in this thesis. We use both real data sets as well as synthetic data sets. Our results suggest that none of the hyper-heuristics presented in this work are suited for specialization – in most cases, the hyper-heuristic's performance on the data set it was specialized for was not significantly better than that of the best performing standard algorithm. On the other hand, the generalist hyper-heuristics delivered results that were very competitive with the best standard methods. In some cases we even achieved a significantly better overall performance than all of the standard methods.
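
    As a rough illustration of the idea in the abstract above (a rule set mapping features of the data reaching a tree node to a splitting heuristic), the Python sketch below hand-writes such a rule set instead of evolving it with a genetic algorithm; the dataset features, thresholds and heuristic pool are assumptions for the example, not the thesis' actual configuration.

        import math
        from collections import Counter

        # Pool of low-level splitting heuristics (standard criteria; the choice of
        # pool here is an illustrative assumption).
        def entropy(labels):
            counts = Counter(labels)
            total = len(labels)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        def information_gain(labels, left, right):
            n = len(labels)
            return entropy(labels) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)

        def gain_ratio(labels, left, right):
            split_info = entropy([0] * len(left) + [1] * len(right))
            return information_gain(labels, left, right) / split_info if split_info else 0.0

        def gini_reduction(labels, left, right):
            def gini(ls):
                counts = Counter(ls)
                return 1.0 - sum((c / len(ls)) ** 2 for c in counts.values())
            n = len(labels)
            return gini(labels) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

        # A "problem state" description of the data reaching a tree node.
        def describe(labels, n_features):
            return {"n_samples": len(labels), "n_features": n_features, "class_entropy": entropy(labels)}

        # A rule set mapping regions of the state space to heuristics. In the thesis
        # this mapping is evolved by a GA; here it is hand-written for illustration.
        RULES = [
            (lambda s: s["n_samples"] < 50, gini_reduction),
            (lambda s: s["class_entropy"] > 0.9, gain_ratio),
            (lambda s: True, information_gain),          # default rule
        ]

        def choose_split_heuristic(state):
            for condition, heuristic in RULES:
                if condition(state):
                    return heuristic

        if __name__ == "__main__":
            labels = [0, 1, 1, 0, 1, 1, 1, 0]
            state = describe(labels, n_features=4)
            print(choose_split_heuristic(state).__name__)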

    A tutorial for competent memetic algorithms: Model, taxonomy and design issues

    Get PDF
    The combination of evolutionary algorithms with local search was named "memetic algorithms" (MAs) (Moscato, 1989). These methods are inspired by models of natural systems that combine the evolutionary adaptation of a population with individual learning within the lifetimes of its members. Additionally, MAs are inspired by Richard Dawkins' concept of a meme, which represents a unit of cultural evolution that can exhibit local refinement (Dawkins, 1976). In the case of MAs, "memes" refer to the strategies (e.g., local refinement, perturbation, or constructive methods) that are employed to improve individuals. In this paper, we review some works on the application of MAs to well-known combinatorial optimization problems, and place them in a framework defined by a general syntactic model. This model provides us with a classification scheme based on a computable index D, which facilitates algorithmic comparisons and suggests areas for future research. Also, by having an abstract model for this class of metaheuristics, it is possible to explore their design space and better understand their behavior from a theoretical standpoint. We illustrate the theoretical and practical relevance of this model and taxonomy for MAs in the context of a discussion of important design issues that must be addressed to produce effective and efficient MAs.
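
    To make the "evolutionary adaptation plus individual learning" structure concrete, here is a minimal memetic algorithm sketch in Python on the OneMax problem; the problem, the bit-flip "meme" and all parameter values are illustrative choices for this sketch, not taken from the paper.

        import random

        def fitness(bits):
            return sum(bits)                              # OneMax: maximise the number of 1s

        def local_search(bits, budget=10):
            # The "meme": a small budget of improving bit flips (individual learning).
            for _ in range(budget):
                i = random.randrange(len(bits))
                trial = bits[:]
                trial[i] = 1 - trial[i]
                if fitness(trial) > fitness(bits):
                    bits = trial
            return bits

        def crossover(a, b):
            point = random.randrange(1, len(a))
            return a[:point] + b[point:]

        def memetic_algorithm(n_bits=40, pop_size=20, generations=50):
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
                children = [crossover(random.choice(parents), random.choice(parents))
                            for _ in range(pop_size)]
                # Population-level evolution followed by lifetime learning of each child.
                pop = [local_search(child) for child in children]
            return max(pop, key=fitness)

        if __name__ == "__main__":
            print(fitness(memetic_algorithm()))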

    A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem

    Get PDF
    Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise crossover. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level, where no problem-specific information is required. Second, it is controlled at the problem domain level, where problem-specific information is used to produce good-quality solutions for use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain. © 2016 Massachusetts Institute of Technology.
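
    A minimal sketch of the kind of single-point selection hyper-heuristic described above, keeping a small list of partner solutions for crossover managed at the hyper-heuristic level; for brevity a 0-1 knapsack with a single constraint stands in for the multidimensional problem, and the heuristic pool, acceptance rule and parameters are illustrative assumptions.

        import random

        random.seed(1)
        VALUES  = [random.randint(1, 30) for _ in range(30)]
        WEIGHTS = [random.randint(1, 20) for _ in range(30)]
        CAPACITY = sum(WEIGHTS) // 3

        def fitness(sol):
            weight = sum(w for w, x in zip(WEIGHTS, sol) if x)
            if weight > CAPACITY:
                return -weight                            # penalise infeasible solutions
            return sum(v for v, x in zip(VALUES, sol) if x)

        def flip_mutation(sol, _partner):                 # partner ignored; common signature
            i = random.randrange(len(sol))
            return sol[:i] + [1 - sol[i]] + sol[i + 1:]

        def uniform_crossover(sol, partner):
            return [a if random.random() < 0.5 else b for a, b in zip(sol, partner)]

        LOW_LEVEL_HEURISTICS = [flip_mutation, uniform_crossover]

        def hyper_heuristic(iterations=5000, memory_size=5):
            current = [random.randint(0, 1) for _ in VALUES]
            crossover_list = [current[:] for _ in range(memory_size)]   # partners for crossover
            for _ in range(iterations):
                heuristic = random.choice(LOW_LEVEL_HEURISTICS)         # simple random selection
                partner = random.choice(crossover_list)
                candidate = heuristic(current, partner)
                if fitness(candidate) >= fitness(current):              # naive move acceptance
                    current = candidate
                    # Managed at the hyper-heuristic level: keep recent accepted solutions.
                    crossover_list.pop(0)
                    crossover_list.append(current[:])
            return current

        if __name__ == "__main__":
            print(fitness(hyper_heuristic()))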

    Ant algorithm hyperheuristic approaches for scheduling problems

    Get PDF
    For decades, optimisation research has investigated methods to find optimal solutions to many problems in the fields of scheduling, timetabling and rostering. A family of abstract methods known as metaheuristics have been developed and applied to many of these problems, but their application to specific problems requires problem-specific coding and parameter tuning to produce the best results for that problem. Such specialisation makes code difficult to adapt to new problem instances or new problems. One methodology intended to increase the generality of state-of-the-art algorithms is known as hyperheuristics. Hyperheuristics are algorithms which construct algorithms: using "building block" heuristics, the higher-level algorithm chooses between heuristics to move around the solution space, learning how to use the heuristics to find better solutions. We introduce a new hyperheuristic based upon the well-known ant algorithm metaheuristic, and apply it to several real-world problems without parameter tuning, producing results that are competitive with other hyperheuristic methods and established bespoke metaheuristic techniques.
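
    The following Python sketch illustrates the general shape of an ant-algorithm hyper-heuristic: ants walk over a graph whose vertices are low-level heuristics and whose edge pheromone guides which heuristic to apply next, and pheromone is reinforced when the resulting schedule improves. The toy single-machine scheduling objective, the heuristic pool and the parameters are illustrative assumptions rather than the paper's setup.

        import random

        random.seed(3)
        JOBS = [(random.randint(1, 10), random.randint(1, 5)) for _ in range(15)]  # (duration, weight)

        def cost(order):
            t, total = 0, 0
            for j in order:
                duration, weight = JOBS[j]
                t += duration
                total += weight * t                       # total weighted completion time
            return total

        def swap(order):
            i, j = random.sample(range(len(order)), 2)
            order = order[:]
            order[i], order[j] = order[j], order[i]
            return order

        def insert(order):
            order = order[:]
            j = order.pop(random.randrange(len(order)))
            order.insert(random.randrange(len(order) + 1), j)
            return order

        def reverse_segment(order):
            i, j = sorted(random.sample(range(len(order)), 2))
            return order[:i] + order[i:j][::-1] + order[j:]

        HEURISTICS = [swap, insert, reverse_segment]

        def ant_hyper_heuristic(n_ants=10, n_iterations=50, walk_length=20):
            pheromone = [[1.0] * len(HEURISTICS) for _ in HEURISTICS]
            best = list(range(len(JOBS)))
            for _ in range(n_iterations):
                for _ in range(n_ants):
                    order, h = best[:], random.randrange(len(HEURISTICS))
                    path = []
                    for _ in range(walk_length):
                        nxt = random.choices(range(len(HEURISTICS)), weights=pheromone[h])[0]
                        order = HEURISTICS[nxt](order)
                        path.append((h, nxt))
                        h = nxt
                    if cost(order) < cost(best):
                        best = order
                        for a, b in path:                 # reinforce the heuristic sequence
                            pheromone[a][b] += 1.0
                pheromone = [[0.9 * p for p in row] for row in pheromone]   # evaporation
            return best

        if __name__ == "__main__":
            print(cost(ant_hyper_heuristic()))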

    A multi-objective and evolutionary hyper-heuristic applied to the Integration and Test Order Problem

    Get PDF
    The field of Search-Based Software Engineering (SBSE) has widely utilized Multi-Objective Evolutionary Algorithms (MOEAs) to solve complex software engineering problems. However, the use of such algorithms can be a hard task for the software engineer, mainly due to the significant range of parameter and algorithm choices. To help in this task, the use of hyper-heuristics is recommended. Hyper-heuristics can select or generate low-level heuristics while optimization algorithms are executed, and thus can be applied generically. Despite their benefits, we find only a few works using hyper-heuristics in the SBSE field. Considering this fact, we describe HITO, a Hyper-heuristic for the Integration and Test Order Problem, which adaptively selects search operators while MOEAs are executed, using one of two selection methods: Choice Function or Multi-Armed Bandit. The experimental results show that HITO can outperform the traditional MOEAs NSGA-II and MOEA/DD. HITO is also a generic algorithm, since the user does not need to select crossover and mutation operators, nor adjust their parameters.
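
    As a sketch of one of the two selection methods mentioned above, the Python snippet below implements UCB-style Multi-Armed Bandit selection between two search operators, crediting an operator whenever its offspring improves on the parent; the operators, the reward definition and the toy objective are assumptions for illustration, not HITO's actual configuration.

        import math
        import random

        def one_point_crossover(a, b):
            p = random.randrange(1, len(a))
            return a[:p] + b[p:]

        def swap_mutation(a, _b):                         # second argument ignored; common signature
            a = a[:]
            i, j = random.sample(range(len(a)), 2)
            a[i], a[j] = a[j], a[i]
            return a

        OPERATORS = [one_point_crossover, swap_mutation]

        class UCBSelector:
            def __init__(self, n_arms, c=2.0):
                self.counts = [0] * n_arms
                self.rewards = [0.0] * n_arms
                self.c = c

            def select(self):
                total = sum(self.counts) + 1
                def ucb(i):
                    if self.counts[i] == 0:
                        return float("inf")               # try each operator at least once
                    mean = self.rewards[i] / self.counts[i]
                    return mean + self.c * math.sqrt(math.log(total) / self.counts[i])
                return max(range(len(self.counts)), key=ucb)

            def update(self, arm, reward):
                self.counts[arm] += 1
                self.rewards[arm] += reward

        if __name__ == "__main__":
            def fitness(x):                               # illustrative single objective
                return sum((i + 1) * v for i, v in enumerate(x))
            selector = UCBSelector(len(OPERATORS))
            parent = [random.randint(0, 1) for _ in range(20)]
            reference = [1] * 20                          # illustrative second parent
            for _ in range(2000):
                arm = selector.select()
                child = OPERATORS[arm](parent, reference)
                reward = 1.0 if fitness(child) > fitness(parent) else 0.0
                selector.update(arm, reward)
                if reward:
                    parent = child
            print(fitness(parent), selector.counts)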

    A multi-objective hyper-heuristic based on choice function

    Get PDF
    Hyper-heuristics are emerging methodologies that perform a search over the space of heuristics in an attempt to solve difficult computational optimization problems. We present a learning, choice-function-based selection hyper-heuristic to solve multi-objective optimization problems. This high-level approach controls and combines the strengths of three well-known multi-objective evolutionary algorithms (i.e., NSGA-II, SPEA2 and MOGA), utilizing them as the low-level heuristics. The performance of the proposed learning hyper-heuristic is investigated on the Walking Fish Group test suite, which is a common benchmark for multi-objective optimization. Additionally, the proposed hyper-heuristic is applied to the vehicle crashworthiness design problem as a real-world multi-objective problem. The experimental results demonstrate the effectiveness of the hyper-heuristic approach when compared to the performance of each low-level heuristic run on its own, as well as to other approaches, including an adaptive multi-method search, namely AMALGAM.
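
    The classic choice function scores each low-level heuristic h as F(h) = alpha*f1(h) + beta*f2(prev, h) + delta*f3(h), combining recent individual performance, performance when h follows the previously applied heuristic, and the time since h was last used. The Python sketch below shows this bookkeeping with illustrative weights and a placeholder reward; the paper applies the idea to choose among NSGA-II, SPEA2 and MOGA rather than the bare heuristic indices used here.

        import random
        import time

        class ChoiceFunction:
            def __init__(self, n_heuristics, alpha=1.0, beta=1.0, delta=1.0):
                self.alpha, self.beta, self.delta = alpha, beta, delta
                self.f1 = [0.0] * n_heuristics                  # individual performance
                self.f2 = [[0.0] * n_heuristics for _ in range(n_heuristics)]  # pairwise performance
                self.last_used = [0.0] * n_heuristics           # f3: elapsed time since last use
                self.prev = None

            def select(self):
                now = time.time()
                def score(h):
                    pair = self.f2[self.prev][h] if self.prev is not None else 0.0
                    return (self.alpha * self.f1[h] + self.beta * pair
                            + self.delta * (now - self.last_used[h]))
                return max(range(len(self.f1)), key=score)

            def update(self, h, improvement):
                self.f1[h] = improvement + 0.5 * self.f1[h]     # decayed individual credit
                if self.prev is not None:
                    self.f2[self.prev][h] = improvement + 0.5 * self.f2[self.prev][h]
                self.last_used[h] = time.time()
                self.prev = h

        if __name__ == "__main__":
            cf = ChoiceFunction(n_heuristics=3)
            for step in range(10):
                h = cf.select()
                improvement = random.random()                   # placeholder reward
                cf.update(h, improvement)
                print(step, "chose heuristic", h)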

    Treasure hunt : a framework for cooperative, distributed parallel optimization

    Get PDF
    Advisor: Prof. Dr. Daniel Weingaertner. Co-advisor: Prof. Dr. Myriam Regattieri Delgado. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defence: Curitiba, 27/05/2019. Includes references: p. 18-20. Area of concentration: Computer Science.
    Abstract: This work proposes a multilevel framework called Treasure Hunt, which is capable of distributing independent search algorithms to a large number of processing nodes. Aiming to obtain joint convergence between working nodes, Treasure Hunt proposes a driving mechanism that smoothly controls the cooperation between the multiple independent Treasure Hunt instances. The tree topology proposed by Treasure Hunt ensures quick propagation of information, while providing simultaneous explorations (by parents) and exploitations (by children) at several levels of granularity, regardless of the number of nodes in the tree. Treasure Hunt has good fault tolerance and is partially prepared for full fault tolerance. As part of the methods developed during this work, an automated Iterative Partitioning method is proposed to control the balance between exploration and exploitation as the search progresses. A Convergence Stabilization Model, operating in online mode, is also proposed, aiming to find good cost/benefit stopping points for the optimization algorithms running within the Treasure Hunt instances. Experiments on classic, random and competition benchmarks of various sizes and complexities, using the search algorithms PSO, DE and CCPSO2, show that Treasure Hunt boosts the inherent characteristics of these search algorithms. Treasure Hunt makes poorly performing algorithms comparable to good ones and enables well-performing algorithms to extend their limits to larger problems. Experiments distributing Treasure Hunt instances in a cooperative network of up to 160 processes show the robust scaling of the framework, presenting improved results even when a wall-clock time limit is fixed for all distributed instances. Results show that the sampling mechanism provided by Treasure Hunt, allied to the increased cooperation between multiple evolving populations, reduces the need for large population sizes and complex search algorithms. This is especially important on real-world problems with time-consuming fitness functions. Keywords: Artificial intelligence. Optimization methods. Distributed algorithms. Convergence modeling. High dimensionality.
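
    A highly simplified, sequential Python sketch of the tree idea described in the abstract: each node runs its own search on a shared objective, parents explore with larger steps, children intensify around the parent's best point, and improvements are reported back up the tree. The sphere objective, depth, branching factor and step sizes are illustrative assumptions; the real framework distributes instances across many processes.

        import random

        def sphere(x):
            return sum(v * v for v in x)                  # illustrative objective (minimise)

        def local_random_search(start, step, iterations=200):
            best = start[:]
            for _ in range(iterations):
                candidate = [v + random.uniform(-step, step) for v in best]
                if sphere(candidate) < sphere(best):
                    best = candidate
            return best

        def treasure_hunt_node(start, step, depth, branching=2):
            best = local_random_search(start, step)       # this node's own search
            if depth > 0:
                for _ in range(branching):
                    # Children intensify around the parent's best point (smaller step).
                    child_best = treasure_hunt_node(best, step * 0.3, depth - 1, branching)
                    if sphere(child_best) < sphere(best):
                        best = child_best                 # propagate improvements back up
            return best

        if __name__ == "__main__":
            start = [random.uniform(-5, 5) for _ in range(10)]
            print(round(sphere(treasure_hunt_node(start, step=1.0, depth=3)), 6))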

    A comparative study of fuzzy parameter control in a general purpose local search metaheuristic

    Get PDF
    There is a growing number of studies on general-purpose metaheuristics that are directly applicable to multiple domains. Parameter setting is a particular issue, considering that many such search methods come with a set of parameters to be configured. Fuzzy logic has been used extensively in control applications and is known for its ability to handle uncertainty. In this study, we investigate the potential of using fuzzy systems to control the parameter settings of a threshold accepting (TA) metaheuristic for improving the overall effectiveness of a cross-domain approach. We have evaluated the performance of various general-purpose local search metaheuristics which mix multiple heuristics at random and apply the TA metaheuristic with a fixed threshold, crisp (non-fuzzy) rule-based control of the threshold, and various fuzzy systems controlling the threshold. The empirical results show that the approach using TA with crisp rule-based control performs best across six problem domains from a benchmark.
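
    For concreteness, the Python sketch below shows a threshold accepting loop whose threshold is adjusted by crisp (non-fuzzy) rules based on the recent rate of improving moves, broadly in the spirit of the rule-based control compared above; the toy objective, the rules and the parameter values are illustrative assumptions.

        import random

        def objective(x):
            return sum(v * v for v in x)                  # minimise a sphere function

        def crisp_threshold_rule(threshold, improvement_rate):
            # Crisp control: widen the threshold when the search stalls,
            # tighten it when improvements are frequent.
            if improvement_rate < 0.05:
                return threshold * 1.5
            if improvement_rate > 0.30:
                return threshold * 0.5
            return threshold

        def threshold_accepting(dim=10, iterations=5000, window=100):
            current = [random.uniform(-5, 5) for _ in range(dim)]
            best = current[:]
            threshold, improvements = 1.0, 0
            for t in range(1, iterations + 1):
                i = random.randrange(dim)
                candidate = current[:]
                candidate[i] += random.uniform(-0.5, 0.5)
                delta = objective(candidate) - objective(current)
                if delta < threshold:                     # TA acceptance criterion
                    current = candidate
                    if delta < 0:
                        improvements += 1
                if objective(current) < objective(best):
                    best = current[:]
                if t % window == 0:                       # periodic rule-based threshold update
                    threshold = crisp_threshold_rule(threshold, improvements / window)
                    improvements = 0
            return best

        if __name__ == "__main__":
            print(round(objective(threshold_accepting()), 6))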