10 research outputs found

    The falling tide algorithm: A new multi-objective approach for complex workforce scheduling

    We present a hybrid approach of goal programming and meta-heuristic search to find compromise solutions for a difficult employee scheduling problem, i.e. nurse rostering with many hard and soft constraints. By employing a goal programming model with different parameter settings in its objective function, we can easily obtain a coarse solution in which only the system constraints (i.e. hard constraints) are satisfied, and an ideal objective-value vector in which each single goal (i.e. each soft constraint) reaches its optimal value. The coarse solution is generally unusable in practice, but it can act as an initial point for the subsequent meta-heuristic search to speed up convergence. The ideal objective-value vector is, of course, usually unachievable, but it can help a multi-criteria search method (i.e. compromise programming) to evaluate the fitness of obtained solutions more efficiently. By incorporating three distance metrics with changing weight vectors, we propose a new time-predefined meta-heuristic approach, which we call the falling tide algorithm, and apply it within a multi-objective framework to find various compromise solutions. With this approach, not only can we achieve a trade-off between computational time and solution quality, but we can also achieve a trade-off between the conflicting objectives to enable better decision-making.
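    A minimal sketch of the compromise-programming fitness evaluation described above, assuming a minimisation problem and an ideal objective-value vector obtained from the goal-programming phase; the function and variable names are illustrative, not the paper's notation.

```python
# Minimal sketch of compromise-programming fitness, assuming minimisation and a
# known ideal objective-value vector from the goal-programming phase.
# Names (ideal, weights, p) are illustrative, not the paper's notation.

def compromise_distance(objectives, ideal, weights, p):
    """Weighted Lp distance from the ideal objective-value vector."""
    deviations = [w * (z - z_star) for w, z, z_star in zip(weights, objectives, ideal)]
    if p == float("inf"):
        return max(deviations)          # Chebyshev (L-infinity) metric
    return sum(d ** p for d in deviations) ** (1.0 / p)

# Example: ideal vector from single-objective runs, and a candidate solution's
# soft-constraint penalties, evaluated under three different distance metrics.
ideal = [0.0, 2.0, 1.0]
candidate = [3.0, 4.0, 1.5]
for p in (1, 2, float("inf")):
    print(p, compromise_distance(candidate, ideal, weights=[0.5, 0.3, 0.2], p=p))
```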

    A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems

    Hyper-heuristics are search methodologies that aim to provide high-quality solutions across a wide variety of problem domains, rather than developing tailor-made methodologies for each problem instance/domain. A traditional hyper-heuristic framework has two levels, namely the high-level strategy (heuristic selection mechanism and acceptance criterion) and the low-level heuristics (a set of problem-specific heuristics). Because different problem instances have different landscape structures, the high-level strategy plays an important role in the design of a hyper-heuristic framework. In this paper, we propose a new high-level strategy for a hyper-heuristic framework. The proposed high-level strategy utilizes a dynamic multiarmed bandit-extreme value-based reward as an online heuristic selection mechanism to select the appropriate heuristic to apply at each iteration. In addition, we propose a gene expression programming framework to automatically generate the acceptance criterion for each problem instance, instead of using human-designed criteria. Two well-known, and very different, combinatorial optimization problems, one static (exam timetabling) and one dynamic (dynamic vehicle routing), are used to demonstrate the generality of the proposed framework. Compared with state-of-the-art hyper-heuristics and other bespoke methods, empirical results demonstrate that the proposed framework generalizes well across both domains. We obtain competitive, if not better, results when compared to the best known results obtained by other methods presented in the scientific literature. We also compare our approach against the recently released hyper-heuristic competition test suite, and again demonstrate the generality of our approach when compared with other methods that have used the same six benchmark datasets from this test suite.
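    The following sketch illustrates the flavour of an online heuristic-selection mechanism based on a multi-armed bandit with extreme-value credit assignment; the sliding window, UCB-style scoring and parameter values are assumptions for illustration and are not taken from the paper.

```python
import math

# Illustrative sketch of bandit-style online heuristic selection with an
# extreme-value (best recent improvement) credit; window size and the
# UCB-style exploration term are assumptions, not the paper's formulation.

class BanditSelector:
    def __init__(self, n_heuristics, window=20, c=2.0):
        self.counts = [0] * n_heuristics
        self.rewards = [[] for _ in range(n_heuristics)]   # recent improvements
        self.window, self.c = window, c
        self.t = 0

    def select(self):
        self.t += 1
        for h, n in enumerate(self.counts):                # play each arm once first
            if n == 0:
                return h
        def score(h):
            credit = max(self.rewards[h]) if self.rewards[h] else 0.0  # extreme value
            return credit + self.c * math.sqrt(math.log(self.t) / self.counts[h])
        return max(range(len(self.counts)), key=score)

    def update(self, heuristic, improvement):
        self.counts[heuristic] += 1
        self.rewards[heuristic].append(max(0.0, improvement))
        self.rewards[heuristic] = self.rewards[heuristic][-self.window:]  # sliding window
```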

    Hybridizing guided genetic algorithm and single-based metaheuristics to solve unrelated parallel machine scheduling problem with scarce resources

    This paper focuses on solving the unrelated parallel machine scheduling problem with resource constraints (UPMR). There are j jobs, and each job needs to be processed on one of the machines with the aim of minimizing the makespan. Besides depending on the machine, the processing time of any job depends on the usage of a scarce renewable resource. A certain number of these resources (Rmax) can be distributed among jobs at any time in order to process them, and each job j needs a number of resource units (rjm) when processed on machine m. When more resources are assigned to a job, its processing time decreases. However, the number of available resources is limited, and this makes it difficult to obtain a good-quality solution. Genetic algorithms show promising results in solving UPMR. However, the genetic algorithm suffers from premature convergence, which can degrade solution quality. Therefore, this work hybridizes a guided genetic algorithm (GGA) with single-based metaheuristics (SBHs) to handle premature convergence, with the aim of escaping from local optima and further improving solution quality. The single-based metaheuristic replaces the mutation operator in the genetic algorithm. The algorithm's performance was evaluated through extensive experiments.
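    A skeleton of the hybridisation idea: a genetic algorithm in which the mutation operator is replaced by a single-solution local search. The makespan evaluation below ignores the resource dimension and the hill-climbing neighbourhood is a simplified placeholder, so this is only a sketch of the scheme, not the GGA/SBH method itself.

```python
import random

# Sketch: GA where local search (a single-based metaheuristic) replaces mutation.

def evaluate(assignment, proc_time):
    """Makespan: load of the busiest machine (resource effects omitted here)."""
    loads = {}
    for job, machine in enumerate(assignment):
        loads[machine] = loads.get(machine, 0) + proc_time[job][machine]
    return max(loads.values())

def local_search(assignment, proc_time, iters=50):
    """Single-solution step used in place of mutation (simple hill climbing)."""
    best = assignment[:]
    for _ in range(iters):
        cand = best[:]
        job = random.randrange(len(cand))
        cand[job] = random.randrange(len(proc_time[job]))   # reassign one job
        if evaluate(cand, proc_time) <= evaluate(best, proc_time):
            best = cand
    return best

def hybrid_ga(proc_time, pop_size=30, generations=100):
    n_jobs, n_machines = len(proc_time), len(proc_time[0])
    pop = [[random.randrange(n_machines) for _ in range(n_jobs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: evaluate(a, proc_time))
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n_jobs)
            child = p1[:cut] + p2[cut:]                     # one-point crossover
            children.append(local_search(child, proc_time)) # local search replaces mutation
        pop = parents + children
    return min(pop, key=lambda a: evaluate(a, proc_time))
```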

    Channel Assignment in Cellular Communication Using a Great Deluge Hyper-Heuristic

    This paper proposes a methodology for the channel assignment problem in the cellular communication industry. The problem considers the assignment of a limited channel bandwidth to satisfy a growing channel demand without violating electromagnetic interference constraints. The initial solution is generated using a random constructive heuristic. This solution is then improved using a hyper-heuristic technique based on the great deluge algorithm. Our experiments on benchmark data sets give promising results.
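    A minimal sketch of a great-deluge-based hyper-heuristic loop of the kind described above, assuming a minimisation objective such as a count of interference violations; the low-level heuristics, the cost function and the linear decay rate are placeholders to be supplied for the channel assignment problem.

```python
import random

# Great deluge acceptance inside a simple hyper-heuristic loop (minimisation).

def great_deluge_hh(initial, cost, low_level_heuristics, iterations=10000):
    current, best = initial, initial
    level = cost(initial)                                  # initial water level
    decay = level / iterations                             # linear decay rate (assumed)
    for _ in range(iterations):
        heuristic = random.choice(low_level_heuristics)    # simple random selection
        candidate = heuristic(current)
        if cost(candidate) <= cost(current) or cost(candidate) <= level:
            current = candidate                            # great deluge acceptance
            if cost(current) < cost(best):
                best = current
        level -= decay                                     # lower the water level
    return best
```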

    Hyper-heuristic decision tree induction

    A hyper-heuristic is any algorithm that searches or operates in the space of heuristics, as opposed to the space of solutions. Hyper-heuristics are increasingly used in function and combinatorial optimization. Rather than attempting to solve a problem using a fixed heuristic, a hyper-heuristic approach attempts to find a combination of heuristics that solves a problem (and in turn may be directly suitable for a class of problem instances). Hyper-heuristics have been little explored in data mining. This work presents novel hyper-heuristic approaches to data mining by searching a space of attribute selection criteria for decision tree building algorithms. The search is conducted by a genetic algorithm. The result of the hyper-heuristic search in this case is a strategy for selecting attributes while building decision trees. Most hyper-heuristics work by trying to adapt the heuristic to the state of the problem being solved, and ours is no different: it employs a strategy for adapting the heuristic used to build decision tree nodes according to a set of features of the training set it is working on. We introduce, explore and evaluate five different ways in which this problem state can be represented for a hyper-heuristic that operates within a decision-tree building algorithm. In each case, the hyper-heuristic is guided by a rule set that tries to map features of the data set to be split by the decision tree building algorithm to a heuristic to be used for splitting that data set. We also explore and evaluate three different sets of low-level heuristics that could be employed by such a hyper-heuristic. This work also makes a distinction between specialist hyper-heuristics and generalist hyper-heuristics. The main difference between the two is the number of training sets used by the hyper-heuristic genetic algorithm. Specialist hyper-heuristics are created using a single data set from a particular domain for evolving the hyper-heuristic rule set; such algorithms are expected to outperform standard algorithms on the kind of data set used by the hyper-heuristic genetic algorithm. Generalist hyper-heuristics are trained on multiple data sets from different domains and are expected to deliver robust and competitive performance over these data sets when compared to standard algorithms. We evaluate both approaches for each kind of hyper-heuristic presented in this thesis, using both real and synthetic data sets. Our results suggest that none of the hyper-heuristics presented in this work is suited to specialization: in most cases, the hyper-heuristic's performance on the data set it was specialized for was not significantly better than that of the best-performing standard algorithm. On the other hand, the generalist hyper-heuristics delivered results that were very competitive with the best standard methods, and in some cases achieved a significantly better overall performance than all of the standard methods.
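    To make the adaptive idea concrete, the sketch below maps simple features of the data set at a node to one of several low-level splitting heuristics via a hand-written rule set; the features, thresholds and rules are hypothetical stand-ins for the evolved rule sets described in the thesis.

```python
# Hypothetical rule set: problem-state features -> low-level splitting heuristic.

def dataset_features(X, y):
    """Compute a few simple descriptors of the data set reaching a tree node."""
    n_rows, n_cols = len(X), len(X[0]) if X else 0
    class_counts = {}
    for label in y:
        class_counts[label] = class_counts.get(label, 0) + 1
    balance = min(class_counts.values()) / max(class_counts.values())
    return {"rows": n_rows, "cols": n_cols, "balance": balance}

def choose_split_heuristic(features):
    """Hand-written illustrative rules; the thesis evolves such rules with a GA."""
    if features["balance"] < 0.2:
        return "gain_ratio"           # prefer ratio-based criteria on skewed classes
    if features["rows"] < 100:
        return "gini"
    return "information_gain"
```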

    Module reallocation problem in the context of multi-campus university course timetabling

    Ph.D. (Doctor of Philosophy)

    Novel Evolutionary-based Methods for the Robust Training of SVR and GMDH Regressors

    In recent years, a series of different methods and algorithms for machine learning and system optimization problems has become established, giving rise to an entire line of research known as Soft Computing. The term Soft Computing refers to a collection of computational techniques that attempt to study, model and analyse very complex phenomena for which conventional methods do not provide complete solutions, or do not provide them in a reasonable time. Soft Computing covers a large number of techniques such as neural networks, Support Vector Machines (SVM), Bayesian networks, and evolutionary computation (genetic algorithms, evolutionary algorithms, etc.). The research in this thesis focuses on two of these techniques: firstly, support vector regression (SVR) machines and, secondly, GMDH (Group Method of Data Handling) networks. SVMs are a technique devised by Vapnik, based on the structural risk minimization principle and the theory of kernel methods, which builds a decision rule from a data set and uses it to predict new values of the process from new inputs. The efficiency of SVM systems has led to very significant development in recent years, and they have been used in a large number of applications, both for classification and for regression problems (SVR). One of the main problems is the search for what are known as hyper-parameters. These parameters cannot be calculated exactly, so a large number of combinations must be tested in order to obtain parameters that produce a good estimation function. Because of this, the training time is usually high, and the parameters found do not always produce a good solution, either because the search algorithm performs poorly or because the generated model is over-trained. In this thesis, a new evolutionary algorithm has been developed for training with a multi-parametric kernel. This new algorithm takes into account a different parameter for each dimension of the input space. In this case, due to the increased number of parameters, a classical grid search cannot be used because of the computational cost it would entail. Therefore, this thesis proposes the use of an evolutionary algorithm to obtain the optimal values of the SVR parameters, together with the application of new bounds for the parameters of this multi-parametric kernel. In addition, new validation methods have been developed to improve the performance of regression techniques on data-driven problems. The idea is to obtain better models in the training phase of the algorithm, so that performance on the test set improves, mainly with regard to training time and the overall performance of the system, compared with other classical validation methods such as K-Fold cross-validation. The other research focus of this thesis is the GMDH technique, devised in the 1970s by Ivakhnenko. It is a particularly useful method for problems that require short training times. It is a self-organizing algorithm in which the model is generated adaptively from the data, growing in complexity over time and adjusting to the problem at hand until it reaches an optimal degree of complexity, that is, one that is neither too simple nor too complex. In this way, the algorithm builds the model from the available data rather than from a preconceived idea of the researcher, as happens in most Soft Computing techniques. GMDH networks also have some drawbacks, such as errors due to over-training and multicollinearity, which means that in some cases the error is high compared with other techniques. This thesis proposes a new algorithm for constructing these networks based on a hyper-heuristic algorithm. This approach is a new concept related to evolutionary computation, which encodes several heuristics that can be applied sequentially to solve an optimization problem. In our particular case, several basic heuristics are encoded within an evolutionary algorithm to create a hyper-heuristic solution that allows robust GMDH networks to be built for regression problems. All the proposals and methods developed in this thesis have been evaluated experimentally on benchmark problems, as well as on real regression applications.
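    A compact sketch of the evolutionary tuning idea for a multi-parametric RBF kernel: per-dimension kernel widths are emulated by rescaling each input dimension and evolving the scale vector with a simple (1+1) strategy. The use of scikit-learn's SVR, the cross-validation scheme and the mutation scheme are assumptions for illustration, not the thesis's algorithm.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# (1+1)-style evolutionary search over per-dimension scale factors; scaling each
# input column with gamma fixed to 1 is equivalent to a multi-parametric RBF kernel.

def fitness(scales, X, y, C=10.0, epsilon=0.1):
    """Cross-validated negative MSE of an SVR trained on rescaled inputs (X is an ndarray)."""
    model = SVR(kernel="rbf", C=C, epsilon=epsilon, gamma=1.0)
    return cross_val_score(model, X * scales, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

def evolve_scales(X, y, generations=50, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    scales = np.ones(X.shape[1])                 # one width parameter per input dimension
    best = fitness(scales, X, y)
    for _ in range(generations):
        candidate = scales * np.exp(sigma * rng.standard_normal(X.shape[1]))  # log-normal mutation
        score = fitness(candidate, X, y)
        if score >= best:                        # keep the better scale vector
            scales, best = candidate, score
    return scales, best
```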

    An investigation of novel approaches for optimising retail shelf space allocation

    This thesis is concerned with real-world shelf space allocation problems that arise from the conflict between limited shelf space availability and the large number of products that need to be displayed. Several important issues in the shelf space allocation problem are identified, and two mathematical models are developed and studied. The first model deals with a general shelf space allocation problem, while the second specifically concerns shelf space allocation for fresh produce. Both models are closely related to the knapsack and bin packing problems. The thesis first studies a recently proposed generic search technique, hyper-heuristics, and introduces a simulated annealing acceptance criterion in order to improve its performance. The proposed algorithm, called the simulated annealing hyper-heuristic, is initially tested on the one-dimensional bin packing problem, with very promising and competitive results. The algorithm is then applied to the general shelf space allocation problem. The computational results show that the proposed algorithm is superior to a general simulated annealing algorithm and to other types of hyper-heuristics. For the test data sets used in the thesis, the new approach solves every instance to over 98% of the upper bound, which was obtained via a two-stage relaxation method. The thesis also studies and formulates a deterministic shelf space allocation and inventory model specifically for fresh produce. The model, for the first time, considers the freshness condition as an important factor influencing a product's demand. Further analysis shows that the search space of the problem can be reduced by decomposing it into a nonlinear knapsack problem and a single-item inventory problem that can be solved optimally by a binary search. Several heuristic and meta-heuristic approaches are utilised to optimise the model, including four efficient gradient-based constructive heuristics, a multi-start generalised reduced gradient (GRG) algorithm, simulated annealing, a greedy randomised adaptive search procedure (GRASP) and three different types of hyper-heuristics. Experimental results show that the gradient-based constructive heuristics are very efficient and that the meta-heuristics can only marginally improve on them. Among these meta-heuristics, the two simulated annealing based hyper-heuristics perform slightly better than the other meta-heuristic methods. Across all test instances of the three problems, it is shown that the introduction of simulated annealing into the current hyper-heuristics can indeed improve the performance of the algorithms. Moreover, the simulated annealing hyper-heuristic with random heuristic selection generally performs best among all the meta-heuristics implemented in this thesis. This research is funded by the Engineering and Physical Sciences Research Council (EPSRC), grant reference GR/R60577. Our industrial collaborators include Tesco Retail Vision and SpaceIT Solutions Ltd.
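    A minimal sketch of a hyper-heuristic with a simulated annealing acceptance criterion and random low-level heuristic selection, the variant reported above as performing best overall; the geometric cooling schedule and parameter values are illustrative assumptions.

```python
import math
import random

# Simulated annealing acceptance inside a hyper-heuristic loop with random
# low-level heuristic selection (minimisation).

def sa_hyper_heuristic(initial, cost, low_level_heuristics,
                       t_start=100.0, t_end=0.1, iterations=10000):
    current, best = initial, initial
    alpha = (t_end / t_start) ** (1.0 / iterations)        # geometric cooling (assumed)
    temperature = t_start
    for _ in range(iterations):
        heuristic = random.choice(low_level_heuristics)    # random heuristic selection
        candidate = heuristic(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate                            # SA acceptance criterion
            if cost(current) < cost(best):
                best = current
        temperature *= alpha
    return best
```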

    Heuristic algorithms for static and dynamic frequency assignment problems

    This thesis considers the frequency assignment problem (FAP), a real-world problem of assigning frequencies to wireless communication connections (also known as requests) while satisfying a set of constraints in order to prevent a loss of signal quality. This problem has many different applications, such as mobile phones, TV broadcasting, radio and military operations. In this thesis, two variants of the FAP are considered, namely the static and the dynamic FAP. The static FAP does not change over time, while the dynamic FAP changes over time as new requests gradually become known and frequencies need to be assigned to those requests effectively and promptly. The dynamic FAP has so far received little attention in the literature compared with the static FAP. This thesis consists of two parts. The first part discusses and develops three heuristic algorithms, namely tabu search (TS), ant colony optimization (ACO) and a hyper-heuristic (HH), to solve the static FAP. These heuristic algorithms are chosen to represent different characteristics of heuristic algorithms in order to identify an appropriate solution method for this problem. Several novel and existing techniques are used to improve the performance of these heuristic algorithms. For TS, one of the novel techniques determines a lower bound on the number of frequencies required from each domain for a feasible solution to exist, based on the underlying graph colouring model. These lower bounds are used to ensure that we never waste time trying to find a feasible solution with a set of frequencies that does not satisfy the lower bounds, since no feasible solution exists in that search area. Another novel technique hybridises TS with multiple neighbourhood structures, one of which is used as a diversification technique. For ACO, the concept of a well-known graph colouring algorithm, namely recursive largest first, is used. Moreover, some of the key factors in producing a high-quality ACO implementation are examined, such as different definitions of visibility and trail, and the optimization of numerous parameters. For HH, simple and advanced low-level heuristics, each with an associated independent tabu list, are applied in this study. The lower bound on the number of frequencies required from each domain for a feasible solution to exist is also used. Based on the experimental results, it is found that the best-performing heuristic algorithm is TS, with HH also being competitive, whereas ACO achieves poor performance. Additionally, TS shows competitive performance compared with other algorithms in the literature. In the second part of this thesis, various approaches are designed to solve the dynamic FAP. The best heuristic algorithms considered in the first part are used to construct these approaches. It is interesting to investigate whether heuristic algorithms that work well on the static FAP also prove efficient on the dynamic FAP. Additionally, several techniques are applied to improve the performance of these approaches. One of these, called the Gap technique, is novel; it aims to identify a good frequency to assign to a given request. Based on the experimental results, it is found that the best approach for the dynamic FAP shows competitive results compared with other approaches in the literature. Finally, this thesis proposes a novel approach to solve the static FAP by modelling it as a dynamic FAP: the problem is divided into smaller sub-problems, which are then solved in turn in a dynamic process. The lower bound on the number of frequencies required from each domain for a feasible solution to exist, based on the underlying graph colouring model, and the Gap technique are also used. The proposed approach is able to improve on the results found by the heuristic algorithms in the first part of this thesis (which solve the static FAP as a whole). Moreover, it shows competitive results compared with other algorithms in the literature.
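    A bare-bones tabu search sketch for minimum-violation frequency assignment, assuming a violation-count objective; the tenure, neighbourhood and data structures are illustrative, and the domain-based lower bounds and the Gap technique described above are omitted.

```python
# Tabu search over frequency assignments (minimising separation-constraint violations).
# assignment: dict request -> frequency; domains: dict request -> list of allowed frequencies.

def violations(assignment, constraints):
    """constraints: list of (request_a, request_b, min_separation)."""
    return sum(1 for a, b, sep in constraints
               if abs(assignment[a] - assignment[b]) < sep)

def tabu_search(assignment, domains, constraints, tenure=10, iterations=5000):
    best = dict(assignment)
    tabu = {}                                              # (request, frequency) -> expiry iteration
    for it in range(iterations):
        moves = []
        for req, freqs in domains.items():
            for f in freqs:
                if f != assignment[req] and tabu.get((req, f), 0) <= it:
                    cand = dict(assignment)
                    cand[req] = f
                    moves.append((violations(cand, constraints), req, f))
        if not moves:
            break
        cost, req, f = min(moves)                          # best admissible move
        tabu[(req, assignment[req])] = it + tenure         # forbid reversing the move
        assignment[req] = f
        if cost < violations(best, constraints):
            best = dict(assignment)
    return best
```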