
    Evolutionary methods for the design of dispatching rules for complex and dynamic scheduling problems

    Three methods, based on Evolutionary Algorithms (EAs), to support and automate the design of dispatching rules for complex and dynamic scheduling problems are proposed in this thesis. The first method employs an EA to search for problem instances on which a given dispatching rule performs badly. These instances can then be analysed to reveal weaknesses of the tested rule, thereby providing guidelines for the design of a better rule. The other two methods are hyper-heuristics that employ an EA directly to generate effective dispatching rules. One hyper-heuristic is based on a specific type of EA, called Genetic Programming (GP), and generates a single rule from basic job and machine attributes; the other generates a set of work-centre-specific rules by selecting a (potentially) different rule for each work centre from a number of existing rules. Each of the three methods is applied to one or more complex and dynamic scheduling problems, and the resulting dispatching rules are tested against benchmark rules from the literature. In each case, the benchmark rules are outperformed by a rule (or rule set) produced by the respective method, which demonstrates the effectiveness of the proposed methods.
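
    A rough, illustrative sketch of the hyper-heuristic idea follows. It is not the thesis's implementation: it evolves only the weights of a linear priority rule over two job attributes with a (1+1)-EA on a static single-machine tardiness problem, whereas the thesis uses Genetic Programming to evolve full rule expressions for complex, dynamic shops. All names and the toy simulation are assumptions made for illustration.

    # Illustrative sketch only: a dispatching rule as a priority function over
    # job attributes, tuned by a simple evolutionary loop (a stand-in for the
    # GP hyper-heuristic described above). Setup and names are assumptions.
    import random

    random.seed(0)

    def simulate(rule, jobs):
        """Dispatch jobs on one machine using `rule` as a priority index.

        Each job is (processing_time, due_date). Returns mean tardiness."""
        t, tardiness, remaining = 0.0, 0.0, list(jobs)
        while remaining:
            # Pick the job with the lowest priority value (lower = more urgent).
            job = min(remaining, key=lambda j: rule(j, t))
            remaining.remove(job)
            t += job[0]
            tardiness += max(0.0, t - job[1])
        return tardiness / len(jobs)

    def make_rule(w):
        # A linear priority rule over basic job attributes; the thesis's GP
        # hyper-heuristic instead evolves full expression trees over such attributes.
        return lambda job, now: w[0] * job[0] + w[1] * (job[1] - now)

    jobs = [(random.uniform(1, 10), random.uniform(5, 60)) for _ in range(50)]

    # (1+1)-EA as a minimal stand-in for the evolutionary search over rules.
    best_w = [random.gauss(0, 1) for _ in range(2)]
    best_fit = simulate(make_rule(best_w), jobs)
    for _ in range(200):
        cand = [w + random.gauss(0, 0.3) for w in best_w]
        fit = simulate(make_rule(cand), jobs)
        if fit <= best_fit:
            best_w, best_fit = cand, fit

    print("weights:", [round(w, 2) for w in best_w], "mean tardiness:", round(best_fit, 2))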

    Bayesian optimisation with multi-task Gaussian processes

    Gaussian processes are simple, efficient regression models that allow a user to encode abstract prior beliefs, such as smoothness or periodicity, and that provide predictions with uncertainty estimates. Multi-task Gaussian processes extend these models to functions with multiple outputs, or to functions over joint continuous and categorical domains. Using a Gaussian process as a surrogate model of an expensive function to guide the search for its peak is the field of Bayesian optimisation. Within this field, the Knowledge Gradient (KG) is an effective family of methods based on a simple Value of Information derivation, yet there are many problems to which it has not been applied. We consider a variety of such problems and derive new algorithms within the same Value of Information framework, yielding significant improvements over many previous methods. We first propose the Regional Expected Value of Improvement (REVI) method for learning the best of a set of candidate solutions at each point of a domain over which the best solution varies; for example, the best of a set of treatments varies across the domain of patients. We next generalise this method to a range of continuous global optimisation problems, multi-task conditional global optimisation, in which querying one objective/task can inform the optimisation of the others. We then present a natural extension of KG to the optimisation of functions that are an average over tasks which the user aims to maximise. Finally, we cast simulation optimisation with common random numbers as the optimisation of an infinite sum of tasks, where each task is the objective evaluated with a single random number seed. We therefore propose the Knowledge Gradient for Common Random Numbers, which sequentially determines a seed and a solution in order to optimise the unobservable infinite average over seeds.
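
    A rough, illustrative sketch of the surrogate-guided search described above follows. It runs a basic Bayesian optimisation loop with a Gaussian process surrogate on a toy 1-D objective, using Expected Improvement as a simple stand-in acquisition function; the thesis works with the Knowledge Gradient family and multi-task models. The objective, grid, and hyperparameters are illustrative assumptions, not taken from the thesis.

    # Illustrative sketch only: Bayesian optimisation of a toy 1-D function with
    # a GP surrogate and an Expected Improvement acquisition (a stand-in for the
    # Knowledge Gradient methods discussed in the abstract).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def expensive_f(x):
        # Toy objective standing in for an expensive simulation.
        return -np.sin(3 * x) - x ** 2 + 0.7 * x

    X_grid = np.linspace(-1.0, 2.0, 400).reshape(-1, 1)
    X = rng.uniform(-1.0, 2.0, size=(3, 1))   # a few initial design points
    y = expensive_f(X).ravel()

    for _ in range(10):
        # Fit the GP surrogate to all observations so far.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)
        mu, sd = gp.predict(X_grid, return_std=True)

        # Expected Improvement over the best observation so far.
        best = y.max()
        z = (mu - best) / np.maximum(sd, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

        # Query the objective where the acquisition is highest.
        x_next = X_grid[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_f(x_next[0]))

    print("best x:", float(X[np.argmax(y)][0]), "best f(x):", float(y.max()))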