
    A guided search non-dominated sorting genetic algorithm for the multi-objective university course timetabling problem

    Copyright © Springer-Verlag Berlin Heidelberg 2011. The university course timetabling problem is a typical combinatorial optimization problem. This paper tackles the multi-objective university course timetabling problem (MOUCTP) and proposes a guided search non-dominated sorting genetic algorithm to solve it. The proposed algorithm integrates a guided search technique, which uses a memory to store useful information extracted from previous good solutions to guide the generation of new solutions, with two local search schemes to enhance its performance on the MOUCTP. Experimental results on a set of test problems show that the proposed algorithm is efficient for solving the MOUCTP.
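
    A central ingredient of any non-dominated sorting genetic algorithm is the partition of a population into Pareto fronts. The following is a minimal, illustrative sketch of that sorting step for objective vectors to be minimised; it is not the authors' implementation, and the function names are assumptions.

        # Minimal sketch of non-dominated sorting over a list of objective
        # vectors (to be minimised). Illustrative only, not the paper's code.

        def dominates(a, b):
            """True if vector a Pareto-dominates b (all <=, at least one <)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def non_dominated_sort(objectives):
            """Partition solutions (given as objective vectors) into Pareto fronts."""
            remaining = list(range(len(objectives)))
            fronts = []
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(objectives[j], objectives[i])
                                    for j in remaining if j != i)]
                fronts.append(front)
                remaining = [i for i in remaining if i not in front]
            return fronts

        # Example: two objectives, e.g. two kinds of soft-constraint violation.
        print(non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))
        # -> [[0, 1, 2], [3]]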

    Improved Squeaky Wheel Optimisation for Driver Scheduling

    This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's effectiveness and execution speed by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of the current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically using the fitness measure, and the Mutation step follows to discard a small number of further components at random. After these steps the input solution becomes partial and must be repaired: the Prioritization step produces priorities that determine the order in which the subsequent Construction step schedules the remaining components. Optimisation in the ISWO is therefore achieved by solution disruption, iterative improvement and an iterative constructive repair process. Encouraging experimental results are reported.
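
    The cycle described above can be summarised as a small skeleton. The sketch below assumes the problem-specific pieces (a component fitness normalised to [0, 1], a priority function, a constructive repair routine and a cost function) are supplied by the caller; the names and parameters are illustrative placeholders, not the paper's code.

        # Skeleton of an ISWO-style cycle. The problem-specific functions
        # (fitness, priority, construct, cost) are hypothetical placeholders.
        import random

        def iswo(components, fitness, priority, construct, cost,
                 iterations=200, mutation_rate=0.05, seed=0):
            rng = random.Random(seed)
            current = construct([], sorted(components, key=priority, reverse=True))
            best = current
            for _ in range(iterations):
                # Analysis: score each component of the current solution in [0, 1].
                scores = {c: fitness(current, c) for c in current}
                # Selection: keep components with probability given by their score.
                kept = [c for c in current if rng.random() < scores[c]]
                # Mutation: discard a few further components at random.
                kept = [c for c in kept if rng.random() > mutation_rate]
                # Prioritization: order the discarded components for reinsertion.
                removed = sorted((c for c in current if c not in kept),
                                 key=priority, reverse=True)
                # Construction: repair the partial solution back to a full one.
                current = construct(kept, removed)
                if cost(current) < cost(best):
                    best = current
            return best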

    A Classification of Hyper-heuristic Approaches

    The current state of the art in hyper-heuristic research comprises a set of approaches that share the common goal of automating the design and adaptation of heuristic methods to solve hard computational search problems. The main goal is to produce more generally applicable search methodologies. In this chapter we present an overview of previous categorisations of hyper-heuristics and provide a unified classification and definition which captures the work being undertaken in this field. We distinguish between two main hyper-heuristic categories: heuristic selection and heuristic generation. Some representative examples of each category are discussed in detail. Our goal is both to clarify the main features of existing techniques and to suggest new directions for hyper-heuristic research.
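
    As a rough illustration of the first category, a heuristic-selection hyper-heuristic can be reduced to a loop that picks a low-level heuristic, applies it and decides whether to accept the move. The sketch below uses random selection and an accept-if-not-worse rule purely as placeholder choices; it is not a specific method from the chapter.

        # Minimal sketch of a heuristic-selection hyper-heuristic. The selection
        # strategy, acceptance rule and function names are illustrative assumptions.
        import random

        def selection_hyper_heuristic(solution, low_level_heuristics, cost,
                                      iterations=1000, seed=0):
            rng = random.Random(seed)
            best = solution
            for _ in range(iterations):
                h = rng.choice(low_level_heuristics)   # heuristic selection
                candidate = h(solution)                # apply the chosen low-level move
                if cost(candidate) <= cost(solution):  # move acceptance
                    solution = candidate
                    if cost(solution) < cost(best):
                        best = solution
            return best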

    Discovering beneficial cooperative structures for the automatic construction of heuristics

    Current research on hyper-heuristic design has sprung up in two different flavours: heuristics that choose heuristics and heuristics that generate heuristics. In the latter, the goal is to develop a problem-domain-independent strategy to automatically generate a well-performing heuristic for specific problems; that is, the input to the algorithm is a set of problems and the output is problem-tailored heuristics. This can be done, for example, by automatically selecting and combining different low-level heuristics into a problem-specific and effective strategy. Thus, hyper-heuristics raise the level of generality of automated problem solving by attempting to select and/or generate tailored heuristics for the problem at hand. Approaches such as genetic programming have been proposed for this. In this paper, we report on an alternative methodology that sheds light on simple methods that cooperate efficiently by means of local interactions. These entities are seen as building blocks, the combination of which is employed for the automated manufacture of well-performing heuristic search strategies. We present proof-of-concept results of applying this methodology to instances of the well-known symmetric TSP. The goal here is to demonstrate feasibility rather than compete with state-of-the-art TSP solvers; the TSP is chosen only because it is an easy-to-state and well-known problem.
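
    The following toy example illustrates the general idea of composing simple low-level rules into a construction heuristic for the symmetric TSP: a "generated" heuristic is just an ordered combination of next-city rules. The rules and city data are invented for illustration and do not reproduce the paper's methodology.

        # Toy composition of low-level building blocks into a TSP construction
        # heuristic. Made-up example of the general idea, not the paper's method.
        import math, random

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        # Two low-level "pick the next city" rules.
        def nearest(current, unvisited, cities):
            return min(unvisited, key=lambda c: dist(cities[current], cities[c]))

        def random_pick(current, unvisited, cities, rng=random.Random(0)):
            return rng.choice(sorted(unvisited))   # ignores distances on purpose

        def construct_tour(cities, rules):
            """Build a tour by cycling through the combined low-level rules."""
            unvisited = set(range(1, len(cities)))
            tour = [0]
            i = 0
            while unvisited:
                nxt = rules[i % len(rules)](tour[-1], unvisited, cities)
                tour.append(nxt)
                unvisited.remove(nxt)
                i += 1
            return tour

        cities = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 2)]
        print(construct_tour(cities, [nearest]))               # pure greedy heuristic
        print(construct_tour(cities, [nearest, random_pick]))  # a combined heuristic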

    Exact/heuristic hybrids using rVNS and hyperheuristics for workforce scheduling

    In this paper we study a complex real-world workforce scheduling problem. We propose a method of splitting the problem into smaller parts and solving each part using exhaustive search. These smaller parts comprise a combination of a method to select a task to be scheduled and a method to allocate resources, including time, to the selected task. We use reduced Variable Neighbourhood Search (rVNS) and hyperheuristic approaches to decide which sub-problems to tackle. The resulting methods are compared to local search and Genetic Algorithm approaches. Parallelisation is used to perform nearly one CPU-year of experiments. The results show that the new methods can produce fitter results than the Genetic Algorithm in less time and that they are far superior to any of their component techniques. The method used to split up the problem is generalisable and could be applied to a wide range of optimisation problems.
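
    The decomposition can be pictured as follows: each small sub-problem is solved by exhaustively trying every pairing of a task-selection rule with a resource-allocation rule and keeping the cheapest feasible result. The sketch below is an assumed, simplified rendering of that step; the rule and cost functions are placeholders supplied by the caller, not the paper's code.

        # Exhaustive search over (task selector, resource allocator) pairs for one
        # sub-problem. All functions are hypothetical placeholders.
        from itertools import product

        def solve_subproblem(schedule, tasks, resources,
                             task_selectors, allocators, cost):
            """Try every selector/allocator pair and keep the best resulting schedule."""
            best_schedule, best_cost = schedule, cost(schedule)
            for select, allocate in product(task_selectors, allocators):
                task = select(tasks, schedule)
                # Assumption: allocate returns None when no feasible assignment exists.
                candidate = allocate(schedule, task, resources)
                if candidate is not None and cost(candidate) < best_cost:
                    best_schedule, best_cost = candidate, cost(candidate)
            return best_schedule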

    Adult lifetime cost of hemophilia B management in the US: payer and societal perspectives from a decision analytic model

    Aims: Hemophilia B (HB) is a rare congenital disorder characterized by bleeding-related complications which are managed by prophylactic or post-bleeding event ("on-demand") replacement of clotting factor IX (FIX). The standard of care for severe HB is life-long prophylaxis with standard half-life (SHL) or extended half-life (EHL) products given every 2–3 or 7–14 days, respectively. FIX treatment costs in the US have been investigated, but the lifetime costs of HB treatment have not been well characterized, particularly the impact of joint health deterioration and associated health resource utilization. We developed a decision-analytic model to explore outcomes, costs and underlying cost drivers associated with FIX treatment options over the lifetime of an adult with severe or moderately severe HB. Materials and methods: With participation from clinicians, health technology assessment specialists and patient advocates, a Markov model was constructed to estimate bleeding events and costs associated with the health states "bleed into joint", "bleed not into joint", "no bleed" and "death". Sub-models of joint health were based on 0, 1, or ≥2 areas of chronic joint damage. US third-party payer and societal perspectives were considered with a lifetime horizon; sensitivity analyses tested the robustness of primary findings. Results: Total adult lifetime costs per patient with severe or moderately severe HB were $21,086,607 for SHL FIX prophylaxis, $22,987,483 for EHL FIX prophylaxis, and $20,971,826 for on-demand FIX treatment. For FIX prophylaxis, the cost of FIX treatment accounts for more than 90% of total HB treatment costs. Conclusions: This decision-analytic model demonstrated the significant economic burden associated with the current HB treatment paradigm.
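
    A Markov cohort model of this kind can be sketched in a few lines: a probability distribution over the health states is advanced cycle by cycle, and discounted per-cycle costs are accumulated over the lifetime horizon. All transition probabilities, costs and the discount rate below are made-up placeholders, not the published model's inputs or results.

        # Minimal Markov cohort sketch: four health states, hypothetical annual
        # transition probabilities and costs, discounted over a lifetime horizon.

        states = ["bleed into joint", "bleed not into joint", "no bleed", "death"]

        # Hypothetical annual transition probabilities (each row sums to 1).
        P = [
            [0.10, 0.15, 0.73, 0.02],   # from "bleed into joint"
            [0.08, 0.12, 0.78, 0.02],   # from "bleed not into joint"
            [0.05, 0.10, 0.83, 0.02],   # from "no bleed"
            [0.00, 0.00, 0.00, 1.00],   # "death" is absorbing
        ]
        # Hypothetical per-cycle (annual) costs per state, e.g. FIX plus joint care.
        state_cost = [600_000.0, 450_000.0, 400_000.0, 0.0]

        def lifetime_cost(start_state=2, cycles=60, discount=0.03):
            """Expected discounted cost for one patient starting in 'no bleed'."""
            dist = [0.0] * len(states)
            dist[start_state] = 1.0
            total = 0.0
            for t in range(cycles):
                total += sum(p * c for p, c in zip(dist, state_cost)) / (1 + discount) ** t
                dist = [sum(dist[i] * P[i][j] for i in range(len(states)))
                        for j in range(len(states))]
            return total

        print(f"Expected lifetime cost (toy inputs): ${lifetime_cost():,.0f}")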

    Investigating a Hybrid Metaheuristic For Job Shop Rescheduling

    Previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. The main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. The building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (Phase I). Each partial schedule (antibody) is assigned a fitness value, and the best partial schedules are selected to be converted into complete schedules (antigens). We further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (Phase II). We use ten fixed solutions as our targets and measure how well we cover these specific scenarios.
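
    For reference, the great deluge algorithm mentioned in Phase II accepts any neighbouring solution whose cost falls below a "water level" that is lowered over time. The sketch below is a generic, assumed rendering of that acceptance rule with placeholder neighbourhood and cost functions; it is not the paper's hybridisation.

        # Generic great deluge acceptance loop (minimisation). Neighbourhood and
        # cost functions are placeholders; parameters are illustrative only.
        import random

        def great_deluge(solution, neighbour, cost, iterations=10_000,
                         target=0.0, seed=0):
            rng = random.Random(seed)
            best = solution
            level = cost(solution)                   # initial "water level"
            decay = (level - target) / iterations    # linear drop per iteration
            for _ in range(iterations):
                candidate = neighbour(solution, rng)
                if cost(candidate) <= level:         # accept anything below the level
                    solution = candidate
                    if cost(solution) < cost(best):
                        best = solution
                level -= decay                       # lower the water level
            return best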