
    Solving Challenging Real-World Scheduling Problems

    This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm, where customer satisfaction refers to the fact that implementations of the algorithm are in industrial use. The PEAST algorithm is a product of long-term research and development; its first version was introduced in 1998, and this thesis is the result of a five-year development of the algorithm. One of its most valuable characteristics has proven to be the ability to solve a wide range of scheduling problems, and it is likely that it can also be tuned to tackle a range of other combinatorial problems. The algorithm combines features from numerous metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.

    Staffing problems and symmetric integer programs

    Issued as Final project report, Project no. E-24-63

    A proxy for reliable 5G (and beyond) mmWave communications. Contributions to multi-path scheduling for a reliability-focused mmWave proxy

    Reliable, consistent and very high data rate mobile communication will become especially important for future services such as emergency communication. MmWave technology provides the needed capacity; however, it lacks reliability because of the abrupt capacity changes that any single path experiences. Intelligently making use of a varying number of available mmWave paths, efficiently scheduling data across the paths (perhaps even through multi-operator agreements), and balancing mobile power consumption against path costs and the need for reliable, consistent quality will be critical to attaining this aim. This thesis considers the multipath scheduling problem in a mmWave proxy when the paths have dynamically changing characteristics. To address this problem, a hybrid scheduler is proposed, and its performance is compared with the Round Robin, Random and Highest Capacity First schedulers. Forward error correction is explored as a means of enhancing the scheduling. Keywords: Multipath Scheduling, mmWave Proxy, Forward Error Correction, beyond 5G
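    As a purely illustrative sketch (not the hybrid scheduler proposed in the thesis), the Python snippet below contrasts a Round Robin policy with a Highest Capacity First policy for spreading packets over paths whose capacity estimates fluctuate; the path names, capacities and packet sizes are hypothetical.

```python
from itertools import cycle

# Hypothetical per-path capacity estimates (e.g., Mbps), refreshed at run time.
paths = {"op_a_beam1": 420.0, "op_a_beam2": 35.0, "op_b_beam1": 180.0}

def round_robin(packets, paths):
    """Assign packets to paths in fixed rotation, ignoring capacity."""
    assignment = {}
    rotation = cycle(paths)
    for pkt in packets:
        assignment[pkt] = next(rotation)
    return assignment

def highest_capacity_first(packets, paths):
    """Send each packet on the path with the largest remaining capacity estimate."""
    remaining = dict(paths)
    assignment = {}
    for pkt, size in packets.items():
        best = max(remaining, key=remaining.get)
        assignment[pkt] = best
        remaining[best] = max(remaining[best] - size, 0.0)
    return assignment

if __name__ == "__main__":
    # Packet id -> nominal load it places on a path (arbitrary units).
    packets = {f"pkt{i}": 10.0 for i in range(6)}
    print(round_robin(packets, paths))
    print(highest_capacity_first(packets, paths))
```

    A hybrid scheduler of the kind studied in the thesis would sit between these two extremes, weighing capacity estimates against reliability and power considerations.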

    An assessment of a days off decomposition approach to personnel scheduling

    This paper studies a two-phase decomposition approach to solving the personnel scheduling problem. The first phase creates a days off schedule, indicating working days and days off for each employee. The second phase assigns shifts to the working days in the days off schedule. This decomposition is motivated by the fact that personnel scheduling constraints are often divided into two categories: one specifies constraints on working days and days off, while the other specifies constraints on shift assignments. To assess the consequences of the decomposition approach, we apply it to public benchmark instances and compare it to solving the personnel scheduling problem directly. In all steps we use mathematical programming. We also study an extension that includes night shifts in the first phase of the decomposition. We present a detailed analysis of the results and analyze the effect of various instance parameters on the decompositions' results. In general, we observe that the decompositions significantly reduce the computation time and that they produce good solutions for most instances.
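    A minimal, purely illustrative Python sketch of the two-phase idea (not the paper's mathematical-programming formulation): phase one greedily fixes days off under a hypothetical limit on consecutive working days, and phase two greedily assigns shifts to the remaining working days against a hypothetical coverage requirement.

```python
# Illustrative two-phase decomposition: days-off schedule first, shift assignment second.
# All parameters (employees, horizon, coverage, max consecutive working days) are hypothetical.

EMPLOYEES = ["e1", "e2", "e3", "e4"]
DAYS = list(range(7))                                  # one-week horizon
SHIFTS = ["early", "late"]
COVERAGE = {(d, s): 1 for d in DAYS for s in SHIFTS}   # required staff per day and shift
MAX_CONSECUTIVE = 5                                    # max consecutive working days

def phase1_days_off():
    """Phase 1: decide working days and days off only (no shifts yet)."""
    working = {e: [] for e in EMPLOYEES}
    for e in EMPLOYEES:
        streak = 0
        for d in DAYS:
            if streak < MAX_CONSECUTIVE:
                working[e].append(d)
                streak += 1
            else:
                streak = 0          # forced day off resets the streak
    return working

def phase2_assign_shifts(working):
    """Phase 2: assign shifts to the working days fixed in phase 1."""
    roster = {}
    for d in DAYS:
        available = [e for e in EMPLOYEES if d in working[e]]
        for s in SHIFTS:
            needed = COVERAGE[(d, s)]
            for e in available[:needed]:
                roster[(e, d)] = s
            available = available[needed:]
    return roster

working = phase1_days_off()
print(phase2_assign_shifts(working))
```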

    Scheduling theory since 1981: an annotated bibliography


    An algorithmic approach to shift structure optimization

    Workforce scheduling in organizations often consists of three major phases: workload prediction, shift generation, and staff rostering. Workload prediction uses the historical behaviour of, e.g., customers to predict future demand for work. Shift generation is the process of transforming the determined workload into shifts as accurately as possible. In staff rostering, the generated shifts are assigned to employees. In general, the problem and even its subproblems are NP-hard, which makes them highly challenging for organizations to solve. Heuristic optimization methods can be used to solve practical instances within reasonable running times, which in turn can result in, e.g., improved revenue, improved service, or more satisfied employees. This thesis presents some specific subproblems along with practical solution methods. --- The workforce scheduling process consists of three main phases: workload prediction, shift generation, and staff rostering. Future workload is predicted mainly from past customer behaviour, using for example statistical models or machine-learning methods. Shift generation produces a shift structure that follows the predicted and otherwise known workload as accurately as possible. Staff rostering assigns employees to the shifts. Each phase is challenging to solve in its own right; staff rostering in particular is usually an NP-hard problem. It is nevertheless possible to produce practical solutions in reasonable time using heuristic optimization methods, which can yield measurable benefits in, for example, revenue, customer service levels, and employee satisfaction. This thesis presents selected subproblems of workforce scheduling together with suitable solution methods.
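    To illustrate the shift generation phase only (this is not the thesis's method), the sketch below greedily covers a hypothetical hourly demand curve with fixed-length shifts; the demand values and shift length are invented for the example.

```python
# Illustrative greedy shift generation: cover an hourly demand curve with fixed-length
# shifts. Demand values and shift length are hypothetical.

DEMAND = [1, 1, 2, 3, 4, 4, 3, 3, 2, 2, 1, 1]   # staff required per hour, 12-hour day
SHIFT_LENGTH = 6                                  # each shift covers 6 consecutive hours

def generate_shifts(demand, shift_length):
    """Repeatedly add the shift start time that covers the most remaining demand."""
    remaining = list(demand)
    shifts = []                                   # list of shift start hours
    while any(r > 0 for r in remaining):
        starts = range(len(demand) - shift_length + 1)
        best = max(starts,
                   key=lambda s: sum(min(remaining[h], 1)
                                     for h in range(s, s + shift_length)))
        shifts.append(best)
        for h in range(best, best + shift_length):
            remaining[h] = max(remaining[h] - 1, 0)
    return shifts

shifts = generate_shifts(DEMAND, SHIFT_LENGTH)
print(f"{len(shifts)} shifts, start hours: {sorted(shifts)}")
```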

    A novel population-based local search for nurse rostering problem

    Population-based approaches are usually better than single-solution (local search) approaches at exploring the search space, but their drawback lies in exploiting it. Several hybrid approaches have proven their efficiency across different domains of optimization problems by integrating the strengths of population-based and local search approaches. However, hybrid methods have the drawback of increased parameter tuning. Recently, a population-based local search (PB-LS) was proposed for the university course-timetabling problem; it requires fewer parameters than existing approaches and has proven effective. The approach employs two operators to intensify and diversify the search: the first operator is applied to a single solution, while the second is applied to all solutions. This paper investigates the performance of population-based local search on the nurse rostering problem. The INRC2010 benchmark, comprising 69 instances, is used to test the performance of PB-LS. A comparison was made between the performance of PB-LS and other existing approaches in the literature. The results show good performance of the proposed approach compared to the others, with population-based local search providing the best results on 55 of the 69 instances used in the experiments.
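    The following Python sketch only illustrates the general two-operator structure described above (an intensification step applied to a single solution plus a diversification step applied to the whole population) on a toy objective; it is a schematic outline, not the authors' PB-LS algorithm, and all parameters are placeholders.

```python
import random

# Schematic population-based local search on a toy objective (minimize sum of squares).
# The objective, neighbourhood move and parameters are illustrative placeholders only.

def cost(sol):
    return sum(x * x for x in sol)

def local_move(sol):
    """Small perturbation of one randomly chosen component."""
    i = random.randrange(len(sol))
    neighbour = list(sol)
    neighbour[i] += random.uniform(-0.5, 0.5)
    return neighbour

def pb_ls(pop_size=10, dim=5, iterations=200):
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iterations):
        # Operator 1: intensify around the current best solution only.
        best = min(population, key=cost)
        candidate = local_move(best)
        if cost(candidate) < cost(best):
            population[population.index(best)] = candidate
        # Operator 2: diversify by perturbing solutions across the whole population.
        population = [local_move(s) if random.random() < 0.2 else s for s in population]
    return min(population, key=cost)

print(cost(pb_ls()))
```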

    An accurate analysis for guaranteed performance of multiprocessor streaming applications

    Already for more than a decade, consumer electronic devices have been available for entertainment, educational, or telecommunication tasks based on multimedia streaming applications, i.e., applications that process streams of audio and video samples in digital form. Multimedia capabilities are expected to become more and more commonplace in portable devices. This leads to challenges with respect to cost efficiency and quality. This thesis contributes models and analysis techniques for improving the cost efficiency, and therefore also the quality, of multimedia devices.

    Portable consumer electronic devices should offer flexible functionality on the one hand and low power consumption on the other. These two requirements conflict. We therefore focus on a class of hardware that represents a good trade-off between them, namely domain-specific multiprocessor systems-on-chip (MP-SoC). Our research contributes to dynamic (i.e., run-time) optimization of MP-SoC system metrics. The central question in this area is how to ensure that real-time constraints are satisfied while the metric of interest, such as perceived multimedia quality or power consumption, is optimized. In these cases we speak of quality-of-service (QoS) and power management, respectively.

    In this thesis, we pursue real-time constraint satisfaction that is guaranteed by the system by construction and proven mainly by analytical reasoning. That approach is often taken in real-time systems to ensure reliable performance. The performance analysis therefore has to be conservative, i.e., it has to use pessimistic assumptions about the unknown conditions that can negatively influence system performance. We adopt this hypothesis as the foundation of this work. The subject of this thesis is thus the analysis of guaranteed performance for multimedia applications running on multiprocessors. It is important to note that our conservative approach is essentially different from considering only the worst-case state of the system. Unlike the worst-case approach, our approach is dynamic, i.e., it makes use of run-time characteristics of the input data and the environment of the application.

    The main purpose of our performance analysis method is to guide run-time optimization. Typically, a resource or quality manager predicts the execution time, i.e., the time it takes the system to process a certain number of input data samples. When the execution times get smaller, due to the dependency of the execution time on the input data, the manager can switch the control parameter for the metric of interest such that the metric improves but the system gets slower; for power optimization, that means switching to a low-power mode. If execution times grow, the manager can set the parameters so that the system gets faster; for QoS management, for example, the application can be switched to a different quality mode with some degradation in perceived quality. The real-time constraints are then never violated and the metrics of interest are kept as good as possible. Unfortunately, maintaining system metrics such as power and quality at an optimal level conflicts with our main requirement of providing performance guarantees, because the guarantees come at the cost of some quality or power consumption. Therefore, the performance analysis approach developed in this thesis is not only conservative but also accurate, so that the optimization of the metric of interest does not suffer too much from conservatism.
    This is not trivial to realize when two factors are combined: parallel execution on multiple processors and dynamic variation of the data-dependent execution delays. We achieve the goal of conservative and accurate performance estimation for an important class of multiprocessor platforms and multimedia applications. Our performance analysis technique is realizable in practice in QoS or power management setups.

    We consider a generic MP-SoC platform that runs a dynamic set of applications, each application possibly using multiple processors. We assume that the applications are independent, although it is possible to relax this requirement in the future. To support real-time constraints, we require that the platform can provide guaranteed computation, communication and memory budgets for applications. Following important trends in system-on-chip communication, we support both global buses and networks-on-chip. We represent every application as a homogeneous synchronous dataflow (HSDF) graph, where the application tasks are modeled as graph nodes, called actors. We allow dynamic data-dependent actor execution delays, which makes HSDF graphs very useful for expressing modern streaming applications. Our reason for considering HSDF graphs is that they provide a good foundation for analytical performance estimation. In this setup, this thesis provides three major contributions:

    1. Given an application mapped to an MP-SoC platform, given the performance guarantees for the individual computation units (the processors) and the communication unit (the network-on-chip), and given constant actor execution delays, we derive the throughput and the execution time of the system as a whole.
    2. Given a mapped application and platform performance guarantees as in the previous item, we extend our approach from constant actor execution delays to dynamic data-dependent actor delays.
    3. We propose a global implementation trajectory that starts from the application specification and goes through design-time and run-time phases. It uses an extension of the HSDF model of computation to reflect the design decisions made along the trajectory. We present our model and trajectory not only to put the first two contributions into the right context, but also to present our vision on the different parts of the trajectory, making a complete and consistent story.

    Our first contribution uses the idea of so-called IPC (inter-processor communication) graphs known from the literature, whereby a single model of computation (HSDF graphs) is used to model not only the computation units but also the communication unit (the global bus or the network-on-chip) and the FIFO (first-in-first-out) buffers that form the 'glue' between the computation and communication units. We were the first to propose HSDF graph structures for modeling bounded FIFO buffers and guaranteed-throughput network connections for network-on-chip communication in MP-SoCs. As a result, our HSDF models enable the formalization of the on-chip FIFO buffer capacity minimization problem under a throughput constraint as a graph-theoretic problem. Using HSDF graphs to formalize that problem helps to find the performance bottlenecks in a given solution and to improve that solution. To demonstrate this, we use a JPEG decoder application case study. We also show that, assuming constant actor delays (worst-case for the given JPEG image), we can predict execution times of JPEG decoding on two processors with an accuracy of 21%.
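    For an HSDF graph with constant actor delays, one standard way to obtain the guaranteed throughput is via the maximum cycle ratio: the total actor delay on a cycle divided by the number of initial tokens on it. The sketch below is an illustrative brute-force computation using networkx cycle enumeration on a tiny hypothetical graph; it is not the analysis machinery developed in the thesis.

```python
import networkx as nx

# Illustrative maximum-cycle-ratio computation for a tiny HSDF graph with constant
# actor delays; actor names, delays and token placement are hypothetical.

g = nx.DiGraph()
# Actor execution delays (time units).
delays = {"src": 2.0, "dec": 7.0, "out": 1.0}
g.add_nodes_from(delays)
# Edges carry the number of initial tokens; back edges model bounded FIFO capacity.
g.add_edge("src", "dec", tokens=0)
g.add_edge("dec", "out", tokens=0)
g.add_edge("out", "src", tokens=2)   # capacity-2 FIFO modeled as a back edge
g.add_edge("dec", "dec", tokens=1)   # self-edge: firings of an actor do not overlap

def max_cycle_ratio(graph, delays):
    """Brute force over simple cycles: ratio = total actor delay / total initial tokens."""
    worst = 0.0
    for cycle in nx.simple_cycles(graph):
        delay = sum(delays[a] for a in cycle)
        tokens = sum(graph[u][v]["tokens"]
                     for u, v in zip(cycle, cycle[1:] + cycle[:1]))
        if tokens > 0:               # token-free cycles would mean deadlock
            worst = max(worst, delay / tokens)
    return worst

mcr = max_cycle_ratio(g, delays)
print(f"max cycle ratio = {mcr} time units -> throughput = {1.0 / mcr} firings per time unit")
```

    The brute-force enumeration is only workable for toy graphs; for realistic HSDF graphs, polynomial maximum-cycle-ratio algorithms are normally used instead.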
    Our second contribution is based on an extension of the scenario approach. This approach rests on the observation that the dynamic behavior of an application is typically composed of a limited number of sub-behaviors, i.e., scenarios, that have similar resource requirements, i.e., similar actor execution delays in the context of this thesis. Previous work on scenarios treats only single-processor applications or multiprocessor applications that do not exploit all the flexibility of the HSDF model of computation. We develop new scenario-based techniques in the context of HSDF graphs to derive the timing overlap between different scenarios, which is very important for achieving good accuracy for general HSDF graphs executing on multiprocessors. We exploit this idea in an application case study, the MPEG-4 arbitrarily-shaped video decoder, and demonstrate execution time prediction with an average accuracy of 11%. To the best of our knowledge, for the given setup, no other existing performance analysis technique can provide comparable accuracy together with performance guarantees.
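    As a purely schematic illustration of the scenario idea (not the thesis's analysis, and ignoring the inter-scenario timing overlap that the thesis explicitly models), the sketch below keeps one conservative execution-time bound per scenario and predicts the time for a frame sequence by classifying each frame at run time; the scenario names, bounds and classification rule are hypothetical.

```python
# Schematic scenario-based execution-time prediction; scenario names, bounds and
# the frame classification rule are hypothetical placeholders.

# Conservative per-frame execution-time bound (ms) for each scenario.
SCENARIO_BOUND_MS = {"I_frame": 14.0, "P_small_motion": 6.0, "P_large_motion": 9.5}

def classify(frame):
    """Map run-time frame characteristics to a scenario (placeholder rule)."""
    if frame["type"] == "I":
        return "I_frame"
    return "P_large_motion" if frame["motion_vectors"] > 100 else "P_small_motion"

def predict_time_ms(frames):
    """Sum the conservative per-scenario bounds over the frame sequence."""
    return sum(SCENARIO_BOUND_MS[classify(f)] for f in frames)

frames = [{"type": "I", "motion_vectors": 0},
          {"type": "P", "motion_vectors": 42},
          {"type": "P", "motion_vectors": 250}]
budget_ms = 3 * 33.3          # e.g., a 30 fps deadline for three frames
print(predict_time_ms(frames), "<=", budget_ms, "->", predict_time_ms(frames) <= budget_ms)
```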