    Towards Optimality in Parallel Scheduling

    To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single-core performance. In turn, modern jobs are often designed to run on any number of cores. However, to effectively leverage these multi-core chips, one must address the question of how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces the job's runtime, but in turn decreases the efficiency of the overall system. We ask how the system should schedule jobs across cores so as to minimize the mean response time over a stream of incoming jobs. To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. EQUI requires jobs to change their level of parallelization while they run. Since this is not possible for all workloads, we consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, it is possible to achieve EQUI's performance without requiring jobs to change their levels of parallelization by using the optimal fixed level of parallelization, k*. We also show how to analytically derive the optimal k* as a function of the system load, the speedup curve, and the job size distribution. In the case where jobs may follow different speedup curves, finding a good scheduling policy is even more challenging. We find that policies like EQUI which performed well in the case of a single speedup function now perform poorly. We propose a very simple policy, GREEDY*, which performs near-optimally when compared to the numerically-derived optimal policy.
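
    As an illustration of the EQUI-versus-fixed-width tradeoff, here is a minimal simulation sketch, not the paper's analytical model: the core count, arrival rate, mean job size, and the sublinear speedup curve s(k) = k^0.5 used below are assumed values, and the names (mean_response_time, fixed_width) are ours. Jobs arrive as a Poisson stream with exponentially distributed sizes; EQUI continuously splits the cores evenly over all jobs in the system, while a fixed-width policy runs each job on exactly k cores, FCFS among waiting jobs.

        import random

        N_CORES      = 16     # assumed machine size
        ARRIVAL_RATE = 2.0    # assumed Poisson arrival rate of jobs
        MEAN_SIZE    = 1.0    # assumed mean job size (exponential), in units of single-core work

        def speedup(k):
            """Assumed sublinear speedup curve; the paper treats general curves."""
            return k ** 0.5

        def mean_response_time(fixed_width=None, n_jobs=20000, seed=42):
            """Simulate EQUI (fixed_width=None) or a fixed-width-k policy (FCFS among waiting jobs)."""
            rng = random.Random(seed)
            t, arrived = 0.0, 0
            next_arrival = rng.expovariate(ARRIVAL_RATE)
            jobs = []                                   # FCFS order: [remaining_work, arrival_time]
            response_times = []

            while len(response_times) < n_jobs:
                # Per-job service rates under the chosen policy.
                if fixed_width is None:                 # EQUI: split all cores evenly over all jobs
                    rates = [speedup(N_CORES / len(jobs))] * len(jobs) if jobs else []
                else:                                   # fixed width k: run floor(N/k) jobs at once
                    running = min(len(jobs), N_CORES // fixed_width)
                    rates = [speedup(fixed_width)] * running + [0.0] * (len(jobs) - running)

                # Next event: an arrival or the earliest completion.
                t_completion = min((job[0] / r for job, r in zip(jobs, rates) if r > 0),
                                   default=float('inf'))
                arrival_is_next = (next_arrival - t) <= t_completion
                dt = (next_arrival - t) if arrival_is_next else t_completion
                t += dt

                # Deplete the running jobs' remaining work and record completions.
                for job, r in zip(jobs, rates):
                    job[0] -= r * dt
                response_times.extend(t - a for w, a in jobs if w <= 1e-9)
                jobs = [job for job in jobs if job[0] > 1e-9]

                if arrival_is_next:
                    jobs.append([rng.expovariate(1.0 / MEAN_SIZE), t])
                    arrived += 1
                    next_arrival = t + rng.expovariate(ARRIVAL_RATE) if arrived < n_jobs else float('inf')

            return sum(response_times) / len(response_times)

        print("EQUI        :", round(mean_response_time(), 3))
        for k in (1, 2, 4, 8, 16):
            print(f"fixed k = {k:2d}:", round(mean_response_time(fixed_width=k), 3))

    Sweeping k in the last loop exposes the tradeoff described above: a small k keeps the system efficient but leaves individual jobs slow, while a large k shortens individual jobs but serves fewer of them at once; the k that minimizes the simulated mean response time plays the role of the optimal fixed width k*.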

    Heavy traffic analysis of a polling model with retrials and glue periods

    We present a heavy traffic analysis of a single-server polling model, with the special features of retrials and glue periods. The combination of these features in a polling model typically occurs in certain optical networking models, and in models where customers have a reservation period just before their service period. Just before the server arrives at a station there is some deterministic glue period. Customers (both new arrivals and retrials) arriving at the station during this glue period will be served during the visit of the server. Customers arriving in any other period leave immediately and will retry after an exponentially distributed time. As this model defies a closed-form expression for the queue length distributions, our main focus is on their heavy-traffic asymptotics, both at embedded time points (beginnings of glue periods, visit periods and switch periods) and at arbitrary time points. We obtain closed-form expressions for the limiting scaled joint queue length distribution in heavy traffic and use these to accurately approximate the mean number of customers in the system under different loads. Comment: 23 pages, 2 figures.
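
    To make the model's mechanics concrete, the following is a minimal discrete-event sketch under assumed parameters (two stations, Poisson arrivals, exponential service and retry times, deterministic glue and switch-over periods); it is a simulation for intuition, not the paper's heavy-traffic derivation. Customers who arrive or whose retry falls inside a glue period are served in the following visit; everyone else goes back into orbit and retries after an exponential time. The script estimates the mean number of customers in the system at the beginnings of the glue periods of station 0, one of the embedded time points mentioned above.

        import random

        random.seed(0)
        N_STATIONS   = 2
        LAMBDA       = [0.3, 0.3]    # assumed Poisson arrival rate at each station
        MEAN_SERVICE = 1.0           # assumed mean of the exponential service times
        GLUE         = 0.5           # deterministic glue period (assumed value)
        SWITCH       = 0.2           # deterministic switch-over time (assumed value)
        RETRY_RATE   = 1.0           # assumed rate of the exponential retry times

        def poisson(mean):
            """Sample a Poisson count by counting rate-1 exponential gaps in [0, mean]."""
            n, s = 0, random.expovariate(1.0)
            while s < mean:
                n, s = n + 1, s + random.expovariate(1.0)
            return n

        orbit = [[] for _ in range(N_STATIONS)]   # per station: absolute retry times in orbit

        def pass_interval(t0, t1, station, gluing):
            """Process arrivals and retries in [t0, t1); return how many got glued at `station`."""
            glued = 0
            for i in range(N_STATIONS):
                # New arrivals during the interval.
                for _ in range(poisson(LAMBDA[i] * (t1 - t0))):
                    u = random.uniform(t0, t1)
                    if gluing and i == station:
                        glued += 1                                   # served in the coming visit
                    else:
                        orbit[i].append(u + random.expovariate(RETRY_RATE))
                # Orbiting customers whose retry time falls inside the interval.
                due = [r for r in orbit[i] if t0 <= r < t1]
                orbit[i][:] = [r for r in orbit[i] if not (t0 <= r < t1)]
                for r in due:
                    if gluing and i == station:
                        glued += 1
                    else:
                        orbit[i].append(r + random.expovariate(RETRY_RATE))  # retry again later
            return glued

        t, samples = 0.0, []          # system size sampled at the start of each cycle
        for _ in range(20000):
            for j in range(N_STATIONS):
                if j == 0:
                    samples.append(sum(len(o) for o in orbit))       # embedded time point
                glued = pass_interval(t, t + GLUE, j, gluing=True)   # glue period
                t += GLUE
                visit = sum(random.expovariate(1.0 / MEAN_SERVICE) for _ in range(glued))
                pass_interval(t, t + visit, j, gluing=False)         # visit: serve the glued batch
                t += visit
                pass_interval(t, t + SWITCH, j, gluing=False)        # switch-over
                t += SWITCH

        print("mean number of customers at the start of a cycle:",
              round(sum(samples) / len(samples), 3))

    Rerunning the same script with the arrival rates pushed toward the stability boundary gives a crude sanity check of a heavy-traffic approximation for the mean number of customers in the system.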

    Leadership and innovation in the public sector

    Foreword

    Wie of wat gelooft u? De status van kennis in de sociale media [Who or what do you believe? The status of knowledge in social media]

    The rise of social media, and with it their importance, cannot be regarded as something 'new', certainly not if we define 'new' in terms of yesterday, today, or tomorrow. Since the 2000s, social media have become an ever larger and more popular platform. In recent years, public administration has also tried to latch on to this development and, above all, to exploit the opportunities that social media offer. A relatively new and important question that arises from these activities on social media is, however, still sometimes overlooked and underestimated.

    A queueing-theoretic analysis of the threshold-based exhaustive data-backup scheduling policy

    We analyse the threshold-based exhaustive data-backup scheduling mechanism by means of a queueing-theoretic approach. Data packets that have not yet been backed up are modelled by customers waiting for service (back-up). We obtain the probability generating function of the system content (backlog size) at random slot boundaries in steady state.
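
    A minimal discrete-time sketch of this mechanism is given below, under assumed Bernoulli arrivals and a geometric per-packet backup time (the analysed model is more general): the backup server stays idle until the backlog reaches a threshold, then works exhaustively until the backlog is empty. The script estimates the distribution of the system content at slot boundaries, the quantity whose probability generating function the analysis yields; all parameter values are illustrative.

        import random

        random.seed(1)
        P_ARRIVAL = 0.3       # assumed probability of a packet arrival in a slot
        P_SERVICE = 0.5       # assumed probability that a backup finishes a packet in a slot
        THRESHOLD = 5         # backups start once the backlog reaches this many packets
        N_SLOTS   = 500_000

        backlog, backing_up = 0, False
        counts = {}           # empirical distribution of the system content at slot boundaries

        for _ in range(N_SLOTS):
            counts[backlog] = counts.get(backlog, 0) + 1    # observe at the slot boundary

            # At most one arrival per slot in this simple sketch.
            if random.random() < P_ARRIVAL:
                backlog += 1

            # Threshold-based exhaustive rule: start at the threshold, serve until empty, then idle.
            if backlog >= THRESHOLD:
                backing_up = True
            if backing_up and backlog > 0 and random.random() < P_SERVICE:
                backlog -= 1
            if backlog == 0:
                backing_up = False

        total = sum(counts.values())
        print("P(system content = n) at slot boundaries:")
        for n in sorted(counts):
            print(f"  n = {n:2d}: {counts[n] / total:.4f}")
        print("mean system content:", round(sum(n * c for n, c in counts.items()) / total, 3))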