
    Malleable Scheduling Beyond Identical Machines

    In malleable job scheduling, jobs can be executed simultaneously on multiple machines with the processing time depending on the number of allocated machines. Jobs are required to be executed non-preemptively and in unison, in the sense that they occupy, during their execution, the same time interval over all the machines of the allocated set. In this work, we study generalizations of malleable job scheduling inspired by standard scheduling on unrelated machines. Specifically, we introduce a general model of malleable job scheduling, where each machine has a (possibly different) speed for each job, and the processing time of a job j on a set of allocated machines S depends on the total speed of S for j. For machines with unrelated speeds, we show that the optimal makespan cannot be approximated within a factor less than e/(e-1), unless P = NP. On the positive side, we present polynomial-time algorithms with approximation ratios 2e/(e-1) for machines with unrelated speeds, 3 for machines with uniform speeds, and 7/3 for restricted assignments on identical machines. Our algorithms are based on deterministic LP rounding and result in sparse schedules, in the sense that each machine shares at most one job with other machines. We also prove lower bounds on the integrality gap of 1+phi for unrelated speeds (phi is the golden ratio) and 2 for uniform speeds and restricted assignments. To indicate the generality of our approach, we show that it also yields constant-factor approximation algorithms (i) for minimizing the sum of weighted completion times and (ii) for a variant where the effective speed of a set of allocated machines is determined by the L_p norm of their speeds.
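
    As a concrete reading of this model, the following sketch (assuming, hypothetically, that each job j has a processing volume vol[j] and each machine i a job-specific speed speed[i][j]; the paper's exact normalization may differ) computes the processing time of a job on an allocated set as total volume over total speed.

    ```python
    # Minimal sketch of the malleable processing-time model: a job run in
    # unison on a set S of machines takes time vol[j] / (total speed of S
    # for j). vol and speed are illustrative names, not from the paper.

    def processing_time(job, machines, vol, speed):
        """Time for `job` when executed in unison on the set `machines`."""
        total_speed = sum(speed[i][job] for i in machines)
        if total_speed == 0:
            return float("inf")  # the allocated set cannot process this job
        return vol[job] / total_speed

    # Example: a job of volume 12 on two machines with speeds 2 and 4 for it.
    vol = {0: 12.0}
    speed = {0: {0: 2.0}, 1: {0: 4.0}}
    print(processing_time(0, {0, 1}, vol, speed))  # 12 / (2 + 4) = 2.0
    ```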

    Framework for sustainable TVET-Teacher Education Program in Malaysia Public Universities

    Studies have noted that little attention is given to the education aspect, such as teaching and learning, in planning to improve the TVET system. In the 21st-century context, the current teaching paradigm of TVET educators has also been reported to be inadequate and in need of a shift. These shortcomings are reported to hinder the country from achieving the 5th strategy of the Strategic Plan for Vocational Education Transformation, which is to transform the TVET system as a whole. Therefore, this study aims to develop a framework for a sustainable TVET Teacher Education (TVET-TE) program in Malaysia. The study adopted an exploratory sequential mixed-methods design involving semi-structured interviews (phase one) and a survey (phase two). Nine experts, chosen through purposive sampling, took part in phase one. In phase two, 118 TVET-TE program lecturers were selected as the survey sample through random sampling. After data analysis in phase one (thematic analysis) and phase two (principal component analysis), eight domains and 22 elements were identified for the framework for a sustainable TVET-TE program in Malaysia. The framework was found to embed the elements of 21st-century education, thus filling the gap addressed by this research. The findings also indicate that the developed framework is unidimensional and valid for development and research on TVET-TE programs in Malaysia. Lastly, it is hoped that this research can guide the nation in producing quality TVET teachers in the future.

    Co-Scheduling Algorithms for High-Throughput Workload Execution

    This paper investigates co-scheduling algorithms for processing a set of parallel applications. Instead of executing each application one by one, using a maximum degree of parallelism for each of them, we aim at scheduling several applications concurrently. We partition the original application set into a series of packs, which are executed one by one. A pack comprises several applications, each of them with an assigned number of processors, with the constraint that the total number of processors assigned within a pack does not exceed the maximum number of available processors. The objective is to determine a partition into packs, and an assignment of processors to applications, that minimize the sum of the execution times of the packs. We thoroughly study the complexity of this optimization problem, and propose several heuristics that exhibit very good performance on a variety of workloads, whose application execution times model profiles of parallel scientific codes. We show that co-scheduling leads to faster workload completion times and faster average response times (hence increasing system throughput and saving energy), yielding significant benefits over traditional scheduling from both the user and system perspectives.
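
    To make the pack model concrete, here is a hedged sketch (not the paper's heuristics): applications with fixed, hypothetical processor allotments are packed first-fit under the processor budget, and the cost of a partition is the sum over packs of the longest execution time in each pack.

    ```python
    # Toy pack scheduler: procs[a] and time[a] are an application's assumed
    # processor allotment and execution time; P is the machine's processor
    # count. Cost = sum over packs of the slowest application in the pack.

    def pack_cost(packs, time):
        return sum(max(time[a] for a in pack) for pack in packs)

    def first_fit_packs(apps, procs, P):
        packs, loads = [], []
        for a in sorted(apps, key=lambda a: -procs[a]):  # widest first
            for k, load in enumerate(loads):
                if load + procs[a] <= P:   # pack k still has room
                    packs[k].append(a)
                    loads[k] += procs[a]
                    break
            else:                          # no pack fits: open a new one
                packs.append([a])
                loads.append(procs[a])
        return packs

    procs = {"A": 4, "B": 4, "C": 2}
    time = {"A": 3.0, "B": 2.0, "C": 5.0}
    packs = first_fit_packs(list(procs), procs, P=6)
    print(packs, pack_cost(packs, time))  # [['A', 'C'], ['B']] 7.0
    ```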

    The Lazy Bureaucrat Scheduling Problem

    We introduce a new class of scheduling problems in which the optimization is performed by the worker (a single "machine") who performs the tasks. A typical worker's objective is to minimize the amount of work he does (he is "lazy"), or more generally, to schedule as inefficiently (in some sense) as possible. The worker is subject to the constraint that he must be busy when there is work that he can do; we make this notion precise in both the preemptive and nonpreemptive settings. The resulting class of "perverse" scheduling problems, which we denote "Lazy Bureaucrat Problems," gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules. Comment: 19 pages, 2 figures, LaTeX. To appear, Information and Computation.
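
    The busy-worker constraint is what makes these problems interesting; a brute-force toy for the nonpreemptive setting, under assumed conventions (tasks are (arrival, deadline, length) triples, a task is executable at time t if it has arrived and can still finish by its deadline, and the worker must start some executable task whenever one exists), might look as follows.

    ```python
    # Exponential-time toy: minimize total time worked subject to the rule
    # that the worker must start an executable task whenever one exists.

    def min_work(tasks, t=0.0, done=frozenset()):
        remaining = [i for i in range(len(tasks)) if i not in done]
        if not remaining:
            return 0.0
        runnable = [i for i in remaining
                    if tasks[i][0] <= t and t + tasks[i][2] <= tasks[i][1]]
        if not runnable:
            future = [tasks[i][0] for i in remaining if tasks[i][0] > t]
            if not future:
                return 0.0                 # no task can ever run again
            return min_work(tasks, min(future), done)  # idle to next arrival
        # Busy constraint: some executable task must be started now.
        return min(tasks[i][2] + min_work(tasks, t + tasks[i][2], done | {i})
                   for i in runnable)

    # Two tasks available at time 0: starting the 6-unit task first makes the
    # 5-unit task (deadline 10) infeasible, so the lazy optimum is 6, not 11.
    print(min_work([(0, 10, 5), (0, 100, 6)]))  # 6.0
    ```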

    A Fully Polynomial-Time Approximation Scheme for Speed Scaling with Sleep State

    We study classical deadline-based preemptive scheduling of tasks in a computing environment equipped with both dynamic speed scaling and sleep state capabilities: Each task is specified by a release time, a deadline and a processing volume, and has to be scheduled on a single, speed-scalable processor that is supplied with a sleep state. In the sleep state, the processor consumes no energy, but a constant wake-up cost is required to transition back to the active state. In contrast to speed scaling alone, the addition of a sleep state makes it sometimes beneficial to accelerate the processing of tasks in order to transition the processor to the sleep state for longer amounts of time and incur further energy savings. The goal is to output a feasible schedule that minimizes the energy consumption. Since the introduction of the problem by Irani et al. [16], its exact computational complexity has been repeatedly posed as an open question (see e.g. [2,8,15]). The currently best known upper and lower bounds are a 4/3-approximation algorithm and NP-hardness due to [2] and [2,17], respectively. We close the aforementioned gap between the upper and lower bound on the computational complexity of speed scaling with sleep state by presenting a fully polynomial-time approximation scheme for the problem. The scheme is based on a transformation to a non-preemptive variant of the problem, and a discretization that exploits a carefully defined lexicographical ordering among schedules.
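
    The sleep-versus-idle trade-off can be illustrated numerically; the sketch below uses the standard assumptions (not necessarily the paper's exact model) that the sleep state consumes nothing, waking up costs a constant C, and the cheapest active state burns power P_IDLE, so a gap of length g is slept through exactly when P_IDLE * g exceeds C.

    ```python
    # Illustrative constants, not from the paper: in the usual model, running
    # at speed s costs power s**alpha for some alpha > 1, sleeping costs 0,
    # and each wake-up costs a fixed energy C.
    P_IDLE = 0.5        # assumed lowest active power
    WAKEUP_COST = 4.0   # assumed constant wake-up energy C

    def gap_energy(g):
        """Cheaper of idling through a gap of length g or sleeping + waking."""
        return min(P_IDLE * g, WAKEUP_COST)

    for g in (2.0, 8.0, 20.0):
        choice = "idle" if P_IDLE * g <= WAKEUP_COST else "sleep"
        print(f"gap {g:5.1f}: energy {gap_energy(g):.1f} -> {choice}")
    ```

    This is also why accelerating tasks can pay off: finishing work faster merges and lengthens idle gaps until they cross the threshold where sleeping beats idling.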

    New bounds for truthful scheduling on two unrelated selfish machines

    We consider the minimum makespan problem for n tasks and two unrelated parallel selfish machines. Let R_n be the best approximation ratio of randomized monotone scale-free algorithms. This class contains the most efficient algorithms known for truthful scheduling on two machines. We propose a new Min-Max formulation for R_n, as well as upper and lower bounds on R_n based on this formulation. For the lower bound, we exploit pointwise approximations of cumulative distribution functions (CDFs). For the upper bound, we construct randomized algorithms using distributions with piecewise rational CDFs. Our method improves upon the existing bounds on R_n for small n. In particular, we obtain almost tight bounds for n = 2, showing that |R_2 - 1.505996| < 10^{-6}. Comment: 28 pages, 3 tables, 1 figure. Theory Comput Syst (2019).

    Energy Efficient Scheduling of MapReduce Jobs

    MapReduce has emerged as a prominent programming model for data-intensive computation. In this work, we study power-aware MapReduce scheduling in the speed scaling setting first introduced by Yao et al. [FOCS 1995]. We focus on minimizing the total weighted completion time of a set of MapReduce jobs under a given energy budget. Using a linear programming relaxation of our problem, we derive a polynomial-time constant-factor approximation algorithm. We also propose a convex programming formulation that we combine with standard list scheduling policies, and we evaluate their performance using simulations. Comment: 22 pages.
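
    The paper's LP-based algorithm is not reproduced here, but the flavor of the "standard list scheduling policies" it mentions can be sketched with Smith's rule, which orders jobs by processing time per unit weight and minimizes total weighted completion time on a single machine at a fixed speed (job data below is illustrative).

    ```python
    # Smith's rule: sort jobs by p/w; prefix sums of processing times give
    # completion times, which are then weighted and summed.

    def weighted_completion(jobs):
        """jobs = [(weight, proc_time), ...]; returns sum of w_j * C_j."""
        order = sorted(jobs, key=lambda j: j[1] / j[0])  # smallest p/w first
        t, total = 0.0, 0.0
        for w, p in order:
            t += p            # this job's completion time
            total += w * t
        return total

    jobs = [(3, 2.0), (1, 1.0), (2, 4.0)]
    print(weighted_completion(jobs))  # 3*2 + 1*3 + 2*7 = 23.0
    ```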

    Scheduling Monotone Moldable Jobs in Linear Time

    A moldable job is a job that can be executed on an arbitrary number of processors, and whose processing time depends on the number of processors allotted to it. A moldable job is monotone if its work does not decrease as the number of allotted processors increases. We consider the problem of scheduling monotone moldable jobs to minimize the makespan. We argue that, for certain compact input encodings, a polynomial-time algorithm should have a running time polynomial in n and log(m), where n is the number of jobs and m is the number of machines. We describe how the monotonicity of jobs can be used to counteract the increased problem complexity arising from compact encodings, and give tight bounds on the approximability of the problem with compact encoding: it is NP-hard to solve optimally, but admits a PTAS. The main focus of this work is efficient approximation algorithms. We describe different techniques that exploit the monotonicity of the jobs for better running times, and present a (3/2+ε)-approximation algorithm whose running time is polynomial in log(m) and 1/ε, and only linear in the number n of jobs.
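
    Monotonicity is what enables a fast dual-approximation step; as a hedged sketch (a standard building block, not necessarily the paper's algorithm), for a target makespan T one can binary-search the fewest processors under which each job finishes within T, since processing times are nonincreasing in the allotment.

    ```python
    # t(j, k) is an assumed nonincreasing processing-time function with
    # nondecreasing work k * t(j, k); m is the number of machines.

    def min_processors(t, job, m, T):
        """Smallest k in 1..m with t(job, k) <= T, or None if impossible."""
        if t(job, m) > T:
            return None
        lo, hi = 1, m
        while lo < hi:
            mid = (lo + hi) // 2
            if t(job, mid) <= T:
                hi = mid          # mid processors already suffice
            else:
                lo = mid + 1
        return lo

    # Illustrative jobs with perfect speedup: W[j] units of work split evenly.
    W = [10.0, 24.0]
    t = lambda j, k: W[j] / k
    print([min_processors(t, j, m=8, T=4.0) for j in range(len(W))])  # [3, 6]
    ```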