
    Malleable Scheduling Beyond Identical Machines

    In malleable job scheduling, jobs can be executed simultaneously on multiple machines, with the processing time depending on the number of allocated machines. Jobs are required to be executed non-preemptively and in unison, in the sense that they occupy, during their execution, the same time interval over all the machines of the allocated set. In this work, we study generalizations of malleable job scheduling inspired by standard scheduling on unrelated machines. Specifically, we introduce a general model of malleable job scheduling, where each machine has a (possibly different) speed for each job, and the processing time of a job j on a set of allocated machines S depends on the total speed of S for j. For machines with unrelated speeds, we show that the optimal makespan cannot be approximated within a factor less than e/(e-1), unless P = NP. On the positive side, we present polynomial-time algorithms with approximation ratios 2e/(e-1) for machines with unrelated speeds, 3 for machines with uniform speeds, and 7/3 for restricted assignments on identical machines. Our algorithms are based on deterministic LP rounding and result in sparse schedules, in the sense that each machine shares at most one job with other machines. We also prove lower bounds on the integrality gap of 1+phi for unrelated speeds (where phi is the golden ratio) and 2 for uniform speeds and restricted assignments. To indicate the generality of our approach, we show that it also yields constant-factor approximation algorithms (i) for minimizing the sum of weighted completion times and (ii) for a variant where the effective speed of a set of allocated machines is determined by the L_p norm of their speeds.
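
    As an illustration of the model (not the paper's algorithm), the following Python sketch assumes that a job's processing time equals its work divided by the total speed of its allocated machine set, and evaluates the makespan of a given in-unison schedule; the names work, speed, and schedule are hypothetical.

        # Illustrative sketch of malleable scheduling with unrelated speeds.
        # Assumption (not stated in the abstract): p_j(S) = work[j] / total speed of S for j.

        def processing_time(job, machines, work, speed):
            """Time to run `job` in unison on the machine set `machines`."""
            return work[job] / sum(speed[m][job] for m in machines)

        def makespan(schedule, work, speed, num_machines):
            """`schedule` maps each job to (start_time, allocated_machine_set).
            Only evaluates the makespan; it does not check that the intervals
            assigned to a machine are non-overlapping."""
            finish = [0.0] * num_machines
            for job, (start, machines) in schedule.items():
                end = start + processing_time(job, machines, work, speed)
                for m in machines:
                    finish[m] = max(finish[m], end)
            return max(finish)

        # Two machines, two jobs; speed[m][j] is machine m's speed for job j.
        work = [4.0, 3.0]
        speed = [[2.0, 1.0], [1.0, 3.0]]
        schedule = {0: (0.0, {0, 1}),       # job 0 runs in unison on both machines
                    1: (4.0 / 3.0, {1})}    # job 1 runs on machine 1 afterwards
        print(makespan(schedule, work, speed, 2))   # 7/3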

    k2U: A General Framework from k-Point Effective Schedulability Analysis to Utilization-Based Tests

    To deal with a large variety of workloads in different application domains in real-time embedded systems, a number of expressive task models have been developed. For each individual task model, researchers tend to develop different types of techniques for deriving schedulability tests with different computational complexity and performance. In this paper, we present a general schedulability analysis framework, namely the k2U framework, that can potentially be applied to analyze a large set of real-time task models under any fixed-priority scheduling algorithm, for both uniprocessor and multiprocessor scheduling. The key to k2U is a k-point effective schedulability test, which can be viewed as a "blackbox" interface. For any task model, if a corresponding k-point effective schedulability test can be constructed, then a sufficient utilization-based test can be automatically derived. We show the generality of k2U by applying it to different task models, which results in new and improved tests compared to the state of the art. Analogously, a similar concept of testing only k points with a different formulation has been studied by us in another framework, called k2Q, which provides quadratic bounds or utilization bounds based on a different formulation of the schedulability test. With their quadratic and hyperbolic forms, the k2Q and k2U frameworks can be used to derive many quantitative metrics, such as total utilization bounds and speed-up factors, not only for uniprocessor scheduling but also for multiprocessor scheduling. These frameworks can be viewed as a "blackbox" interface for schedulability tests and response-time analysis.
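
    To make the "hyperbolic form" of a utilization-based test concrete, here is the classical hyperbolic bound for rate-monotonic scheduling of implicit-deadline sporadic tasks on a uniprocessor (Bini and Buttazzo), shown only as a familiar example of the kind of test such frameworks generalize; it is not the k2U derivation itself.

        # Classical hyperbolic utilization bound for rate-monotonic scheduling on a
        # uniprocessor: sufficient, not necessary. Shown only to illustrate the
        # hyperbolic form of utilization-based schedulability tests.
        from math import prod

        def hyperbolic_rm_test(utilizations):
            """Return True if prod(U_i + 1) <= 2, a sufficient RM-schedulability test."""
            return prod(u + 1.0 for u in utilizations) <= 2.0

        print(hyperbolic_rm_test([0.30, 0.20, 0.25]))  # True:  1.30 * 1.20 * 1.25 = 1.95
        print(hyperbolic_rm_test([0.50, 0.40, 0.30]))  # False: 1.50 * 1.40 * 1.30 = 2.73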

    08071 Abstracts Collection -- Scheduling

    From 10.02. to 15.02., the Dagstuhl Seminar 08071 "Scheduling" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Feasibility Tests for Recurrent Real-Time Tasks in the Sporadic DAG Model

    A model has been proposed in [Baruah et al., in Proceedings of the IEEE Real-Time Systems Symposium 2012] for representing recurrent precedence-constrained tasks to be executed on multiprocessor platforms, where each recurrent task is modeled by a directed acyclic graph (DAG), a period, and a relative deadline. Each vertex of the DAG represents a sequential job, while the edges of the DAG represent precedence constraints between these jobs. All the jobs of the DAG are released simultaneously and have to be completed within some specified relative deadline. The task may release jobs in this manner an unbounded number of times, with successive releases occurring at least the specified period apart. The feasibility problem is to determine whether such a recurrent task can be scheduled to always meet all deadlines on a specified number of dedicated processors. The case of a single task was considered in [Baruah et al., 2012]. The main contribution of this paper is to consider the case of multiple tasks. We show that EDF has a speedup bound of 2 - 1/m, where m is the number of processors. Moreover, we present polynomial and pseudopolynomial schedulability tests, of differing effectiveness, for determining whether a set of sporadic DAG tasks can be scheduled by EDF to meet all deadlines on a specified number of processors.
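
    As background for the quantities such tests reason about, the sketch below computes a DAG task's volume (total work) and critical-path length and applies the classical Graham bound, under which any work-conserving schedule of a single DAG job on m identical processors completes within L + (W - L)/m. It is a sufficient check for one task only, not the paper's EDF tests, and the data layout (dag, wcet) is hypothetical.

        # Volume, critical-path length, and a Graham-style sufficient check for a
        # single DAG job on m identical processors (illustration only).
        from functools import lru_cache

        def volume(wcet):
            return sum(wcet.values())

        def critical_path(dag, wcet):
            @lru_cache(maxsize=None)
            def longest_from(v):
                succs = dag.get(v, [])
                return wcet[v] + (max(longest_from(s) for s in succs) if succs else 0)
            return max(longest_from(v) for v in wcet)

        def single_dag_feasible(dag, wcet, deadline, m):
            """Sufficient: any work-conserving schedule on m processors completes
            the DAG within L + (W - L) / m (Graham's bound)."""
            W, L = volume(wcet), critical_path(dag, wcet)
            return L + (W - L) / m <= deadline

        # dag[v] lists the successors of vertex v; wcet[v] is v's execution time.
        dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
        wcet = {'a': 1, 'b': 2, 'c': 3, 'd': 1}
        print(single_dag_feasible(dag, wcet, deadline=6, m=2))  # W=7, L=5 -> 5 + 1 <= 6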

    Performance Guarantees of Local Search for Multiprocessor Scheduling

    Increasing interest has recently been shown in analyzing the worst-case behavior of local search algorithms. In particular, the quality of local optima and the time needed to find local optima by the simplest form of local search have been studied. This paper deals with the worst-case performance of local search algorithms for makespan minimization on parallel machines. We analyze the quality of the local optima obtained by iterative improvement over the jump, swap, multi-exchange, and the newly defined push neighborhoods. Finally, for the jump neighborhood we provide bounds on the number of local search steps required to find a local optimum.
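
    For concreteness, the sketch below performs iterative improvement over the jump neighborhood on identical machines: it repeatedly moves a single job off a most-loaded machine to a machine whose new load stays below the current makespan, which either lowers the makespan or reduces the number of critical machines, so the loop terminates. It is a generic illustration of the neighborhood, not the paper's analysis.

        # Iterative improvement over the jump neighborhood for makespan minimization
        # on identical parallel machines (generic illustration).

        def jump_local_search(jobs, m):
            # Arbitrary starting assignment: job i on machine i mod m.
            assign = [i % m for i in range(len(jobs))]
            load = [0.0] * m
            for i, a in enumerate(assign):
                load[a] += jobs[i]

            improved = True
            while improved:
                improved = False
                for i, a in enumerate(assign):
                    if load[a] < max(load):
                        continue  # only moves off a critical machine can help
                    for b in range(m):
                        if b != a and load[b] + jobs[i] < load[a]:
                            load[a] -= jobs[i]
                            load[b] += jobs[i]
                            assign[i] = b
                            improved = True
                            break
                    if improved:
                        break
            return assign, max(load)

        # Prints ([0, 1, 1, 1, 0], 5.5): a jump-optimal schedule with makespan 5.5,
        # while the optimal makespan for these jobs on 2 machines is 5.0.
        print(jump_local_search([3.0, 2.0, 2.0, 1.5, 1.5], m=2))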