
    Online Scheduling on Identical Machines using SRPT

    Due to its optimality on a single machine for the problem of minimizing average flow time, Shortest-Remaining-Processing-Time (SRPT) appears to be the most natural algorithm to consider for the problem of minimizing average flow time on multiple identical machines. It is known that SRPT achieves the best possible competitive ratio on multiple machines up to a constant factor. Using resource augmentation, SRPT is known to achieve total flow time at most that of the optimal solution when given machines of speed $2 - \frac{1}{m}$. Further, it is known that SRPT's competitive ratio improves as the speed increases; SRPT is $s$-speed $\frac{1}{s}$-competitive when $s \geq 2 - \frac{1}{m}$. However, a gap has persisted in our understanding of SRPT. Before this work, the performance of SRPT was not known when SRPT is given $(1+\epsilon)$-speed for $0 < \epsilon < 1 - \frac{1}{m}$, even though it has been thought for over a decade that SRPT is $(1+\epsilon)$-speed $O(1)$-competitive. Resolving this question was suggested in Open Problem 2.9 from the survey "Online Scheduling" by Pruhs, Sgall, and Torng \cite{PruhsST}, and we answer the question in this paper. We show that SRPT is \emph{scalable} on $m$ identical machines. That is, we show SRPT is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon})$-competitive for any $\epsilon > 0$. We complement this by showing that SRPT is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for the objective of minimizing the $\ell_k$-norms of flow time on $m$ identical machines. Both of our results rely on new potential functions that capture the structure of SRPT. Our results, combined with previous work, show that SRPT is the best possible online algorithm in essentially every aspect when migration is permissible. Comment: Accepted for publication at SODA. This version fixes an error in a preliminary version.
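
    For intuition, here is a minimal simulation sketch of the rule the abstract analyzes: SRPT on $m$ identical machines with speed augmentation always runs the (at most $m$) jobs with the shortest remaining processing time and sums their flow times. The job representation, time step, and the helper name srpt_total_flow_time are illustrative assumptions, not code from the paper.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    remaining: float                      # heap key: remaining processing time
    release: float = field(compare=False)
    size: float = field(compare=False)

def srpt_total_flow_time(jobs, m, speed=1.0, dt=0.01):
    """Simulate SRPT on m identical machines of the given speed.

    jobs: list of (release_time, processing_time) pairs.
    Returns the total flow time, i.e. sum over jobs of completion - release.
    Preemption and migration are free, as in the online model discussed above.
    """
    pending = sorted(jobs)                # not-yet-released jobs, by release time
    alive = []                            # min-heap keyed on remaining time
    t, i, total_flow, done = 0.0, 0, 0.0, 0
    while done < len(jobs):
        # Release jobs that have arrived by time t.
        while i < len(pending) and pending[i][0] <= t + 1e-12:
            r, p = pending[i]
            heapq.heappush(alive, Job(remaining=p, release=r, size=p))
            i += 1
        if not alive:
            t = pending[i][0]             # idle until the next arrival
            continue
        # Run the (at most m) jobs with shortest remaining time for dt units.
        running = [heapq.heappop(alive) for _ in range(min(m, len(alive)))]
        for job in running:
            job.remaining -= speed * dt
            if job.remaining <= 1e-12:
                total_flow += (t + dt) - job.release
                done += 1
            else:
                heapq.heappush(alive, job)
        t += dt
    return total_flow

# Tiny example: 3 jobs on 2 machines, at unit speed and with extra speed.
jobs = [(0.0, 4.0), (0.0, 1.0), (0.5, 2.0)]
print(srpt_total_flow_time(jobs, m=2, speed=1.0))
print(srpt_total_flow_time(jobs, m=2, speed=1.25))
```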

    Online Scheduling on Identical Machines Using SRPT

    Due to its optimality on a single machine for the problem of minimizing average flow time, Shortest-Remaining-Processing-Time (SRPT) appears to be the most natural algorithm to consider for the problem of minimizing average flow time on multiple identical machines. It is known that SRPT achieves the best possible competitive ratio on multiple machines up to a constant factor. Using resource augmentation, SRPT is known to achieve total flow time at most that of the optimal solution when given machines of speed $2 - 1/m$. Further, it is known that SRPT's competitive ratio improves as the speed increases; SRPT is $s$-speed $1/s$-competitive when $s \geq 2 - 1/m$. However, a gap has persisted in our understanding of SRPT. Before this work, we did not know the performance of SRPT when given machines of speed $1+\epsilon$ for any $0 < \epsilon < 1 - 1/m$. We answer the question in this thesis. We show that SRPT is scalable on $m$ identical machines. That is, we show SRPT is $(1+\epsilon)$-speed $O(1/\epsilon)$-competitive for any $\epsilon > 0$. We also show that SRPT is $(1+\epsilon)$-speed $O(1/\epsilon^2)$-competitive for the objective of minimizing the $\ell_k$-norms of flow time on $m$ identical machines. Both of our results rely on new potential functions that capture the structure of SRPT. Our results, combined with previous work, show that SRPT is the best possible online algorithm in essentially every aspect when migration is permissible.

    The Complexity of Scheduling for p-norms of Flow and Stretch

    We consider computing optimal $k$-norm preemptive schedules of jobs that arrive over time. In particular, we show that computing the optimal $k$-norm of flow schedule is strongly NP-hard for $k \in (0, 1)$ and for integer $k \in (1, \infty)$. Further, we show that computing the optimal $k$-norm of stretch schedule is strongly NP-hard for $k \in (0, 1)$ and for integer $k \in (1, \infty)$. Comment: Conference version accepted to IPCO 201
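
    For context, the objectives referenced above can be written out explicitly. The notation $r_j$, $p_j$, $C_j$ for a job's release time, processing time, and completion time is the standard convention and is assumed here rather than taken from the paper.

```latex
% Flow time and stretch of job j with release time r_j, size p_j,
% and completion time C_j in a given schedule:
%   F_j = C_j - r_j   (flow time),    S_j = F_j / p_j   (stretch).
% The k-norm objectives whose optimization is shown to be strongly NP-hard:
\[
  \|F\|_k = \Bigl(\sum_j F_j^{\,k}\Bigr)^{1/k},
  \qquad
  \|S\|_k = \Bigl(\sum_j \Bigl(\tfrac{C_j - r_j}{p_j}\Bigr)^{k}\Bigr)^{1/k}.
\]
```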

    Balancing SRPT and FCFS via Starvation Mitigation

    In this paper, we balance two fundamental yet seemingly conflicting job scheduling objectives, namely the average flow time and the maximum flow time. Specifically, Shortest Remaining Processing Time (SRPT) minimizes the average flow time but may lead to job starvation. In contrast, First-Come-First-Served (FCFS) minimizes the maximum flow time but may result in poor average flow time. A natural way to balance these two objectives is to minimize the $\ell_2$ norm of flow time. For this problem, no online algorithm is known to achieve a better competitive ratio than SRPT and FCFS. It can be argued that SRPT and FCFS complement each other. To exploit this complementary relationship, we mitigate the starvation caused by SRPT with the help of FCFS. Specifically, when there are starving jobs, we process the job that becomes starving first. The main question is: when should a job be viewed as starving? If the timing is too early or too late, then the algorithm still behaves like FCFS or SRPT, respectively. In this paper, we answer the above question by estimating the number of jobs. Our algorithm significantly improves upon SRPT and FCFS in terms of the competitive ratio for minimizing the $\ell_2$ norm of flow time, even if the estimate is loose. Comment: 1. Introduction is rewritten. 2. Theorem 1.4 and a numerical study are added. 3. The proposed algorithm and the proof of Theorem 1.5 are simplified.
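
    A schematic sketch of the kind of rule the abstract describes: serve a "starving" job FCFS-style when one exists, and otherwise fall back to SRPT. The starvation threshold below (waiting time exceeding a constant times an estimate of the number of jobs) and the field names are illustrative assumptions, not the paper's exact algorithm.

```python
def pick_next_job(active_jobs, now, n_estimate, c=1.0):
    """Decide which job to run next on a single machine.

    active_jobs: list of dicts with 'release' and 'remaining' times.
    n_estimate: an estimate of the number of jobs in the system.
    A job counts as starving once its waiting time exceeds c * n_estimate
    (this threshold form is a guess for illustration, not the paper's rule).
    """
    starving = [j for j in active_jobs if now - j['release'] > c * n_estimate]
    if starving:
        # FCFS step: serve the job that became starving first, i.e. the one
        # with the earliest release time among the starving jobs.
        return min(starving, key=lambda j: j['release'])
    # SRPT step: otherwise run the job with least remaining processing time.
    return min(active_jobs, key=lambda j: j['remaining'])
```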

    Parallel Real-Time Scheduling for Latency-Critical Applications

    In order to provide safety or quality-of-service guarantees, many of today's systems consist of latency-critical applications, e.g., applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, and different system models and objectives have been considered. However, the computational requirement of a single job is still limited by the capacity of a single core. To provide increasingly complex application functionality and to complete higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs that can potentially utilize more than one core in parallel. However, there is little work on scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research focuses on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks that need to execute are known a priori and the goal is to guarantee the temporal correctness of the tasks prior to their execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution. For static real-time systems with parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic, and a novel federated scheduling, are proposed, analyzed, and implemented. These scheduling strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling, and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and to systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variations have good schedulability performance and are efficient in practice. For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency, and minimizing the average latency. For example, a simple First-In-First-Out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed, and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
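
    To illustrate the federated scheduling idea mentioned above, here is a small sketch of the core-allocation rule commonly associated with federated scheduling in the literature: tasks with utilization above one get dedicated cores based on their work, critical-path length, and deadline, while the rest share the leftover cores. The task fields, the helper name, and the exact schedulability check are assumptions for illustration, not code or results from the dissertation.

```python
import math

def federated_core_allocation(tasks, total_cores):
    """Sketch of a federated-style core allocation for parallel real-time tasks.

    Each task is a dict with total work 'C', critical-path length 'L', and
    deadline 'D'. High-utilization tasks (C/D > 1) receive dedicated cores;
    the remaining (low-utilization) tasks share whatever cores are left and
    can be treated as sequential jobs. Returns (dedicated_cores, shared_cores)
    or None if this simple test deems the task set unschedulable.
    """
    dedicated = {}
    for i, t in enumerate(tasks):
        if t['C'] / t['D'] > 1.0:
            if t['L'] >= t['D']:
                return None  # infeasible: the critical path alone misses the deadline
            dedicated[i] = math.ceil((t['C'] - t['L']) / (t['D'] - t['L']))
    used = sum(dedicated.values())
    if used > total_cores:
        return None          # not enough cores for the high-utilization tasks
    return dedicated, total_cores - used

# Example: two parallel tasks on 8 cores.
tasks = [{'C': 12.0, 'L': 3.0, 'D': 6.0},   # high utilization: gets dedicated cores
         {'C': 2.0, 'L': 1.0, 'D': 4.0}]    # low utilization: shares the leftovers
print(federated_core_allocation(tasks, total_cores=8))
```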