Data Structures for Task-based Priority Scheduling
Many task-parallel applications can benefit from attempting to execute tasks
in a specific order, as for instance indicated by priorities associated with
the tasks. We present three lock-free data structures for priority scheduling
with different trade-offs between scalability and ordering guarantees. First,
we propose a basic extension to work-stealing that provides good scalability,
but cannot provide any guarantees on task ordering between threads. Next, we
present a centralized priority data structure based on k-FIFO queues, which
provides strong (but still relaxed with regard to a sequential specification)
guarantees. The parameter k makes it possible to dynamically configure the
trade-off between scalability and the required ordering guarantee. Third, and
finally, we combine both data structures into a hybrid k-priority data
structure, which provides scalability similar to the work-stealing-based
approach for larger k, while giving strong ordering guarantees for smaller k.
We argue for using the hybrid data structure as the best compromise for
generic, priority-based task scheduling.
We analyze the behavior and trade-offs of our data structures in the context
of a simple parallelization of Dijkstra's single-source shortest path
algorithm. Our theoretical analysis and simulations show that both the
centralized and the hybrid k-priority-based data structures can give strong
guarantees on the useful work performed by the parallel Dijkstra algorithm. We
support our results with experimental evidence on an 80-core Intel Xeon system.
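The role of the priority queue in this setting can be seen in a minimal sequential sketch of Dijkstra's algorithm (hypothetical example graph; the standard library heapq, not the paper's lock-free structures). Each settled node counts as useful work, while stale queue entries popped too late play the role of the non-useful work that a relaxed removal order can introduce:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {source: 0}
    pq = [(0, source)]           # entries are (tentative distance, node)
    useful = wasted = 0
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            wasted += 1          # stale entry: no useful work done
            continue
        useful += 1              # node settled with its final distance
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist, useful, wasted

# Hypothetical example graph
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))   # → ({'a': 0, 'b': 1, 'c': 2}, 3, 1)
```

With a strict priority queue the wasted count stays small; the paper's analysis bounds how much this grows when the queue only guarantees relaxed ordering.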
The Power of Choice in Priority Scheduling
Consider the following random process: we are given n queues, into which
elements of increasing labels are inserted uniformly at random. To remove an
element, we pick two queues at random, and remove the element of lower label
(higher priority) among the two. The cost of a removal is the rank of the label
removed, among labels still present in any of the queues, that is, the distance
from the optimal choice at each step. Variants of this strategy are prevalent
in state-of-the-art concurrent priority queue implementations. Nonetheless, it
is not known whether such implementations provide any rank guarantees, even in
a sequential model.
We answer this question, showing that this strategy provides surprisingly
strong guarantees: Although the single-choice process, where we always insert
and remove from a single randomly chosen queue, has degrading cost, going to
infinity as we increase the number of steps, in the two-choice process the
expected rank of a removed element is O(n), while the expected worst-case
cost is O(n log n). These bounds are tight, and hold irrespective of the
number of steps for which we run the process.
The argument is based on a new technical connection between "heavily loaded"
balls-into-bins processes and priority scheduling.
Our analytic results inspire a new concurrent priority queue implementation,
which improves upon the state of the art in terms of practical performance.
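The two-choice process is straightforward to simulate. The sketch below is an illustration of the process as described above, not the paper's concurrent implementation; it inserts increasing labels into uniformly random queues and measures the rank of each removed element among the labels still present:

```python
import random

def two_choice_ranks(n_queues=8, n_elems=2000, n_removals=1000, seed=1):
    rng = random.Random(seed)
    queues = [[] for _ in range(n_queues)]
    # Insert elements of increasing labels into uniformly random queues.
    for label in range(n_elems):
        queues[rng.randrange(n_queues)].append(label)
    ranks = []
    for _ in range(n_removals):
        # Pick two queues at random; remove the head of lower label.
        a, b = rng.randrange(n_queues), rng.randrange(n_queues)
        heads = [q for q in (queues[a], queues[b]) if q]
        if not heads:
            continue
        q = min(heads, key=lambda qu: qu[0])
        removed = q.pop(0)
        # Rank = position of the removed label among all labels present.
        rank = 1 + sum(1 for qu in queues for x in qu if x < removed)
        ranks.append(rank)
    return sum(ranks) / len(ranks), max(ranks)

avg_rank, worst_rank = two_choice_ranks()
```

Running this for various n shows the average rank staying modest rather than growing with the number of steps, in line with the O(n) expectation quoted above.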
Controlling delay differentiation with priority jumps: analytical study
Supporting different services with different Quality of Service (QoS) requirements is not an easy task in modern telecommunication systems: an efficient priority scheduling discipline is of great importance. Fixed or static priority achieves maximal delay differentiation between different types of traffic, but may have too severe an impact on the performance of lower-priority traffic. In this paper, we propose a priority scheduling discipline with priority jumps to control the delay differentiation. In this scheduling discipline, packets can be promoted to a higher priority level in the course of time. We use probability generating functions to study the queueing system analytically. Some interesting mathematical challenges thereby arise. With some numerical examples, we finally show the impact of the priority jumps and of the system parameters.
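A toy slotted-time model illustrates the jump mechanism. The threshold-based promotion rule and the FIFO placement of jumped packets here are illustrative assumptions, not the paper's exact discipline:

```python
from collections import deque

def serve_with_jumps(arrivals, jump_after):
    """Slotted time, single server, one departure per slot.
    arrivals[t] lists the classes arriving at slot t (1 = high, 2 = low).
    A low-priority packet that has waited jump_after slots jumps to the
    high-priority queue (appended behind packets already there)."""
    high, low = deque(), deque()          # entries are (arrival_time, class)
    delays = {1: [], 2: []}
    t = 0
    while t < len(arrivals) or high or low:
        if t < len(arrivals):
            for cls in arrivals[t]:
                (high if cls == 1 else low).append((t, cls))
        while low and t - low[0][0] >= jump_after:
            high.append(low.popleft())    # the priority jump
        src = high if high else low
        if src:
            arr, cls = src.popleft()
            delays[cls].append(t - arr)
        t += 1
    return delays

delays = serve_with_jumps([[1, 2], [1], [1], []], jump_after=2)
# The class-2 packet jumps at t = 2 and is served at t = 3,
# instead of waiting until all class-1 traffic has drained.
```

Lowering jump_after shifts delay from the low-priority class to the high-priority class, which is exactly the differentiation knob the paper studies analytically.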
A distributed Real-Time Java system based on CSP
CSP is a fundamental concept for developing software for distributed real-time systems. The CSP paradigm constitutes a natural addition to object orientation and offers higher-order multithreading constructs. The CSP channel concept that has been implemented in Java deals with single- and multiprocessor environments and also takes care of real-time priority scheduling requirements. For this, the notion of priority and scheduling has been carefully examined, and as a result it was reasoned that priority scheduling should be attached to the communicating channels rather than to the processes. In association with channels, a priority-based parallel construct is developed for composing processes, hiding threads and priority indexing from the user. This approach simplifies the use of priorities for the object-oriented paradigm. Moreover, in the proposed system, the notion of scheduling is no longer connected to the operating system but has become part of the application instead.
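The idea of attaching priority to channels rather than to processes can be sketched as a prioritized select over channels. This is a Python toy under its own assumptions, not the paper's Java/CSP API:

```python
import queue

class PrioritizedAlt:
    """Toy sketch: priority is attached to channels by listing them
    highest-priority first (priority indexing), and select always
    services the highest-priority ready channel."""
    def __init__(self, *channels):
        self.channels = channels          # priority-indexed, high to low
    def select(self):
        while True:                       # busy-wait; fine for a toy
            for ch in self.channels:
                try:
                    return ch.get_nowait()
                except queue.Empty:
                    pass

hi, lo = queue.Queue(), queue.Queue()
lo.put("low")
hi.put("high")
alt = PrioritizedAlt(hi, lo)
first = alt.select()   # → "high", even though "low" was enqueued first
```

The process using the construct never names threads or numeric priorities; the ordering of the channels carries all the scheduling information, which mirrors the design decision argued for above.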
Effect of class clustering on delay differentiation in priority scheduling
Priority scheduling is the most viable way to implement QoS differentiation in telecommunication networks. Most studies on priority scheduling do not take into account possible class clustering. In particular, they assume that different classes occur randomly and independently in the arrival stream of packets. In reality, however, packets of the same class may have the tendency to arrive in clusters. By using existing results, it is shown that class clustering may have a severe impact on the achievable delay differentiation in priority scheduling.
Dual Priority Scheduling is Not Optimal
In dual priority scheduling, periodic tasks are executed in a fixed-priority manner, but each job has two phases with different priorities. The second phase is entered after a fixed amount of time has passed since the release of the job, at which point the job changes its priority. Dual priority scheduling was introduced by Burns and Wellings in 1993 and was shown to successfully schedule many task sets that are not schedulable with ordinary (single) fixed-priority scheduling. Burns and Wellings conjectured that dual priority scheduling is an optimal scheduling algorithm for synchronous periodic tasks with implicit deadlines on preemptive uniprocessors. We demonstrate the falsity of this conjecture, as well as of some related conjectures that have since been stated. This is achieved by means of computer-verified counterexamples.
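A dual-priority schedule of a synchronous, implicit-deadline task set is easy to simulate exhaustively over one hyperperiod, which is essentially how such counterexamples can be machine-checked. The sketch below uses hypothetical task parameters; the paper's actual counterexamples come from searching much larger configuration spaces:

```python
from math import gcd
from functools import reduce

def dual_priority_schedulable(tasks):
    """tasks: list of (C, T, p1, p2, promo): WCET C, period T (= implicit
    deadline), initial priority p1, promoted priority p2 (lower number =
    higher priority), promotion promo time units after release.
    Simulates the synchronous release over one hyperperiod, one time
    unit per step, fully preemptive uniprocessor."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (t[1] for t in tasks))
    rem = [0] * len(tasks)        # remaining work of current job
    rel = [0] * len(tasks)        # release time of current job
    for t in range(hyper):
        for i, (C, T, _, _, _) in enumerate(tasks):
            if t % T == 0:
                if rem[i] > 0:    # previous job missed its deadline
                    return False
                rem[i], rel[i] = C, t
        ready = [i for i in range(len(tasks)) if rem[i] > 0]
        if ready:
            def prio(i):
                _, _, p1, p2, promo = tasks[i]
                # Second phase entered promo units after release.
                return p2 if t - rel[i] >= promo else p1
            rem[min(ready, key=prio)] -= 1
    return all(r == 0 for r in rem)

# Hypothetical feasible set; promo = T means no job is ever promoted,
# so this degenerates to ordinary fixed-priority scheduling.
feasible = dual_priority_schedulable(
    [(1, 2, 1, 1, 2), (1, 4, 2, 2, 4), (1, 4, 3, 3, 4)])   # → True
overloaded = dual_priority_schedulable(
    [(1, 1, 1, 1, 1), (1, 2, 2, 2, 2)])                    # → False
```

Disproving the conjecture then amounts to exhibiting a feasible task set for which this check fails for every choice of priorities and promotion times.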
Dual Priority Scheduling is Not Optimal (Artifact)
In dual priority scheduling, periodic tasks are executed in a fixed-priority manner, but each job has two phases with different priorities. The second phase is entered after a fixed amount of time has passed since the release of the job, at which point the job changes its priority. Dual priority scheduling was introduced by Burns and Wellings in 1993 and was shown to successfully schedule many task sets that are not schedulable with ordinary (single) fixed-priority scheduling. Burns and Wellings conjectured that dual priority scheduling is an optimal scheduling algorithm for synchronous periodic tasks with implicit deadlines on preemptive uniprocessors. The related article presents counterexamples to this conjecture, and to some related conjectures that have since been stated. This artifact verifies the counterexamples by means of exhaustive simulations of vast numbers of configurations.