    Generalized tardiness bounds for global multiprocessor scheduling

    We consider the issue of deadline tardiness under global multiprocessor scheduling algorithms. We present a general tardiness-bound derivation that is applicable to a wide variety of such algorithms (including some whose tardiness behavior has not been analyzed before). Our derivation is very general: job priorities may change rather arbitrarily at runtime, capacity restrictions may exist on certain processors, and, under certain conditions, non-preemptive regions are allowed. Our results show that, with the exception of static-priority algorithms, most global algorithms considered previously have bounded tardiness. In addition, our results provide a simple means for checking whether tardiness is bounded under newly developed algorithms.
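
    For context, the central terms in this abstract have standard definitions; the following is a brief restatement in generic notation (the symbols below are illustrative, not taken from the paper):

        \[
          \mathrm{lateness}(J) = f_J - d_J, \qquad
          \mathrm{tardiness}(J) = \max(0,\, f_J - d_J),
        \]
        where $f_J$ is the completion time of job $J$ and $d_J$ is its absolute deadline.
        Tardiness is bounded under a given scheduler if there exists a constant $B$ such that
        $\mathrm{tardiness}(J) \le B$ holds for every job $J$ of every task.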

    A hierarchical multiprocessor bandwidth reservation scheme with timing guarantees

    A multiprocessor scheduling scheme is presented for supporting hierarchical containers that encapsulate sporadic soft and hard real-time tasks. In this scheme, each container is allocated a specified bandwidth, which it uses to schedule its children (some of which may also be containers). This scheme is novel in that, with only soft real-time tasks, no utilization loss is incurred when provisioning containers, even in arbitrarily deep hierarchies. The presented experiments show that the proposed scheme performs well compared to conventional real-time scheduling techniques that do not provide container isolation.

    Fair lateness scheduling: reducing maximum lateness in G-EDF-like scheduling

    In prior work on soft real-time (SRT) multiprocessor scheduling, tardiness bounds have been derived for a variety of scheduling algorithms, most notably the global earliest-deadline-first (G-EDF) algorithm. In this paper, we devise G-EDF-like (GEL) schedulers, which have identical implementations to G-EDF, and therefore the same overheads, but provide better tardiness bounds. We discuss how to analyze these schedulers and propose methods to determine scheduler parameters that meet several different tardiness-bound criteria. We employ linear programs to adjust such parameters to optimize arbitrary tardiness criteria and to analyze lateness bounds (lateness is related to tardiness). We also propose a particular scheduling algorithm, the global fair lateness (G-FL) algorithm, to minimize maximum absolute lateness bounds. Unlike the other schedulers described in this paper, G-FL requires only linear programming for analysis. We argue that our proposed schedulers, such as G-FL, should replace G-EDF for SRT applications.
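
    As background on how such schedulers are parameterized, the sketch below computes per-task relative priority points; the task fields and the specific G-FL formula shown are assumptions for illustration (the formula commonly cited for G-FL), not details quoted from this abstract:

        # Sketch: a GEL scheduler orders jobs by "priority point" = release time plus a fixed
        # per-task constant Y_i; G-EDF is the special case Y_i = D_i (the relative deadline).
        from dataclasses import dataclass

        @dataclass
        class Task:
            wcet: float       # C_i: worst-case execution time
            deadline: float   # D_i: relative deadline

        def gedf_relative_priority_point(task: Task) -> float:
            # G-EDF: Y_i = D_i, so jobs are ordered by absolute deadline.
            return task.deadline

        def gfl_relative_priority_point(task: Task, m: int) -> float:
            # G-FL (assumed formula): Y_i = D_i - ((m - 1) / m) * C_i on m processors.
            return task.deadline - ((m - 1) / m) * task.wcet

        def job_priority_point(release_time: float, y_i: float) -> float:
            # A job's priority point; smaller values mean higher priority.
            return release_time + y_i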

    Global EDF Scheduling for Parallel Real-Time Tasks

    As multicore processors become ever more prevalent, it is important for real-time programs to take advantage of intra-task parallelism in order to support computation-intensive applications with tight deadlines. In this thesis, we consider the Global Earliest Deadline First (GEDF) scheduling policy for task sets consisting of parallel tasks. Each task can be represented by a directed acyclic graph (DAG) whose nodes represent computational work and whose edges represent dependences between nodes. In this model, we prove that GEDF provides a capacity augmentation bound of 4-2/m and a resource augmentation bound of 2-1/m. The capacity augmentation bound acts as a linear-time schedulability test, since it guarantees that any task set with total utilization of at most m/(4-2/m), in which each task's critical-path length is at most 1/(4-2/m) of its deadline, is schedulable on m cores under GEDF. In addition, we present a pseudo-polynomial-time fixed-point schedulability test for GEDF; this test uses a carry-in work calculation based on the proof of the capacity bound. Finally, we present and evaluate a prototype platform, called PGEDF, for scheduling parallel tasks using GEDF. PGEDF is built by combining the GNU OpenMP runtime system and the LITMUS_RT operating system. This platform allows programmers to write parallel OpenMP tasks and specify real-time parameters such as deadlines for tasks. We perform two kinds of experiments to evaluate the performance of GEDF for parallel tasks: (1) we run numerical simulations for DAG tasks, and (2) we execute randomly generated tasks using PGEDF. Both sets of experiments indicate that GEDF performs surprisingly well and outperforms an existing scheduling technique that involves task decomposition.
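
    The capacity augmentation bound quoted above translates directly into a linear-time test; below is a minimal sketch under an assumed DAG-task representation (total work, critical-path length, and an implicit deadline equal to the period):

        # Accept a DAG task set on m cores under GEDF if (1) total utilization is at most
        # m / (4 - 2/m) and (2) every task's critical-path length is at most
        # deadline / (4 - 2/m), mirroring the two conditions stated in the abstract.
        from dataclasses import dataclass

        @dataclass
        class DagTask:
            work: float      # total work across all DAG nodes
            span: float      # critical-path length
            deadline: float  # relative deadline (assumed equal to the period)

        def gedf_capacity_test(tasks: list[DagTask], m: int) -> bool:
            bound = 4.0 - 2.0 / m
            total_utilization = sum(t.work / t.deadline for t in tasks)
            if total_utilization > m / bound:
                return False
            return all(t.span <= t.deadline / bound for t in tasks)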

    Compositional Analysis Techniques For Multiprocessor Soft Real-Time Scheduling

    The design of systems in which timing constraints must be met (real-time systems) is being affected by three trends in hardware and software development. First, in the past few years, multiprocessor and multicore platforms have become standard in desktop and server systems and continue to expand in the domain of embedded systems. Second, real-time concepts are being applied in the design of general-purpose operating systems (like Linux), and attempts are being made to tailor these systems to support tasks with timing constraints. Third, in many embedded systems, it is now more economical to use a single multiprocessor instead of several uniprocessor elements; this motivates the need to share the increasing processing capacity of multiprocessor platforms among several applications, supplied by different vendors and each having different timing constraints, in a manner that ensures these constraints are met. These trends suggest the need for mechanisms that enable real-time tasks to be bundled into multiple components and integrated in larger settings. There is a substantial body of prior work on the multiprocessor schedulability analysis of real-time systems modeled as periodic and sporadic task systems. Unfortunately, these standard task models can be pessimistic if long chains of dependent tasks are being analyzed. In work that introduces less pessimistic and more sophisticated workload models, only partitioned scheduling is assumed, so that each task is statically assigned to some processor; this results in pessimism in the amount of needed processing resources. In this dissertation, we extend prior work on multiprocessor soft real-time scheduling and construct new analysis tools that can be used to design component-based soft real-time systems. These tools allow multiprocessor real-time systems to be designed and analyzed for which standard workload and platform models are inapplicable and for which state-of-the-art uniprocessor and multiprocessor analysis techniques give results that are too pessimistic.

    Parallel Real-Time Scheduling for Latency-Critical Applications

    In order to provide safety guarantees or quality-of-service guarantees, many of today's systems consist of latency-critical applications, i.e., applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, and different system models and objectives have been considered. However, the computational requirement of a single job is still limited by the capacity of a single core. To provide increasingly complex application functionality and to complete higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs that can potentially utilize more than one core in parallel. However, there is little work on scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research focuses on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks that need to execute are known a priori and the goal is to guarantee the temporal correctness of the tasks prior to their execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution. For static real-time systems with parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic, and a novel federated scheduling, are proposed, analyzed, and implemented. These scheduling strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling, and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variations have good schedulability performance and are efficient in practice. For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency, and minimizing the average latency. For example, a simple First-In-First-Out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed, and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
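
    Of the strategies listed, federated scheduling has the most compact description; the sketch below shows its core-allocation step as the idea is commonly presented (the allocation formula and task fields are assumptions, not taken from this abstract):

        # Federated scheduling (assumed rule): a "heavy" parallel task (utilization > 1)
        # receives enough dedicated cores to finish its work beyond the critical path within
        # the slack remaining after the critical path; "light" tasks (utilization <= 1) share
        # the leftover cores as ordinary sequential tasks.
        import math
        from dataclasses import dataclass

        @dataclass
        class ParallelTask:
            work: float      # C_i: total work
            span: float      # L_i: critical-path length
            deadline: float  # D_i: relative deadline

        def dedicated_cores(task: ParallelTask) -> int:
            # Assumed allocation: n_i = ceil((C_i - L_i) / (D_i - L_i)) for heavy tasks, 0 otherwise.
            if task.work / task.deadline <= 1.0:
                return 0
            return math.ceil((task.work - task.span) / (task.deadline - task.span))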

    On the design and implementation of a cache-aware soft real-time scheduler for multicore platforms

    Real-time systems are those for which timing constraints must be satisfied. In this dissertation, research on multiprocessor real-time systems is extended to support multicore platforms, which contain multiple processing cores on a single chip. Specifically, this dissertation focuses on designing a cache-aware real-time scheduler to reduce shared cache miss rates, and increase the level of shared cache reuse, on multicore platforms when timing constraints must be satisfied. This scheduler, implemented in Linux, employs: (1) a scheduling method for real-time workloads that satisfies timing constraints while making scheduling choices that reduce shared cache miss rates; and (2) a profiler that quantitatively approximates the cache impact of every task during its execution. In experiments, it is shown that the proposed cache-aware scheduler can result in significantly reduced shared cache miss rates over other approaches. This is especially true when sufficient hardware support is provided, primarily in the form of cache-related performance monitoring features. It is also shown that scheduler-related overheads are comparable to those of other scheduling approaches, and therefore the overheads would not be expected to offset any reduction in cache miss rate. Finally, in experiments involving a multimedia server workload, it was found that the use of the proposed cache-aware scheduler allowed the size of the workload to be increased. Prior work in the area of cache-aware scheduling for multicore platforms has not addressed support for real-time workloads, and prior work in the area of real-time scheduling has not addressed shared caches on multicore platforms. For real-time workloads running on multicore platforms, a decrease in shared cache miss rates can result in a corresponding decrease in execution times, which may allow a larger real-time workload to be supported, or hardware requirements (or costs) to be reduced. As multicore platforms are becoming ubiquitous in many domains, including those in which real-time constraints must be satisfied, cache-aware scheduling approaches such as the one presented in this dissertation are of growing importance. If the chip manufacturing industry continues to adhere to the multicore paradigm (which is likely, given current projections), then such approaches should remain relevant as processors evolve.

    Tardiness Bounds and Overload in Soft Real-Time Systems

    In some systems, such as future generations of unmanned aerial vehicles (UAVs), different software running on the same machine will require different timing guarantees. For example, flight control software has hard real-time (HRT) requirements: if a job (i.e., an invocation of a program) completes late, then safety may be compromised, so jobs must be guaranteed to complete within short deadlines. However, mission control software is likely to have soft real-time (SRT) requirements: if a job completes slightly late, the result is not likely to be catastrophic, but lateness should never be unbounded. The global earliest-deadline-first (G-EDF) scheduler has been demonstrated to be useful for the multiprocessor scheduling of software with SRT requirements, and the multicore mixed-criticality (MC2) framework, which uses G-EDF for SRT scheduling, has been proposed to safely mix HRT and SRT work on multicore UAV platforms. This dissertation addresses limitations of this prior work. G-EDF is attractive for SRT systems because it allows the system to be fully utilized with reasonable overheads. Furthermore, previous analysis of G-EDF can provide "lateness bounds" on the amount of time between a job's deadline and its completion. However, smaller lateness bounds are preferable, and some programs may be more sensitive to lateness than others. In this dissertation, we explore the broader category of G-EDF-like (GEL) schedulers that have overhead characteristics identical to those of G-EDF. We show that by choosing GEL schedulers other than G-EDF, better lateness bounds can be achieved, and that certain modifications can further improve lateness bounds while maintaining reasonable overheads. Specifically, successive jobs from the same program can be permitted to run in parallel with each other, or jobs can be split into smaller pieces by the operating system. Previous analysis of MC2 has always used less pessimistic execution-time assumptions when analyzing SRT work than when analyzing HRT work. These assumptions can be violated, creating an overload that causes SRT guarantees to be violated. Furthermore, even in the expected case that such violations are transient, the system is not guaranteed to return to its normal operation. In this dissertation, we also provide a mechanism that can be used to achieve such recovery.