2,929 research outputs found

    Scheduling Self-Suspending Tasks: New and Old Results

    In computing systems, a job may suspend itself (before it finishes its execution) when it has to wait for certain results from other (usually external) activities. For real-time systems, such self-suspension behavior has been shown to induce performance degradation. Hence, researchers in the real-time systems community have devoted considerable effort to the design and analysis of scheduling algorithms that can alleviate the performance penalty due to self-suspension behavior. As self-suspension and delegation of parts of a job to non-bottleneck resources are natural in many applications, researchers in the operations research (OR) community have also explored scheduling algorithms for systems with such suspension behavior, called the master-slave problem in the OR community. This paper first reviews the results for the master-slave problem in the OR literature and explains their impact on several long-standing problems for scheduling self-suspending real-time tasks. For frame-based periodic real-time tasks, in which the periods of all tasks are identical and all jobs related to one frame are released synchronously, we explore different approximation metrics with respect to resource augmentation factors under different scenarios for both uniprocessor and multiprocessor systems, and demonstrate that different approximation metrics can create different levels of difficulty for the approximation. Our experimental results show that these more carefully designed schedules can significantly outperform the state of the art.
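
    As a point of reference for the segmented self-suspension model discussed above, the sketch below (an illustration with assumed parameters and names, not code from the paper) models a task whose jobs alternate computation segments and suspension intervals, and computes its suspension-oblivious utilization, i.e., the simplest baseline in which suspension time is charged as if it were execution time.

        /*
         * Illustrative sketch only (assumed parameters, not code from the paper):
         * a segmented self-suspending task alternates computation segments and
         * suspension intervals, e.g. C1 -> S1 -> C2.  The suspension-oblivious
         * baseline simply charges suspension time as if it were execution time.
         */
        #include <stdio.h>

        #define MAX_SEG 8

        struct ss_task {
            int    num_comp;        /* number of computation segments              */
            double comp[MAX_SEG];   /* WCET of each computation segment            */
            double susp[MAX_SEG];   /* maximum length of each suspension interval  */
            double period;          /* period (frame length for frame-based tasks) */
        };

        /* Suspension-oblivious utilization: (sum of C_j + sum of S_j) / T. */
        static double suspension_oblivious_util(const struct ss_task *t)
        {
            double demand = 0.0;
            for (int j = 0; j < t->num_comp; j++)
                demand += t->comp[j];
            for (int j = 0; j + 1 < t->num_comp; j++)   /* one fewer suspension than segments */
                demand += t->susp[j];
            return demand / t->period;
        }

        int main(void)
        {
            /* Example task: two computation segments separated by one suspension. */
            struct ss_task t = { .num_comp = 2, .comp = { 2.0, 3.0 },
                                 .susp = { 4.0 }, .period = 20.0 };
            printf("suspension-oblivious utilization: %.2f\n",
                   suspension_oblivious_util(&t));
            return 0;
        }

    Charging suspension as computation is safe but pessimistic; the more carefully designed schedules reported above aim to avoid exactly this kind of utilization loss.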

    Scheduling Self-Suspending Tasks: New and Old Results (Artifact)

    Novel Resource Management Mechanisms for Real-Time Scheduling in the Linux Kernel

    This thesis is part of an ongoing research activity at the Real-Time Systems Laboratory (ReTiS Lab), regarding the study and development of scheduling algorithms for real-time systems. The purpose of this work is the design and development of a new bandwidth reservation scheduling algorithm for managing time-critical tasks that may temporarily suspend their execution while waiting for events (self-suspending tasks). The resulting algorithm has been implemented in the Linux kernel. The current scheduling algorithm used in Linux to manage CPU bandwidth reservation (SCHED_DEADLINE) is based on a server mechanism called the Hard Constant Bandwidth Server (H-CBS). However, this mechanism was not designed to manage self-suspending tasks and may lead to deadline misses. To solve this problem, a new server mechanism, called the H-CBS-SO algorithm, has been studied. This thesis addresses the design and implementation of a Linux scheduling algorithm based on the H-CBS-SO reservation server. The introduction illustrates the basic notions of real-time systems and presents an overview of the current SCHED_DEADLINE policy and its implementation. Then, the thesis focuses on the explanation of the H-CBS-SO algorithm and how it is developed in the Linux kernel. A multiprocessor version of the algorithm is also reviewed, given the widespread success and diffusion of Symmetric MultiProcessor (SMP) systems. The full exploitation of the scheduling policy developed in the thesis requires an infrastructure for handling "periodic tasks" that is currently missing in Linux. Therefore, this thesis proposes a new mechanism for managing periodic tasks inside the Linux kernel and a user-space library for exploiting this new feature. Special attention is dedicated to the description of the debugging techniques used for checking the correctness of the kernel functions and the tools developed for measuring the achieved performance.
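
    For context, the listing below is a minimal user-space sketch of the existing SCHED_DEADLINE reservation interface (the H-CBS-based baseline that the thesis extends) via the sched_setattr system call; it does not show the proposed H-CBS-SO policy, and the budget, deadline and period values are arbitrary examples. Running it requires a reasonably recent kernel and appropriate privileges.

        /*
         * Minimal user-space sketch of the existing SCHED_DEADLINE reservation
         * interface (the H-CBS-based baseline), shown for context only; it does
         * not implement the H-CBS-SO policy proposed in the thesis.  The budget,
         * deadline and period values below are arbitrary examples.
         */
        #define _GNU_SOURCE
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <sys/types.h>

        #ifndef SCHED_DEADLINE
        #define SCHED_DEADLINE 6
        #endif

        /* Defined locally with the field layout expected by the kernel, since
         * older C libraries do not expose struct sched_attr. */
        struct dl_sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;   /* ns */
            uint64_t sched_deadline;  /* ns */
            uint64_t sched_period;    /* ns */
        };

        static int dl_setattr(pid_t pid, const struct dl_sched_attr *attr)
        {
            return syscall(SYS_sched_setattr, pid, attr, 0);
        }

        int main(void)
        {
            struct dl_sched_attr attr = {
                .size           = sizeof(attr),
                .sched_policy   = SCHED_DEADLINE,
                .sched_runtime  = 10ULL * 1000 * 1000,   /* 10 ms budget per period  */
                .sched_deadline = 100ULL * 1000 * 1000,  /* 100 ms relative deadline */
                .sched_period   = 100ULL * 1000 * 1000,  /* 100 ms period            */
            };

            if (dl_setattr(0, &attr) < 0) {              /* 0 = calling thread */
                perror("sched_setattr");
                return 1;
            }
            /* ... the periodic, possibly self-suspending work would run here ... */
            return 0;
        }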

    k2U: A General Framework from k-Point Effective Schedulability Analysis to Utilization-Based Tests

    To deal with a large variety of workloads in different application domains in real-time embedded systems, a number of expressive task models have been developed. For each individual task model, researchers tend to develop different types of techniques for deriving schedulability tests with different computational complexity and performance. In this paper, we present a general schedulability analysis framework, namely the k2U framework, that can potentially be applied to analyze a large set of real-time task models under any fixed-priority scheduling algorithm, for both uniprocessor and multiprocessor scheduling. The key to k2U is a k-point effective schedulability test, which can be viewed as a "blackbox" interface. For any task model, if a corresponding k-point effective schedulability test can be constructed, then a sufficient utilization-based test can be automatically derived. We show the generality of k2U by applying it to different task models, which results in new and improved tests compared to the state of the art. A similar concept, based on testing only k points but with a different formulation, has been studied by us in another framework, called k2Q, which provides quadratic bounds or utilization bounds. With their quadratic and hyperbolic forms, the k2Q and k2U frameworks can be used to derive many quantitative metrics, such as total utilization bounds and speed-up factors, not only for uniprocessor scheduling but also for multiprocessor scheduling. These frameworks can be viewed as a "blackbox" interface for schedulability tests and response-time analysis.
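
    As one concrete instance of the hyperbolic form mentioned above, stated here for illustration rather than taken from the paper, the classical sufficient test for rate-monotonic scheduling of n independent implicit-deadline sporadic tasks on a uniprocessor can be written (in LaTeX) as

        \prod_{i=1}^{n} \bigl( U_i + 1 \bigr) \;\le\; 2,
        \qquad\text{where } U_i = \frac{C_i}{T_i},

    with C_i the worst-case execution time and T_i the minimum inter-arrival time of task i; the k2U framework derives sufficient tests of this hyperbolic shape for more general task models and scheduling settings.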

    Supporting Read/Write Applications in Embedded Real-time Systems via Suspension-aware Analysis

    In many embedded real-time systems, applications often interact with I/O devices via read/write operations, which may incur considerable suspension delays. Unfortunately, prior analysis methods for validating timing correctness in embedded systems become quite pessimistic when suspension delays are present. In this paper, we consider the problem of supporting two common types of I/O applications in a multiprocessor system, namely write-only applications and read-write applications. For the write-only application model, we present a much improved analysis technique that results in only O(m) suspension-related utilization loss, where m is the number of processors. For the second application model, we present a flexible I/O placement strategy and a corresponding new scheduling algorithm, which can completely circumvent the negative impact of read- and write-induced suspension delays. We illustrate the feasibility of the proposed I/O-placement-based schedule via a case-study implementation. Furthermore, the experiments presented herein show that the improvement in system utilization over prior methods is often significant.

    A Note on the Period Enforcer Algorithm for Self-Suspending Tasks

    The period enforcer algorithm for self-suspending real-time tasks is a technique for suppressing the "back-to-back" scheduling penalty associated with deferred execution. Originally proposed in 1991, the algorithm has attracted renewed interest in recent years. This note revisits the algorithm in the light of recent developments in the analysis of self-suspending tasks, carefully re-examines and explains its underlying assumptions and limitations, and points out three observations that have not been made in the literature to date: (i) period enforcement is not strictly superior (compared to the base case without enforcement) as it can cause deadline misses in self-suspending task sets that are schedulable without enforcement; (ii) to match the assumptions underlying the analysis of the period enforcer, a schedulability analysis of self-suspending tasks subject to period enforcement requires a task set transformation for which no solution is known in the general case, and which is subject to exponential time complexity (with current techniques) in the limited case of a single self-suspending task; and (iii) the period enforcer algorithm is incompatible with all existing analyses of suspension-based locking protocols, and can in fact cause ever-increasing suspension times until a deadline is missed.

    Timing Analysis of Fixed Priority Self-Suspending Sporadic Tasks

    27th Euromicro Conference on Real-Time Systems (ECRTS 2015), Lund, Sweden. Many real-time systems include tasks that need to suspend their execution in order to externalize some of their operations or to wait for data, events or shared resources. Although commonly encountered in real-world systems, the timing analysis of such tasks is still limited due to the complexity of the problem. In this paper, we invalidate a claim made in one of the earlier works [1] that led to the common belief that the timing analysis of one self-suspending task interacting with non-self-suspending sporadic tasks is much easier than in the periodic case. This work highlights the complexity of the problem and presents a method to compute the exact worst-case response time (WCRT) of a self-suspending task with one suspension region. However, as the complexity of the analysis may grow rapidly with the number of tasks, we also define an optimization formulation to compute an upper bound on the WCRT for tasks with multiple suspension regions. In the experiments, our optimization framework outperforms all previous analysis techniques and often finds the exact WCRT.
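
    To make the baseline concrete, the sketch below implements the standard suspension-oblivious response-time iteration for a fixed-priority task with total suspension time S_k, in which suspension is simply charged as extra execution time. This is a simple, pessimistic bound of the kind the optimization framework above is designed to improve upon, not the exact WCRT method from the paper; the task parameters are made-up examples.

        /*
         * Illustrative baseline only, not the exact WCRT analysis from the paper:
         * suspension-oblivious response-time iteration for a fixed-priority task
         * whose total self-suspension time S_k is charged as extra execution time.
         * Higher-priority tasks are ordinary sporadic tasks (WCET C_j, minimum
         * inter-arrival time T_j).  All parameters below are made-up examples.
         */
        #include <math.h>
        #include <stdio.h>

        struct hp_task { double wcet; double period; };

        /* Returns an upper bound on the response time, or -1.0 if the iteration
         * exceeds the deadline (the sufficient test fails). */
        static double rta_suspension_oblivious(double c_k, double s_k, double d_k,
                                               const struct hp_task *hp, int n_hp)
        {
            double r = c_k + s_k, prev = 0.0;

            while (r != prev && r <= d_k) {
                prev = r;
                r = c_k + s_k;
                for (int j = 0; j < n_hp; j++)
                    r += ceil(prev / hp[j].period) * hp[j].wcet;
            }
            return (r <= d_k) ? r : -1.0;
        }

        int main(void)
        {
            struct hp_task hp[] = { { 1.0, 5.0 }, { 2.0, 10.0 } };
            double r = rta_suspension_oblivious(3.0, 2.0, 30.0, hp, 2);
            printf("suspension-oblivious WCRT bound: %.1f\n", r);   /* prints 9.0 */
            return 0;
        }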

    Schedulability Analysis of Task Sets with Upper- and Lower-Bound Temporal Constraints

    Increasingly, real-time systems must handle the self-suspension of tasks (that is, lower-bound wait times between subtasks) in a timely and predictable manner. A fast schedulability test that does not significantly overestimate the temporal resources needed to execute self-suspending task sets would be of benefit to these modern computing systems. In this paper, a polynomial-time test is presented that is known to be the first to handle nonpreemptive self-suspending task sets with hard deadlines, where each task has any number of self-suspensions. To construct the test, a novel priority scheduling policy is leveraged, the jth subtask first, which restricts the behavior of the self-suspending model to provide an analytical basis for an informative schedulability test. In general, the problem of sequencing according to both upper-bound and lower-bound temporal constraints requires an idling scheduling policy and is known to be nondeterministic polynomial-time hard. However, the tightness of the schedulability test and scheduling algorithm are empirically validated, and it is shown that the processor is able to effectively use up to 95% of the self-suspension time to execute tasks.
    Funding: Boeing Scientific Research Laboratories; National Science Foundation (U.S.) Graduate Research Fellowship (Grant 2388357).

    Reservation-Based Federated Scheduling for Parallel Real-Time Tasks

    This paper considers the scheduling of parallel real-time tasks with arbitrary deadlines. Each job of a parallel task is described as a directed acyclic graph (DAG). In contrast to prior work in this area, where decomposition-based scheduling algorithms are proposed based on the DAG structure and inter-task interference is analyzed as self-suspending behavior, this paper generalizes the federated scheduling approach. We propose a reservation-based algorithm, called reservation-based federated scheduling, that dominates federated scheduling. We provide general constraints for the design of such systems and prove that reservation-based federated scheduling has a constant speedup factor with respect to any optimal DAG task scheduler. Furthermore, the presented algorithm can be used in conjunction with any scheduler and scheduling analysis suitable for ordinary arbitrary-deadline sporadic task sets, i.e., without parallelism.
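
    As background, the sketch below shows the core-assignment rule of classic federated scheduling for implicit-deadline DAG tasks, the baseline the reservation-based algorithm above is reported to dominate: a heavy task with total work C_i, critical-path length L_i and deadline D_i receives ceil((C_i - L_i)/(D_i - L_i)) dedicated processors. This is an assumed illustration of the baseline only, not the reservation-based variant.

        /*
         * Sketch of the classic federated scheduling core-assignment rule for
         * implicit-deadline DAG tasks (the baseline that reservation-based
         * federated scheduling is reported to dominate); this is an assumed
         * illustration, not the reservation-based algorithm itself.
         */
        #include <math.h>
        #include <stdio.h>

        struct dag_task {
            double work;      /* C_i: total WCET of all DAG nodes          */
            double span;      /* L_i: critical-path (longest chain) length */
            double deadline;  /* D_i (equal to the period here)            */
        };

        /* Heavy tasks (work > deadline) get ceil((C_i - L_i) / (D_i - L_i))
         * dedicated processors; light tasks execute sequentially, sharing the
         * remaining processors. */
        static int dedicated_cores(const struct dag_task *t)
        {
            if (t->span >= t->deadline)
                return -1;                /* infeasible or degenerate; not handled */
            if (t->work <= t->deadline)
                return 1;                 /* light task: one core suffices */
            return (int)ceil((t->work - t->span) / (t->deadline - t->span));
        }

        int main(void)
        {
            struct dag_task t = { .work = 22.0, .span = 4.0, .deadline = 10.0 };
            printf("dedicated cores: %d\n", dedicated_cores(&t));   /* prints 3 */
            return 0;
        }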

    A note on slack enforcement mechanisms for self-suspending tasks

    This paper provides counterexamples to the slack enforcement mechanisms proposed by Lakshmanan and Rajkumar (Proceedings of the Real-Time and Embedded Technology and Applications Symposium (RTAS), pp. 3–12, 2010) for handling segmented self-suspending real-time tasks.