
    Quantifying the Exact Sub-optimality of Non-preemptive Scheduling


    A Note on the Suboptimality of Nonpreemptive Real-time Scheduling


    Exact Speedup Factors and Sub-Optimality for Non-Preemptive Scheduling

    Fixed priority scheduling is used in many real-time systems; however, both preemptive and non-preemptive variants (FP-P and FP-NP) are known to be sub-optimal compared to an optimal uniprocessor scheduling algorithm such as preemptive earliest deadline first (EDF-P). In this paper, we investigate the sub-optimality of fixed priority non-preemptive scheduling. Specifically, we derive the exact processor speedup factor required to guarantee the feasibility under FP-NP (i.e. schedulability assuming an optimal priority assignment) of any task set that is feasible under EDF-P. As a consequence of this work, we also derive a lower bound on the sub-optimality of non-preemptive EDF (EDF-NP). As this lower bound matches a recently published upper bound for the same quantity, it establishes the exact sub-optimality of EDF-NP. It is known that neither preemptive nor non-preemptive fixed priority scheduling dominates the other; in other words, there are task sets that are feasible on a processor of unit speed under FP-P that are not feasible under FP-NP, and vice versa. Hence, when comparing these two algorithms, there are non-trivial speedup factors in both directions. We derive the exact speedup factor required to guarantee the FP-NP feasibility of any FP-P feasible task set. Further, we derive the exact speedup factor required to guarantee the FP-P feasibility of any constrained-deadline FP-NP feasible task set.
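
    As a rough illustration of the speedup factor notion used in this abstract (not the paper's derivation), the sketch below measures, for one synthetic task set, the smallest processor speed at which a sufficient FP-NP test passes. It assumes synchronous, constrained-deadline tasks with integer base parameters; the deadline-monotonic ordering and the example task set are illustrative choices (an optimal priority assignment would need Audsley's algorithm), and `speedup_needed` is a hypothetical helper, not an established API.

```python
import math
from functools import reduce

def edf_p_feasible(tasks):
    """Exact EDF-P feasibility via the processor-demand criterion:
    dbf(t) <= t at every absolute deadline up to the hyperperiod."""
    hyper = reduce(math.lcm, (T for _, _, T in tasks))
    deadlines = sorted({D + k * T for C, D, T in tasks
                        for k in range(hyper // T)})
    return all(sum((math.floor((t - D) / T) + 1) * C
                   for C, D, T in tasks if t >= D) <= t
               for t in deadlines)

def fp_np_schedulable(tasks):
    """Sufficient FP-NP test: non-preemptive start-time iteration with
    deadline-monotonic priorities and worst-case blocking from one
    lower-priority, non-preemptable job."""
    tasks = sorted(tasks, key=lambda x: x[1])  # DM: shortest deadline first
    for i, (C, D, T) in enumerate(tasks):
        hp = tasks[:i]
        B = max((Cj for Cj, _, _ in tasks[i + 1:]), default=0)
        w = B  # w = latest start time; iterate w = B + hp interference
        while True:
            w_new = B + sum((math.floor(w / Tj) + 1) * Cj
                            for Cj, _, Tj in hp)
            if w_new + C > D:   # starts too late: deadline miss
                return False
            if w_new == w:
                break
            w = w_new
    return True

def speedup_needed(tasks, eps=1e-4):
    """Binary-search the smallest speed s such that the task set with
    execution times divided by s passes the FP-NP test (assumes the set
    becomes schedulable somewhere below the initial upper bound)."""
    lo, hi = 1.0, 16.0
    while hi - lo > eps:
        s = (lo + hi) / 2
        if fp_np_schedulable([(Ci / s, D, T) for Ci, D, T in tasks]):
            hi = s
        else:
            lo = s
    return hi

tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]   # hypothetical example set
assert edf_p_feasible(tasks)                  # feasible under EDF-P at speed 1
print(f"speed needed for the FP-NP test to pass: {speedup_needed(tasks):.3f}")
```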

    Exact Speedup Factors for Linear-Time Schedulability Tests for Fixed-Priority Preemptive and Non-preemptive Scheduling

    In this paper, we investigate the quality of several linear-time schedulability tests for preemptive and non-preemptive fixed-priority scheduling of uniprocessor systems. The metric used to assess the quality of these tests is the resource augmentation bound commonly known as the processor speedup factor. The speedup factor of a schedulability test corresponds to the smallest factor by which the processing speed of a uniprocessor needs to be increased such that any task set that is feasible under an optimal preemptive (non-preemptive) work-conserving scheduling algorithm is guaranteed to be schedulable with preemptive (non-preemptive) fixed priority scheduling if this schedulability test is used, assuming an appropriate priority assignment. We show the surprising result that the exact speedup factors for Deadline Monotonic (DM) priority assignment combined with sufficient linear-time schedulability tests for implicit-, constrained-, and arbitrary-deadline task sets are the same as those obtained for optimal priority assignment policies combined with exact schedulability tests. Thus, in terms of the speedup factors required, there is no penalty in using DM priority assignment and simple linear-time schedulability tests.
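
    For concreteness, here is one linear-time sufficient test of the kind this line of work studies: a closed-form response-time upper bound for preemptive fixed-priority scheduling in the style of Davis and Burns, evaluated with running sums in deadline-monotonic order. This is a hedged sketch illustrating the O(n) flavour, not the paper's specific tests; the function name and example task sets are ours.

```python
# One linear-time sufficient test: a closed-form response-time upper bound
# for preemptive fixed-priority scheduling (never optimistic), evaluated
# in O(n) over tasks in deadline-monotonic order after sorting.

def dm_linear_test(tasks):
    """tasks: list of (C, D, T) with constrained deadlines (D <= T)."""
    tasks = sorted(tasks, key=lambda x: x[1])    # deadline-monotonic order
    hp_util = 0.0                                # sum of U_j over hp(i)
    hp_slack = 0.0                               # sum of C_j * (1 - U_j)
    for C, D, T in tasks:
        if hp_util >= 1.0:
            return False
        # Upper bound: R <= (C_i + sum C_j*(1 - U_j)) / (1 - sum U_j)
        if (C + hp_slack) / (1.0 - hp_util) > D:
            return False
        hp_util += C / T
        hp_slack += C * (1.0 - C / T)
    return True

print(dm_linear_test([(1, 4, 4), (1, 6, 6), (1, 12, 12)]))  # True
print(dm_linear_test([(1, 4, 4), (2, 6, 6), (3, 12, 12)]))  # False, although
# exact response-time analysis accepts this set: the price of linear time.
```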

    On the Pitfalls of Resource Augmentation Factors and Utilization Bounds in Real-Time Scheduling

    In this paper, we take a careful look at speedup factors, utilization bounds, and capacity augmentation bounds. These three metrics have been widely adopted in real-time scheduling research as the de facto standard theoretical tools for assessing scheduling algorithms and schedulability tests. Despite this, it is not always clear how researchers and designers should interpret or use these metrics. In studying this area, we found a number of surprising results and, related to them, ways in which the metrics may be misinterpreted or misunderstood. We provide a perspective on the use of these metrics, guiding researchers on their meaning and interpretation, and helping to avoid pitfalls in their use. Finally, we propose and demonstrate the use of parametric augmentation functions as a means of providing nuanced information that may be more relevant in practical settings.
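
    As a small, self-contained example of why a single scalar metric can mislead, the sketch below contrasts two classic sufficient tests for rate-monotonic scheduling of implicit-deadline tasks: the Liu and Layland utilization bound and the Bini-Buttazzo hyperbolic bound. Both are tied to the same worst-case utilization bound n(2^(1/n) - 1), yet the hyperbolic test accepts strictly more task sets; the example utilizations are hypothetical.

```python
import math

def ll_bound_test(utils):
    """Liu & Layland: schedulable if sum(U_i) <= n * (2^(1/n) - 1)."""
    n = len(utils)
    return sum(utils) <= n * (2 ** (1 / n) - 1)

def hyperbolic_test(utils):
    """Bini & Buttazzo hyperbolic bound: schedulable if prod(U_i + 1) <= 2."""
    return math.prod(u + 1 for u in utils) <= 2.0

utils = [0.7, 0.1, 0.05]      # hypothetical per-task utilizations
print(ll_bound_test(utils))   # False: 0.85 > 3 * (2^(1/3) - 1) ~= 0.7798
print(hyperbolic_test(utils)) # True : 1.7 * 1.1 * 1.05 ~= 1.9635 <= 2
```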

    k2U: A General Framework from k-Point Effective Schedulability Analysis to Utilization-Based Tests

    To deal with a large variety of workloads in different application domains in real-time embedded systems, a number of expressive task models have been developed. For each individual task model, researchers tend to develop different types of techniques for deriving schedulability tests with different computational complexity and performance. In this paper, we present a general schedulability analysis framework, namely the k2U framework, that can potentially be applied to analyze a large set of real-time task models under any fixed-priority scheduling algorithm, for both uniprocessor and multiprocessor scheduling. The key to k2U is a k-point effective schedulability test, which can be viewed as a "blackbox" interface. For any task model, if a corresponding k-point effective schedulability test can be constructed, then a sufficient utilization-based test can be automatically derived. We show the generality of k2U by applying it to different task models, which results in new and improved tests compared to the state of the art. A similar concept, testing only k points but with a different formulation, has been studied by us in a companion framework called k2Q, which provides quadratic bounds or utilization bounds based on a different formulation of the schedulability test. With their hyperbolic and quadratic forms, the k2U and k2Q frameworks can be used to derive many quantitative measures, such as total utilization bounds and speedup factors, not only for uniprocessor scheduling but also for multiprocessor scheduling. These frameworks can be viewed as "blackbox" interfaces for schedulability tests and response-time analysis.
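
    To make the "k-point effective schedulability test" idea concrete, the sketch below shows the classic time-demand flavour of test that such frameworks build on: a constrained-deadline task under fixed-priority scheduling meets its deadline if its demand fits within t at some point t drawn from a finite candidate set. The function name and example parameters are ours, not the k2U API, and the mapping onto k2U's formal interface is more involved than this sketch.

```python
# Test only finitely many time points: for task k it suffices that
# C_k + sum(ceil(t/T_i) * C_i) <= t at SOME candidate point t; the classic
# candidates are multiples of higher-priority periods up to D_k.

import math

def k_point_test(C_k, D_k, hp):
    """hp: list of (C_i, T_i) for higher-priority tasks."""
    points = sorted({m * T for _, T in hp
                     for m in range(1, D_k // T + 1)} | {D_k})
    for t in points:
        demand = C_k + sum(math.ceil(t / T) * C for C, T in hp)
        if demand <= t:     # demand met by t: task k meets its deadline
            return True
    return False

print(k_point_test(C_k=2, D_k=10, hp=[(1, 4), (1, 5)]))  # True (t = 4 works)
```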

    Mixed Criticality Systems with Weakly-Hard Constraints

    Mixed criticality systems contain components of at least two criticality levels which execute on a common hardware platform in order to utilise resources more efficiently. Owing to the use of multiple worst-case execution time estimates, current adaptive mixed criticality scheduling policies adopt the notion of a low criticality mode, in which the task set executes under a set of more realistic temporal assumptions, and a high criticality mode, in which all low criticality tasks in the task set are descheduled to ensure that high criticality tasks can meet more conservative timing constraints derived from certification-approved methods. This issue is known as the service abrupt problem and is the topic of this work. The principles of real-time schedulability analysis are first reviewed, providing relevant background and theory on which mixed criticality systems analysis is based. The current state of the art in mixed criticality scheduling policies for uniprocessor systems is then discussed, along with the major challenges facing the adoption of such approaches in practice. To address the service abrupt issue, this work presents a new policy, Adaptive Mixed Criticality - Weakly Hard, which provides a guaranteed minimum quality of service for low criticality tasks in the event of a criticality mode change. Two offline response-time-based schedulability tests are derived for this model, and a dominance relationship between them is proved. Empirical evaluations are then used to assess the relative performance against previously published policies and their schedulability tests; the new policy is shown to offer a scalable performance trade-off between existing fixed priority preemptive and adaptive mixed criticality policies. The work concludes with possible directions for future research.
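
    For background on the mode-change behaviour described above, the sketch below implements the standard adaptive mixed criticality response-time-bound analysis (AMC-rtb, in the style of Baruah, Burns, and Davis): after the mode change, low criticality tasks contribute interference only up to the task's LO-mode response time and then receive no further service, which is exactly the abrupt cut-off the thesis's weakly-hard policy relaxes. This is a hedged baseline sketch, not the thesis's new tests, and the task-record layout is our own.

```python
import math

def rta(C, hp, limit):
    """Response-time iteration R = C + sum(ceil(R/T_j) * C_j), abandoned
    once R exceeds limit (the deadline)."""
    R = C
    while R <= limit:
        R_new = C + sum(math.ceil(R / T_j) * C_j for C_j, T_j in hp)
        if R_new == R:
            break
        R = R_new
    return R

def amc_rtb(tasks):
    """tasks: priority-ordered (highest first) dicts with keys
    'C_lo', 'C_hi' (None for LO tasks), 'D', 'T', 'crit' ('LO'/'HI')."""
    for i, tau in enumerate(tasks):
        hp = tasks[:i]
        # LO mode: every task runs with its LO budget.
        R_lo = rta(tau['C_lo'], [(j['C_lo'], j['T']) for j in hp], tau['D'])
        if R_lo > tau['D']:
            return False
        if tau['crit'] == 'HI':
            # After the mode change: HI budgets for HI tasks; LO-task
            # interference frozen at its contribution up to R_lo, since
            # LO tasks are descheduled (the abrupt loss of service).
            hpH = [(j['C_hi'], j['T']) for j in hp if j['crit'] == 'HI']
            lo_interf = sum(math.ceil(R_lo / j['T']) * j['C_lo']
                            for j in hp if j['crit'] == 'LO')
            if rta(tau['C_hi'] + lo_interf, hpH, tau['D']) > tau['D']:
                return False
    return True

tasks = [  # hypothetical, priority-ordered example
    {'C_lo': 1, 'C_hi': 2, 'D': 4, 'T': 4, 'crit': 'HI'},
    {'C_lo': 1, 'C_hi': None, 'D': 6, 'T': 6, 'crit': 'LO'},
    {'C_lo': 1, 'C_hi': 3, 'D': 12, 'T': 12, 'crit': 'HI'},
]
print(amc_rtb(tasks))  # True for this example
```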