    Tightening the Bounds on Cache-Related Preemption Delay in Fixed Preemption Point Scheduling

    Limited Preemptive Fixed Preemption Point scheduling (LP-FPP) has the ability to decrease and control preemption-related overheads in real-time task systems, compared to other limited or fully preemptive scheduling approaches. However, existing methods for computing preemption overheads in LP-FPP systems rely on over-approximations in the evicting cache block (ECB) calculations, potentially leading to pessimistic schedulability analysis. In this paper, we propose a novel method for preemption cost calculation that exploits the benefits of the LP-FPP task model at both the scheduling and the cache analysis level. The method identifies certain infeasible preemption combinations based on analysis at the scheduling level and combines them with cache analysis information into a constraint problem from which less pessimistic upper bounds on cache-related preemption delay (CRPD) can be derived. The evaluation results indicate that our proposed method has the potential to significantly reduce the upper bound on CRPD, by up to 50% in our experiments, compared to the existing over-approximating calculations of the eviction scenarios.
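
    As a point of reference for the over-approximation mentioned above, the classical ECB-based CRPD bound charges the reload of every useful cache block (UCB) of the preempted task that is also an evicting cache block of the preempting task. The Python sketch below illustrates only that baseline; the block sets and the block reload time are hypothetical, and the constraint-based method of the paper is designed to tighten precisely this kind of bound.

```python
# Sketch of the classical ECB/UCB-based CRPD over-approximation that the
# paper tightens.  Block sets and the block reload time (BRT) are
# hypothetical values chosen for illustration only.

BRT = 8e-6  # assumed time to reload one cache block (seconds)

def crpd_upper_bound(ucb_preempted, ecb_preempting, brt=BRT):
    """Upper-bound the cache-related preemption delay of one preemption:
    only useful cache blocks of the preempted task that may actually be
    evicted by the preempting task need to be reloaded."""
    return brt * len(ucb_preempted & ecb_preempting)

# Hypothetical cache-block sets (e.g. cache-set indices touched by each task).
ucb_low_prio  = {2, 3, 5, 7, 11, 13}      # useful blocks at a preemption point
ecb_high_prio = {1, 2, 3, 4, 5, 6, 7, 8}  # blocks the preempting task may evict

print(crpd_upper_bound(ucb_low_prio, ecb_high_prio))  # 4 evicted blocks * BRT
```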

    Tightening the Bounds on Feasible Preemption Points

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. This paper makes multiple contributions. 1) We bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions, including instruction cache and pipeline effects. We show that, when cache preemption is considered, the critical instant does not occur upon simultaneous release of all tasks. 2) We develop analysis methods to calculate upper bounds on the number of possible preemption points for each job of a task. To make these bounds tight, we consider the entire range between the best-case and worst-case execution times (BCET and WCET) of higher-priority jobs. The effects of cache interference are integrated into the WCET calculations by using a feedback mechanism to interact with a static timing analyzer. Experiments show significant improvements in the tightness of the bounds: up to an order of magnitude over two prior methods and up to half an order of magnitude over a third, for (a) the number of preemptions, (b) the WCET, and (c) the response time of a task. Overall, this work contributes a calculation of the worst-case preemption delay under consideration of data caches.
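
    As a rough illustration of how preemption counts and preemption delays enter such bounds, the sketch below iterates a standard fixed-priority response-time recurrence in which every higher-priority release is charged one assumed per-preemption delay. All task parameters are hypothetical, and the paper's analysis is considerably tighter, since it exploits the BCET/WCET range of higher-priority jobs and data-cache information.

```python
# Minimal sketch: fixed-priority response-time analysis in which every
# higher-priority job release is (pessimistically) charged one preemption
# delay gamma.  All task parameters are hypothetical.
import math

def response_time(wcet, hp_tasks, gamma):
    """hp_tasks: list of (wcet_hp, period_hp) for higher-priority tasks."""
    r = wcet
    while True:
        # each release of a higher-priority job may preempt at most once
        preemptions = sum(math.ceil(r / t) for _, t in hp_tasks)
        interference = sum(math.ceil(r / t) * c for c, t in hp_tasks)
        r_next = wcet + interference + preemptions * gamma
        if r_next == r:
            return r, preemptions
        r = r_next

# Hypothetical task set: WCETs and periods in microseconds, gamma per preemption.
print(response_time(wcet=400, hp_tasks=[(100, 1000), (50, 500)], gamma=20))
```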

    Limited Preemptive Scheduling for Real-Time Systems: a Survey

    The question of whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed-priority systems, each approach has advantages and disadvantages, and neither dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.

    Optimal Selection of Preemption Points to Minimize Preemption Overhead

    A central issue in verifying the schedulability of hard real-time systems is the correct evaluation of task execution times. These values are significantly influenced by the preemption overhead, which mainly includes the cache-related delays and the context switch times introduced by each preemption. Since such an overhead significantly depends on the particular point in the code where preemption takes place, this paper proposes a method for placing suitable preemption points in each task in order to maximize the chances of finding a schedulable solution. In previous work, we presented a method for the optimal selection of preemption points under the restrictive assumption of a fixed preemption cost, identical for each preemption point. In this paper, we remove that assumption, exploring a more realistic and complex scenario where the preemption cost varies throughout the task code. Instead of modeling the problem with an integer programming formulation, with exponential worst-case complexity, we derive an optimal algorithm that has linear time and space complexity. This somewhat surprising result allows selecting the best preemption points even in complex scenarios with a large number of potential preemption locations. Experimental results are also presented to show the effectiveness of the proposed approach in increasing system schedulability.
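
    A simplified quadratic-time dynamic program conveys the flavour of the selection problem; it is not the authors' linear-time algorithm, and the exact accounting of where the preemption cost is charged is an assumption made for illustration. Block WCETs, per-point costs, and the maximum non-preemptive region Q below are hypothetical.

```python
# Simplified quadratic-time sketch of preemption-point selection.  The task
# is a sequence of basic blocks; a potential preemption point lies after each
# block and carries its own cost.  Every non-preemptive region (here: the
# cost of the point opening the region plus the blocks it covers) must not
# exceed Q, and the total cost of the selected points is minimized.

INF = float("inf")

def select_preemption_points(blocks, costs, Q):
    """blocks[i]: WCET of basic block i; costs[i]: cost of the potential
    point after block i (i < len(blocks) - 1); Q: maximum non-preemptive
    region.  Returns (total preemption cost, selected point indices)."""
    n = len(blocks)
    pref = [0]
    for b in blocks:
        pref.append(pref[-1] + b)
    span = lambda i, j: pref[j + 1] - pref[i]  # WCET of blocks i..j

    if span(0, n - 1) <= Q:                    # no preemption point needed
        return 0, []

    dp = [INF] * (n - 1)                       # dp[j]: best cost with point j selected
    choice = [None] * (n - 1)
    for j in range(n - 1):
        if span(0, j) <= Q:                    # first region: start .. point j
            dp[j] = costs[j]
        for i in range(j):                     # previous selected point i
            if dp[i] < INF and costs[i] + span(i + 1, j) <= Q:
                if dp[i] + costs[j] < dp[j]:
                    dp[j] = dp[i] + costs[j]
                    choice[j] = i

    best, last = INF, None
    for j in range(n - 1):                     # last region: point j .. end
        if dp[j] < INF and costs[j] + span(j + 1, n - 1) <= Q and dp[j] < best:
            best, last = dp[j], j

    points = []
    while last is not None:
        points.append(last)
        last = choice[last]
    return best, sorted(points)

# Hypothetical task: block WCETs, per-point costs, and a non-preemptive limit.
print(select_preemption_points(blocks=[4, 6, 3, 5, 2], costs=[3, 1, 4, 2], Q=12))
```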

    Explicit Preemption Placement For Real-Time Conditional Code Via Graph Grammars And Dynamic Programming

    Traditional worst-case execution time (WCET) analysis must make very pessimistic assumptions regarding the cost of preemptions for a real-time job. For every potential preemption point, the analysis must add to the WCET of a job the cache-related preemption delay (CRPD) incurred due to contention for memory resources with other jobs in the system. However, recent work has shown that the CRPD can vary at each preemption point (due to the cache lines that must be reloaded for the code following the preemption). Using this observation and information obtained from schedulability analysis on the maximum length of the non-preemptive region of a job, we seek to find the optimal set of explicit preemption points (EPPs) that minimizes the WCET and ensures system schedulability. Utilizing graph grammars and dynamic programming, we develop a pseudo-polynomial-time algorithm that is capable of analyzing jobs that can be represented by control flowgraphs with arbitrarily nested conditional structures. This algorithm extends previous work that could only handle sequential flowgraphs. Exhaustive experiments show that the proposed approach is able to significantly improve the bounds on the worst-case execution times of limited preemptive tasks.
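
    The preemption-placement dynamic program itself is involved, but the structural, bottom-up traversal of arbitrarily nested conditional code that such an analysis builds on can be sketched briefly. The block below uses hypothetical node classes and block costs and only shows how per-path costs combine (sums across a sequence, worst case across the branches of a conditional); it is not the authors' algorithm, which additionally tracks feasible preemption-point placements at every node.

```python
# Sketch of a bottom-up traversal over arbitrarily nested conditional
# structures: sequences add up, a conditional contributes its worst branch.
# Node classes and costs are hypothetical.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Block:                # straight-line basic block
    wcet: int

@dataclass
class Seq:                  # sequential composition of regions
    parts: List["Region"]

@dataclass
class Cond:                 # if/else with arbitrarily nested branches
    branches: List["Region"]

Region = Union[Block, Seq, Cond]

def wcet(region: Region) -> int:
    if isinstance(region, Block):
        return region.wcet
    if isinstance(region, Seq):
        return sum(wcet(p) for p in region.parts)
    return max(wcet(b) for b in region.branches)   # Cond: worst branch

# Hypothetical job: a block, then a conditional whose first branch nests
# another conditional.
job = Seq([Block(5),
           Cond([Seq([Block(2), Cond([Block(7), Block(3)])]),
                 Block(4)])])
print(wcet(job))   # 5 + max(2 + max(7, 3), 4) = 14
```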

    Runtime CRPD management for rate-based scheduling

    Temporal isolation is an increasingly relevant concern, in particular for ARINC-653 and virtualisation-based systems. Traditional approaches like the rate-based scheduling framework RBED do not take into account the impact of preemptions in terms of loss of working set in the acceleration hardware (e.g. caches). While some improvements have been suggested in the literature, they are overly heavy in the presence of small high-priority tasks such as interrupt service routines. Within this paper we propose an approach enabling adaptive assessment of this preemption delay in a temporal isolation framework, with special consideration of the capabilities and limitations of the approach. This work was supported by the Portuguese Science and Technology Foundation (FCT) (CISTER FCT-608) and ARTEMIS-JU (RECOMP project ARTEMIS/0202/2009).

    Reducing Timing Interferences in Real-Time Applications Running on Multicore Architectures

    We introduce a unified WCET analysis and scheduling framework for real-time applications deployed on multicore architectures. Our method does not follow a particular programming model, meaning that any piece of existing code (in particular legacy code) can be reused, and aims at automatically reducing the worst-case number of timing interferences between tasks. Our method is based on the notion of Time Interest Points (TIPs), which are instructions that can generate and/or suffer from timing interferences. We show how such points can be extracted from the binary code of applications and selected prior to performing the WCET analysis. We then represent real-time tasks as sequences of time intervals separated by TIPs, and schedule those tasks so that the overall makespan (including the potential timing penalties incurred by interferences) is minimized. This scheduling phase is performed using an Integer Linear Programming (ILP) solver. Preliminary experiments on state-of-the-art benchmarks show promising results and pave the way for future extensions of the model and further optimizations.
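
    A toy version of the final scheduling step can be written with an off-the-shelf ILP solver. The sketch below (PuLP, with hypothetical tasks, interval durations, and big-M constant) sequences each task's intervals and forbids the interference-prone intervals of different tasks from overlapping while minimizing the makespan; the paper's actual formulation instead minimizes the timing penalties incurred by interferences rather than forbidding overlap outright.

```python
# Toy ILP sketch: each task is a sequence of intervals (delimited by TIPs)
# running on its own core, and intervals marked as interference-prone must
# not overlap across tasks.  Tasks and durations are hypothetical.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

# (duration, interference-prone?) for each interval of each task
tasks = {
    "T1": [(3, False), (2, True), (4, False)],
    "T2": [(2, True), (5, False), (1, True)],
}
M = 100                                    # big-M, larger than any makespan

prob = LpProblem("tip_schedule", LpMinimize)
start = {(t, k): LpVariable(f"s_{t}_{k}", lowBound=0)
         for t, ivs in tasks.items() for k in range(len(ivs))}
makespan = LpVariable("makespan", lowBound=0)
prob += makespan                           # objective: minimize the makespan

for t, ivs in tasks.items():
    for k, (d, _) in enumerate(ivs):
        if k + 1 < len(ivs):               # intervals of a task run in order
            prob += start[t, k] + d <= start[t, k + 1]
    prob += start[t, len(ivs) - 1] + ivs[-1][0] <= makespan

# Disjunctive constraints: interference-prone intervals of different tasks
# must not execute at the same time on the shared resource.
prone = [(t, k, d) for t, ivs in tasks.items()
         for k, (d, p) in enumerate(ivs) if p]
for i, (t1, k1, d1) in enumerate(prone):
    for t2, k2, d2 in prone[i + 1:]:
        if t1 == t2:
            continue
        before = LpVariable(f"b_{t1}{k1}_{t2}{k2}", cat=LpBinary)
        prob += start[t1, k1] + d1 <= start[t2, k2] + M * (1 - before)
        prob += start[t2, k2] + d2 <= start[t1, k1] + M * before

prob.solve()
print("makespan:", value(makespan))
```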

    Minimising total flow-time on two parallel machines with planned downtimes and resumable jobs

    This study considers the flow-time minimisation problem on two identical parallel machines, where the machines are subject to planned downtimes. Jobs that are interrupted by machine downtimes may resume processing once the machine becomes available again, without incurring additional costs or setup times. We identify optimality properties, develop upper- and lower-bounding procedures, and present a branch-and-bound methodology based on the proposed properties and bounds. Its computational efficiency is tested on problem instances with up to 35 jobs.
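
    A quick way to obtain a feasible schedule, and hence an upper bound on the total flow time, is shortest-processing-time (SPT) list scheduling that resumes any work interrupted by a planned downtime. The sketch below uses hypothetical processing times and downtime windows; it is only an upper-bounding heuristic, not the branch-and-bound procedure of the paper.

```python
# Sketch: SPT list-scheduling heuristic giving an upper bound on total flow
# time for two machines with planned downtimes and resumable jobs.
# Processing times and downtime windows are hypothetical.

def finish_time(start, p, downtimes):
    """Completion time of a job of length p started at `start` on a machine
    with (begin, end) downtime windows; interrupted work resumes after each
    downtime at no extra cost."""
    t, remaining = start, p
    for begin, end in sorted(downtimes):
        if end <= t:
            continue
        work = max(0, min(begin, t + remaining) - t)   # work done before window
        remaining -= work
        if remaining == 0:
            return t + work
        t = end                                        # resume after downtime
    return t + remaining

def spt_total_flow_time(jobs, downtimes_per_machine):
    """jobs: processing times (all released at time 0).  Returns the total
    flow time of the SPT list schedule over two machines."""
    avail = [0.0, 0.0]
    total = 0.0
    for p in sorted(jobs):
        completions = [finish_time(avail[m], p, downtimes_per_machine[m])
                       for m in range(2)]
        m = min(range(2), key=lambda i: completions[i])
        avail[m] = completions[m]
        total += completions[m]          # release times are 0, so flow = completion
    return total

# Hypothetical instance: five jobs, one downtime window per machine.
jobs = [4, 2, 7, 3, 5]
downtimes = [[(5, 8)], [(3, 4)]]
print(spt_total_flow_time(jobs, downtimes))
```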