
    Bounding Worst-Case Response Time for Tasks With Non-Preemptive Regions

    Real-time schedulability theory requires a priori knowledge of the worst-case execution time (WCET) of every task in the system. Fundamental to the calculation of WCET is the scheduling policy that determines priorities among tasks. Such policies can be non-preemptive or preemptive. While the former reduces analysis complexity and implementation overhead, the latter provides increased flexibility in terms of schedulability at higher utilizations of arbitrary task sets. In practice, tasks often have non-preemptive regions but are otherwise scheduled preemptively. To bound the WCET of tasks, architectural features have to be considered in the context of the scheduling scheme. In particular, preemption affects caches, which can be modeled by bounding the cache-related preemption delay (CRPD) of a task. In this paper, we propose a framework that provides safe and tight bounds on the data-cache related preemption delay (D-CRPD), the WCET and the worst-case response times, not just for homogeneous tasks under fully preemptive or fully non-preemptive systems, but for tasks with a non-preemptive region. By retaining the option of preemption where legal, task sets that might otherwise not be schedulable become schedulable. Yet, by requiring a region within a task to be non-preemptive, correctness of arbitration of access to shared resources is ensured. Experimental results confirm an increase in schedulability of a task set with non-preemptive regions over an equivalent task set in which the tasks with non-preemptive regions are scheduled fully non-preemptively. Quantitative results further indicate that D-CRPD bounds and response-time bounds comparable to those of fully non-preemptive task sets can be retained in the presence of short non-preemptive regions. To the best of our knowledge, this is the first framework that performs D-CRPD calculations for tasks with a non-preemptive region.
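
    To make the flavor of such a bound concrete, the sketch below iterates the usual fixed-priority response-time recurrence with a blocking term for the longest lower-priority non-preemptive region and a per-preemption D-CRPD charge folded into every higher-priority job. This is a minimal illustration under those assumptions, not the paper's analysis; all task parameters are invented.

    ```python
    # Minimal sketch (not the paper's exact analysis): fixed-priority response-time
    # iteration with a blocking term B for lower-priority non-preemptive regions
    # and a per-preemption delay gamma charged once per higher-priority job.
    from math import ceil

    def response_time(C, B, hp, gamma, deadline, limit=1000):
        """C: WCET of the task under analysis, B: longest lower-priority
        non-preemptive region, hp: list of (C_j, T_j) for higher-priority tasks,
        gamma: per-preemption D-CRPD bound."""
        R = C + B
        for _ in range(limit):
            interference = sum(ceil(R / T_j) * (C_j + gamma) for C_j, T_j in hp)
            R_new = C + B + interference
            if R_new == R:
                return R if R <= deadline else None   # converged; check deadline
            if R_new > deadline:
                return None                           # response time exceeds deadline
            R = R_new
        return None

    # Hypothetical task set: (WCET, period) of two higher-priority tasks.
    print(response_time(C=20, B=4, hp=[(5, 50), (8, 80)], gamma=2, deadline=100))
    ```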

    Experimental Evaluation of Cache-Related Preemption Delay Aware Timing Analysis

    In the presence of caches, preemptive scheduling may incur a significant overhead referred to as cache-related preemption delay (CRPD). CRPD is caused by preempting tasks evicting cached memory blocks of preempted tasks, which have to be reloaded when the preempted tasks resume their execution. In this paper, we experimentally evaluate state-of-the-art techniques to account for the CRPD during timing analysis. We find that purely synthetically generated task sets may yield misleading conclusions regarding the relative precision of different CRPD analysis techniques and the impact of CRPD on schedulability in general. Based on task characterizations obtained by static worst-case execution time (WCET) analysis, we shed new light on the state of the art.
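
    To illustrate the experimental setup such evaluations rely on, the sketch below generates a synthetic task set with UUniFast utilizations and attaches randomly drawn UCB/ECB footprints; the abstract's point is that replacing these synthetic cache characteristics with ones derived from static WCET analysis can change which CRPD technique looks most precise. A minimal sketch with invented parameters, not the paper's generator.

    ```python
    import random

    def uunifast(n, total_util):
        """Draw n task utilizations summing to total_util (classic UUniFast)."""
        utils, remaining = [], total_util
        for i in range(1, n):
            nxt = remaining * random.random() ** (1.0 / (n - i))
            utils.append(remaining - nxt)
            remaining = nxt
        utils.append(remaining)
        return utils

    def make_task_set(n, total_util, cache_sets=256):
        tasks = []
        for u in uunifast(n, total_util):
            period = random.choice([10_000, 20_000, 50_000, 100_000])
            wcet = u * period
            # Purely synthetic cache footprint; an analysis-driven evaluation would
            # instead take UCB/ECB counts from static WCET analysis of real benchmarks.
            ecbs = set(random.sample(range(cache_sets), random.randint(8, 64)))
            ucbs = set(random.sample(sorted(ecbs), len(ecbs) // 4))
            tasks.append({'wcet': wcet, 'period': period, 'ucbs': ucbs, 'ecbs': ecbs})
        return tasks

    print(make_task_set(4, 0.7)[0])
    ```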

    Bounding Preemption Delay within Data Cache Reference Patterns for Real-Time Tasks

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. In this paper, we bound the penalty of cache interference for real-time tasks by providing accurate predictions of the data cache behavior across preemptions. For every task, we derive data cache reference patterns for all scalar and non-scalar references. Partial timing of a task is performed up to a preemption point using these patterns. The effects of cache interference are then analyzed using a set-theoretic approach, which identifies the number and location of additional misses due to preemption. A feedback mechanism provides the means to interact with the timing analyzer, which subsequently times another interval of a task bounded by the next preemption. Our experimental results demonstrate that it is sufficient to consider the n most expensive preemption points, where n is the maximum possible number of preemptions. Further, it is shown that such accurate modeling of data cache behavior in preemptive systems significantly improves the WCET predictions for a task. To the best of our knowledge, our work on bounding preemption delay for data caches is unprecedented.
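
    The set-theoretic idea can be pictured as follows: at a candidate preemption point, only data blocks that are still cached and will be reused, and that map to cache sets the preempting task may touch, can turn into additional misses. The sketch below is a minimal illustration of that intersection for a hypothetical direct-mapped data cache, not the paper's analysis; addresses and cache geometry are invented.

    ```python
    CACHE_SETS = 16
    LINE_SIZE = 32

    def cache_set(addr, sets=CACHE_SETS, line=LINE_SIZE):
        # Direct-mapped index: block number modulo the number of cache sets.
        return (addr // line) % sets

    def extra_misses(live_blocks, preempting_accesses):
        """live_blocks: addresses cached and reused after the preemption point.
        preempting_accesses: addresses the preempting task may reference."""
        evicted_sets = {cache_set(a) for a in preempting_accesses}
        return [a for a in live_blocks if cache_set(a) in evicted_sets]

    live = [0x1000, 0x1020, 0x2040]          # e.g. array elements still to be reused
    preempting = [0x3000, 0x3040, 0x3080]    # preempting task's data references
    print(len(extra_misses(live, preempting)), "additional misses bounded at this point")
    ```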

    Bounding the Effects of Resource Access Protocols on Cache Behavior

    The assumption of task independence has long been consubstantial with the formulation of many schedulability analysis techniques. That assumption is evidently advantageous for the mathematical formulation of the analysis equations, but ill-suited to capturing the actual behavior of the system. Resource sharing is one of the system design dimensions that break the assumption of task independence. By shaking the very foundations of real-time analysis theory, the advent of multicore systems has caused a resurgence of interest in resource sharing and synchronization protocols, and has made it clear that the assumption of task independence may be forever broken. Research in cache-aware schedulability analysis, by contrast, has paid very little attention to the impact that synchronization protocols may have on cache behavior. A blocked task may in fact incur time penalties similar in kind to those caused by preemption, in that some useful code or data already loaded in the cache may be evicted while the task is blocked. In this paper we characterize the sources of cache-related blocking delay (CRBD). We then provide a bound on the CRBD for three synchronization protocols of interest. The comparison between these bounds provides striking evidence that an informed choice of the synchronization protocol helps contain the perturbing effects of blocking on the cache state.
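
    A rough way to picture a CRBD bound: while a task is blocked, only the tasks that the synchronization protocol allows to run during that interval can evict its useful cache blocks. The sketch below applies that intersection for two hypothetical protocol behaviors; it is an illustration of the idea, not the paper's bounds, and the block reload time and cache-set numbers are invented.

    ```python
    BRT = 10  # hypothetical block reload time in cycles

    def crbd_bound(ucb_blocked, tasks_running_while_blocked):
        # Only cache sets touched by tasks that may execute during the blocking
        # interval can evict the blocked task's useful blocks.
        evicting = set().union(*tasks_running_while_blocked) if tasks_running_while_blocked else set()
        return BRT * len(ucb_blocked & evicting)

    ucb_blocked = {0, 1, 2, 3}     # useful cache sets of the blocked task
    lock_holder_ecb = {2, 3}
    mid_prio_ecb = {1, 5}

    # e.g. non-preemptive critical sections: only the lock holder runs while we wait
    print(crbd_bound(ucb_blocked, [lock_holder_ecb]))
    # e.g. a protocol that also lets another task run during the blocking interval
    print(crbd_bound(ucb_blocked, [lock_holder_ecb, mid_prio_ecb]))
    ```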

    Tightening the Bounds on Feasible Preemption Points

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. This paper makes multiple contributions. 1) We bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions, including instruction cache and pipeline effects. We show that, when considering cache preemption, the critical instant does not occur upon simultaneous release of all tasks. 2) We develop analysis methods to calculate upper bounds on the number of possible preemption points for each job of a task. To make these bounds tight, we consider the entire range between the best-case and worst-case execution times (BCET and WCET) of higher-priority jobs. The effects of cache interference are integrated into the WCET calculations by using a feedback mechanism to interact with a static timing analyzer. Experiments show improvements in bound tightness of up to an order of magnitude over two prior methods and up to half an order of magnitude over a third prior method, for (a) the number of preemptions, (b) the WCET and (c) the response time of a task. Overall, this work contributes by calculating the worst-case preemption delay under consideration of data caches.
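
    For intuition, a much coarser bound than the paper's counts one possible preemption per higher-priority job released within the response time, as sketched below; the paper tightens such counts by exploiting the BCET-WCET range of higher-priority jobs. Parameters are invented.

    ```python
    from math import ceil

    def max_preemptions(response_time, hp_periods):
        # Each higher-priority job released while the task may still be running
        # can preempt it at most once.
        return sum(ceil(response_time / T) for T in hp_periods)

    print(max_preemptions(response_time=120, hp_periods=[50, 80]))  # -> 3 + 2 = 5
    ```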

    ResilienceP Analysis: Bounding Cache Persistence Reload Overhead for Set-Associative Caches

    This work presents different approaches to calculate the cache persistence reload overhead (CPRO) for set-associative caches. The PCB-ECB approach uses the persistent cache blocks (PCBs) of the task under analysis and the evicting cache blocks (ECBs) of all other tasks in the system to provide sound estimates of CPRO for set-associative caches. The resilienceP analysis then removes some of the pessimism in the PCB-ECB approach by considering the resilience of PCBs during CPRO calculations. We show that using the state-of-the-art (SoA) resilience analysis to calculate the resilience of PCBs may result in underestimating the CPRO that tasks may suffer. Finally, we also present a multi-set-like resilienceP analysis that highlights the pessimism in the resilienceP analysis and provides some insights on how it can be removed.
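
    The sketch below illustrates, in simplified per-set form, the two ideas named in the abstract: a basic PCB-ECB bound that charges every persistent block sharing a cache set with other tasks' evicting blocks, and a resilience-aware refinement that only charges a persistent block when the evicting blocks in its set exceed the number of evictions the block can survive. It is not the paper's equations; the block reload time, cache contents and resilience values are invented.

    ```python
    def cpro_basic(pcbs_per_set, ecbs_per_set, brt=10):
        # Charge every persistent block whose cache set is shared with any ECB.
        return brt * sum(len(pcbs) for s, pcbs in pcbs_per_set.items()
                         if ecbs_per_set.get(s))

    def cpro_resilience(pcbs_per_set, ecbs_per_set, resilience, brt=10):
        # Only charge a PCB if the evicting blocks in its set exceed the number
        # of evictions that block can survive.
        reloads = 0
        for s, pcbs in pcbs_per_set.items():
            evicting = len(ecbs_per_set.get(s, ()))
            reloads += sum(1 for b in pcbs if evicting > resilience[b])
        return brt * reloads

    pcbs = {0: ['a', 'b'], 1: ['c']}            # persistent blocks per cache set
    ecbs = {0: ['x'], 1: ['y', 'z']}            # other tasks' blocks per cache set
    res = {'a': 0, 'b': 2, 'c': 1}              # evictions each PCB can survive
    print(cpro_basic(pcbs, ecbs), cpro_resilience(pcbs, ecbs, res))
    ```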

    Bounding Worst-Case Response Times of Tasks under PIP

    Schedulability theory in real-time systems requires prior knowledge of the worst-case execution time (WCET) of every task in the system. One method to determine the WCET is known as static timing analysis. Determination of the priorities among tasks in such a system requires a scheduling policy, which can be either preemptive or non-preemptive. While static timing analysis and data cache analysis are simplified by a fully non-preemptive scheduling policy, such a policy results in decreased schedulability. In prior work, a methodology was proposed to bound the data-cache related delay for real-time tasks that, besides having a non-preemptive region (critical section), can otherwise be scheduled preemptively. While the prior approach improves schedulability in comparison to fully non-preemptive methods, it is still conservative due to its fundamental assumption that a task executing in a critical section may not be preempted by any other task. In this paper, we propose a methodology that incorporates resource sharing policies such as the Priority Inheritance Protocol (PIP) into the calculation of the data-cache related delay. In this approach, access to shared resources, which is the primary reason for critical sections within tasks, is controlled by the resource sharing policy. In addition to maintaining correctness of access, such policies strive to limit resource access conflicts, thereby improving the responsiveness of tasks. To the best of our knowledge, this is the first framework that integrates data-cache related delay calculations with resource sharing policies in the context of real-time systems.
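
    The intuition behind combining PIP with the delay calculation can be sketched as follows: while a task holds a resource that a higher-priority task is blocked on, it executes at the inherited priority, so only tasks of still higher priority can preempt it and evict its cached data. The code below is a minimal illustration of that observation, not the paper's methodology; priorities and cache sets are invented (lower number means higher priority).

    ```python
    def evictors_in_critical_section(holder_prio, blocked_prios, all_tasks):
        """all_tasks: {name: (priority, ecb_set)}; returns the union of ECBs of
        tasks that can still preempt the lock holder under PIP."""
        inherited = min([holder_prio] + blocked_prios)   # effective priority
        evicting = set()
        for name, (prio, ecb) in all_tasks.items():
            if prio < inherited:                         # strictly higher priority
                evicting |= ecb
        return evicting

    tasks = {'t1': (1, {0, 1}), 't2': (2, {2}), 't3': (3, {3, 4})}
    # t3 holds the lock and t2 is blocked on it: only t1 can preempt and evict.
    print(evictors_in_critical_section(holder_prio=3, blocked_prios=[2], all_tasks=tasks))
    ```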

    Cache-Related Preemption Delay Computation for Set-Associative Caches - Pitfalls and Solutions

    In preemptive real-time systems, scheduling analyses need the context-switch cost in addition to the worst-case execution time. In the case of preemption, the preempted and the preempting task may interfere on the cache memory. These interferences lead to additional reloads in the preempted task. The delay due to these reloads is referred to as the cache-related preemption delay (CRPD). The CRPD constitutes a large part of the context-switch cost. In this article, we focus on the computation of upper bounds on the CRPD based on the concepts of useful cache blocks (UCBs) and evicting cache blocks (ECBs). We explain how these concepts can be used to bound the CRPD in the case of direct-mapped caches. Then we consider set-associative caches with LRU, FIFO, and PLRU replacement. We show potential pitfalls when using UCBs and ECBs to bound the CRPD in the case of LRU, and demonstrate that neither UCBs nor ECBs can be used to bound the CRPD in the case of FIFO and PLRU. Finally, we sketch a new approach to circumvent these limitations by using the concept of relative competitiveness.
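
    For the direct-mapped case discussed in the article, the classic bound charges a reload only for blocks that are both useful to the preempted task and mapped to a set the preempting task may evict. The sketch below shows that computation under a hypothetical block reload time; as the article argues, this per-set reasoning does not carry over directly to FIFO or PLRU replacement.

    ```python
    BRT = 10  # hypothetical block reload time in cycles

    def crpd_direct_mapped(ucb_preempted, ecb_preempting, brt=BRT):
        # A block must be reloaded only if it is useful AND its set is evicted.
        return brt * len(set(ucb_preempted) & set(ecb_preempting))

    ucbs = {1, 2, 3, 8}     # cache sets holding useful blocks of the preempted task
    ecbs = {2, 3, 4, 5}     # cache sets the preempting task may touch
    print(crpd_direct_mapped(ucbs, ecbs))   # -> 10 * |{2, 3}| = 20
    ```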

    Runtime CRPD management for rate-based scheduling

    Temporal isolation is an increasingly relevant concern, in particular for ARINC-653 and virtualisation-based systems. Traditional approaches like the rate-based scheduling framework RBED do not take into account the impact of preemptions in terms of the loss of working set in the acceleration hardware (e.g. caches). While some improvements have been suggested in the literature, they are overly heavy in the presence of small high-priority tasks such as interrupt service routines. In this paper we propose an approach enabling adaptive assessment of this preemption delay in a temporal isolation framework, together with a discussion of the capabilities and limitations of the approach. This work was supported by the Portuguese Science and Technology Foundation (FCT) (CISTER FCT-608) and ARTEMIS-JU (RECOMP project ARTEMIS/0202/2009).
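
    One way to picture the bookkeeping such an approach needs: each preemption is charged an estimated cache reload delay against a budget, and a cap keeps small, frequent preemptors such as interrupt service routines from being charged more than they can plausibly evict. The sketch below is only an illustration of that accounting under invented values, not the mechanism proposed in the paper.

    ```python
    BRT = 10  # hypothetical block reload time in cycles

    def charge_preemption(budgets, preemptor, evicted_sets_estimate, cap):
        """Deduct an adaptive preemption-delay estimate from the preemptor's budget."""
        delay = min(BRT * evicted_sets_estimate, cap)
        budgets[preemptor] -= delay
        return delay

    budgets = {'isr': 200, 'video': 5_000}
    # A short ISR preempts the video task; its small footprint caps the charge.
    print(charge_preemption(budgets, 'isr', evicted_sets_estimate=3, cap=BRT * 8), budgets)
    ```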