
    Bounding Preemption Delay within Data Cache Reference Patterns for Real-Time Tasks

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. In this paper, we bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions. For every task, we derive data cache reference patterns for all scalar and non-scalar references. Partial timing of a task is performed up to a preemption point using these patterns. The effects of cache interference are then analyzed using a set-theoretic approach, which identifies the number and location of additional misses due to preemption. A feedback mechanism provides the means to interact with the timing analyzer, which subsequently times another interval of the task bounded by the next preemption. Our experimental results demonstrate that it is sufficient to consider the n most expensive preemption points, where n is the maximum possible number of preemptions. Further, it is shown that such accurate modeling of data cache behavior in preemptive systems significantly improves the WCET predictions for a task. To the best of our knowledge, our work is the first to bound preemption delay for data caches.
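    A minimal sketch of the final step suggested by the abstract: once a per-point delay has been derived for every feasible preemption point, the task's total preemption penalty can be bounded by charging only the n most expensive points. The function name and the delay values below are hypothetical illustrations, not the paper's implementation.

        # Hedged sketch: bound the total data-cache preemption delay by charging
        # only the n most expensive feasible preemption points of a task.
        def preemption_delay_bound(point_delays, n):
            """point_delays: per-point delay estimates in cycles (extra misses
            times the miss penalty); n: maximum number of preemptions of the job."""
            return sum(sorted(point_delays, reverse=True)[:n])

        # Example: five candidate points, at most two preemptions.
        print(preemption_delay_bound([120, 40, 300, 75, 10], n=2))  # 420 cycles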

    Tightening the Bounds on Feasible Preemption Points

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. This paper makes multiple contributions. 1) We bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions, including instruction cache and pipeline effects. We show that, when considering cache preemption, the critical instant does not occur upon simultaneous release of all tasks. 2) We develop analysis methods to calculate upper bounds on the number of possible preemption points for each job of a task. To make these bounds tight, we consider the entire range between the best-case and worst-case execution times (BCET and WCET) of higher-priority jobs. The effects of cache interference are integrated into the WCET calculations by using a feedback mechanism to interact with a static timing analyzer. Experiments show significant improvements in the tightness of the bounds: up to an order of magnitude over two prior methods and up to half an order of magnitude over a third prior method, for (a) the number of preemptions, (b) the WCET and (c) the response time of a task. Overall, this work contributes by calculating the worst-case preemption delay under consideration of data caches.
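    The kind of response-time calculation the abstract alludes to can be illustrated with a small, hedged sketch: a standard fixed-point recurrence in which each preemption by a higher-priority task is charged an extra delay gamma standing in for its cache-related preemption cost. The task parameters below are invented for illustration and are not taken from the paper.

        import math

        # Hedged sketch: response-time recurrence with a per-preemption CRPD
        # charge. Each higher-priority task is a tuple (C, T, gamma), where
        # gamma bounds the cache reload delay its preemptions inflict.
        def response_time(C_i, D_i, higher_prio):
            R = C_i
            while R <= D_i:
                R_next = C_i + sum(math.ceil(R / T) * (C + gamma)
                                   for (C, T, gamma) in higher_prio)
                if R_next == R:
                    return R          # fixed point reached within the deadline
                R = R_next
            return None               # no fixed point within the deadline

        print(response_time(20, 100, [(5, 50, 2), (10, 100, 3)]))  # 40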

    Bounding Worst-Case Response Times of Tasks under PIP

    Schedulability theory in real-time systems requires prior knowledge of the worst-case execution time (WCET) of every task in the system. One method to determine the WCET is static timing analysis. Determining the priorities among tasks in such a system requires a scheduling policy, which may be either preemptive or non-preemptive. While static timing analysis and data cache analysis are simplified by a fully non-preemptive scheduling policy, such a policy results in decreased schedulability. In prior work, a methodology was proposed to bound the data-cache related delay for real-time tasks that, besides having a non-preemptive region (critical section), can otherwise be scheduled preemptively. While the prior approach improves schedulability in comparison to fully non-preemptive methods, it is still conservative due to its fundamental assumption that a task executing in a critical section may not be preempted by any other task. In this paper, we propose a methodology that incorporates resource sharing policies such as the Priority Inheritance Protocol (PIP) into the calculation of data-cache related delay. In this approach, access to shared resources, which is the primary reason for critical sections within tasks, is controlled by the resource sharing policy. In addition to maintaining correctness of access, such policies strive to limit resource access conflicts, thereby improving the responsiveness of tasks. To the best of our knowledge, this is the first framework that integrates data-cache related delay calculations with resource sharing policies in the context of real-time systems.
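    As a rough illustration of where such a resource sharing policy enters the delay calculation, the hedged sketch below computes a simple blocking bound under PIP: a critical section of a lower-priority task can block only if it guards a resource also used by the task under analysis or by a higher-priority task, and its cost here includes an illustrative data-cache reload charge. The single-blocking simplification, names, and numbers are assumptions made for the example, not the paper's method.

        # Hedged sketch: blocking bound under the Priority Inheritance Protocol.
        # Each lower-priority critical section is (resource, length, dcache_reload);
        # it can block the task under analysis only if its resource is shared
        # with that task or with a higher-priority task.
        def pip_blocking_bound(critical_sections, shared_resources):
            candidates = [length + dcache_reload
                          for (res, length, dcache_reload) in critical_sections
                          if res in shared_resources]
            return max(candidates, default=0)

        print(pip_blocking_bound([("r1", 8, 3), ("r2", 15, 6)], {"r1"}))  # 11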

    Runtime CRPD management for rate-based scheduling

    Temporal isolation is an increasingly relevant concern, in particular for ARINC 653 and virtualisation-based systems. Traditional approaches like the rate-based scheduling framework RBED do not take into account the impact of preemptions in terms of the loss of working set in the acceleration hardware (e.g. caches). While some improvements have been suggested in the literature, they are overly heavy in the presence of small high-priority tasks such as interrupt service routines. In this paper we propose an approach enabling adaptive assessment of this preemption delay in a temporal isolation framework, with special consideration of the capabilities and limitations of the approach. This work was supported by the Portuguese Science and Technology Foundation (FCT) (CISTER FCT-608) and ARTEMIS-JU (RECOMP project ARTEMIS/0202/2009).

    Bounding Worst-Case Response Time for Tasks With Non-Preemptive Regions

    Real-time schedulability theory requires a priori knowledge of the worst-case execution time (WCET) of every task in the system. Fundamental to the calculation of the WCET is a scheduling policy that determines priorities among tasks. Such policies can be non-preemptive or preemptive. While the former reduces analysis complexity and implementation overhead, the latter provides increased flexibility in terms of schedulability for higher utilizations of arbitrary task sets. In practice, tasks often have non-preemptive regions but are otherwise scheduled preemptively. To bound the WCET of tasks, architectural features have to be considered in the context of a scheduling scheme. In particular, preemption affects caches, which can be modeled by bounding the cache-related preemption delay (CRPD) of a task. In this paper, we propose a framework that provides safe and tight bounds on the data-cache related preemption delay (D-CRPD), the WCET and the worst-case response times, not just for homogeneous tasks under fully preemptive or fully non-preemptive systems, but for tasks with a non-preemptive region. By retaining the option of preemption where legal, task sets become schedulable that might otherwise not be. Yet, by requiring a region within a task to be non-preemptive, correctness is ensured in terms of arbitration of access to shared resources. Experimental results confirm an increase in the schedulability of task sets with non-preemptive regions over equivalent task sets where the tasks with non-preemptive regions are instead scheduled entirely non-preemptively. Quantitative results further indicate that D-CRPD bounds and response-time bounds comparable to those of fully non-preemptive task sets can be retained in the presence of short non-preemptive regions. To the best of our knowledge, this is the first framework that performs D-CRPD calculations in a system for tasks with a non-preemptive region.
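    A hedged sketch of the intuition: a task with one non-preemptive region can be blocked by at most one non-preemptive region of a lower-priority task, and its D-CRPD need only be charged at preemption points lying outside its own non-preemptive region. The program locations, delays and region bounds below are invented for illustration and do not reproduce the paper's analysis.

        # Hedged sketch: combine D-CRPD charged only at feasible preemption
        # points (outside the task's own non-preemptive region) with a blocking
        # term from lower-priority non-preemptive regions.
        def np_aware_delay(point_delays, np_region, max_preemptions, lp_np_regions):
            lo, hi = np_region
            feasible = [d for (loc, d) in point_delays if not (lo <= loc < hi)]
            crpd = sum(sorted(feasible, reverse=True)[:max_preemptions])
            blocking = max(lp_np_regions, default=0)
            return crpd + blocking

        points = [(10, 40), (55, 120), (90, 30)]   # (program location, delay)
        print(np_aware_delay(points, (50, 70), 2, [12, 7]))  # 70 + 12 = 82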

    Accurate estimation of cache-related preemption delay


    An extensible framework for multicore response time analysis

    In this paper, we introduce a multicore response time analysis (MRTA) framework, which decouples response time analysis from a reliance on context-independent WCET values. Instead, the analysis formulates response times directly from the demands placed on different hardware resources. The MRTA framework is extensible to different multicore architectures, with a variety of arbitration policies for the common interconnects, and different types and arrangements of local memory. We instantiate the framework for single-level local data and instruction memories (caches or scratchpads) and for a variety of memory bus arbitration policies, including Round-Robin, FIFO, Fixed-Priority, Processor-Priority, and TDMA, and we account for DRAM refreshes. The MRTA framework provides a general approach to timing verification for multicore systems that is parametric in the hardware configuration and so can be used at the architectural design stage to compare the guaranteed levels of real-time performance obtainable with different hardware configurations. We use the framework in this way to evaluate the performance of multicore systems with a variety of different architectural components and policies. These results are then used to compose a predictable architecture, which is compared against a reference architecture designed for good average-case behaviour. This comparison shows that the predictable architecture has substantially better guaranteed real-time performance, with the precision of the analysis verified using cycle-accurate simulation.
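    To make the idea of formulating response times from resource demands concrete, the hedged sketch below inflates a task's memory-access demand by a per-request bus wait that depends on the arbitration policy. The core count, per-access latency and TDMA slot length are invented parameters for illustration and are not the framework's actual models.

        # Hedged sketch: per-request bus delay under two arbitration policies,
        # used to turn a count of memory requests into a bus-demand term.
        def bus_demand(num_requests, policy, cores=4, access=10, slot=12):
            if policy == "round_robin":
                wait = (cores - 1) * access   # one pending request per other core
            elif policy == "tdma":
                wait = (cores - 1) * slot     # wait for the rest of the TDMA cycle
            else:
                raise ValueError("unknown arbitration policy")
            return num_requests * (access + wait)

        print(bus_demand(100, "round_robin"))  # 4000 cycles
        print(bus_demand(100, "tdma"))         # 4600 cycles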

    Effective And Efficient Preemption Placement For Cache Overhead Minimization In Hard Real-Time Systems

    Schedulability analysis for real-time systems has been the subject of prominent research over the past several decades. One of the key foundations of schedulability analysis is an accurate worst-case execution time (WCET) for each task. In preemption-based real-time systems, the cache-related preemption delay (CRPD) can represent a significant component (up to 44%, as documented in the research literature) of the variability in overall task WCET. Several methods have been employed to calculate CRPD with significant levels of pessimism that may result in a task set being erroneously declared non-schedulable. Furthermore, they do not take into account that the CRPD cost is inherently a function of where preemptions actually occur. Our approach of computing CRPD via loaded cache blocks (LCBs) is more accurate in the sense that the cache state reflects which cache blocks are reloaded and the specific program locations where they are reloaded. Limited preemption models attempt to minimize preemption overhead (CRPD) by reducing the number of allowed preemptions and/or allowing preemption only at program locations where the CRPD effect is minimized. These algorithms rely heavily on accurate CRPD measurements or estimation models in order to identify an optimal set of preemption points. Our approach improves the effectiveness of limited optimal preemption point placement algorithms by calculating the LCBs for each pair of adjacent preemptions to more accurately model task WCET and maximize schedulability as compared to existing preemption point placement approaches. We utilize a dynamic programming technique to develop an optimal preemption point placement algorithm. Lastly, we demonstrate, using a case study, improved task set schedulability and optimal preemption point placement via our new LCB characterization. We propose a new CRPD metric, called loaded cache blocks (LCBs), which accurately characterizes the CRPD a real-time task may be subjected to due to the preemptive execution of higher-priority tasks. We show how to integrate our new LCB metric into our newly developed algorithms that automatically place preemption points in linear control flow graphs (CFGs) for limited preemption scheduling applications. We extend the derivation of LCBs, originally proposed for linear CFGs, to conditional CFGs, and show how to integrate the revised LCB metric into our algorithms that automatically place preemption points in conditional CFGs for limited preemption scheduling applications. For future work, we will verify the correctness of our framework against other measurable physical and hardware constraints. We also plan to complete our work on developing a generalized framework that can be seamlessly integrated into real-time schedulability analysis.
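    A hedged sketch of the dynamic programming idea for a linear CFG: choose preemption points between basic blocks so that no span between consecutive points exceeds a bound Q, while minimizing the sum of the reload costs of the chosen points. The block WCETs, point costs, and the exact formulation below are illustrative assumptions, not the dissertation's algorithm.

        # Hedged sketch: optimal preemption-point placement on a linear CFG.
        # blocks[k]: WCET of basic block k; cost[k]: CRPD (loaded-cache-block
        # reload cost) of preempting between block k and block k+1; Q: longest
        # span allowed to run without a preemption point.
        def place_preemption_points(blocks, cost, Q):
            n = len(blocks)
            INF = float("inf")
            best = [INF] * (n + 1)    # best[k]: min total CRPD with a point after block k
            prev = [None] * (n + 1)
            best[0] = 0               # virtual point at task entry, no cost
            for k in range(1, n + 1):
                point_cost = 0 if k == n else cost[k - 1]   # task exit is free
                span = 0
                for i in range(k - 1, -1, -1):              # previous point after block i
                    span += blocks[i]
                    if span > Q:
                        break                               # span (i, k] would be too long
                    if best[i] + point_cost < best[k]:
                        best[k], prev[k] = best[i] + point_cost, i
            if best[n] == INF:
                return None           # some block alone exceeds Q: infeasible
            chosen, k = [], prev[n]
            while k:                  # walk back to the virtual entry point
                chosen.append(k)      # point k lies between block k-1 and block k
                k = prev[k]
            return best[n], sorted(chosen)

        # Five blocks, four candidate points, at most 10 time units between points.
        print(place_preemption_points([4, 6, 3, 5, 2], [9, 2, 7, 4], Q=10))  # (2, [2])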