
    Bundle: Taming The Cache And Improving Schedulability Of Multi-Threaded Hard Real-Time Systems

    For hard real-time systems, schedulability of a task set is paramount. If a task set is not deemed schedulable under all conditions, the system may fail during operation and cannot be deployed in a high-risk environment. Schedulability testing has typically been separated from worst-case execution time (WCET) analysis. Each task’s WCET value is calculated independently and provided as input to a schedulability test. However, a task’s WCET value is influenced by scheduling decisions and the impact of cache memory. Thus, schedulability tests have been augmented to include cache-related preemption delay (CRPD). From this classical perspective, the effect of cache memory on WCET and schedulability is always negative, increasing execution times and demand. In this work we propose a new positive perspective, where cache memory benefits multi-threaded tasks when threads are scheduled in a manner that shares cached values predictably. This positive perspective is reached by integrating, rather than separating, the disciplines of schedulability analysis and worst-case execution time analysis. These integrated techniques are referred to as the BUNDLE family of worst-case execution time and cache overhead (WCETO) analysis and scheduling algorithms. WCETO calculation divides the task’s structure into conflict-free regions and calculates a bound utilizing explicit understanding of the thread-level scheduling algorithm. Conflict-free regions are utilized by the scheduling algorithm, which associates with each region a thread container called a bundle. At any time only one bundle may be active, and only threads of the active bundle may execute on the processor. The BUNDLE family of scheduling algorithms developed in this work increases in scope from BUNDLE through ITCB-DAG. As the fundamental contribution, BUNDLE and BUNDLEP apply to a single multi-threaded task running on a uniprocessor architecture with a single-level direct-mapped instruction cache. NPM-BUNDLE expands the positive perspective to multiple tasks on a uniprocessor system, and ITCB-DAG brings BUNDLE’s analysis and scheduling techniques to multi-processor systems. Each of the scheduling algorithms requires a novel hardware mechanism to anticipate execution and make scheduling decisions. To support anticipation of execution, a novel XFLICT interrupt is proposed; it is a simple mechanism that emulates the behavior of hardware breakpoints. An implementation of the BUNDLEP analytical techniques, scheduling algorithm, and XFLICT interrupt is available as a simulated platform for further research and extension. Future work is planned to expand BUNDLE’s positive perspective and increase adoption. The most significant barrier to adoption is the ability to deploy BUNDLE’s scheduling algorithms, which mandates a viable and available hardware or software mechanism to anticipate execution. NPM-BUNDLE is limited to non-preemptive multi-task scheduling and analysis; support for preemptive scheduling would increase the positive impact of BUNDLE’s integrated perspective.
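
    To make the bundle idea concrete, the sketch below simulates a simplified bundle-style thread-level dispatcher in Python. It is an illustrative toy under stated assumptions, not the BUNDLE/BUNDLEP algorithm from the dissertation: thread paths are given as hypothetical ordered lists of conflict-free region ids, regions are assumed to be activated in a fixed global order consistent with every thread's path, and WCETO accounting, cache state, and the XFLICT mechanism are omitted.

```python
# A hypothetical sketch of bundle-style thread-level scheduling: threads whose
# next code section lies in the same conflict-free region share a "bundle", and
# only threads of the active bundle may run. Not the authors' implementation.
from collections import defaultdict, deque

def bundle_schedule(threads, region_order):
    """threads: {tid: [region_id, ...]} ordered regions each thread executes.
    region_order: global activation order of conflict-free regions (assumed
    consistent with every thread's path). Returns the execution trace."""
    bundles = defaultdict(deque)              # region_id -> waiting threads
    progress = {tid: 0 for tid in threads}
    for tid, path in threads.items():
        bundles[path[0]].append(tid)          # park each thread in its first bundle

    trace = []
    for region in region_order:               # activate one bundle at a time
        while bundles[region]:
            tid = bundles[region].popleft()   # only active-bundle threads run
            trace.append((tid, region))
            progress[tid] += 1
            path = threads[tid]
            if progress[tid] < len(path):     # move the thread to the bundle
                bundles[path[progress[tid]]].append(tid)  # of its next region
    return trace

if __name__ == "__main__":
    threads = {"t1": ["R0", "R1"], "t2": ["R0", "R1"], "t3": ["R0"]}
    print(bundle_schedule(threads, ["R0", "R1"]))
    # All threads finish R0 before any thread enters R1, so cache contents
    # loaded for each region are shared predictably among its threads.
```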

    NPM-BUNDLE: Non-Preemptive Multitask Scheduling for Jobs with BUNDLE-Based Thread-Level Scheduling

    The BUNDLE and BUNDLEP scheduling algorithms are cache-cognizant thread-level scheduling algorithms with associated worst-case execution time and cache overhead (WCETO) analysis techniques for hard real-time multi-threaded tasks. The BUNDLE-based approaches utilize the inter-thread cache benefit to reduce WCETO values for jobs. Currently, the BUNDLE-based approaches are limited to scheduling a single task. This work aims to expand the applicability of BUNDLE-based scheduling to multi-task, multi-threaded task sets. BUNDLE-based scheduling leverages knowledge of potential cache conflicts to selectively preempt one thread in favor of another from the same job. This thread-level preemption is a requirement for the run-time behavior and WCETO calculation to receive the benefit of BUNDLE-based approaches. This work proposes scheduling BUNDLE-based jobs non-preemptively according to the earliest deadline first (EDF) policy. Jobs are forbidden from preempting one another, while threads within a job are allowed to preempt other threads. An accompanying schedulability test, named Threads Per Job (TPJ), is provided. TPJ is a novel schedulability test: its input is a task set specification that may be transformed (under certain restrictions) by dividing threads among tasks in an effort to find a feasible task set. Enhanced by the flexibility to transform task sets and taking advantage of the inter-thread cache benefit, the evaluation shows that TPJ schedules task sets that fully preemptive EDF cannot.
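
    As a rough illustration of the job-level policy described above, the sketch below dispatches jobs non-preemptively by earliest deadline first, treating each multi-threaded job as a single WCETO budget. The thread-level BUNDLE scheduling inside a job and the TPJ test itself are not modeled, and the job parameters are hypothetical.

```python
# A minimal sketch of job-level non-preemptive EDF dispatching: once a job
# starts it runs to completion; among ready jobs the earliest deadline wins.
import heapq

def np_edf(jobs):
    """jobs: list of (release, deadline, wceto). Returns (finish_times, all_met)."""
    jobs = sorted(jobs)                       # by release time
    ready, t, i, finish = [], 0, 0, []
    while i < len(jobs) or ready:
        if not ready and t < jobs[i][0]:
            t = jobs[i][0]                    # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            r, d, c = jobs[i]
            heapq.heappush(ready, (d, r, c))  # order ready jobs by deadline
            i += 1
        d, r, c = heapq.heappop(ready)        # earliest deadline first ...
        t += c                                # ... and run to completion (no job preemption)
        finish.append((r, d, t))
    return finish, all(f <= d for _, d, f in finish)

if __name__ == "__main__":
    print(np_edf([(0, 10, 4), (1, 6, 2), (2, 20, 5)]))
```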

    Schedulability Analysis for Multi-Core Systems Accounting for Resource Stress and Sensitivity

    Timing verification of multi-core systems is complicated by contention for shared hardware resources between co-running tasks on different cores. This paper introduces the Multi-core Resource Stress and Sensitivity (MRSS) task model, which characterizes how much stress each task places on shared resources and how sensitive it is to such resource stress. This model facilitates a separation of concerns, thus retaining the advantages of the traditional two-step approach to timing verification (i.e. timing analysis followed by schedulability analysis). Response time analysis is derived for the MRSS task model, providing efficient context-dependent and context-independent schedulability tests for both fixed-priority preemptive and fixed-priority non-preemptive scheduling. Dominance relations are derived between the tests, and proofs of optimal priority assignment are provided. The MRSS task model is underpinned by a proof-of-concept industrial case study.
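
    The fixed-point structure of such a response-time analysis can be sketched as below. This is only a schematic: the classical fixed-priority recurrence is extended with a placeholder stress_delay term standing in for the MRSS interference bound, whose actual context-dependent and context-independent forms are derived in the paper. All parameter values are hypothetical.

```python
# Schematic fixed-priority response-time iteration with a placeholder term for
# cross-core resource stress; not the paper's MRSS analysis, only its shape.
import math

def response_time(i, C, T, stress_delay, limit=10**6):
    """Tasks indexed by priority (0 = highest). C, T: per-task WCET and period
    on the local core. stress_delay(i, R): assumed bound on extra delay due to
    resource stress from co-runners. Returns R_i, or None if the iteration
    exceeds `limit` without converging."""
    R = C[i]
    while R <= limit:
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference + stress_delay(i, R)
        if R_next == R:
            return R                      # fixed point reached
        R = R_next
    return None

if __name__ == "__main__":
    C, T = [1, 2, 3], [5, 10, 20]
    # Toy stress model: a fixed per-job penalty standing in for contention.
    print(response_time(2, C, T, stress_delay=lambda i, R: 1))
```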

    Schedulability Analysis for Certification-friendly Multicore Systems

    This paper presents a new schedulability test for safety-critical software undergoing a transition from single-core to multicore systems, a challenge faced by multiple industries today. Our migration model, consisting of a schedulability test and an execution model, is distinguished by three aspects consistent with reducing transition cost. First, it assumes externally driven scheduling parameters, such as periods and deadlines, remain fixed (and thus known), whereas exact computation times are not. Second, it adopts a globally synchronized, conflict-free I/O model that decouples the cores, simplifying the schedulability analysis. Third, it employs global priority assignment across all tasks on each core, irrespective of application, where budget constraints on each application ensure isolation. These properties enable us to obtain a utilization bound that places an allowable limit on total task execution times. Evaluation results demonstrate the advantages of our scheduling model over competing resource partitioning approaches, such as Periodic Server and TDMA.
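
    A utilization-bound test of the kind mentioned above reduces to a simple per-core admission check, sketched below. The bound value itself is the paper's contribution and is treated here as an opaque parameter; the task parameters and the 0.65 figure are purely illustrative.

```python
# A minimal sketch of a utilization-bound admission check for one core; the
# bound would come from the paper's analysis and is just a parameter here.
def admits(tasks, bound):
    """tasks: list of (wcet, period) pairs on one core; the task set is accepted
    if total utilization stays within the given bound."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= bound

if __name__ == "__main__":
    core_tasks = [(1, 10), (2, 8), (3, 30)]
    print(admits(core_tasks, bound=0.65))   # hypothetical bound value
```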

    Optimal Selection of Preemption Points to Minimize Preemption Overhead

    A central issue for verifying the schedulability of hard real-time systems is the correct evaluation of task execution times. These values are significantly influenced by the preemption overhead, which mainly includes the cache-related delays and the context-switch times introduced by each preemption. Since such overhead depends significantly on the particular point in the code where preemption takes place, this paper proposes a method for placing suitable preemption points in each task in order to maximize the chances of finding a schedulable solution. In a previous work, we presented a method for the optimal selection of preemption points under the restrictive assumption of a fixed preemption cost, identical for each preemption point. In this paper, we remove that assumption, exploring a more realistic and complex scenario where the preemption cost varies throughout the task code. Instead of modeling the problem with an integer programming formulation, which has exponential worst-case complexity, we derive an optimal algorithm with linear time and space complexity. This somewhat surprising result allows selecting the best preemption points even in complex scenarios with a large number of potential preemption locations. Experimental results are also presented to show the effectiveness of the proposed approach in increasing system schedulability.
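
    The flavor of the selection problem can be illustrated with a straightforward quadratic dynamic program over potential preemption points, sketched below; the paper's algorithm is stronger, achieving linear time and space. The block WCETs, the point-dependent preemption costs, the way a preemption's cost is charged to the following non-preemptive region, and the limit Q are all assumptions of this sketch rather than the paper's exact formulation.

```python
# An illustrative O(n^2) dynamic program: choose preemption points between
# basic blocks so that no non-preemptive region exceeds Q while the total
# location-dependent preemption cost is minimized.
import math

def place_preemption_points(c, xi, Q):
    """c: WCETs of the n basic blocks (c[0] is block 1).
    xi: n+1 preemption costs; xi[i] is the cost of preempting at the point
        after block i (xi[0] = task start, xi[n] = task end, both taken as 0).
    Q: maximum allowed non-preemptive region length.
    Returns (minimum total cost, selected interior points) or (None, None)."""
    n = len(c)
    pre = [0.0] * (n + 1)
    for k in range(n):
        pre[k + 1] = pre[k] + c[k]           # pre[j] = total WCET of blocks 1..j

    best = [math.inf] * (n + 1)              # best[j]: min cost with point j selected
    prev = [None] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        cost_j = 0.0 if j == n else xi[j]    # finishing the task is not a preemption
        for i in range(j):                   # candidate previous selected point
            carried = 0.0 if i == 0 else xi[i]   # CRPD charged to the region after i
            if carried + (pre[j] - pre[i]) <= Q and best[i] + cost_j < best[j]:
                best[j] = best[i] + cost_j
                prev[j] = i
    if math.isinf(best[n]):
        return None, None                    # no feasible placement for this Q
    points, j = [], n
    while prev[j] not in (None, 0):          # walk back over the chosen points
        points.append(prev[j])
        j = prev[j]
    return best[n], sorted(points)

if __name__ == "__main__":
    c  = [2, 3, 1, 4, 2]                     # hypothetical basic block WCETs
    xi = [0, 5, 1, 2, 6, 0]                  # hypothetical cost per preemption point
    print(place_preemption_points(c, xi, Q=8))   # -> (1.0, [2])
```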

    Effective And Efficient Preemption Placement For Cache Overhead Minimization In Hard Real-Time Systems

    Schedulability analysis for real-time systems has been the subject of prominent research over the past several decades. One of the key foundations of schedulability analysis is an accurate worst-case execution time (WCET) for each task. In preemption-based real-time systems, cache-related preemption delay (CRPD) can represent a significant component (up to 44%, as documented in the research literature) of variability in overall task WCET. Several methods have been employed to calculate CRPD with significant levels of pessimism that may result in a task set being erroneously declared non-schedulable. Furthermore, these methods do not take into account that CRPD cost is inherently a function of where preemptions actually occur. Our approach for computing CRPD via loaded cache blocks (LCBs) is more accurate in the sense that the modeled cache state reflects which cache blocks are reloaded and the specific program locations where those reloads occur. Limited preemption models attempt to minimize preemption overhead (CRPD) by reducing the number of allowed preemptions and/or allowing preemption only at program locations where the CRPD effect is minimized. These algorithms rely heavily on accurate CRPD measurements or estimation models in order to identify an optimal set of preemption points. Our approach improves the effectiveness of limited optimal preemption point placement algorithms by calculating the LCBs for each pair of adjacent preemptions to more accurately model task WCET and maximize schedulability as compared to existing preemption point placement approaches. We utilize a dynamic programming technique to develop an optimal preemption point placement algorithm. Lastly, we demonstrate, using a case study, improved task set schedulability and optimal preemption point placement via our new LCB characterization. We propose a new CRPD metric, called loaded cache blocks (LCB), which accurately characterizes the CRPD a real-time task may be subjected to due to the preemptive execution of higher-priority tasks. We show how to integrate our new LCB metric into our newly developed algorithms that automatically place preemption points supporting linear control flow graphs (CFGs) for limited preemption scheduling applications. We then extend the derivation of LCBs, originally proposed for linear CFGs, to conditional CFGs, and show how to integrate the revised LCB metric into algorithms that automatically place preemption points supporting conditional CFGs. For future work, we will verify the correctness of our framework against other measurable physical and hardware constraints. We also plan to complete our work on developing a generalized framework that can be seamlessly integrated into real-time schedulability analysis.
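
    The intuition behind charging CRPD per preemption point can be sketched as a set intersection: blocks that are cached before the point, re-referenced after it, and possibly evicted by the preempting tasks are the ones that must be reloaded. The function below is a simplified illustration with hypothetical block sets and a uniform reload cost; the paper's LCB metric and its use between adjacent preemption points are more precise.

```python
# A simplified, hedged sketch of the loaded-cache-block idea at one preemption
# point; not the paper's exact LCB derivation.
def lcb_cost(cached_before, used_after, evicted_by_preempters, block_reload_cost):
    """All set arguments contain cache block ids; returns the reload cost
    charged to this preemption point."""
    loaded = (cached_before & used_after) & evicted_by_preempters
    return len(loaded) * block_reload_cost

if __name__ == "__main__":
    # Hypothetical cache blocks around one preemption point.
    print(lcb_cost({1, 2, 3, 4}, {2, 4, 7}, {2, 3, 4, 9}, block_reload_cost=10))
```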

    Fixed priority scheduling with pre-emption thresholds and cache-related pre-emption delays: integrated analysis and evaluation

    Commercial off-the-shelf programmable platforms for real-time systems typically contain a cache to bridge the gap between processor speed and main memory speed. Because cache-related pre-emption delays (CRPD) can have a significant influence on the computation times of tasks, CRPD have been integrated into the response-time analysis for fixed-priority pre-emptive scheduling (FPPS). This paper presents CRPD-aware response-time analysis of sporadic tasks with arbitrary deadlines for fixed-priority pre-emption threshold scheduling (FPTS), generalizing earlier work. The analysis is complemented by an optimal (pre-emption) threshold assignment algorithm, assuming the priorities of tasks are given. We further improve upon these results by presenting an algorithm that searches for a layout of tasks in memory that makes a task set schedulable. The paper includes an extensive comparative evaluation of the schedulability ratios of FPPS and FPTS, taking CRPD into account. The practical relevance of our work stems from FPTS support in AUTOSAR, a standardized development model for the automotive industry.
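
    The core FPTS mechanism referred to above can be stated in one predicate: a newly released task preempts the running task only if its priority exceeds the running task's pre-emption threshold (thresholds equal to each task's own priority give FPPS; thresholds equal to the highest priority give non-preemptive scheduling). The sketch below uses hypothetical numeric priorities with larger meaning higher; the optimal threshold assignment and the CRPD-aware response-time analysis are not modeled.

```python
# A minimal sketch of the FPTS pre-emption rule.
def can_preempt(released_priority, running_threshold):
    """True if the released task may preempt the currently running task."""
    return released_priority > running_threshold

if __name__ == "__main__":
    # Example: the running task has base priority 3 but threshold 5 while it runs.
    print(can_preempt(released_priority=4, running_threshold=5))   # False
    print(can_preempt(released_priority=6, running_threshold=5))   # True
```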