4 research outputs found

    Mixed-Criticality Scheduling with Dynamic Redistribution of Shared Cache

    The design of mixed-criticality systems often involves painful tradeoffs between safety guarantees and performance. However, the use of more detailed architectural models in the design and analysis of scheduling arrangements for mixed-criticality systems can provide greater confidence in the analysis, but also opportunities for better performance. Motivated by this view, we propose an extension of Vestal’s model for mixed-criticality multicore systems that (i) accounts for the per-task partitioning of the last-level cache and (ii) supports the dynamic reassignment, for better schedulability, of cache portions initially reserved for lower-criticality tasks to the higher-criticality tasks when the system switches to high-criticality mode. To this model, we apply partitioned EDF scheduling with Ekberg and Yi’s deadline-scaling technique. Our schedulability analysis and scale-factor calculation are cognisant of the cache resources assigned to each task, by using WCET estimates that take these resources into account. The analysis is hence able to leverage the dynamic reconfiguration of the cache partitioning, at mode change, for better performance in terms of provable schedulability. We also propose heuristics for partitioning the cache in low- and high-criticality mode that promote schedulability. Our experiments with synthetic task sets indicate tangible improvements in schedulability compared to a baseline cache-aware arrangement in which there is no redistribution of cache resources from low- to high-criticality tasks in the event of a mode change.
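
    To make the role of the deadline-scaling step concrete, the sketch below shows a simplified, utilisation-based EDF-VD-style test for a single core, with the cache-dependent WCET estimates entering as separate low-mode and high-mode values. It is only an illustration under assumed names (Task, wcet_lo, wcet_hi, deadline_scaling_test): the paper applies Ekberg and Yi’s demand-bound-function analysis, not this utilisation test.

        # Illustrative sketch (not the paper's analysis): an EDF-VD-style
        # utilisation test in which each WCET estimate is tied to the cache
        # partition the task holds in the corresponding mode.
        from dataclasses import dataclass

        @dataclass
        class Task:
            period: float         # implicit deadline equal to the period
            crit: str             # "LO" or "HI"
            wcet_lo: float        # WCET with the cache partition held in low-criticality mode
            wcet_hi: float = 0.0  # WCET with the enlarged partition after cache reclamation

        def deadline_scaling_test(tasks):
            """Return (schedulable, x), where x scales the deadlines of HI tasks in LO mode."""
            u_lo_lo = sum(t.wcet_lo / t.period for t in tasks if t.crit == "LO")
            u_hi_lo = sum(t.wcet_lo / t.period for t in tasks if t.crit == "HI")
            u_hi_hi = sum(t.wcet_hi / t.period for t in tasks if t.crit == "HI")
            if u_lo_lo >= 1.0:
                return False, None
            x = u_hi_lo / (1.0 - u_lo_lo)   # smallest scale factor keeping LO mode feasible
            # HI-mode condition: carried-over LO work plus HI-mode demand fits on the core.
            if x <= 1.0 and x * u_lo_lo + u_hi_hi <= 1.0:
                return True, x
            return False, None

    Because wcet_hi is obtained under the larger, post-reclamation cache partition, the redistribution of cache at mode change shows up directly as a smaller high-mode utilisation term and hence better provable schedulability.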

    Mixed-Criticality Systems with Partial Lockdown and Cache Reclamation Upon Mode Change

    29th Euromicro Conference on Real-Time Systems (ECRTS 2017), WiP. Dubrovnik, Croatia. In mixed-criticality multicore systems, the appropriate degree of isolation between applications of different criticalities is a primary objective. However, efficient utilization of the platform’s processing capacity and other resources is still desirable and important. In recent work, we therefore proposed an approach that reclaims cache resources assigned to low-criticality tasks when these are dispensed with, in the event of a system mode change. The reclaimed cache resources are reassigned from the lower-criticality tasks to the remaining higher-criticality tasks to improve performance. The per-task cache partitions can either be configured to hold frequently accessed (“hot”) pages, locked in place, or they can be used dynamically, with cache lines moved in and out. The first option simplifies WCET analysis, while the second option simplifies the act of cache reconfiguration at runtime. Meanwhile, the performance implications of the two options are not immediately obvious. Therefore, in this work in progress, we explore an arrangement that combines both approaches, in order to achieve the best tradeoff between efficient analysis, low reconfiguration overheads and good schedulability. Simple per-task cache partitions (without page locking) are to be used for the portion of the cache that is subject to reclamation. At mode switch, the high-criticality tasks keep the pages they had locked in the cache and get additional partitions, out of the reclaimed cache, to bring other pages in and out as needed.
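
    As a rough illustration of the reclamation step described above, the hypothetical sketch below pools the cache portions owned by low-criticality tasks at the mode switch and hands them to the remaining high-criticality tasks as extra dynamic (unlocked) partitions, leaving their locked pages untouched. The even-split policy and all names used are assumptions made for illustration, not the proposed heuristic.

        # Hypothetical sketch of cache reclamation at a LO-to-HI mode switch.
        def reclaim_cache_on_mode_switch(criticality, partitions):
            """criticality: task name -> "LO" or "HI"
            partitions:  task name -> {"locked": ways, "dynamic": ways}"""
            hi_tasks = [t for t, c in criticality.items() if c == "HI"]
            if not hi_tasks:
                return {}
            # Pool every cache portion owned by a task that is dropped in HI mode.
            reclaimed = sum(partitions[t]["locked"] + partitions[t]["dynamic"]
                            for t, c in criticality.items() if c == "LO")
            # HI tasks keep their locked pages; only their dynamic share grows,
            # here split evenly (an assumption, for illustration only).
            share, leftover = divmod(reclaimed, len(hi_tasks))
            return {t: {"locked": partitions[t]["locked"],
                        "dynamic": partitions[t]["dynamic"] + share + (1 if i < leftover else 0)}
                    for i, t in enumerate(hi_tasks)}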

    Decoupling Criticality and Importance in Mixed-Criticality Scheduling

    Research on mixed-criticality scheduling has flourished since Vestal’s seminal 2007 paper, but more effort is needed to make these results suitable for industrial adoption, and robust and versatile enough to influence the evolution of future certification standards. With this in mind, we introduce a more refined task model, in line with the fundamental principles of Vestal’s mode-based adaptive mixed-criticality model, which allows a task’s criticality and its importance to be specified independently of each other. A task’s importance is the criterion that determines its presence in different system modes. Meanwhile, the task’s criticality (reflected in its Safety Integrity Level (SIL) and defining the rules for its software development process) prescribes the degree of conservativeness for the task’s estimated WCET during schedulability testing. We indicate how such a task model can help resolve some of the perceived weaknesses of the Vestal model, in terms of how it is interpreted, and demonstrate how the existing scheduling tests for the classic variants of Vestal’s model can be mapped to the new task model essentially without changes.
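
    The sketch below is one minimal way to express the decoupling described above in code. The attribute names (criticality, importance), the integer mode levels and the WCET lookup rule are assumptions for illustration, not the paper’s formal task model.

        # Minimal sketch of the refined task model: criticality and importance
        # are independent attributes with distinct roles.
        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            criticality: int   # e.g. SIL 1..4: fixes how conservative the WCET estimates must be
            importance: int    # decides in which system modes the task is still admitted
            wcet: dict         # assurance level -> WCET estimate, e.g. {1: 2.0, 2: 5.0}
            period: float

        def active_in_mode(task, mode_level):
            # Presence in a given mode is decided by importance alone.
            return task.importance >= mode_level

        def wcet_for_test(task, mode_level):
            # The WCET estimate used when testing a mode is chosen at the assurance
            # level implied by the task's own criticality (never more conservative
            # than its SIL requires), independently of its importance.
            return task.wcet[min(task.criticality, mode_level)]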

    Holistic resource allocation for multicore real-time systems

    This paper presents CaM, a holistic cache and memory bandwidth resource allocation strategy for multicore real-time systems. CaM is designed for partitioned scheduling, where tasks are mapped onto cores, and the shared cache and memory bandwidth resources are partitioned among cores to reduce resource interference due to concurrent accesses. Based on our extension of LITMUS^RT with Intel’s Cache Allocation Technology and MemGuard, we present an experimental evaluation of the relationship between the allocation of cache and memory bandwidth resources and a task’s WCET. Our resource allocation strategy exploits this relationship to map tasks onto cores and to compute the resource allocation for each core. By grouping tasks with similar characteristics (in terms of resource demands) onto the same core, it enables the tasks on each core to fully utilize the assigned resources. In addition, based on the tasks’ execution-time behaviors with respect to their assigned resources, we can determine a desirable allocation that maximizes schedulability under resource constraints. Extensive evaluations using real-world benchmarks show that CaM offers near-optimal schedulability performance while being highly efficient, and that it substantially outperforms existing solutions.
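
    One plausible reading of the “group similar tasks, then allocate per core” idea is sketched below. The clustering criterion, the proportional split and all names are assumptions made purely for illustration; the actual CaM strategy and its WCET model are defined in the paper.

        # Illustrative sketch only: group tasks with similar cache/bandwidth
        # demands onto the same core, then split the shared resources across
        # cores in proportion to each group's aggregate demand.
        def group_tasks_by_demand(tasks, n_cores):
            """tasks: list of dicts with 'cache_demand' and 'bw_demand' (normalised to [0, 1])."""
            ordered = sorted(tasks, key=lambda t: (t["cache_demand"], t["bw_demand"]))
            size = -(-len(ordered) // n_cores)   # ceiling division
            # Consecutive chunks of the sorted list keep similar tasks together.
            return [ordered[i * size:(i + 1) * size] for i in range(n_cores)]

        def allocate_per_core(groups, total_cache_ways, total_bandwidth):
            cache_w = [sum(t["cache_demand"] for t in g) for g in groups]
            bw_w = [sum(t["bw_demand"] for t in g) for g in groups]
            return [{"cache_ways": round(total_cache_ways * cw / max(sum(cache_w), 1e-9)),
                     "bandwidth": total_bandwidth * bw / max(sum(bw_w), 1e-9)}
                    for cw, bw in zip(cache_w, bw_w)]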