
    ResilienceP Analysis: Bounding Cache Persistence Reload Overhead for Set-Associative Caches

    Presented at the 3rd Doctoral Congress in Engineering, held at FEUP, 27-28 June 2019. This work presents different approaches to calculate the cache persistence reload overhead (CPRO) for set-associative caches. The PCB-ECB approach uses the persistent cache blocks (PCBs) of the task under analysis and the evicting cache blocks (ECBs) of all other tasks in the system to provide sound CPRO estimates for set-associative caches. The resilienceP analysis then removes some of the pessimism of the PCB-ECB approach by considering the resilience of PCBs during CPRO calculations. We show that using the state-of-the-art (SoA) resilience analysis to calculate the resilience of PCBs may underestimate the CPRO that tasks can suffer. Finally, we also present a multi-set-like resilienceP analysis that highlights the remaining pessimism in the resilienceP analysis and provides insights on how it can be removed.
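
    As a rough illustration of the PCB-ECB idea above, the sketch below counts the PCBs whose cache set receives at least as many conflicting ECBs as the cache has ways, and charges one reload for each. It assumes simple modulo set indexing and an LRU replacement policy; the function name, parameters and all numbers are hypothetical, and this is not the paper's exact formulation.

        # Illustrative sketch of a PCB-ECB-style CPRO bound for a set-associative cache.
        # PCBs: persistent cache blocks of the task under analysis.
        # ECBs: evicting cache blocks of all other tasks in the system.
        from collections import defaultdict

        def cpro_upper_bound(pcbs, other_tasks_ecbs, num_sets, associativity, reload_cost):
            # Group the conflicting ECBs of the other tasks by the cache set they map to.
            ecbs_per_set = defaultdict(set)
            for block in other_tasks_ecbs:
                ecbs_per_set[block % num_sets].add(block)

            reloads = 0
            for block in set(pcbs):
                conflicting = len(ecbs_per_set[block % num_sets])
                # With fewer distinct conflicting blocks than ways, an LRU set cannot
                # evict the PCB; otherwise we pessimistically charge one reload
                # (the resilienceP analysis refines exactly this step).
                if conflicting >= associativity:
                    reloads += 1
            return reloads * reload_cost

        # Hypothetical example: 4 PCBs, a 16-set 2-way cache, 10-cycle reload cost.
        print(cpro_upper_bound([8, 9, 2, 33], range(40, 60), 16, 2, 10))   # -> 20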

    Towards Timing Analysis of Multi-core Platforms for Hard Real-Time Systems

    The CPS Student Forum Portugal was held as part of the Cyber-Physical Systems Week (CPS Week 2018), 10-13 April 2018, Porto, Portugal. We intend to provide solutions that can be used to quantify and analyze the non-determinism arising from the sharing of two main resources in multicore platforms (MCPs), i.e., caches and interconnects:
    • Accurately quantify the cache-related contention in single-core platforms.
    • Bound the interference due to the cache hierarchy and the last-level shared cache (LLC) in multicore platforms.
    • Model the inter-core interference due to the sharing of the bus/interconnect in an MCP.
    • Develop a new timing analysis that takes into account the interference caused by both caches and interconnects and their impact on the timing properties of tasks running on MCPs.

    211009

    Tasks running on microprocessors with cache memories are often subject to cache-related preemption delays (CRPDs). CRPDs may significantly increase task execution times, thereby affecting their schedulability. Schedulability analysis accounting for the impact of CRPD has been extensively studied over the past two decades for systems with a single level of cache. Yet, the literature on CRPD for multilevel non-inclusive caches is relatively scarce. Two main challenges exist when analyzing multilevel caches: (1) characterizing the indirect effect of preemption, i.e., capturing the increase in cache interference at lower cache levels (e.g., the L2 cache) due to the eviction of cache content from a higher cache level (e.g., the L1 cache), and (2) upper bounding the maximum CRPD suffered by tasks at lower cache levels (e.g., the L2 cache), i.e., determining the cache content of tasks that can be evicted from lower cache levels in case of preemption. Existing analyses that focus on bounding the CRPD for multilevel non-inclusive caches overestimate both (1) and (2), leading to pessimistic worst-case response time (WCRT) estimations. In this work, we reduce the excessive pessimism of the state-of-the-art CRPD analysis for multilevel non-inclusive caches by (i) introducing the notion of multi-level useful cache blocks, i.e., cache blocks that can cause CRPD at different cache levels, and using it to compute a tighter bound on the indirect effect of preemption, and (ii) deriving a new analysis that computes tighter bounds on the CRPD of tasks at lower cache levels (e.g., the L2 cache). We performed a thorough experimental evaluation using benchmarks to compare the performance of our proposed CRPD analysis against the state-of-the-art CRPD analysis. Experimental results show that our proposed CRPD analysis dominates the existing analysis and improves task set schedulability by up to 20 percentage points.
    This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); also by the Operational Competitiveness Programme and Internationalization (COMPETE 2020) under the PT2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and by national funds through the FCT, within project PREFECT (POCI-01-0145-FEDER-029119); also by the European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505; and by project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER000020", financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement.
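
    To make the per-level intersection idea from the abstract above concrete, here is a minimal sketch of a UCB/ECB-style bound for a two-level non-inclusive hierarchy. It deliberately charges each level separately, which is exactly the kind of pessimism the proposed multi-level useful-cache-block analysis reduces; all block sets, names and penalties are hypothetical.

        # Illustrative sketch of a coarse CRPD bound for two non-inclusive cache levels.
        # UCBs: useful cache blocks of the preempted task; ECBs: evicting cache blocks
        # of the preempting task, at each level.
        def crpd_two_levels(ucbs_l1, ucbs_l2, ecbs_l1, ecbs_l2, reload_l1, reload_l2):
            evicted_l1 = set(ucbs_l1) & set(ecbs_l1)   # useful L1 blocks the preemption may evict
            evicted_l2 = set(ucbs_l2) & set(ecbs_l2)   # useful L2 blocks the preemption may evict
            # Charging each level separately is pessimistic: an evicted L1 block that
            # still hits in L2 only pays the cheaper L1 reload in reality.
            return len(evicted_l1) * reload_l1 + len(evicted_l2) * reload_l2

        # Hypothetical block sets and per-level reload penalties (cycles).
        print(crpd_two_levels({1, 2, 3}, {1, 2, 3, 7}, {2, 3, 9}, {3, 7, 8}, 10, 60))   # -> 140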

    220608

    The sharing of main memory among concurrently executing tasks on a multicore platform increases the execution times of those tasks in a non-deterministic manner. The use of phased execution models that divide the execution of tasks into distinct memory and execution phases, e.g., the PRedictable Execution Model (PREM) and the 3-phase task model, along with Memory Centric Scheduling (MCS), is a promising solution to reduce main memory interference among tasks. Existing works on MCS have considered (i) a TDMA-based memory scheduler, i.e., tasks' memory requests are served under a static TDMA schedule, and (ii) a Processor-Priority (PP) based memory scheduler, i.e., tasks' memory requests are served depending on the priority of the processor/core on which the task is executing. This paper extends MCS by considering a Task-Priority (TP) based memory scheduler, i.e., tasks' memory requests are served in a global priority order determined by the priority of the task that issues the requests. We present an analysis to bound the total memory interference that can be suffered by tasks under TP-based MCS. In contrast to most existing works on MCS, which consider non-preemptive tasks, our analysis considers limited preemptive scheduling. Additionally, we investigate the impact of different preemption points on the memory interference of tasks. Experimental results show that our proposed TP-based MCS can significantly reduce the memory interference suffered by tasks in comparison to the PP-based MCS approach.
    This work was partially supported by the European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505; by project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER000020", financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement; also by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); by FCT and the Portuguese National Innovation Agency (ANI), under the CMU Portugal partnership, through the European Regional Development Fund (ERDF) of the Operational Competitiveness Programme and Internationalization (COMPETE 2020), under the PT2020 Partnership Agreement, within project FLOYD (POCI-01-0247-FEDER-045912); also by FCT under PhD grant 2020.09532.BD.
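
    A minimal sketch of the kind of bound a task-priority memory scheduler enables: the memory demand of higher-priority tasks' memory phases in a window, plus one non-preemptive lower-priority memory phase as blocking. The function, task parameters and numbers are hypothetical and far simpler than the limited-preemptive analysis described above.

        # Illustrative sketch of memory interference under a TP-based memory scheduler.
        import math

        def memory_interference(t, hp_tasks, max_lp_mem_phase):
            """hp_tasks: list of (period, memory_phase_length) of higher-priority tasks."""
            # Memory demand of higher-priority memory phases in a window of length t.
            hp = sum(math.ceil(t / period) * mem for period, mem in hp_tasks)
            # Plus one non-preemptive memory phase of a lower-priority task (blocking).
            return hp + max_lp_mem_phase

        # Two hypothetical higher-priority tasks and a 5-unit lower-priority blocking
        # phase, over a 100-unit window.
        print(memory_interference(100, [(20, 3), (50, 4)], 5))   # -> 28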

    230503

    In multiprocessor-based real-time systems, the main memory is identified as the main source of shared resource contention. Phased execution models, such as the 3-phase task execution model, have been shown to be good candidates to tackle the memory contention problem. The 3-phase model divides the execution of tasks into computation and memory phases, which enables a fine-grained memory contention analysis. However, existing work on memory contention analysis for 3-phase tasks can overestimate the memory contention suffered by the task under analysis due to write requests. This overestimation can yield pessimistic bounds on the memory access times and the memory contention suffered by tasks, which in turn lead to pessimistic worst-case response time (WCRT) bounds. Considering this limitation of the state-of-the-art, this work proposes an improved memory contention analysis for the 3-phase task model. Specifically, we propose a memory contention analysis that tightly bounds the memory contention suffered by the task under analysis due to write requests. The proposed analysis integrates the memory address mapping of tasks to improve the bounds on the maximum memory contention suffered by tasks.
    This work was financed by FCT and EU ECSEL JU within project ADACORSA (ECSEL/0010/2019 - JU grant nr. 876019) - the JU receives support from the EU's Horizon 2020 R&I Programme and Germany, Netherlands, Austria, France, Sweden, Cyprus, Greece, Lithuania, Portugal, Italy, Finland, Turkey (Disclaimer: This document reflects only the author's view and the Commission is not responsible for any use that may be made of the information it contains); it is also a result of the work developed under the CISTER Research Unit (UIDP/UIDB/04234/2020), financed by FCT/MCTES (Portuguese Foundation for Science and Technology); and under project POCI-01-0247-FEDER-045912 (FLOYD), financed in the scope of the CMU Portugal partnership by the European Regional Development Fund (ERDF) under COMPETE 2020; also by FCT under PhD grant 2020.09532.BD.
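
    The sketch below only illustrates one way memory address mapping can tighten a write-contention bound: writes from other tasks that target memory banks the analysed task never touches are not counted as contending. The bank sets, request counts and delay value are hypothetical, and this is an assumed interpretation rather than the paper's model.

        # Illustrative sketch: bank-aware counting of contending write requests.
        def write_contention(analysed_task_banks, other_tasks_writes, per_request_delay):
            """other_tasks_writes: mapping bank -> number of write requests from other tasks."""
            # Only writes directed at banks the analysed task also uses are counted.
            contending = sum(count for bank, count in other_tasks_writes.items()
                             if bank in analysed_task_banks)
            return contending * per_request_delay

        # The analysed task uses banks {0, 1}; other tasks write mostly to banks 2 and 3.
        print(write_contention({0, 1}, {0: 4, 2: 30, 3: 25}, 8))   # -> 32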

    211004

    Today, multicore processors are used in most modern systems that require computational logic. However, their applicability in systems with stringent timing requirements is still an open research problem. This is due to the difficulty of ensuring the timing correctness of tasks executing on a multicore platform that comprises a number of shared hardware resources, e.g., caches, the memory bus and the main memory. Concurrent accesses to any of these shared resources can generate uncontrolled interference, which complicates the estimation of tasks' worst-case execution times (WCET) and worst-case response times (WCRT). The use of the 3-phase task execution model helps in upper bounding the contention due to the sharing of the bus/main memory in multicore systems. It divides the execution of tasks into distinct memory and execution phases, where tasks can only access the bus/main memory during their memory phases. This makes the bus/memory access patterns of tasks more predictable, enabling a more precise computation of bus/memory contention. In this work, we show how the bus contention can be computed for the 3-phase task model under a work-conserving, round-robin (RR) based arbitration policy at the memory bus. This differs from existing works that analyze time-division multiple access (TDMA) and first-come-first-serve (FCFS) based bus arbitration policies. First, we present a solution to model the bus contention that can be suffered/caused by tasks executing on the same/remote cores of a multicore system under an RR-based bus arbitration scheme. We then evaluate the impact of the resulting bus contention on taskset schedulability. Experimental results show that our proposed RR-based bus contention analysis can improve taskset schedulability by up to 100 percentage points compared to the TDMA-based analysis and by up to 40 percentage points compared to the FCFS-based bus contention analysis.
    This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDB-UIDP/04234/2020); also by the Operational Competitiveness Programme and Internationalization (COMPETE 2020) under the PT2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and by national funds through the FCT, within project POCI-01-0145-FEDER-029119 (PREFECT); also by the European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505; by project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER000020", financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement; also by FCT under PhD grant 2020.09532.BD.
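
    A minimal sketch of the round-robin intuition above, assuming that each bus request of the task under analysis can be delayed by at most one request from every other core before it is served; the actual analysis also bounds how many requests the remote cores can really issue, and all values here are hypothetical.

        # Illustrative sketch of an RR-based bus contention bound for one memory phase.
        def rr_bus_contention(own_requests, num_other_cores, per_request_latency):
            # Under round-robin arbitration, each of our requests waits for at most
            # one request from every other core before being granted the bus.
            return own_requests * num_other_cores * per_request_latency

        # A memory phase issuing 32 bus requests on a quad-core platform (3 other
        # cores), each remote request occupying the bus for 40 cycles.
        print(rr_bus_contention(32, 3, 40))   # -> 3840 cycles of worst-case bus delay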

    220801

    Multicore platforms are being increasingly adopted in Cyber-Physical Systems (CPS) due to their advantages over single-core processors, such as raw computing power and energy efficiency. Typically, multicore platforms use a shared memory bus that connects the cores to the off-chip main memory. This sharing of the memory bus may cause tasks running on different cores to compete for access to the main memory whenever data/instructions need to be read/written from/to the main memory. Such competition is problematic, as it may cause variations in the execution times of tasks in a non-deterministic way. To reduce the complexity of analyzing this problem, the 3-phase task model was proposed, which divides tasks' executions into distinct memory and execution phases. The distinct memory phases are then scheduled so as to eliminate/minimize main memory contention between concurrently executing tasks. However, 3-phase tasks running on different cores may still compete for the shared memory bus/main memory in order to execute their memory phases. This paper presents a partitioned scheduling-based approach that allows one to derive memory bus contention-aware worst-case response times of tasks that follow the 3-phase task model. In particular, the bus contention analysis is derived by considering two memory access models: (i) a dedicated memory access model, where a core that is granted access to the main memory via the memory bus is permitted to execute more than one memory phase, and (ii) a fair memory access model, which restricts each core to executing only one memory phase per allocated bus access. The two models represent different system and application requirements, and the resulting bus contention of tasks may vary depending on the model considered. To evaluate the effectiveness of the proposed bus contention analysis, we compare its performance against an existing analysis in the state-of-the-art by performing (i) case-study experiments using benchmarks from the Mälardalen benchmark suite, and (ii) an empirical evaluation using synthetic task sets. Results show that our proposed analysis can improve the task set schedulability of 3-phase tasks by up to 88 percentage points.
    This work was partially supported by the European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505; by project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER000020", financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement; also by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); by FCT and the Portuguese National Innovation Agency (ANI), under the CMU Portugal partnership, through the European Regional Development Fund (ERDF) of the Operational Competitiveness Programme and Internationalization (COMPETE 2020), under the PT2020 Partnership Agreement, within project FLOYD (POCI-01-0247-FEDER-045912); also by FCT under PhD grant 2020.09532.BD.
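
    The toy comparison below, with hypothetical phase lengths, illustrates how the two memory access models described above can lead to different per-grant bus delays for a waiting memory phase; it is not the paper's bus contention analysis.

        # Illustrative sketch: bus delay seen by a pending memory phase under the two models.
        def bus_delay_fair(remote_phase_lens):
            # Fair model: at most one (the longest) memory phase per other core
            # before the bus comes back to our core.
            return sum(max(lens) for lens in remote_phase_lens)

        def bus_delay_dedicated(remote_phase_lens, phases_per_grant):
            # Dedicated model: a remote core may run several memory phases back to
            # back while it holds the bus; charge its 'phases_per_grant' longest ones.
            return sum(sum(sorted(lens, reverse=True)[:phases_per_grant])
                       for lens in remote_phase_lens)

        remote = [[3, 5, 2], [4, 4]]            # memory-phase lengths on two remote cores
        print(bus_delay_fair(remote))           # -> 9
        print(bus_delay_dedicated(remote, 2))   # -> 16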

    220101

    The PRedictable Execution Model (PREM) is useful for mitigating inter-core interference due to shared resources such as the main memory. However, it is cache-agnostic, which makes schedulability analysis pessimistic through an overestimation of prefetches and write-backs. In response, we present a cache-aware schedulability analysis for PREM tasks on fixed-task-priority partitioned multicores that bounds the number of cache prefetches and write-backs. Our approach identifies memory blocks that are loaded during a previous scheduling interval of each task and remain in the cache until its next scheduling interval. Doing so greatly reduces the estimated prefetches and write-backs. In experimental evaluations, our analysis improves the schedulability of PREM tasks by up to 55 percentage points.
    This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); also by the Operational Competitiveness Programme and Internationalization (COMPETE 2020) under the PT2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and by national funds through the FCT, within project PREFECT (POCI-01-0145-FEDER-029119); also by the European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505; and by project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER000020", financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement.
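
    A minimal sketch of the deduction idea from the abstract above, assuming the set of blocks that survive in cache between two scheduling intervals is already given; computing that surviving set is precisely what the proposed analysis does, and the block identifiers below are hypothetical.

        # Illustrative sketch: blocks that survive in cache need not be prefetched again.
        def prefetches_needed(interval_blocks, surviving_blocks):
            # Blocks the next interval uses, minus those still cached from the
            # previous interval, must be prefetched again.
            return len(set(interval_blocks) - set(surviving_blocks))

        interval = [10, 11, 12, 13, 14]   # blocks accessed in the next scheduling interval
        survive  = [11, 12, 14]           # blocks shown to remain cached in between
        print(prefetches_needed(interval, survive))   # -> only 2 blocks must be prefetched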