
    On How to Identify Cache Coherence: Case of the NXP QorIQ T4240

    Architectures used in safety-critical systems have to pass certification standards, which require sufficient proof that they will behave as expected. Multi-core processors make this challenging because of the complex interactions between the tasks they run. Many of these interactions occur without explicit instructions from the program designers, and they can have strong negative impacts on performance (and potentially affect correctness). One important source of such interactions is cache coherence, which speeds up operations in most cases but can also lead to unexpected variations in execution time if not fully understood. Architecture documentation often lacks details on the implementation of cache coherence. We thus propose a strategy to ascertain that a platform does indeed implement the cache coherence protocol its users believe it to. We apply this strategy to the NXP QorIQ T4240 and identify a protocol (MESIF) other than the one the architecture’s documentation led us to believe it was using (MESI).
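
    As a rough illustration of the kind of observable such an identification strategy can exploit, the following sketch models the one behavioural difference between MESI and MESIF that matters here: under MESIF a shared line has a single Forward copy that responds to read requests, whereas under plain MESI a line held only in the Shared state is supplied by memory. All latencies are invented for illustration and are not measurements of the T4240.

```python
# Toy model of the MESI/MESIF difference a timing-based identification
# strategy could observe. Latencies are invented, not T4240 measurements.

CACHE_TO_CACHE_NS = 40   # assumed cache-to-cache transfer latency
MEMORY_NS = 150          # assumed latency of a read served by DRAM

def read_latency(protocol, sharer_states):
    """Latency seen by a core whose read misses while other private
    caches hold the line in the given states."""
    if any(s in ("M", "E") for s in sharer_states):
        return CACHE_TO_CACHE_NS      # an owner forwards in both protocols
    if protocol == "MESIF" and "F" in sharer_states:
        return CACHE_TO_CACHE_NS      # the single Forward copy responds
    return MEMORY_NS                  # plain MESI shared line: memory responds

# Discriminating experiment: have two cores read the line first, so it is
# shared, then time a third core's read. MESI answers from memory; MESIF
# answers from the Forward copy, which should be visibly faster.
print("MESI :", read_latency("MESI",  ["S", "S"]), "ns")
print("MESIF:", read_latency("MESIF", ["F", "S"]), "ns")
```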

    Impact of DM-LRU on WCET: A Static Analysis Approach

    Cache memories in modern embedded processors are known to improve average memory access performance. Unfortunately, they are also known to represent a major source of unpredictability for hard real-time workload. One of the main limitations of typical caches is that content selection and replacement is entirely performed in hardware. As such, it is hard to control the cache behavior in software to favor caching of blocks that are known to have an impact on an application's worst-case execution time (WCET). In this paper, we consider a cache replacement policy, namely DM-LRU, that allows system designers to prioritize caching of memory blocks that are known to have an important impact on an application's WCET. Considering a single-core, single-level cache hierarchy, we describe an abstract interpretation-based timing analysis for DM-LRU. We implement the proposed analysis in a self-contained toolkit and study its qualitative properties on a set of representative benchmarks. Apart from being useful to compute the WCET when DM-LRU or similar policies are used, the proposed analysis can allow designers to perform WCET impact-aware selection of content to be retained in cache.
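
    The paper's exact DM-LRU semantics are not reproduced here, but the following sketch captures the prioritisation idea under a simplifying assumption: blocks marked as WCET-critical are victimised only when no unmarked block remains in the set, while hits update LRU order as usual.

```python
# Illustrative model of a DM-LRU-style cache set: accesses maintain LRU
# order, but on eviction unmarked ("best effort") blocks are victimized
# before blocks marked as WCET-critical. This simplifies the paper's policy.

def access(set_contents, block, marked, ways=4):
    """set_contents: list of (block, is_marked), most-recent first."""
    for i, (b, _) in enumerate(set_contents):
        if b == block:                       # hit: move to MRU position
            set_contents.insert(0, set_contents.pop(i))
            return "hit"
    if len(set_contents) == ways:            # miss in a full set: evict
        # prefer the least-recently-used unmarked block; fall back to true LRU
        for i in range(len(set_contents) - 1, -1, -1):
            if not set_contents[i][1]:
                set_contents.pop(i)
                break
        else:
            set_contents.pop()
    set_contents.insert(0, (block, marked))
    return "miss"

s = []
for blk, m in [("a", True), ("b", False), ("c", False), ("d", True),
               ("e", False)]:                # 'e' evicts 'b', not 'a' or 'd'
    access(s, blk, m)
print([b for b, _ in s])                     # marked blocks survive
```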

    WCET Analysis with MRU Caches: Challenging LRU for Predictability


    A Survey on Cache Management Mechanisms for Real-Time Embedded Systems

    © ACM, 2015. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Computing Surveys, 48(2), November 2015, http://doi.acm.org/10.1145/2830555.

    Multicore processors are being extensively used by real-time systems, mainly because of the demand for increased computing power. However, multicore processors have shared resources that affect the predictability of real-time systems, and predictability is key to correctly estimating the worst-case execution time of tasks. One of the main sources of unpredictability in a multicore processor is the cache memory hierarchy. Recently, many research works have proposed different techniques to deal with caches in multicore processors in the context of real-time systems. Nevertheless, a review and categorization of these techniques has remained an open topic and would be very useful for the real-time community. In this article, we present a survey of cache management techniques for real-time embedded systems, from the first studies in the field in 1990 up to the latest research published in 2014. We categorize the main research works and provide a detailed comparison in terms of similarities and differences. We also identify key challenges and discuss future research directions.

    On the analysis of random replacement caches using static probabilistic timing methods for multi-path programs

    Probabilistic hard real-time systems, based on hardware architectures that use a random replacement cache, provide a potential means of reducing the hardware over-provisioning required to accommodate pathological scenarios and the associated extremely rare, but excessively long, worst-case execution times that can occur in deterministic systems. Timing analysis for probabilistic hard real-time systems requires probabilistic worst-case execution time (pWCET) estimates. The pWCET distribution can be described as an exceedance function which gives an upper bound on the probability that the execution time of a task will exceed any given execution time budget on any particular run. This paper introduces a more effective static probabilistic timing analysis (SPTA) for multi-path programs. The analysis estimates the temporal contribution of an evict-on-miss, random replacement cache to the pWCET distribution of multi-path programs. It uses a conservative join function that provides a proper over-approximation of the possible cache contents and of the pWCET distribution on path convergence, irrespective of the actual path followed during execution. Simple program transformations are introduced that reduce the impact of path indeterminism while ensuring sound pWCET estimates. Evaluation shows that the proposed method is effective at capturing locality in the cache and substantially outperforms the only prior approach to SPTA for multi-path programs, which is based on path merging. The evaluation results also show incomparability with analysis for an equivalent deterministic system using an LRU cache: for some benchmarks LRU performs better, while for others the new analysis shows that random replacement has provably better performance.
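
    A minimal sketch of the probabilistic core of such an analysis, assuming the standard per-access bound for an evict-on-miss random replacement cache of associativity N: an access whose reuse distance is k hits with probability at least ((N-1)/N)^k, and the per-access latency distributions are convolved into an exceedance function. The paper's multi-path join and program transformations are not reproduced; latencies and the reuse distances are invented.

```python
# Illustrative core of an SPTA for an evict-on-miss random replacement
# cache: each access gets the lower bound p >= ((N-1)/N)**k on its hit
# probability (N = associativity, k = reuse distance), and per-access
# latency distributions are convolved into a pWCET exceedance function.

from collections import defaultdict

N, HIT, MISS = 4, 1, 10                 # associativity, latencies (cycles)

def access_dist(k):
    """Latency distribution of one access with reuse distance k."""
    p_hit = ((N - 1) / N) ** k
    return {HIT: p_hit, MISS: 1 - p_hit}

def convolve(d1, d2):
    out = defaultdict(float)
    for t1, p1 in d1.items():
        for t2, p2 in d2.items():
            out[t1 + t2] += p1 * p2
    return dict(out)

total = {0: 1.0}
for k in [0, 2, 1, 5]:                  # reuse distances along one path
    total = convolve(total, access_dist(k))

# Exceedance function: probability that execution time exceeds a budget.
for budget in sorted(total):
    exceed = sum(p for t, p in total.items() if t > budget)
    print(f"P(T > {budget:2d}) = {exceed:.4f}")
```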

    Analysis of cache usability on modern real-time systems

    Cache memories are used in microprocessors to close the speed gap between the processor and main memory. Caches minimize memory access time by keeping copies of highly demanded data closer to the processor, reducing overall program execution time. In safety-critical real-time systems a worst-case analysis is required, and the cache memories therefore play an essential role in estimating an application's worst-case execution time. A simulation tool for the cache structure was developed to provide estimated measurements of both cache predictability and worst-case memory access time based on the chosen architectural model, which may help to draw conclusions about the actual cache operation. The simulator supports several modern uni-core and multi-core architectures, including some used in real-time systems, and allows configuring different cache structures and hierarchies. The cache architecture, configuration and the memory accesses of a simulated running application are specified by the user via an input file. The simulation produces a trace entry for every access; cache predictability can then be expressed as hit and miss rates, while the traces can be used to estimate the total memory access time.
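
    A stripped-down version of such a trace-driven cache model, assuming a set-associative LRU cache and invented latencies, shows how hit/miss counts and an estimated total access time can be derived from an address trace:

```python
# Minimal trace-driven cache model: a set-associative LRU cache replays
# an address trace and reports hit/miss counts plus an estimated total
# access time. All parameters are illustrative, not those of any real CPU.

class Cache:
    def __init__(self, nsets=64, ways=4, line=32, hit_t=1, miss_t=100):
        self.sets = [[] for _ in range(nsets)]   # each set: LRU list, MRU first
        self.nsets, self.ways, self.line = nsets, ways, line
        self.hit_t, self.miss_t = hit_t, miss_t
        self.hits = self.misses = 0

    def access(self, addr):
        tag, idx = divmod(addr // self.line, self.nsets)
        s = self.sets[idx]
        if tag in s:
            s.remove(tag); s.insert(0, tag)      # hit: refresh LRU order
            self.hits += 1
        else:
            if len(s) == self.ways:
                s.pop()                          # miss in full set: evict LRU
            s.insert(0, tag)
            self.misses += 1

c = Cache()
for addr in [0, 32, 64, 0, 4096, 0]:             # toy address trace
    c.access(addr)
total = c.hits * c.hit_t + c.misses * c.miss_t
print(f"hits={c.hits} misses={c.misses} est. access time={total} cycles")
```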

    Multi-Core Architectures and Worst-Case Execution Time (Architecture multi-coeurs et temps d'exécution au pire cas)

    Critical tasks in real-time systems are subject to both timing and correctness constraints; the validation of a real-time system therefore relies on estimating the worst-case execution times (WCETs) of its tasks. Resource sharing, as it occurs on multi-core architectures, hinders the computation of such estimates: the timing behaviour of a task is affected by its co-runners, whether through arbitration of resource accesses or through concurrent modifications of a resource's state. This study focuses on estimating the contribution of the memory hierarchy to tasks' worst-case execution times. Existing analysis methods, defined for instruction caches, are extended to support private and shared data caches, allowing for the analysis of rich memory hierarchies. Cache bypass is then used to reduce the pressure that concurrent tasks place on shared cache levels; to this end, we propose different bypass heuristics based on capturing the reuse of cache blocks between memory accesses, as illustrated below. Our second proposal is the Preti partitioning scheme, which allocates to each task a cache space free from inter-task conflicts. Preti also preserves average-case performance for non-critical tasks running alongside real-time ones in mixed-criticality systems.
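
    As a toy illustration of the reuse-capture intuition behind such bypass heuristics (not the thesis's actual heuristics): under LRU, a block whose next reuse is separated by at least as many distinct blocks as the cache has ways cannot hit, so caching it only adds pressure and it may be bypassed.

```python
# Toy reuse-based bypass heuristic (illustrative only): flag an access as
# a bypass candidate if its block is never reused, or if the number of
# distinct blocks touched before its next reuse reaches the associativity
# of an LRU cache, in which case caching it cannot produce a hit.

def bypass_candidates(trace, ways):
    cands = set()
    for i, blk in enumerate(trace):
        try:
            nxt = trace.index(blk, i + 1)          # next use of this block
        except ValueError:
            cands.add((i, blk))                    # never reused: bypass
            continue
        reuse_dist = len(set(trace[i + 1:nxt]))    # distinct blocks between
        if reuse_dist >= ways:
            cands.add((i, blk))                    # reuse too far: bypass
    return cands

trace = ["a", "b", "a", "c", "d", "e", "f", "c"]
print(sorted(bypass_candidates(trace, ways=4)))    # 'a'@0 and 'c'@3 are kept
```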

    Analysis of preemptively scheduled hard real-time systems

    As timing is a major property of hard real-time systems, proving timing correctness is of utmost importance. A static timing analysis derives upper bounds on the execution times of tasks; a scheduling analysis uses these bounds and checks whether each task meets its timing constraints. In preemptively scheduled systems with caches, this interface between timing analysis and scheduling analysis must be considered outdated. On a context switch, a preempting task may evict cached data of a preempted task that must be reloaded after the preemption. The additional execution time due to these reloads, called the cache-related preemption delay (CRPD), may substantially prolong a task's execution time and strongly influence the system's performance. In this thesis, we present a formal definition of the cache-related preemption delay and determine the applicability and limitations of a separate CRPD computation. To bound the CRPD based on the analysis of the preempted task, we introduce the concept of definitely cached useful cache blocks. This new concept eliminates substantial pessimism with respect to former analyses by taking into account the over-approximation of a preceding timing analysis. We then consider the impact of the preempting task to further refine the CRPD bounds. To this end, we present the notion of resilience: the resilience of a cache block is a measure of the amount of disturbance by a preempting task that a cache block of the preempted task may survive. Based on these CRPD bounds, we show how to correctly account for the CRPD in the schedulability analysis for fixed-priority preemptive systems and present new CRPD-aware response time analyses: the ECB-Union and Multiset approaches.
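
    A skeleton of how such CRPD bounds enter a fixed-priority response time analysis; the recurrence below is the standard one, with gamma[i][j] standing for a CRPD bound charged per preemption of task i by task j. The gamma values here are invented placeholders; deriving tight ones (e.g. via the ECB-Union or Multiset approaches) is the substance of the thesis and is not shown.

```python
# CRPD-aware response time analysis for fixed-priority preemptive
# scheduling, using the classic fixed-point recurrence
#   R_i = C_i + sum_{j in hp(i)} ceil(R_i / T_j) * (C_j + gamma_ij)
# where gamma_ij bounds the cache-related preemption delay per preemption.

from math import ceil

def response_time(C, T, gamma, i):
    """C, T: per-task WCETs and periods, index 0 = highest priority.
    gamma[i][j]: CRPD bound charged to task i per preemption by task j."""
    R = C[i]
    while True:
        R_new = C[i] + sum(ceil(R / T[j]) * (C[j] + gamma[i][j])
                           for j in range(i))      # hp(i) = tasks 0..i-1
        if R_new == R:
            return R                               # fixed point reached
        if R_new > T[i]:
            return None                            # deadline (= period) missed
        R = R_new

C = [1, 2, 4]                            # invented WCETs
T = [5, 10, 20]                          # invented periods, implicit deadlines
gamma = [[0]*3, [1, 0, 0], [1, 1, 0]]    # invented CRPD bounds
for i in range(3):
    print(f"task {i}: R = {response_time(C, T, gamma, i)}")
```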

    Cache persistence analysis for embedded real-time systems

    To compute a worst-case execution time (WCET) estimate for a program running on a safety-critical hard real-time system, the effects of the underlying hardware architecture have to be modeled. The classical cache analysis distinguishes three categories of memory references to cached memory: always-hit, always-miss and not-classified. Cache persistence analysis improves on the classical cache analysis by classifying memory references as persistent, thereby limiting the number of misses for not-classified memory references. We present several new abstract interpretation-based cache persistence analyses: two are based on the concept of conflict counting, one on the may cache analysis, and one combines both concepts. All analyses also fix a correctness issue in the original cache persistence analysis by Ferdinand and Wilhelm. For non-fully-timing-compositional architectures, using the persistence information is not straightforward; a novel path analysis enables the use of persistence information also for state-of-the-art architectures that exhibit timing anomalies or domino effects. The new analyses are evaluated within the industrially used WCET analyzer aiT on a series of standard benchmark programs and a series of real avionics examples.
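
    A deliberately simplified illustration of the conflict-counting idea, assuming an LRU set-associative cache: a block is persistent within an analysis scope, and thus contributes at most one miss there, if the distinct blocks competing for its cache set (itself included) fit within the associativity. The abstract-interpretation machinery of the actual analyses is omitted.

```python
# Simplified conflict-counting persistence check for an LRU set-associative
# cache: within a scope (e.g. a loop), a block is persistent if all blocks
# mapping to its set fit in the set, so once loaded it is never evicted
# inside the scope and can miss at most once.

from collections import defaultdict

def persistent_blocks(scope_accesses, nsets, ways):
    """scope_accesses: list of integer block ids accessed in the scope."""
    conflicts = defaultdict(set)
    for blk in scope_accesses:
        conflicts[blk % nsets].add(blk)          # blocks mapping to each set
    return {blk for blk in scope_accesses
            if len(conflicts[blk % nsets]) <= ways}

# Blocks 0 and 4 share set 0 of a 4-set, 2-way cache: both persistent.
# Blocks 1, 5 and 9 all compete for set 1: none of them is persistent.
loop = [0, 4, 1, 5, 9, 0, 4, 1]
print(sorted(persistent_blocks(loop, nsets=4, ways=2)))   # -> [0, 4]
```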

    Timing Predictable and High-Performance Hardware Cache Coherence Mechanisms for Real-Time Multi-Core Platforms

    Multi-core platforms are becoming primary compute platforms for real-time systems such as avionics and autonomous vehicles. This adoption is primarily driven by the increasing application demands of real-time systems and the cost and performance benefits of multi-core platforms. For real-time applications, satisfying safety properties in the form of timing predictability is the paramount consideration. Providing such guarantees requires applying a timing analysis to the application executing on the compute platform; the timing analysis computes an upper bound on the application's execution time, referred to as the worst-case execution time (WCET). However, multi-core platforms pose challenges that complicate this timing analysis, among them the timing effects of simultaneous accesses from multiple cores to shared hardware resources such as shared caches, interconnects, and off-chip memories. Supporting timing predictable shared data communication between real-time applications further compounds this challenge, as a core's access to shared data depends on the simultaneous memory activity of other cores on the same data. Although hardware cache coherence mechanisms are the primary high-performance data communication mechanisms in current multi-core platforms, there has been very little use of these mechanisms to support timing predictable shared data communication in real-time multi-core platforms. Rather, current state-of-the-art approaches to timing predictable shared data communication sidestep hardware cache coherence: they enforce memory and execution constraints on the shared data to simplify the timing analysis at the expense of application performance. This thesis makes the case for timing predictable hardware cache coherence mechanisms as viable shared data communication mechanisms for real-time multi-core platforms. A key takeaway from its contributions is that timing predictable hardware cache coherence mechanisms offer significantly better application performance than prior state-of-the-art data communication approaches while guaranteeing timing predictability.

    This thesis has three main contributions. First, it shows how a hardware cache coherence mechanism can be designed to be timing predictable by defining design invariants that guarantee timing predictability. We apply these design invariants to design timing predictable variants of existing conventional cache coherence mechanisms; evaluation shows that they provide significant application performance gains over state-of-the-art approaches while delivering timing predictability. Second, we observe that the large worst-case memory access latency under timing predictable hardware cache coherence mechanisms calls into question their applicability as a data communication mechanism in real-time multi-core platforms. To this end, we present a systematic framework for designing timing predictable cache coherence mechanisms that balance high application performance and low worst-case memory access latency. The framework concisely captures the design features of timing predictable cache coherence mechanisms that impact their WCET, and identifies a spectrum of approaches to reduce the worst-case memory access latency. We describe one such approach and show that it reduces the worst-case memory access latency of timing predictable cache coherence mechanisms to that of alternative approaches while sacrificing minimal performance relative to the original cache coherence mechanisms. Third, we design a timing predictable hardware cache coherence mechanism for multi-core platforms used in mixed-criticality real-time systems (MCS). Applications in MCS have varying performance and timing predictability requirements; our mechanism considers these differing requirements and ensures that applications with no timing predictability requirements do not impact applications with strict predictability requirements.
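
    As a back-of-the-envelope illustration (not the thesis's analysis) of why predictable arbitration yields bounded coherence latency: under TDM arbitration of the shared interconnect, a coherence request waits at most one full round of slots before being served, so the worst-case latency grows linearly with the core count. All numbers below are invented.

```python
# Illustrative worst-case latency (WCL) bound for one coherent memory
# request under TDM bus arbitration: a request that just missed its own
# slot waits one full round of slots, then completes its transfer.
# Numbers are invented and stand in for no particular platform.

def tdm_wcl(n_cores, slot_cycles, transfer_cycles):
    wait = n_cores * slot_cycles     # worst case: one full TDM round
    return wait + transfer_cycles

for cores in (2, 4, 8):
    print(f"{cores} cores: WCL = {tdm_wcl(cores, 50, 50)} cycles")
```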