Resilient random modulo cache memories for probabilistically-analyzable real-time systems
Fault tolerance has often been assessed separately from timing in safety-related real-time systems, which may lead to inefficient solutions. Recently, Measurement-Based Probabilistic Timing Analysis (MBPTA) has been proposed to estimate the Worst-Case Execution Time (WCET) on high-performance hardware. The intrinsic probabilistic nature of MBPTA-compliant hardware matches perfectly with the random nature of hardware faults.
Joint WCET analysis and reliability assessment has been done so far for some MBPTA-compliant designs, but not for the most promising cache design: random modulo. In this paper we perform, for the first time, an assessment of the aging-robustness of random modulo, and we propose new implementations that preserve its key properties, namely low critical-path impact, low miss rates, and MBPTA compliance, while enhancing reliability against aging by achieving a better, yet still random, activity distribution across cache sets. Peer reviewed. Postprint (author's final draft).
Random Modulo: A new processor cache design for real-time critical systems
Cache memories have a huge impact on software's worst-case execution time (WCET). While enabling the seamless use of caches is key to providing the increasing levels of (guaranteed) performance required by automotive software, caches complicate timing analysis. In the context of Measurement-Based Probabilistic Timing Analysis (MBPTA), a promising technique to ease the timing analysis of complex hardware, we propose Random Modulo (RM), a new cache design that provides the probabilistic behavior required by MBPTA and offers the following advantages over existing MBPTA-compliant cache designs: (i) an outstanding reduction in WCET estimates, (ii) lower latency and area overhead, and (iii) competitive average performance w.r.t. conventional caches. Peer reviewed. Postprint (author's final draft).
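A randomized yet modulo-like placement can be sketched as follows. This is a minimal illustrative sketch, not the hardware design from the paper: it XORs the conventional modulo index with a hash of the line's tag and a per-run random seed. Since XOR with a fixed value is a bijection on the index bits, lines sharing a tag (which never conflict under plain modulo) still map to distinct sets, while placement varies randomly from seed to seed.

```python
import hashlib

NUM_SETS = 64     # sets in the cache (power of two); illustrative values
INDEX_BITS = 6    # log2(NUM_SETS)
OFFSET_BITS = 5   # 32-byte cache lines

def rm_index(addr: int, seed: int) -> int:
    """Randomized set index: XOR the conventional modulo index with a
    hash of the line's tag and the random seed."""
    line = addr >> OFFSET_BITS
    index = line & (NUM_SETS - 1)          # plain modulo index
    tag = line >> INDEX_BITS
    h = hashlib.blake2b(f"{tag}:{seed}".encode(), digest_size=2).digest()
    scramble = int.from_bytes(h, "big") & (NUM_SETS - 1)
    return index ^ scramble                # bijection on the index bits

# Lines within one tag region never collide, just like plain modulo:
seed = 0xC0FFEE
base = 0x40000000
indices = {rm_index(base + i * 32, seed) for i in range(NUM_SETS)}
assert len(indices) == NUM_SETS
```

Under a different seed the same lines land in different sets, which is what gives the probabilistic timing behaviour MBPTA needs without sacrificing modulo's low miss rate on sequential accesses.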
pTNoC: Probabilistically time-analyzable tree-based NoC for mixed-criticality systems
The use of networks-on-chip (NoC) in real-time safety-critical multicore systems challenges deriving tight worst-case execution time (WCET) estimates. This is due to the complexity of tightly upper-bounding the contention among running tasks in the access to the NoC. Probabilistic Timing Analysis (PTA) is a powerful approach to derive WCET estimates on relatively complex processors. However, so far it has only been tested on small multicores built around an on-chip bus, a communication means that intrinsically does not scale to high core counts. In this paper we propose pTNoC, a new tree-based NoC design compatible with PTA requirements that delivers scalability towards medium/large core counts. pTNoC provides tight WCET estimates by means of asymmetric bandwidth guarantees for mixed-criticality systems, with negligible impact on average performance. Finally, our implementation results show the reduced area and power costs of pTNoC. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant agreement no. 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Mladen Slijepcevic is funded by the Obra Social Fundación la Caixa under grant Doctorado "la Caixa" - Severo Ochoa. Carles Hernández is jointly funded by the Spanish Ministry of Economy and Competitiveness (MINECO) and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Peer reviewed. Postprint (author's final draft).
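The idea of asymmetric bandwidth guarantees in a tree-based NoC can be illustrated with a toy model; the weights and the product-of-weights rule below are assumptions for illustration, not pTNoC's actual arbitration scheme. Each arbiter on the path from a core to the shared root link grants the core a configurable fraction of its parent's bandwidth, so a core's end-to-end guarantee is the product of the fractions along its path, and critical cores can be given larger fractions than non-critical ones.

```python
def guaranteed_share(weights_on_path):
    """Guaranteed bandwidth fraction for a core: the product of its
    arbitration weights at every tree level between the core and the
    shared root link (toy model, not pTNoC's real arbitration)."""
    share = 1.0
    for w in weights_on_path:
        share *= w
    return share

# Hypothetical 4-core binary tree: the critical core gets weight 0.75
# at each of its two arbiters; its sibling gets the remaining 0.25 at
# the leaf arbiter but shares the same 0.75 root-link fraction.
critical = guaranteed_share([0.75, 0.75])   # 0.5625 of root-link bandwidth
noncrit = guaranteed_share([0.25, 0.75])    # 0.1875 of root-link bandwidth
assert abs(critical - 0.5625) < 1e-9
assert abs(noncrit - 0.1875) < 1e-9
```

A guaranteed share translates directly into a worst-case contention bound: a core guaranteed a fraction s of the link never waits more than roughly 1/s packet slots for its turn, which is what makes WCET estimates tight and scalable with core count.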
Probabilistic Worst-Case Timing Analysis: Taxonomy and Comprehensive Survey
"© ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Computing Surveys, Vol. 52, Iss. 1 (February 2019), https://dl.acm.org/doi/10.1145/3301283." The unabated increase in the complexity of the hardware and software components of modern embedded real-time systems has given momentum to a host of research in the use of probabilistic and statistical techniques for timing analysis. In the last few years, that front of investigation has yielded a body of scientific literature vast enough to warrant a comprehensive taxonomy of motivations, strategies of application, and directions of research. This survey addresses this very need, singling out the principal techniques in the state of the art of timing analysis that employ probabilistic reasoning at some level, building a taxonomy of them, and discussing their relative merits, limitations, and the relations among them. In addition to offering a comprehensive foundation for understanding probabilistic timing analysis, this article also identifies the key challenges to be addressed to consolidate the scientific soundness and industrial viability of this emerging field. This work has been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717). Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship No. IJCI-2016-27396.
Cazorla, FJ.; Kosmidis, L.; Mezzetti, E.; Hernández Luz, C.; Abella, J.; Vardanega, T. (2019). Probabilistic Worst-Case Timing Analysis: Taxonomy and Comprehensive Survey. ACM Computing Surveys 52(1):1-35. https://doi.org/10.1145/3301283
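The statistical core of the measurement-based techniques this survey covers can be sketched with extreme value theory: fit a Gumbel distribution to block maxima of measured execution times and read off the bound at a target exceedance probability, the probabilistic WCET (pWCET). The sketch below uses a simple method-of-moments fit on synthetic data; a real MBPTA flow additionally runs independence and goodness-of-fit tests before trusting the fit.

```python
import math
import random

def pwcet_gumbel(samples, block, p):
    """Fit a Gumbel distribution to block maxima of measured execution
    times (method of moments) and return the execution-time bound whose
    exceedance probability is p."""
    maxima = [max(samples[i:i + block])
              for i in range(0, len(samples) - block + 1, block)]
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi        # Gumbel scale from std dev
    mu = mean - 0.5772156649 * beta            # Euler-Mascheroni constant
    return mu - beta * math.log(-math.log(1 - p))

# Synthetic "measurements": 1000-cycle base plus an exponential tail.
random.seed(1)
times = [1000 + random.expovariate(1 / 50) for _ in range(5000)]
bound = pwcet_gumbel(times, block=50, p=1e-12)
assert bound > max(times)   # the pWCET bound exceeds every observation
```

Reading the fitted distribution at, say, p = 1e-12 per run gives a bound that is exceeded with a quantifiable, arbitrarily small probability, which is the central claim probabilistic timing analysis trades against the pessimism of deterministic bounds.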
A Probabilistically Analyzable Cache to Estimate Timing Bounds
ABSTRACT - Modern computer architectures target speeding up the average performance of the software running on them. Architectural features such as deep pipelines, branch prediction, out-of-order execution, and multi-level memory hierarchies have an adverse impact on software timing prediction. In particular, it is hard or even impossible to accurately estimate the worst-case execution time (WCET) of a program running on a particular hardware platform.
Critical real-time embedded systems (CRTESs), e.g. computing systems in aerospace, are subject to strict timing constraints that guarantee their proper operational behavior. WCET analysis is central to real-time systems development because real-time systems must always meet their deadlines. To meet deadline requirements, the WCET of real-time tasks must be determined, and this is only possible if the hardware architecture is time-predictable. Due to the unpredictable nature of modern computing hardware, it is impractical to use advanced computing systems in CRTESs. Real-time systems do not necessarily require high performance: processors designed to improve average-case performance may not fit real-time requirements due to predictability issues.
Current timing analysis techniques are well established, but they require detailed knowledge of the internal operations and state of the system, for both hardware and software. This lack of in-depth knowledge of architectural operations is an obstacle to adopting deterministic timing analysis (DTA) techniques for WCET estimation. Probabilistic timing analysis (PTA) has emerged as a timing analysis technique for next-generation real-time systems. PTA techniques reduce the extent of knowledge of the software execution platform needed to perform accurate WCET estimation. In this thesis, we propose a new probabilistically analyzable cache and apply PTA techniques for time prediction. We implemented a randomized cache for the MIPS-32 and Leon-3 processors. We designed and implemented random placement and replacement policies, and applied probabilistic timing techniques to estimate the probabilistic WCET (pWCET). We also measured the level of pessimism incurred by the probabilistic techniques and compared it with a deterministic cache configuration. The WCET prediction provided by the PTA techniques is closer to the real execution time of the program. We compared the estimates with measurements taken on the processor to help the designer evaluate the level of pessimism introduced by the cache architecture for each probabilistic timing analysis technique. This work makes a first attempt at comparing deterministic, static, and measurement-based probabilistic timing analysis for time prediction under varying cache configurations. We identify the strengths and limitations of each technique for time prediction, and provide processor design guidelines that minimize the pessimism associated with WCET. Our experiments show that the cache fulfills all the requirements of PTA and that the program's timing can be predicted with arbitrary accuracy. Such a probabilistic computer architecture carries unmatched potential and great promise for next-generation CRTESs.
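The random placement and replacement policies described above can be mimicked with a toy cache model. This is a hypothetical sketch, not the thesis's MIPS-32 or Leon-3 implementation: a seeded hash chooses the set, and a random victim is evicted on a miss, which is what makes hit/miss behaviour probabilistic and amenable to PTA.

```python
import hashlib
import random

class RandomCache:
    """Toy set-associative cache with random placement (a seeded hash
    of the line address selects the set) and random replacement."""

    def __init__(self, num_sets=64, ways=4, seed=0):
        self.num_sets, self.ways, self.seed = num_sets, ways, seed
        self.rng = random.Random(seed)
        self.sets = [[] for _ in range(num_sets)]

    def _index(self, line):
        # Random placement: a seeded hash replaces modulo indexing.
        h = hashlib.blake2b(f"{line}:{self.seed}".encode(),
                            digest_size=4).digest()
        return int.from_bytes(h, "big") % self.num_sets

    def access(self, addr, line_size=32):
        line = addr // line_size
        cset = self.sets[self._index(line)]
        if line in cset:
            return True                              # hit
        if len(cset) >= self.ways:
            cset.pop(self.rng.randrange(len(cset)))  # random replacement
        cset.append(line)
        return False                                 # miss

cache = RandomCache(seed=42)
trace = [i * 32 for i in range(16)] * 100  # 16 hot lines, re-walked 100 times
hits = sum(cache.access(a) for a in trace)
# A small working set almost certainly fits, so most accesses hit.
assert hits > len(trace) * 0.6
```

Running the same trace under many seeds yields a distribution of miss counts rather than a single path-dependent value, which is exactly the property MBPTA exploits to fit a pWCET.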
EPC Enacted: Integration in an Industrial Toolbox and Use against a Railway Application
Measurement-based timing analysis approaches are increasingly making their way into several industrial domains on account of their good cost-benefit ratio. The trustworthiness of those methods, however, suffers from the limitation that their results are only valid for the particular paths and execution conditions that the user is able to explore with the available input vectors. It is generally not possible to guarantee that the collected measurements are fully representative of the worst-case timing behaviour. In the context of measurement-based probabilistic timing analysis, the Extended Path Coverage (EPC) approach has been recently proposed as a means to extend the representativeness of measurement observations, to obtain the same effect as full path coverage. At the time of its first publication, EPC had not reached an implementation maturity that could be trialled industrially. In this work we analyze the practical implications of using EPC with real-world applications, and discuss the challenges in integrating it in an industrial-quality toolchain. We show that we were able to meet EPC requirements and successfully evaluate the technique on a real railway application, on top of a commercial toolchain and full execution stack. This work has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under
grant agreement 611085 (PROXIMA, www.proxima-project.eu). This work has also been partially supported by the Spanish
Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by
the MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. The authors are grateful to Antoine Colin from Rapita Ltd. for his valuable support. Peer reviewed. Postprint (author's final draft).
Cache side-channel attacks and time-predictability in high-performance critical real-time systems
Embedded computers control an increasing number of systems that interact directly with humans, while also managing more and more personal or sensitive information. As a result, both safety and security are becoming ubiquitous requirements in embedded computers, and automotive is not an exception. In this paper we analyze time-predictability (as an example of a safety concern) and side-channel attacks (as an example of a security issue) in cache memories. While injecting randomization in cache timing behavior addresses each of these concerns separately, we show that randomization solutions for time-predictability do not protect against side-channel attacks, and vice versa. We then propose a randomization solution that achieves both the safety and the security goals. This work has been partially funded by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal fellowship number RYC-2013-14717. The authors want to thank Boris Köpf for his technical comments on early versions of this work. Peer reviewed. Postprint (published version).
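Why randomization for time-predictability does not automatically defeat side-channel attacks can be illustrated with a toy seeded placement function; the hash and parameters are assumptions for illustration, not the paper's design. If the random seed is fixed for the system's lifetime, whether two addresses conflict in the cache is stable and can be learned by repeated probing; re-randomizing the seed makes each conflict an independent coin flip that leaks nothing stable.

```python
import hashlib

NUM_SETS = 64

def set_index(line: int, seed: int) -> int:
    """Seeded random placement: which cache set a line maps to."""
    h = hashlib.blake2b(f"{line}:{seed}".encode(), digest_size=2).digest()
    return int.from_bytes(h, "big") % NUM_SETS

victim, probe = 0x1234, 0x9ABC

# Under one fixed seed, the victim/probe conflict is stable, so a
# contention-based attacker can learn it by repeated measurement:
fixed = [set_index(victim, 7) == set_index(probe, 7) for _ in range(10)]
assert all(fixed) or not any(fixed)   # always the same answer

# Re-seeding decorrelates runs: across many seeds the pair sometimes
# conflicts and sometimes does not, so repeated probing under a
# re-randomized mapping yields no stable information.
across_seeds = [set_index(victim, s) == set_index(probe, s)
                for s in range(1000)]
assert 0 < sum(across_seeds) < 1000
```

The design question the paper addresses is exactly when and how the seed is refreshed: often enough to break an attacker's learning, yet in a way that preserves the probabilistic timing properties MBPTA relies on.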
Probabilistic timing analysis on time-randomized platforms for the space domain
Timing verification is a fundamental step in real-time embedded systems, with measurement-based timing analysis (MBTA) being the most common approach used to that end. We present a space case study on a real platform that has been modified to support a probabilistic variant of MBTA called MBPTA. Our platform provides the properties required by MBPTA, and the WCET estimates predicted with MBPTA are competitive with those of current MBTA practice while providing more solid evidence of their correctness for certification. The research leading to these results has received funding from the European Community's FP7 [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant agreement no. 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Carles Hernandez is jointly funded by the Spanish Ministry of Economy and Competitiveness and FEDER funds through grant TIN2014-60404-JIN. Peer reviewed. Postprint (author's final draft).
Dynamic software randomisation: Lessons learned from an aerospace case study
Timing Validation and Verification (V&V) is an important step in real-time system design, in which a system's timing behaviour is assessed via Worst-Case Execution Time (WCET) estimation and scheduling analysis. For WCET estimation, measurement-based timing analysis (MBTA) techniques are widely used and well established in industrial environments. However, the advent of complex processors makes it more difficult for the user to provide evidence that the software is tested under stress conditions representative of those at system operation. Measurement-Based Probabilistic Timing Analysis (MBPTA) is a variant of MBTA, followed by the PROXIMA European Project, that facilitates formulating this representativeness argument. MBPTA requires certain properties to be applicable, which can be obtained by selectively injecting randomisation into the platform's timing behaviour via hardware or software means. In this paper, we assess the effectiveness of PROXIMA's dynamic software randomisation (DSR) with a space industrial case study executed on real, unmodified hardware and an industrial operating system. We present the challenges faced in its development in order to achieve MBPTA compliance, and the lessons learned from this process. Our results, obtained using a commercial timing analysis tool, indicate that DSR does not impact the average performance of the application, while it enables the use of MBPTA. This results in tighter pWCET estimates compared to current industrial practice. The research leading to these results has received funding from the European Community's FP7 [FP7/2007-2013] under the PROXIMA Project (www.proxima-project.eu), grant agreement no. 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Peer reviewed. Postprint (author's final draft).
Towards limiting the impact of timing anomalies in complex real-time processors
Timing verification of embedded critical real-time systems is hindered by complex designs. Timing anomalies, deeply analyzed in static timing analysis, require specific solutions to bound their impact. For the first time, we study the concept and impact of timing anomalies in measurement-based timing analysis, the approach most used in industry, showing that they need to be considered and handled differently. In addition, we analyze anomalies in the context of Measurement-Based Probabilistic Timing Analysis, which simplifies quantifying their impact. Peer reviewed. Postprint (published version).