33 research outputs found

    Resilient random modulo cache memories for probabilistically-analyzable real-time systems

    Fault tolerance has often been assessed separately in safety-related real-time systems, which may lead to inefficient solutions. Recently, Measurement-Based Probabilistic Timing Analysis (MBPTA) has been proposed to estimate the Worst-Case Execution Time (WCET) on high-performance hardware. The intrinsic probabilistic nature of MBPTA-compliant hardware matches perfectly with the random nature of hardware faults. Joint WCET analysis and reliability assessment has been done so far for some MBPTA-compliant designs, but not for the most promising cache design: random modulo. In this paper we perform, for the first time, an assessment of the aging robustness of random modulo and propose new implementations that preserve its key properties, namely low critical-path impact, low miss rates, and MBPTA compliance, while enhancing reliability against aging by achieving a better, yet still random, activity distribution across cache sets.
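    Although the paper's hardware design is not reproduced here, the core idea behind a random-modulo-style placement (randomizing which set a line maps to while keeping lines of the same aligned region in distinct sets, so miss rates stay close to a conventional modulo cache) can be sketched in a few lines. The function name, mixing constant, and cache geometry below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a randomized, modulo-like placement function: randomize
# which cache set a memory line maps to while guaranteeing that lines of the
# same aligned region still occupy distinct sets. The geometry and the mixing
# constant are illustrative assumptions, not the evaluated hardware design.

def set_index(address: int, seed: int, n_sets: int = 64, line_bytes: int = 32) -> int:
    """Map an address to a cache set using a seed-dependent offset per region."""
    line = address // line_bytes                     # cache line number
    idx = line % n_sets                              # conventional modulo index
    tag = line // n_sets                             # remaining (tag / region) bits
    offset = ((tag ^ seed) * 0x9E3779B1) % n_sets    # hypothetical mixing step
    return (idx + offset) % n_sets                   # lines sharing a tag never collide
```

    Because the randomizing offset depends only on the tag and the per-run seed, two lines that share a tag can never collide, which is what keeps miss rates low, while changing the seed between runs provides the randomization that MBPTA relies on.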

    Impact of Transient Faults on Timing Behavior and Mitigation with Near-Zero WCET Overhead

    As time-critical systems require timing guarantees, Worst-Case Execution Time (WCET) estimates have to be employed. However, WCET estimation methods usually assume fault-free hardware. If proper actions are not taken, such fault-free WCET approaches become unsafe when faults impact the hardware during execution. The majority of approaches dealing with hardware faults address the impact of faults on the functional behavior of an application, i.e., denial of service and binary correctness. Few approaches address the impact of faults on the application's timing behavior, i.e., the time to finish the application, and those target faults occurring in memories. However, as the transistor size in modern technologies is significantly reduced, faults in cores cannot be considered negligible anymore. This work shows that faults not only affect the functional behavior, but can also have a significant impact on the timing behavior of applications. To expose the overall impact of faults, we enhance vulnerability analysis to include not only functional but also timing correctness, and show that faults affect WCET estimations. As common techniques to deal with faults, such as watchdog timers and re-execution, have a large timing overhead for error detection and correction, we propose a mechanism with near-zero and bounded timing overhead. A RISC-V core is used as a case study. The obtained results show that, without protection, faults can increase the maximum observed execution time by almost 700% with respect to fault-free execution, affecting the WCET estimations. In contrast, the proposed mechanism is able to restore fault-free WCET estimations with a bounded overhead of 2 execution cycles.

    Probabilistic Worst-Case Timing Analysis: Taxonomy and Comprehensive Survey

    "© ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Computing Surveys, {VOL 52, ISS 1, (February 2019)} https://dl.acm.org/doi/10.1145/3301283"[EN] The unabated increase in the complexity of the hardware and software components of modern embedded real-time systems has given momentum to a host of research in the use of probabilistic and statistical techniques for timing analysis. In the last few years, that front of investigation has yielded a body of scientific literature vast enough to warrant some comprehensive taxonomy of motivations, strategies of application, and directions of research. This survey addresses this very need, singling out the principal techniques in the state of the art of timing analysis that employ probabilistic reasoning at some level, building a taxonomy of them, discussing their relative merit and limitations, and the relations among them. In addition to offering a comprehensive foundation to savvy probabilistic timing analysis, this article also identifies the key challenges to be addressed to consolidate the scientific soundness and industrial viability of this emerging field.This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. Jaume Abella was partially supported by the Ministry of Economy and Competitiveness under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717). Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship No. IJCI-2016-27396.Cazorla, FJ.; Kosmidis, L.; Mezzetti, E.; Hernández Luz, C.; Abella, J.; Vardanega, T. (2019). Probabilistic Worst-Case Timing Analysis: Taxonomy and Comprehensive Survey. ACM Computing Surveys. 52(1):1-35. https://doi.org/10.1145/3301283S13552

    Timing Predictability in Future Multi-Core Avionics Systems


    Multi-core devices for safety-critical systems: a survey

    Multi-core devices are envisioned to support the development of next-generation safety-critical systems, enabling the on-chip integration of functions of different criticality. This integration provides multiple potential system-level benefits such as cost, size, power, and weight reduction. However, safety certification becomes a challenge, and several fundamental safety technical requirements must be addressed, such as temporal and spatial independence, reliability, and diagnostic coverage. This survey provides a categorization and overview, at different device abstraction levels (nanoscale, component, and device), of selected key research contributions that support compliance with these fundamental safety requirements.

    Development and certification of mixed-criticality embedded systems based on probabilistic timing analysis

    An increasing variety of emerging systems relentlessly replaces or augments the functionality of mechanical subsystems with embedded electronics. Given their quantity, complexity, and use, the safety of such subsystems is an increasingly important matter. Accordingly, those systems are subject to safety certification to demonstrate the system's safety through rigorous development processes and hardware/software constraints. The massive increase in embedded processors' complexity renders the arduous certification task significantly harder to achieve. The focus of this thesis is to address the certification challenges in multicore architectures: despite their potential to integrate several applications on a single platform, their inherent complexity imperils their timing predictability and certification. Recently, the Measurement-Based Probabilistic Timing Analysis (MBPTA) technique emerged as an alternative to deal with hardware/software complexity. The innovation that MBPTA brings about is, however, a major departure from current certification procedures and standards. The particular contributions of this Thesis include: (i) the definition of certification arguments for mixed-criticality integration upon multicore processors; in particular, we propose a set of safety mechanisms and procedures required to comply with functional safety standards. For timing predictability, (ii) we present a quantitative approach to assess the likelihood of execution-time exceedance events with respect to the risk-reduction requirements of safety standards. To this end, we build upon the MBPTA approach and present the design of a safety-related source of randomization (SoR), which plays a key role in the platform-level randomization needed by MBPTA. And (iii) we evaluate current certification guidance with respect to emerging high-performance design trends like caches. Overall, this Thesis pushes the certification limits in the use of multicore and MBPTA technology in Critical Real-Time Embedded Systems (CRTES) and paves the way towards their adoption in industry.
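    Contribution (ii) links a pWCET exceedance probability to the risk-reduction targets of functional-safety standards. A back-of-the-envelope version of that link, under assumptions stated in the comments (a hypothetical per-run exceedance probability, a hypothetical activation rate, and the high-demand failure-rate bands of IEC 61508), could look like this; it illustrates the kind of arithmetic involved, not the thesis' actual argument.

```python
# Convert a per-run pWCET exceedance probability into a per-hour rate and
# compare it against IEC 61508 high-demand targets (probability of dangerous
# failure per hour, PFH). The exceedance probability and activation rate
# below are hypothetical example values.

SIL_PFH_UPPER = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}  # upper bounds, failures per hour

def timing_failure_rate_per_hour(p_exceed_per_run: float, runs_per_second: float) -> float:
    """Expected rate at which the task overruns its pWCET bound."""
    return p_exceed_per_run * runs_per_second * 3600.0

rate = timing_failure_rate_per_hour(p_exceed_per_run=1e-15, runs_per_second=100.0)
meets_sil3 = rate < SIL_PFH_UPPER[3]   # 3.6e-10 per hour < 1e-7 per hour -> True
```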

    Static Probabilistic Timing Analysis for Real-Time Embedded Systems in Presence of Faults

    A cache is typically the bridge between a processor and its main memory. It significantly reduces the access latencies to memory blocks, which strongly influences the timing behavior of critical real-time embedded systems (CRTES). Random caches, i.e., caches with a random replacement policy, have been proposed to improve timing-behavior estimates in CRTES by reducing pathological cases due to systematic cache misses. Measurement-Based Probabilistic Timing Analysis (MBPTA) and Static Probabilistic Timing Analysis (SPTA) aim at providing safe probabilistic Worst-Case Execution Time (pWCET) estimates for random caches. In this dissertation, we present research work on timing estimation based on SPTA. State-of-the-art SPTA methodologies produce safe and tight pWCET estimates. However, as semiconductor technology scales down, CRTES components, especially their on-chip caches, become prone to faults. Consequently, we developed SPTA methodologies to estimate pWCETs in the presence of faults and evaluated the impact of faults on timing behavior.
To investigate faults, we first defined transient and permanent fault models. A transient fault represents a temporary change of state; a system affected by transient faults can be recovered using fault detection and correction techniques. A permanent fault represents a permanent change of state; it persists after its occurrence and affects the system's behavior from then on. Additionally, we proposed a Markov-chain method to model memory layout states. For each memory-block access, the state changes are calculated using a transition matrix, into whose computation the impact of transient faults is integrated, and we used different groups of Markov-chain models to represent the system with different numbers of permanent faults. Experiments showed that our SPTA method provides accurate results in the presence of both transient and permanent faults.
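    As an illustration of the Markov-chain idea described above, the following sketch tracks the probability distribution over the contents of a small fully-associative cache with random replacement and reports the hit probability of each access. The transient-fault model (invalidation of one random cached line before an access, with a fixed probability) and all parameter names are simplifying assumptions of this sketch, not the dissertation's exact models.

```python
from collections import defaultdict

def hit_probabilities(trace, ways=2, p_transient=0.0):
    """Markov-chain evaluation of a fully-associative random-replacement cache.

    `dist` maps each possible cache state (the set of cached blocks) to its
    probability; each access first applies a transient-fault step, then the
    access itself. Returns the hit probability of every access in `trace`.
    """
    dist = {frozenset(): 1.0}
    hit_probs = []
    for block in trace:
        # 1. Transient-fault step: one random cached line is invalidated
        #    with probability p_transient (a simplifying assumption).
        faulty = defaultdict(float)
        for state, p in dist.items():
            faulty[state] += p * (1.0 - p_transient)
            if state:
                for victim in state:
                    faulty[state - {victim}] += p * p_transient / len(state)
            else:
                faulty[state] += p * p_transient
        # 2. Access step: a hit keeps the state; a miss inserts the block,
        #    evicting a uniformly random way when the cache is full.
        nxt, p_hit = defaultdict(float), 0.0
        for state, p in faulty.items():
            if block in state:
                p_hit += p
                nxt[state] += p
            elif len(state) < ways:
                nxt[state | {block}] += p
            else:
                for victim in state:
                    nxt[(state - {victim}) | {block}] += p / ways
        dist = dict(nxt)
        hit_probs.append(p_hit)
    return hit_probs

# Example: hit_probabilities(['a', 'b', 'a', 'c', 'a'], ways=2, p_transient=0.01)
```

    In an actual SPTA, the resulting per-access hit and miss probabilities would be combined (for instance by convolving per-access latency distributions) into a pWCET distribution; permanent faults could be handled, as the abstract describes, by running separate chains for caches with a reduced number of usable ways.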

    On the limits of probabilistic timing analysis

    Get PDF
    Over the last years, we have witnessed the steady and rapid growth of Critical Real-Time Embedded Systems (CRTES) industries, such as automotive and aerospace. Many of the increasingly complex CRTES functionalities that are currently implemented with mechanical means are moving towards electromechanical implementations controlled by critical software. This trend has a two-fold consequence. First, the size and complexity of critical software increase with every new embedded product. Second, high-performance hardware features like caches are more frequently used in real-time processors. The increase in complexity of CRTES challenges the validation and verification process, a necessary step to certify that the system is safe for deployment. Timing validation and verification includes the computation of Worst-Case Execution Time (WCET) estimates, which need to be trustworthy and tight. Traditional timing analyses are challenged by the use of complex hardware/software, resulting in low-quality WCET estimates that tend to add significant pessimism to guarantee the estimates' trustworthiness. This calls for new solutions that help tighten WCET estimates in a safe manner. In this Thesis, we investigate the novel Measurement-Based Probabilistic Timing Analysis (MBPTA), which in its original version already shows potential to deliver trustworthy and tight WCETs for tasks running on complex systems. First, we propose a methodology to assess and ensure that all cache memory layouts, which can significantly impact the WCET, have been adequately factored into the WCET estimation process. Second, we provide a solution to achieve cache representativeness and full path coverage simultaneously. This solution provides evidence that the WCET estimates obtained are valid for all program execution paths regardless of how code and data are laid out in the cache. Lastly, we analyse and expose the main misconceptions and pitfalls that can prevent a sound application of WCET analysis based on extreme value theory, which is used as part of MBPTA.
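    The extreme-value-theory step that the thesis scrutinises can be illustrated with a toy pWCET estimator: fit a Gumbel distribution to block maxima of measured execution times and read off the value whose exceedance probability matches a target. The block size, target probability, and synthetic data are assumptions of this sketch; real MBPTA additionally requires representativeness arguments and statistical hypothesis tests, which are exactly where the pitfalls discussed in the thesis arise.

```python
import numpy as np
from scipy.stats import gumbel_r

def pwcet_estimate(exec_times, block_size=50, exceedance=1e-12):
    """Toy EVT step of MBPTA: fit a Gumbel distribution to block maxima of
    measured execution times and return the execution time whose exceedance
    probability equals `exceedance`. Only the mechanical core of the method;
    i.i.d./representativeness checks are deliberately omitted here.
    """
    times = np.asarray(exec_times, dtype=float)
    n_blocks = len(times) // block_size
    maxima = times[: n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
    loc, scale = gumbel_r.fit(maxima)
    return gumbel_r.isf(exceedance, loc=loc, scale=scale)

# Example with synthetic measurements (hypothetical numbers):
# rng = np.random.default_rng(0)
# measurements = 1000 + rng.gamma(shape=2.0, scale=15.0, size=5000)
# print(pwcet_estimate(measurements))
```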

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book deals with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. It provides readers with the latest insights into novel cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization in order to achieve error resiliency in complex future many-core systems.

    Schedulability Analysis of Real-Time Systems with Uncertain Worst-Case Execution Times

    Schedulability analysis determines whether a given set of real-time software tasks is schedulable, i.e., whether task executions always complete before their specified deadlines. It is an important activity at both early design and late development stages of real-time systems. Schedulability analysis requires as input the estimated worst-case execution times (WCET) of the software tasks. In practice, however, engineers often cannot provide precise point WCET estimates and prefer to provide plausible WCET ranges. Given a set of real-time tasks with such ranges, we provide an automated technique to determine for which WCET values the system is likely to meet its deadlines and hence operate safely. Our approach combines a search algorithm for generating worst-case scheduling scenarios with polynomial logistic regression for inferring safe WCET ranges. We evaluated our approach by applying it to a satellite on-board system. Our approach efficiently and accurately estimates safe WCET ranges within which deadlines are likely to be satisfied with high confidence.
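    The combination the abstract describes (sampling WCET values within the given ranges, labelling each sample as schedulable or not, and fitting a polynomial logistic-regression model to infer safe ranges) can be sketched as follows. The task set, the use of classic fixed-priority response-time analysis as the labelling oracle (a stand-in for the paper's search-based worst-case scenario generation), and the 0.99 safety threshold are all assumptions of this sketch.

```python
import math
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def schedulable(wcets, periods):
    """Classic response-time analysis for fixed-priority, implicit-deadline
    tasks in rate-monotonic order; used here only to label WCET samples."""
    order = np.argsort(periods)
    for k, i in enumerate(order):
        r = wcets[i]
        while True:
            r_new = wcets[i] + sum(
                math.ceil(r / periods[j]) * wcets[j] for j in order[:k])
            if r_new > periods[i]:
                return False
            if r_new == r:
                break
            r = r_new
    return True

# Hypothetical task set: WCET ranges (lower, upper) and periods, in ms.
ranges = np.array([[1.0, 4.0], [2.0, 6.0], [5.0, 15.0]])
periods = np.array([10.0, 20.0, 50.0])

# Sample WCET vectors within the ranges and label each as schedulable or not.
rng = np.random.default_rng(1)
X = rng.uniform(ranges[:, 0], ranges[:, 1], size=(2000, len(periods)))
y = np.array([schedulable(x, periods) for x in X])

# Polynomial logistic regression approximates the boundary between WCET
# vectors that meet all deadlines and those that do not.
model = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression(max_iter=5000))
model.fit(X, y)
safe = model.predict_proba(X)[:, 1] > 0.99   # WCET samples deemed safe
```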