
    Improving the Robustness of Redundant Execution with Register File Randomization

    Staggered Redundant Execution (SRE) is a fault-tolerance mechanism that has been widely deployed in the context of safety-critical applications. SRE not only protects the system in the presence of faults but also helps relax the safety requirements of individual elements. However, in this paper we show that SRE does not effectively protect the system against a wide range of faults, and thus new mechanisms to increase the diversity of homogeneous cores are needed. We propose Register File Randomization (RFR), a low-cost diversity mechanism that significantly increases the robustness of homogeneous multicores against common-cause faults (CCFs) and register file wearout. Our results show that RFR completely eliminates the failure rate due to register file CCFs for certain workloads and reduces the impact of stress-related register file aging by a factor of 5x for the workloads analysed. Our implementation requires fewer than 50 lines of RTL code, and the area (FPGA logic) overhead of RFR is less than 0.2% of a 64-bit RISC-V core FPGA implementation.
    This work has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No. 877056, from the Agencia Estatal de Investigación from Spain under grant agreement No. PCI2020-112092, and from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 871467.
    Tuzov, I.; Andreu, P.; Medina, L.; Picornell-Sanjuan, T.; Robles Martínez, A.; López Rodríguez, PJ.; Flich Cardo, J.... (2021). Improving the Robustness of Redundant Execution with Register File Randomization. IEEE. 1-9. https://doi.org/10.1109/ICCAD51958.2021.9643466
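    The abstract describes the mechanism only at a high level; the toy Python sketch below illustrates the general idea of randomizing the mapping between architectural registers and physical register file entries with a per-core seed, so that a common-cause fault hitting the same physical entry on two redundant cores corrupts different architectural registers. The class and parameter names are illustrative assumptions, not the paper's RTL design.

```python
# Toy behavioral sketch of register file randomization (RFR): each core
# applies a per-core random permutation when mapping architectural register
# indices to physical register file entries, so a common-cause fault hitting
# the same physical entry on both cores corrupts different architectural
# registers. All names are illustrative, not the paper's RTL.
import random

class RandomizedRegisterFile:
    def __init__(self, num_regs=32, seed=0):
        rng = random.Random(seed)
        self.mapping = list(range(num_regs))   # arch index -> physical index
        rng.shuffle(self.mapping)
        self.phys = [0] * num_regs             # physical storage

    def write(self, arch_idx, value):
        self.phys[self.mapping[arch_idx]] = value

    def read(self, arch_idx):
        return self.phys[self.mapping[arch_idx]]

# Two redundant cores with different seeds: the same architectural write lands
# in different physical entries, increasing diversity against CCFs and
# spreading wearout across the register file.
core_a = RandomizedRegisterFile(seed=1)
core_b = RandomizedRegisterFile(seed=2)
core_a.write(5, 0xDEAD)
core_b.write(5, 0xDEAD)
assert core_a.read(5) == core_b.read(5) == 0xDEAD
print("physical slots used:", core_a.mapping[5], core_b.mapping[5])  # typically differ
```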

    On the use of simple dynamical systems for climate predictions: A Bayesian prediction of the next glacial inception

    Over the last few decades, climate scientists have devoted much effort to the development of large numerical models of the atmosphere and the ocean. While there is no question that such models provide important and useful information on complicated aspects of atmosphere and ocean dynamics, skillful prediction also requires a phenomenological approach, particularly for very slow processes such as glacial-interglacial cycles. Phenomenological models are often represented as low-order dynamical systems. These are tractable and a rich source of insights about climate dynamics, but they also ignore large bodies of information on the climate system, and their parameters are generally not operationally defined. Consequently, if they are to be used to predict actual climate system behaviour, then we must take very careful account of the uncertainty introduced by their limitations. In this paper we consider the problem of the timing of the next glacial inception, about which there is on-going debate. Our model is the three-dimensional stochastic system of Saltzman and Maasch (1991), and our inference takes place within a Bayesian framework that allows both for the limitations of the model as a description of the propagation of the climate state vector and for parametric uncertainty. Our inference takes the form of a data assimilation with unknown static parameters, which we perform with a variant on a Sequential Monte Carlo technique (`particle filter'). Provisional results indicate peak glacial conditions in 60,000 years.
    Comment: supersedes arXiv:0809.0632 (which was published in European Reviews). The Bayesian section has been significantly expanded. The present version has gone through scientific peer review and has been published in the European Physical Journal Special Topics. (Typo in DOI and in Table 1 (psi -> theta) corrected on 25th August 2009.)
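    As a rough illustration of the data assimilation machinery mentioned above, the sketch below runs a plain bootstrap particle filter with static-parameter particles on a synthetic one-dimensional model. It is a minimal sketch under assumed placeholder dynamics, noise levels, and data; it is not the Saltzman and Maasch (1991) system nor the authors' SMC variant, which is specifically designed to cope with unknown static parameters.

```python
# Minimal bootstrap particle filter sketch: joint filtering of a 1-D state and
# an unknown static parameter on synthetic data. Placeholder model and data,
# not the Saltzman-Maasch system or the SMC variant used in the paper.
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 2000                           # time steps, particles
obs = np.cumsum(rng.normal(0.0, 0.3, T))  # synthetic observation record

x = rng.normal(0.0, 1.0, N)               # state particles
theta = rng.uniform(0.2, 1.0, N)          # static-parameter particles

for t in range(T):
    x = theta * x + rng.normal(0.0, 0.3, N)        # propagate the state
    w = np.exp(-0.5 * ((obs[t] - x) / 0.3) ** 2)   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)               # multinomial resampling
    x, theta = x[idx], theta[idx]                  # parameters are resampled too

print("posterior mean of theta:", round(theta.mean(), 3))
```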

    Timing of autonomous driving software: problem analysis and prospects for future solutions

    The software used to implement advanced functionalities in critical domains (e.g. autonomous operation) complicates software timing analysis. This is not only due to the complexity of the underlying high-performance hardware deployed to provide the required levels of computing performance, but also due to the complexity, non-deterministic nature, and huge input space of the artificial intelligence (AI) algorithms used. In this paper we focus on Apollo, an industrial-quality Autonomous Driving (AD) software framework: we statistically characterize its observed execution time variability and reason about the sources behind it. We discuss the main challenges and limitations in finding a satisfactory software timing analysis solution for Apollo and also show the main traits that make statistical timing analysis techniques an acceptable and feasible path. While providing a consolidated solution for the software timing analysis of Apollo is a huge effort, far beyond the scope of a single research paper, our work aims to set the basis for future, more elaborate techniques for the timing analysis of AD software.
    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P, the SuPerCom European Research Council (ERC) project under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773), and the HiPEAC Network of Excellence. MINECO partially supported Enrico Mezzetti under a Juan de la Cierva-Incorporación postdoctoral fellowship (IJCI-2016-27396) and Leonidas Kosmidis under a Juan de la Cierva-Formación postdoctoral fellowship (FJCI-2017-34095).
    Peer Reviewed. Postprint (author's final draft)
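    A measurement-based characterization of the kind described typically starts from empirical quantiles of the observed execution times. The sketch below shows such a summary on synthetic data; it assumes a hypothetical per-frame timing trace and is not Apollo's instrumentation or the paper's analysis pipeline.

```python
# Sketch of a basic statistical characterization of observed execution times.
# The per-frame timing trace is synthetic (a bulk plus a heavy right tail);
# real data would come from instrumenting the software under study.
import numpy as np

rng = np.random.default_rng(1)
times = np.concatenate([
    rng.normal(40.0, 3.0, 9500),      # typical frames (ms)
    rng.gamma(4.0, 8.0, 500) + 45.0,  # occasional slow frames (ms)
])

for q in (0.50, 0.90, 0.99, 0.999):
    print(f"p{100 * q:g}: {np.quantile(times, q):.1f} ms")
print(f"max observed: {times.max():.1f} ms")
print(f"coefficient of variation: {times.std() / times.mean():.3f}")
```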

    Self-Test Mechanisms for Automotive Multi-Processor System-on-Chips

    The abstract is in the attachment.

    Accurately Measuring Contention in Mesh NoCs in Time-Sensitive Embedded Systems

    The computing capacity demanded by embedded systems is on the rise as software implements more functionalities, ranging from best-effort entertainment functions to performance-guaranteed safety-related functions. Heterogeneous manycore processors, which use wormhole mesh (wmesh) Network-on-Chips (NoCs) as their main communication means and on which applications contend for shared resources, are increasingly considered to deliver the required computing performance. Most research efforts on software timing analysis have focused on deriving bounds (estimates) on the contention that tasks can suffer when accessing wmesh NoCs. However, less effort has been devoted to an equally important problem, namely, accurately measuring the actual contention that tasks generate on each other in the wmesh, which is instrumental during system validation to diagnose any software timing misbehavior and to determine which tasks are particularly affected by contention on specific wmesh routers. In this article, we work on the foundations of contention measurement in wmesh NoCs and propose a golden metric, called task PairWise Contention (PWC), explaining the rationale behind it. PWC allows ascribing the actual share of the contention a given task suffers in the wmesh to each of its co-runner tasks at packet level. We also introduce and formalize a Golden Reference Value (GRV) for PWC that specifically defines a criterion to fairly break down the contention suffered by a task among its co-runner tasks in the wmesh. Our evaluation shows that GRV effectively captures how contention occurs by identifying the actual core (task) causing contention and whether contention is caused by local or remote interference in the wmesh.
    This work has been supported by the Spanish Ministry of Science and Innovation under grant PID2019-107255GB-C21, funded by MCIN/AEI/10.13039/501100011033, and by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation programme (grant agreement No. 772773).
    Cardona, J.; Hernández Luz, C.; Abella, J.; Mezzetti, E.; Cazorla, FJ. (2023). Accurately Measuring Contention in Mesh NoCs in Time-Sensitive Embedded Systems. ACM Transactions on Design Automation of Electronic Systems. 28(3). https://doi.org/10.1145/3582006
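    As a purely illustrative sketch of pairwise contention accounting in the spirit of PWC, the Python snippet below aggregates, per co-runner task, the contention cycles suffered by a task under analysis from a hypothetical per-packet event log. The log format, task names, and router labels are assumptions made for illustration; the paper defines PWC and GRV formally at packet level.

```python
# Illustrative pairwise contention accounting: for every packet of a task
# under analysis (T0), contention cycles are attributed to the co-runner task
# that blocked it at each router. Event-log format and names are hypothetical.
from collections import defaultdict

# (packet_id, router, blocking_co_runner, contention_cycles) observed for T0
events = [
    (0, "r(1,1)", "T1", 12),
    (0, "r(1,2)", "T2", 4),
    (1, "r(2,1)", "T1", 7),
    (2, "r(1,1)", "T3", 20),
]

per_corunner = defaultdict(int)   # contention suffered by T0, per co-runner
per_router = defaultdict(int)     # where that contention arises

for _, router, co_runner, cycles in events:
    per_corunner[co_runner] += cycles
    per_router[router] += cycles

total = sum(per_corunner.values())
for task, cycles in sorted(per_corunner.items()):
    print(f"{task}: {cycles} cycles ({100 * cycles / total:.0f}% of T0's contention)")
print("contention per router:", dict(per_router))
```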

    Design of a diversity enforcement module for safety critical processing systems

    Safety-critical systems must adhere to specific functional safety standards describing the development process for those systems. One key requirement is the ability to prevent a single fault from causing a system failure, or in other words, to avoid Common Cause Failures (CCFs). Redundancy is a usual solution against CCFs. However, some specific CCFs may affect redundant components identically (e.g., voltage droops, clock interferences), hence potentially leading to identical errors that may go unnoticed and cause a failure. Diversity is often deployed along with redundancy to also avoid those CCFs. In the particular case of computing elements (e.g., cores), this is usually realized with some form of lockstep execution, where two identical cores execute the same software but with some time shift between them (a.k.a. staggering). Therefore, both cores have different state at any point in time, and faults affecting both cores lead to different errors, which can be detected by comparing the outputs. Unfortunately, existing solutions have some non-negligible costs: (i) hardware-only solutions hide half of the cores, making them invisible to the user and hence halving platform performance even for non-critical tasks; conversely, (ii) software-only solutions are much more flexible but impose the use of a third core to run the lockstep monitor and require a large staggering, which has a significant impact on performance for short programs. This thesis devises a new solution aimed at combining the advantages of existing solutions. Our proposal, a hardware diversity-enforcement module (referred to as SafeDE), is an efficient hardware realization of the software monitor. Therefore, it does not hide any core from the end user, it does not require a third core for monitoring purposes, and it allows operating with a tiny staggering (e.g., a few tens of cycles instead of the hundreds of thousands required by the software-only solution). We implement and integrate SafeDE in a space multicore prototype on an FPGA and validate that it effectively achieves its requirements with negligible hardware costs. Moreover, this work has already led to the publication of two peer-reviewed articles in specialized conferences and journals.
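    The staggering-enforcement idea can be pictured with a small behavioral model: a monitor reads the retired-instruction counters of the head and trail cores and stalls the trail core whenever their separation falls below a threshold. The sketch below is a toy simulation under assumed counter names and a made-up threshold; it is not the SafeDE RTL or its programming interface.

```python
# Behavioral toy model of staggering enforcement: a monitor compares the
# retired-instruction counters of the head and trail cores and stalls the
# trail core whenever the separation drops to the threshold, so the two cores
# never hold identical state at the same time. Names and threshold are made up.
MIN_STAGGER = 32  # minimum instruction-count separation (illustrative value)

def trail_may_retire(head_retired: int, trail_retired: int) -> bool:
    """Allow the trail core to retire only while staggering exceeds the threshold."""
    return (head_retired - trail_retired) > MIN_STAGGER

# Toy simulation: the head core retires one instruction per cycle; the trail
# core is stalled until the enforced staggering allows it to proceed.
head = trail = 0
for cycle in range(200):
    head += 1
    if trail_may_retire(head, trail):
        trail += 1

print("final staggering (instructions):", head - trail)  # stays above MIN_STAGGER
```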

    Hybrid PDE solver for data-driven problems and modern branching

    The numerical solution of large-scale PDEs, such as those occurring in data-driven applications, unavoidably requires powerful parallel computers and tailored parallel algorithms to make the best possible use of them. In fact, considerations about the parallelization and scalability of realistic problems are often critical enough to warrant acknowledgement in the modelling phase. The purpose of this paper is to spread awareness of the Probabilistic Domain Decomposition (PDD) method, a fresh approach to the parallelization of PDEs with excellent scalability properties. The idea exploits the stochastic representation of the PDE and its approximation via Monte Carlo in combination with deterministic high-performance PDE solvers. We describe the ingredients of PDD and its applicability in the scope of data science. In particular, we highlight recent advances in stochastic representations for nonlinear PDEs using branching diffusions, which have significantly broadened the scope of PDD. We envision this work as a dictionary giving large-scale PDE practitioners references on the very latest algorithms and techniques of a non-standard, yet highly parallelizable, methodology at the interface of deterministic and probabilistic numerical methods. We close this work with an invitation to the fully nonlinear case and open research questions.
    Comment: 23 pages, 7 figures; final SMUR version; to appear in the European Journal of Applied Mathematics (EJAM)
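    A toy example helps to see the PDD idea: the PDE solution is estimated by Monte Carlo only at interface points of the domain decomposition, using the stochastic (Feynman-Kac) representation, after which the subdomains can be solved independently with deterministic solvers. The sketch below does this for the one-dimensional Laplace problem u'' = 0 on (0, 1) with u(0) = 0 and u(1) = 1; the problem, discretization, and sample sizes are our own assumptions, not an example taken from the paper.

```python
# Toy PDD example for u'' = 0 on (0, 1), u(0) = 0, u(1) = 1. The interface
# value u(0.5) is estimated by Monte Carlo from the exit probabilities of
# Brownian motion (Feynman-Kac representation); each subdomain could then be
# solved independently by a deterministic solver using that value as a
# boundary condition. Exact answer: u(0.5) = 0.5.
import numpy as np

rng = np.random.default_rng(42)

def exit_right(x0, dt=1e-3, n_paths=5000):
    """Fraction of Brownian paths started at x0 that hit 1 before 0."""
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    hits_right = 0
    while alive.any():
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        hits_right += np.count_nonzero(alive & (x >= 1.0))  # exited on the right
        alive &= (x > 0.0) & (x < 1.0)                       # drop exited paths
    return hits_right / n_paths

# Monte Carlo step at the interface (embarrassingly parallel across paths):
u_interface = exit_right(0.5)
print("estimated u(0.5) =", round(u_interface, 3), "(exact value: 0.5)")
# Deterministic step: on each subdomain u'' = 0 is linear in x, so the global
# solution is recovered from the boundary values and the interface estimate.
```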