    Cross-layer reliability evaluation, moving from the hardware architecture to the system level: A CLERECO EU project overview

    Advanced computing systems realized in forthcoming technologies hold the promise of a significant increase in computational capabilities. However, the same path that is leading technologies toward these remarkable achievements is also making electronic devices increasingly unreliable. Developing new methods to evaluate the reliability of these systems at an early design stage has the potential to save costs, produce optimized designs, and shorten product time-to-market. The CLERECO European FP7 research project addresses early reliability evaluation with a cross-layer approach that spans different computing disciplines, computing system layers, and computing market segments. The fundamental objective of the project is to investigate in depth a methodology to assess system reliability early in the design cycle of the future systems of the emerging computing continuum. This paper presents a general overview of the CLERECO project, focusing on the main tools and models under development that could be of interest to the research community and engineering practice.

    Revisiting Vulnerability Analysis in Modern Microprocessors

    The notion of Architectural Vulnerability Factor (AVF) has been extensively used to evaluate various aspects of design robustness. While AVF has been a very popular way of assessing element resiliency, its calculation requires rigorous and extremely time-consuming experiments. Furthermore, recent radiation studies in 90 nm and 65 nm technology nodes demonstrate that up to 55 percent of Single Event Upsets (SEUs) result in Multiple Bit Upsets (MBUs), and thus the Single Bit Flip (SBF) model employed in computing AVF needs to be reassessed. In this paper, we present a method for calculating the vulnerability of modern microprocessors using Statistical Fault Injection (SFI) that is several orders of magnitude faster than traditional SFI techniques, while also using more realistic fault models that reflect the existence of MBUs. Our method partitions the design into various hierarchical levels and systematically performs incremental fault injections to generate vulnerability estimates. The presented method has been applied to an Intel microprocessor and an Alpha 21264 design, accelerating fault injection by 15× on average and reducing the computational cost of investigating the effect of MBUs. Extensive experiments, focusing on the effect of MBUs in modern microprocessors, corroborate that the SBF model employed by current vulnerability estimation tools is not sufficient to accurately capture the increasing effect of MBUs in contemporary processes.
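    The estimation loop at the heart of such a campaign is simple to sketch. Below is a minimal, hedged illustration in Python of statistical fault injection with an adjacent-bit MBU model; `run_with_fault` stands in for the actual simulator hook and is an assumption for illustration, not part of the paper's tooling.

```python
import random

def estimate_avf(run_with_fault, size_bits, n_trials=1000, mbu_width=1):
    """Estimate AVF as the fraction of injected faults that corrupt
    architecturally visible output.

    run_with_fault: callable taking a list of bit indices to flip and
        returning True if the workload's output was corrupted (the
        simulator hook the user must supply; hypothetical here).
    mbu_width: number of adjacent bits flipped per injection; values
        greater than 1 model multi-bit upsets (MBUs).
    """
    failures = 0
    for _ in range(n_trials):
        # pick a random run of mbu_width adjacent bits in the structure
        start = random.randrange(size_bits - mbu_width + 1)
        if run_with_fault(list(range(start, start + mbu_width))):
            failures += 1
    return failures / n_trials
```

    Comparing the estimate at `mbu_width=1` against `mbu_width=2` or `3` is the kind of experiment the paper argues the single-bit-flip model misses.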

    A vulnerability factor for ECC-protected memory

    Fault injection studies and vulnerability analyses have been used to estimate the reliability of data structures in memory. We survey these metrics and examine their adequacy for describing data stored in ECC-protected memory. We also introduce FEA, a new metric improving on the memory derating factor by ignoring a class of false errors. We measure all metrics using simulations and compare them to the outcomes of injecting errors in real runs. This in-depth study reveals that FEA provides more accurate results than any state-of-the-art vulnerability metric. Furthermore, FEA gives an upper bound on the failure probability due to an error in memory, making this metric a tool of choice to quantify memory vulnerability. Finally, we show that ignoring these false errors reduces the failure rate on average by 12.75% and by up to over 45%.

    This work has been supported by the RoMoL ERC Advanced Grant (GA 321253), by the European HiPEAC Network of Excellence, by the Spanish Ministry of Economy and Competitiveness (contract TIN2015-65316-P), by the Generalitat de Catalunya (contracts 2017-SGR-1414 and 2017-SGR-1328), by the Spanish Government (Severo Ochoa grant SEV-2015-0493) and by the European Union's Horizon 2020 research and innovation programme (grant agreements 671697 and 779877). L. Jaulmes has been partially supported by the Spanish Ministry of Education, Culture and Sports under grant FPU2013/06982. M. Moreto and M. Casas have been partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under Ramon y Cajal fellowships RYC-2016-21104 and RYC-2017-23269.
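    The intuition behind a derating-style metric is easy to illustrate: an error injected into a memory word only matters if the corrupted value is subsequently read; if the next access is a write, the corruption is overwritten and can be treated as a false error. The sketch below shows this general derating computation over an access trace; it illustrates the idea only and does not reproduce FEA's exact false-error classification.

```python
def derating_factor(accesses, t_end):
    """Fraction of a memory word's lifetime during which a bit flip
    can propagate to the application.

    accesses: chronologically sorted list of (time, kind) tuples,
        with kind 'R' for read or 'W' for write.
    t_end: end of the observation window.

    An interval that ends in a read is vulnerable (the corrupted value
    is consumed); an interval that ends in a write, or that reaches
    the end of the run unread, counts as a false error.
    """
    vulnerable = 0.0
    prev_t = 0.0
    for t, kind in accesses:
        if kind == 'R':
            vulnerable += t - prev_t
        prev_t = t  # a write overwrites any corruption accumulated so far
    return vulnerable / t_end if t_end > 0 else 0.0

# Example: word written at t=1, read at t=4, rewritten at t=8, run ends at t=10.
# Errors in (1, 4] are consumed; errors in (4, 8] and after t=8 are false.
print(derating_factor([(1, 'W'), (4, 'R'), (8, 'W')], 10))  # 0.3
```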

    Radiation-Induced Error Criticality in Modern HPC Parallel Accelerators

    In this paper, we evaluate the error criticality of radiation-induced errors on modern High-Performance Computing (HPC) accelerators (Intel Xeon Phi and NVIDIA K40) through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis quantifies and qualifies radiation effects on applications' output, correlating the number of corrupted elements with their spatial locality. We also provide the mean relative error (dataset-wise) to evaluate radiation-induced error magnitude. We apply the selected metrics to experimental results obtained in various radiation test campaigns, totaling more than 400 hours of beam time per device. The amount of data we gathered allows us to evaluate the error criticality of a representative set of algorithms from HPC suites. Additionally, based on the characteristics of the tested algorithms, we draw generic reliability conclusions for broader classes of codes. We show that arithmetic operations are less critical for the K40, while the Xeon Phi is more reliable when executing particle interactions solved through Finite Difference Methods. Finally, iterative stencil operations appear to be the most reliable on both architectures.

    This work was supported by the STIC-AmSud/CAPES scientific cooperation program under the EnergySFE research project grant 99999.007556/2015-02, the EU H2020 Programme, and MCTI/RNP-Brazil under the HPC4E Project, grant agreement no. 689772. Tested K40 boards were donated thanks to Steve Keckler, Timothy Tsai, and Siva Hari from NVIDIA.
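    The two output-quality metrics named above are straightforward to compute once a fault-free (golden) output is available. The following is a minimal sketch of the corrupted-element count, a crude spatial-locality indicator, and the dataset-wise mean relative error; the function and variable names are illustrative stand-ins, not the paper's.

```python
import numpy as np

def output_error_metrics(golden, faulty):
    """Compare a run's output against the golden (fault-free) output.

    Returns (number of corrupted elements, mean relative error over the
    corrupted elements, fraction of corrupted elements that have a
    corrupted neighbour in row-major order).
    """
    golden = np.asarray(golden, dtype=float).ravel()
    faulty = np.asarray(faulty, dtype=float).ravel()
    corrupted = faulty != golden
    n_corrupted = int(corrupted.sum())
    if n_corrupted == 0:
        return 0, 0.0, 0.0
    # relative error where the golden value is nonzero; absolute elsewhere
    denom = np.where(golden != 0, np.abs(golden), 1.0)
    rel_err = np.abs(faulty - golden) / denom
    mean_rel_err = float(rel_err[corrupted].mean())
    # spatial locality: corrupted elements directly adjacent to another one
    has_nb = np.zeros_like(corrupted)
    has_nb[1:] |= corrupted[:-1]
    has_nb[:-1] |= corrupted[1:]
    locality = float((corrupted & has_nb).sum()) / n_corrupted
    return n_corrupted, mean_rel_err, locality
```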

    Microarchitecture-level reliability assessment of multi-bit upsets in processors

    The continuing decrease in feature sizes of modern Integrated Circuits (ICs) gives reliability and vulnerability assessments of the core an ever more important role in the early stages of the design (pre-silicon validation). As technology nodes evolve, radiation effects play a bigger role, leading to more severe effects in devices and an increased number of multi-bit faults. It is therefore crucial to evaluate each design with common fault injection mechanisms, using microarchitectural simulators, which provide flexibility and improved speed compared to Register Transfer Level (RTL) designs. This thesis focuses on multi-bit faults, showing their effects on different components of a microarchitectural model of the ARM Cortex-A9 core, implemented on the Gem5 simulator. For that purpose, GeFIN (Gem-5 based Fault INjector) is used for the fault injection campaigns, with the addition of an improved fault mask generation tool that creates fault masks with particular characteristics. The improved generator supports the injection of multi-bit faults in adjacent areas of a structure, a case very common in real environments, and can also inject faults into interleaved memories, a widely used technique to mitigate the effects of multiple bit upsets. The results of this study show that some components of the core under test (e.g. the Instruction Translation Lookaside Buffer) are highly vulnerable to fault injection, with rates as low as 25% correct executions over 1000 experiments, while others, such as the Level 1 Data/Instruction Caches and the Level 2 Cache, are more sensitive to the number of faults injected, with a variation as high as 24% between single- and triple-bit fault injection for the L1 D-Cache. These numbers relate to the "theoretical" Architectural Vulnerability Factor (AVF) and are independent of the fabrication technology node. The calculation was extended to compute AVFs for each technology node from 250 nm to 22 nm, showing increasing AVF rates as the node shrinks. Lastly, a reliability assessment was done using the Failures in Time (FIT) metric, which showed the highest numbers for the Level 2 Cache, primarily because of its size (4 Mbits), with a FIT of 822.9 at 130 nm. The FIT of the core peaked at 918 at the same node, while for nodes smaller than 130 nm the FIT values decrease, primarily because the raw FIT factor of each technology decreases.
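    The FIT composition used in such an assessment follows the standard first-order model: a structure's FIT is the technology node's raw SEU rate scaled by the structure's capacity and derated by its AVF, and the core's FIT is the sum over its structures. A minimal sketch, with placeholder numbers only (not the thesis's measured values):

```python
def structure_fit(raw_fit_per_mbit, size_mbits, avf):
    """First-order FIT of one storage structure: the technology node's
    raw SEU rate per Mbit, scaled by capacity and derated by AVF."""
    return raw_fit_per_mbit * size_mbits * avf

# Illustrative composition for a hypothetical node; the raw-FIT values
# and AVFs below are placeholders, not the thesis's measurements.
structures = [
    ("L2 cache", 400.0, 4.0, 0.5),     # raw FIT/Mbit, size in Mbit, AVF
    ("L1 D-cache", 400.0, 0.25, 0.3),
    ("I-TLB", 400.0, 0.004, 0.75),
]
core_fit = sum(structure_fit(raw, size, avf) for _, raw, size, avf in structures)
print(f"core FIT = {core_fit:.1f}")
```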

    Reliable Software for Unreliable Hardware - A Cross-Layer Approach

    A novel cross-layer reliability analysis, modeling, and optimization approach is proposed in this thesis. It leverages multiple layers of the system design abstraction (i.e. hardware, compiler, system software, and application program) to exploit the reliability-enhancing potential available at each system layer and to exchange this information across multiple system layers.

    Ground-truth prediction to accelerate soft-error impact analysis for iterative methods

    Understanding the impact of soft errors on applications can be expensive. Often, it requires an extensive error injection campaign involving numerous runs of the full application in the presence of errors. In this paper, we present a novel approach to arriving at the ground truth (the true impact of an error on the final output) for iterative methods by observing a small number of iterations to learn deviations between normal and error-impacted execution. We develop a machine-learning-based predictor for three iterative methods to generate ground-truth results without running them to completion for every error injected. We demonstrate that this approach achieves greater accuracy than alternative prediction strategies, including three existing soft error detection strategies. We demonstrate the effectiveness of the ground truth prediction model in evaluating vulnerability and the effectiveness of soft error detection strategies in the context of iterative methods.

    This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Award Number 66905, program manager Lucy Nowell. Pacific Northwest National Laboratory is operated by Battelle for DOE under Contract DE-AC05-76RL01830.
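    The prediction pipeline the abstract describes can be sketched compactly: featurize the first few iterations of each injected run (e.g. the deviation of residual norms from a fault-free reference) and train a classifier on the known final outcomes. The feature choice and model below are plausible stand-ins under those assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def early_iteration_features(residual_norms, reference_norms, k=10):
    """Deviation of the first k post-injection residual norms from a
    fault-free reference run (feature design is illustrative)."""
    r = np.asarray(residual_norms[:k])
    ref = np.asarray(reference_norms[:k])
    return np.log1p(np.abs(r - ref))

def train_ground_truth_predictor(X, y):
    """X: one row of early-iteration features per injected error.
    y: ground-truth label from a full run (e.g. 1 = corrupted output)."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

# At evaluation time only k iterations per injection are needed:
# label = clf.predict(early_iteration_features(obs, ref).reshape(1, -1))
```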

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It presents the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book addresses reliability challenges across different levels, from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.