856 research outputs found

    Analysis of Single Event Upsets Propagation at Register Transfer Level in Combinational and Sequential Circuits Based on Satisfiability Modulo Theories

    The progressive scaling of semiconductor technologies has led to significant performance improvements in digital designs. However, ultra-deep sub-micron technologies have increased the vulnerability of VLSI designs to soft errors. To allow a cost-effective, reliability-aware design process, it is critical to assess soft error reliability parameters in early design stages. This thesis proposes a new technique to model, analyze, and estimate the propagation of Single Event Upsets (SEUs) in combinational and sequential designs described at the Register Transfer Level (RTL) using Satisfiability Modulo Theories (SMT). The propagation of SEUs through RTL bit-vector constructs is modeled as a satisfiability problem using the SMT theory of bit-vectors. At first, for combinational designs, two different analysis techniques, concrete and abstract modeling, are used to investigate the efficiency and accuracy of a data type reduction technique for soft error analysis. To analyze the vulnerability of combinational circuits, we compute the Soft Error Rate (SER), which is the sum of the propagation probabilities. Concrete modeling uses two versions of the design, one faulty and one fault-free, to analyze SEU propagation. Abstract modeling uses a data type reduction technique to evaluate the difference in performance and accuracy with respect to the first method. Experimental results demonstrate that the loss in accuracy due to abstract modeling depends on the design behavior; however, abstract modeling significantly reduces processing time. Following this first approach, the methodology is then extended to model and analyze SEU propagation in sequential circuits at RTL. To estimate the vulnerability of sequential circuits to soft errors, the methodology must be adapted to represent state transitions. To do so, we present an approach based on circuit unrolling, which uses multiple unrolled copies of the design to represent the successive state transitions. The fault propagation is then analyzed over a given number of states, yielding useful information about the sequential circuit's vulnerability to SEUs. The propagation probabilities can be computed from the SEU injection cycle to multiple subsequent cycles, and these results are then used to estimate the circuit SER. Experimental results demonstrate the effectiveness and applicability of the proposed approach. Finally, we present a new methodology to estimate digital circuit vulnerability to soft errors at RTL. SEU propagation through RTL bit-vector operations is modeled and analyzed using a different SMT-based modeling approach whose objective is to improve the efficiency of the analysis. For instance, the bit-vector reduction and arithmetic operators are modeled in SMT so as to include the fault propagation properties. This approach uses only one copy of the design: the fault propagation properties are embedded within the SMT equivalents of the RTL constructs themselves, so the analysis does not require two copies of the design. To illustrate the practical utilization of our work, we have analyzed different RTL combinational circuits. Experimental results demonstrate that the proposed framework is faster than other comparable contemporary techniques. Moreover, it provides more accurate and detailed circuit vulnerability results, enabling more efficient application of fault tolerance techniques.
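    The concrete two-copy modeling described above lends itself to a compact illustration with an off-the-shelf SMT solver. The sketch below is not the thesis framework; it merely shows the idea on a hypothetical one-line datapath, using Z3's bit-vector theory to ask whether a single flipped input bit can ever be observed at the output (estimating propagation probabilities would additionally require counting the satisfying assignments).

```python
# Minimal sketch, assuming Z3's Python API; datapath and upset location are made up.
from z3 import BitVec, BitVecVal, Solver, sat

W = 8
a, b, c = BitVec('a', W), BitVec('b', W), BitVec('c', W)
SEU_BIT = 3  # hypothetical upset location: bit 3 of input register a

def datapath(a_in, b_in, c_in):
    # toy RTL datapath: y = (a + b) & c
    return (a_in + b_in) & c_in

golden = datapath(a, b, c)                               # fault-free copy
faulty = datapath(a ^ BitVecVal(1 << SEU_BIT, W), b, c)  # copy with one input bit flipped

s = Solver()
s.add(golden != faulty)  # is there any input under which the upset reaches the output?
if s.check() == sat:
    print("SEU can propagate; witness input:", s.model())
else:
    print("SEU is always masked in this datapath")
```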

    MaxSAT Evaluation 2020: Solver and Benchmark Descriptions

    Fast and accurate SER estimation for large combinational blocks in early stages of the design

    Soft Error Rate (SER) estimation is an important challenge for integrated circuits because of the increased vulnerability brought by technology scaling. This paper presents a methodology to estimate, in early stages of the design, the susceptibility of combinational circuits to particle strikes. At the core of the framework lies MASkIt, a novel approach that combines signal probabilities with technology characterization to swiftly compute the logical, electrical, and timing masking effects of the circuit under study, taking into account all input combinations and pulse widths at once. Signal probabilities are estimated by applying a new hybrid approach that integrates heuristics with selective simulation of reconvergent subnetworks. The experimental results validate the proposed technique, showing a speedup of two orders of magnitude compared with traditional fault injection estimation, with an average estimation error of 5 percent. Finally, we analyze the vulnerability of the Decoder, Scheduler, ALU, and FPU of an out-of-order, superscalar processor design. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness and FEDER funds under grant TIN2013-44375-R, by the Generalitat de Catalunya under grant FI-DGR 2016, and by the FP7 programme of the EU under contract FP7-611404 (CLERECO).
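    At its core, combining the masking effects amounts to weighting each node's raw strike rate by the probability that the generated pulse survives logical, electrical, and timing masking. The toy sketch below illustrates only that arithmetic; the function name, the independence assumption, and all numbers are placeholders, not MASkIt's actual model.

```python
# Illustrative sketch only, not the MASkIt implementation.
# All numbers below are made-up placeholders.

def node_contribution(raw_fit, p_logical, p_electrical, p_latching):
    """FIT contribution of one node, assuming the three masking effects are independent:
    a strike matters only if it is not logically, electrically, or timing masked."""
    return raw_fit * p_logical * p_electrical * p_latching

nodes = [
    # (raw strike rate in FIT, P(not logically masked), P(not electrically masked),
    #  P(latched within the vulnerable timing window))
    (1e-3, 0.40, 0.90, 0.25),
    (1e-3, 0.10, 0.75, 0.25),
    (1e-3, 0.65, 0.95, 0.25),
]

total_ser = sum(node_contribution(*n) for n in nodes)
print(f"Estimated block SER: {total_ser:.2e} FIT")
```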

    Generating Property-Directed Potential Invariants By Backward Analysis

    This paper addresses the issue of lemma generation in a k-induction-based formal analysis of transition systems, in the linear real/integer arithmetic fragment. A backward analysis, powered by quantifier elimination, is used to output preimages of the negation of the proof objective, viewed as unauthorized states, or gray states. Two heuristics are proposed to take advantage of this source of information. First, a thorough exploration of the possible partitionings of the gray state space discovers new relations between state variables, representing potential invariants. Second, an inexact exploration regroups and over-approximates disjoint areas of the gray state space, also to discover new relations between state variables. k-induction is used to isolate the invariants and check whether they strengthen the proof objective. These heuristics can be applied to the first preimage of the backward exploration and again each time a new preimage is output, refining the information on the gray states. In our context of critical avionics embedded systems, we show that our approach is able to outperform other academic or commercial tools on examples of interest in our application field. The method is introduced and motivated through two main examples, one of which was provided by Rockwell Collins, in a collaborative formal verification framework.
    Comment: In Proceedings FTSCS 2012, arXiv:1212.657
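    As a concrete (and much simplified) illustration of one backward step, the sketch below computes a preimage of the negated proof objective for a toy counter using Z3's quantifier-elimination tactic; the transition system, the objective, and the tooling are assumptions made here for illustration and are not the paper's framework or benchmarks.

```python
# Minimal sketch, assuming Z3's Python API and a toy integer counter.
from z3 import Ints, And, Or, Not, Exists, Tactic, substitute

x, xp = Ints('x xp')                        # current / next state of the counter
T = Or(And(x < 10, xp == x + 1),            # transition relation: count up to 10...
       And(x >= 10, xp == 0))               # ...then wrap around
P = x <= 5                                  # proof objective on the current state

bad_next = substitute(Not(P), (x, xp))      # negated objective over the next state
preimage = Exists([xp], And(T, bad_next))   # states with a transition into a bad state
gray = Tactic('qe')(preimage)               # quantifier elimination yields the gray states
print(gray)                                 # roughly: 5 <= x <= 9
```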

    Goal-Aware Neural SAT Solver

    Modern neural networks obtain information about the problem and calculate the output solely from the input values. We argue that this is not always optimal and that the network's performance can be significantly improved by augmenting it with a query mechanism that allows the network, at run time, to make several solution trials and get feedback on the loss value of each trial. To demonstrate the capabilities of the query mechanism, we formulate an unsupervised (label-free) loss function for the Boolean Satisfiability Problem (SAT) and theoretically show that it allows the network to extract rich information about the problem. We then propose a neural SAT solver with a query mechanism, called QuerySAT, and show that it outperforms the neural baseline on a wide range of SAT tasks.
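    The unsupervised loss is the key ingredient: it scores a soft assignment purely from the formula, with no labels. The sketch below shows one common differentiable relaxation of clause satisfaction of this kind; it is a generic illustration, not necessarily QuerySAT's exact loss or query mechanism.

```python
# Generic differentiable, label-free SAT loss sketch (assumed formulation, not the paper's).
import torch

def clause_sat_prob(x, clause):
    """Probability that one clause is satisfied, treating soft variable values in (0, 1)
    as independent probabilities. clause: signed 1-based indices, e.g. [1, -3] = (x1 or not x3)."""
    p_unsat = torch.ones(())
    for lit in clause:
        v = x[abs(lit) - 1]
        p_unsat = p_unsat * (1 - v if lit > 0 else v)  # probability this literal is false
    return 1 - p_unsat

def formula_loss(x, clauses):
    """Negative log-probability that every clause is satisfied; minimized by satisfying assignments."""
    return -sum(torch.log(clause_sat_prob(x, c) + 1e-9) for c in clauses)

# (x1 or x2) and (not x1 or x2): the gradient pushes x2 towards 1.
x = torch.tensor([0.5, 0.5], requires_grad=True)
loss = formula_loss(x, [[1, 2], [-1, 2]])
loss.backward()
print(loss.item(), x.grad)
```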

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Analytical Modeling of High Performance Reconfigurable Computers: Prediction and Analysis of System Performance.

    The use of a network of shared, heterogeneous workstations, each harboring a Reconfigurable Computing (RC) system, offers high performance users an inexpensive platform for a wide range of computationally demanding problems. However, effectively using the full potential of these systems can be challenging without knowledge of the system's performance characteristics. While some performance models exist for shared, heterogeneous workstations, none thus far account for the addition of Reconfigurable Computing systems. This dissertation develops and validates an analytic performance modeling methodology for a class of fork-join algorithms executing on a High Performance Reconfigurable Computing (HPRC) platform. The model includes the effects of the reconfigurable device, application load imbalance, background user load, basic message passing communication, and processor heterogeneity. Three fork-join applications, a Boolean Satisfiability Solver, a Matrix-Vector Multiplication algorithm, and an Advanced Encryption Standard algorithm, are used to validate the model with homogeneous and simulated heterogeneous workstations. A synthetic load is used to validate the model under various loading conditions, including simulated heterogeneity obtained by making some workstations appear slower than others through background loading. The performance modeling methodology proves to be accurate in characterizing the effects of reconfigurable devices, application load imbalance, background user load, and heterogeneity for applications running on shared, homogeneous and heterogeneous HPRC resources. The model error was in all cases found to be less than five percent for application runtimes greater than thirty seconds and less than fifteen percent for runtimes under thirty seconds. The performance modeling methodology enables us to characterize applications running on shared HPRC resources. Cost functions are used to impose system usage policies, and the results of the modeling methodology are utilized to find the optimal (or near-optimal) set of workstations to use for a given application. The usage policies investigated include determining the computational costs for the workstations and balancing the priority of the background user load against the parallel application. The applications studied fall within the Master-Worker paradigm and are well suited to a grid computing approach. A method for using NetSolve, a grid middleware, with the model and cost functions is introduced, whereby users can produce optimal workstation sets and schedules for Master-Worker applications running on shared HPRC resources.
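    To make the flavor of such a model concrete, the toy sketch below predicts a fork-join makespan from per-worker work, peak rate, and background load; the formula and all numbers are illustrative assumptions in the spirit of the dissertation's model, not the model itself (for instance, it ignores the reconfigurable-device term).

```python
# Toy analytic fork-join runtime estimate; all parameters and numbers are assumptions.

def forkjoin_runtime(work, speeds, bg_tasks, comm_overhead):
    """Predicted fork-join makespan on shared, heterogeneous workers.
    work[i]       -- compute work assigned to worker i (operations)
    speeds[i]     -- worker i's peak rate (operations per second)
    bg_tasks[i]   -- competing background tasks, assumed to time-share the CPU fairly
    comm_overhead -- fixed scatter/gather (message-passing) cost in seconds"""
    per_worker = [w / (s / (1 + b)) for w, s, b in zip(work, speeds, bg_tasks)]
    return comm_overhead + max(per_worker)   # the join waits for the slowest worker

# One slower node and one node shared with a background user dominate the makespan.
print(forkjoin_runtime(work=[1e9, 1e9, 1e9],
                       speeds=[2e8, 2e8, 1e8],
                       bg_tasks=[0, 1, 0],
                       comm_overhead=0.5))
```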

    Multilevel Modeling, Formal Analysis, and Characterization of Single Event Transients Propagation in Digital Systems

    The exponential growth in the number of transistors per chip has brought tremendous progress in the performance and functionality of semiconductor devices, together with reduced physical dimensions and higher speed. Electronic devices used in a wide range of applications such as personal entertainment systems, the automotive industry, medical electronic systems, and the financial sector have changed the way we live. However, recent studies reveal that further downscaling of the transistor size at nano-scale technology leads to major challenges. Reliability (i.e., the ability to provide the intended functionality) is one of them: a system designed in nano-scale nodes is expected to experience more failures in its lifetime than if it had been designed using a larger technology node size. Such failures can lead to serious consequences ranging from financial losses to loss of human life. Soft errors induced by radiation, which were initially considered a rather exotic failure mechanism causing anomalies in satellites, have become one of the most challenging issues impacting the reliability of modern microelectronic systems, including devices at terrestrial altitudes. For instance, in the medical industry, soft errors have been responsible for the failure and recall of many implantable cardiac pacemakers. Depending on the affected transistor in the design, a particle strike can manifest as a bit flip in a state element (i.e., a Single Event Upset (SEU)) or temporarily change the output of a combinational gate (i.e., a Single Event Transient (SET)). SEUs have been widely studied over the last three decades, as they were considered the main source of soft errors. However, recent experiments show that with further technology downscaling, the contribution of SETs to the overall soft error rate is remarkable and, in high frequency systems, may even exceed that of SEUs [1], [2]. In order to minimize the impact of soft errors, the effect of SETs needs to be modeled, predicted, and mitigated. However, despite considerable progress towards developing efficient methodologies for the functional verification of digital designs, advances in non-functional verification (e.g., soft error analysis) have been lagging. This is due to the fact that modeling and analyzing the non-functional properties related to SETs is very challenging, owing to the random nature of these faults and the difficulty of modeling the variation in their characteristics as they propagate. Moreover, many details about the design structure and the SET characteristics may not be available at high abstraction levels. Thus, in high level analysis, many assumptions about SET behavior are usually made, which impacts the accuracy of the generated results. Consequently, the low-cost detection of soft errors due to SETs is very challenging and requires more sophisticated techniques.