
    Multi-core and/or Symbolic Model Checking

    Get PDF
    We review our progress in high-performance model checking. Our multi-core model checker is based on a scalable hash-table design and parallel random-walk traversal. Our symbolic model checker is based on Multiway Decision Diagrams and the saturation strategy. The LTSmin tool is based on the PINS architecture, which decouples model checking algorithms from the input specification language. Consequently, users can stay in their own specification language and postpone the choice between parallel and symbolic model checking. We support widely different specification languages, including those of SPIN (Promela), mCRL2 and UPPAAL (timed automata). Until recently, multi-core and symbolic algorithms had very little in common, ultimately forcing the user to make a trade-off between memory and speed. Recently, however, we designed a novel multi-core BDD package called Sylvan, which forms an excellent basis for scalable parallel symbolic model checking.
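    As a rough illustration of the "scalable hash table plus parallel traversal" idea, the sketch below implements a toy multi-threaded reachability loop in Python. It is not LTSmin's code: the successors function, the worker count, and the plain locked set (a real tool uses a lock-free hash table) are illustrative assumptions.

    import threading
    from queue import Queue

    def successors(state):
        # hypothetical next-state function of the model under analysis
        x, y = state
        return {((x + 1) % 8, y), (x, (y + 1) % 8)}

    def parallel_reachability(initial, n_workers=4):
        visited = {initial}        # shared hash table of explored states
        lock = threading.Lock()    # stands in for a scalable lock-free table
        work = Queue()
        work.put(initial)

        def worker():
            while True:
                state = work.get()
                if state is None:          # shutdown signal
                    work.task_done()
                    return
                for succ in successors(state):
                    with lock:
                        new = succ not in visited
                        if new:
                            visited.add(succ)
                    if new:
                        work.put(succ)
                work.task_done()

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        work.join()                        # all reachable states processed
        for _ in threads:
            work.put(None)
        for t in threads:
            t.join()
        return visited

    print(len(parallel_reachability((0, 0))))   # 64 states in the toy model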

    Supporting Abstraction when Model Checking ASM

    Get PDF
    Model checking, as a method of automated tool support for verification, is of great interest to industry. It is limited, however, with respect to the size of the systems' state space. In earlier work, we developed an interface between the ASM Workbench and the SMV model checker that allows model checking of finite ASM models. In this work, we add a means of abstraction for the case where the model to be checked is infinite and therefore not directly amenable to model checking. We extend the ASM specification language (ASM-SL) with a notion of abstract types and introduce an interface between ASM-SL and Multiway Decision Graphs (MDGs). MDGs are capable of representing transition systems with abstract types and functions and provide the functionality necessary for symbolic model checking. Our interface maps abstract ASM models into MDGs in a semantics-preserving way. It provides a very simple means of generating abstract models that, although infinite, can be checked by an MDG-based model checker.

    The verification of MDG algorithms in the HOL theorem prover

    Get PDF
    Formal verification of digital systems is achieved, today, using one of two main approaches: state exploration (mainly model checking and equivalence checking) or deductive reasoning (theorem proving). Combining the two approaches promises to overcome the limitations and enhance the capabilities of each, and our research is motivated by this goal. In this thesis, we provide the necessary infrastructure (data structures and algorithms) to define high-level state exploration inside the HOL theorem prover, a platform we call MDG-HOL. While related work has tackled the same problem by representing primitive Binary Decision Diagram (BDD) operations as inference rules added to the core of the theorem prover, we base our approach on Multiway Decision Graphs (MDGs). MDGs generalize ROBDDs to represent and manipulate a subset of first-order logic formulae. With MDGs, a data value is represented by a single variable of an abstract type, and operations on data are represented in terms of uninterpreted functions. Using MDGs instead of BDDs raises the abstraction level of what can be verified by state exploration within a theorem prover. The MDG embedding is based on the logical formulation of an MDG as a Directed Formula (DF). The DF syntax is defined using HOL built-in data types, and we formalize the basic MDG operations on this syntax within HOL following a deep embedding approach, which ensures the consistency of our embedding. We then derive a correctness proof for each basic MDG operator. On top of this platform, MDG reachability analysis is defined in HOL as a conversion that uses the MDG theory within HOL. We demonstrate the effectiveness of our platform on four case studies. The results show that this verification framework offers a considerable gain in automation without sacrificing CPU time or memory usage compared to automatic model checking tools. Finally, we propose a reduction technique to improve MDG model checking based on the MDG-HOL platform. The idea is to prune the transition relation of the circuit using pre-proved theorems and lemmas from the specification given at the system level. We also use the consistency of the specifications to verify whether the reduced model is faithful to the original one. We provide two case studies: the first is the SAT-MDG reduction of an Island Tunnel Controller, and the second is the MDG-HOL assume-guarantee reduction of the Look-Aside Interface. Compared to commercial model checkers, our approach offers a considerable gain in terms of the correctness of its heuristics and reduction techniques; however, a small penalty is paid in terms of CPU time and memory usage.
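    To make the "abstract variables plus uninterpreted functions" idea concrete, here is a minimal Python sketch of an MDG-like node: it is labelled by a sorted variable, and each outgoing edge is labelled either by a concrete value or by a term over uninterpreted function symbols. This is our illustration of the general structure, not the thesis's HOL embedding; all names (Sort, Term, MDGNode, "inc") are assumptions.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple, Union

    @dataclass(frozen=True)
    class Sort:
        name: str
        values: Tuple[str, ...] = ()        # empty tuple marks an abstract sort

    @dataclass(frozen=True)
    class Term:
        func: str                           # uninterpreted function symbol
        args: Tuple["Term", ...] = ()

    @dataclass(frozen=True)
    class Variable:
        name: str
        sort: Sort

    @dataclass
    class MDGNode:
        var: Variable
        # edge label (concrete value or abstract term) -> child node, or the True leaf
        children: Dict[Union[str, Term], Union["MDGNode", bool]] = field(default_factory=dict)

    # A control variable of concrete (boolean) sort, and a data register of
    # abstract sort whose next value is the uninterpreted term inc(data).
    bool_sort = Sort("bool", ("0", "1"))
    word = Sort("word")                     # abstract sort: values are not enumerated
    state = Variable("state", bool_sort)
    data_next = Variable("data'", word)
    g = MDGNode(state, {"1": MDGNode(data_next, {Term("inc", (Term("data"),)): True})})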

    Integrating MDG variable ordering in a VHDL-MDG design verification system

    Full text link
    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal

    Integrating SAT with MDG for Efficient Invariant Checking

    Get PDF
    Multiway Decision Graphs (MDGs) are a canonical representation of a subset of many-sorted first-order logic; this subset generalizes the logic of equality with abstract types and uninterpreted function symbols. Satisfiability (SAT) has been the subject of intensive research in recent years, with significant theoretical and practical contributions. From a practical perspective, a large number of very effective SAT solvers have recently been proposed, most of which are based on improvements to the original Davis-Putnam algorithm, and local search algorithms have allowed solving extremely large satisfiable instances of SAT. Combining verification methodologies enhances the capabilities of each and overcomes their limitations. In this thesis, we introduce a methodology and propose a new design verification tool that integrates MDG and SAT to check the safety of a design by invariant checking. Using MDGs to encode the set of states provides a powerful means of abstraction, and we use a SAT solver to search, under certain encoding constraints, for paths of reachable states that violate the property. In addition, we introduce an automated conversion-verification methodology to convert a Directed Formula (DF) into a Conjunctive Normal Form (CNF) formula that can be fed to a SAT solver; the formal verification of this conversion is conducted within the HOL theorem prover. Finally, we implement the approach and conduct experiments on several examples, along with a case study, to show the correctness and efficiency of our approach.
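    To illustrate the CNF side of such a flow, the fragment below Tseitin-encodes a tiny propositional condition ("state reachable and invariant violated") into clauses that any off-the-shelf SAT solver accepts. The variables and the formula are purely illustrative assumptions, not the thesis's DF-to-CNF conversion.

    import itertools

    _ids = itertools.count(1)
    def fresh():                     # allocate a fresh CNF variable index
        return next(_ids)

    def tseitin_and(a, b, clauses):
        """Return a literal equivalent to (a AND b), adding its defining clauses."""
        out = fresh()
        clauses += [[-out, a], [-out, b], [out, -a, -b]]
        return out

    def tseitin_or(a, b, clauses):
        """Return a literal equivalent to (a OR b), adding its defining clauses."""
        out = fresh()
        clauses += [[-out, a, b], [out, -a], [out, -b]]
        return out

    clauses = []
    reach = fresh()                  # placeholder: state satisfies the reachability encoding
    inv = fresh()                    # placeholder: the invariant holds in that state
    bad = tseitin_and(reach, -inv, clauses)   # bad <=> reach AND NOT inv
    clauses.append([bad])            # assert that a violating state exists
    print(clauses)                   # DIMACS-style clause list, ready for a SAT solver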

    Emerging trends proceedings of the 17th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2004

    Get PDF
    Technical report. This volume constitutes the proceedings of the Emerging Trends track of the 17th International Conference on Theorem Proving in Higher Order Logics (TPHOLs 2004), held September 14-17, 2004 in Park City, Utah, USA. The TPHOLs conference covers all aspects of theorem proving in higher order logics as well as related topics in theorem proving and verification. There were 42 papers submitted to TPHOLs 2004 in the full research category, each of which was refereed by at least 3 reviewers selected by the program committee. Of these submissions, 21 were accepted for presentation at the conference and publication in volume 3223 of Springer's Lecture Notes in Computer Science series. In keeping with longstanding tradition, TPHOLs 2004 also offered a venue for the presentation of work in progress, where researchers invite discussion by means of a brief introductory talk and then discuss their work at a poster session. The work-in-progress papers are included in this volume, which is published as a 2004 technical report of the School of Computing at the University of Utah.

    Providing a formal linkage between MDG and HOL based on a verified MDG system.

    Get PDF
    Formal verification techniques can be classified into two categories: deductive theorem proving and symbolic state enumeration. Each method has complementary advantages and disadvantages. In general, theorem provers are high-reliability systems. They can be applied to expressive formalisms that are capable of modelling complex designs such as processors. However, theorem provers use a glass-box approach: to complete a verification, it is necessary to understand the internal structure in detail. The learning curve is very steep, and modelling and verifying a system is very time-consuming. In contrast, symbolic state enumeration tools use a black-box approach: when verifying a design, the user does not need to understand its internal structure. Their advantages are their speed and ease of use, but they can only be used to prove relatively simple designs, and their security is much lower than that of a theorem proving system. Many hybrid tools have been developed to reap the benefits of both theorem proving systems and symbolic state enumeration systems. Normally, the verification results from one system are translated to the other; in other words, there is a linkage between the two systems. However, how can we ensure that this linkage can be trusted? How can we ensure that the verification system itself is correct? The contribution of this thesis is a methodology that provides a formal linkage between a symbolic state enumeration system and a theorem proving system, based on a verified symbolic state enumeration system. The methodology has been partly realized in two simplified versions of the MDG system (a symbolic state enumeration system) and the HOL system (a theorem proving system), and involves the following three steps. First, we have verified aspects of the correctness of two simplified versions of the MDG system, making certain that the semantics of a program is preserved in that of its translated form. Secondly, we have provided a formal linkage between the MDG system and the HOL system based on importing theorems: MDG verification results can be formally imported into HOL to form HOL theorems. Thirdly, we have combined the translator correctness theorems with the importing theorems. This combination allows low-level MDG verification results to be imported into HOL in terms of the semantics of a high-level language (MDG-HDL). We have also summarized a general method for proving the existential theorem for the specification and implementation of a design. The feasibility of this approach has been demonstrated in a case study: the verification of the correctness and usability theorems of a vending machine.

    Model reductions in MDG-based model checking

    Full text link
    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal

    NuMDG: A New Tool for Multiway Decision Graphs Construction

    Get PDF
    Multiway Decision Graphs (MDGs) are a canonical representation of a subset of many-sorted first-order logic. This subset generalizes the logic of equality with abstract types and uninterpreted function symbols. The distinction between abstract and concrete sorts mirrors the hardware distinction between data path and control. Here we consider ways to improve MDG construction. Efficiency is achieved through the use of the Generalized If-Then-Else (GITE) operator commonly found in Binary Decision Diagram packages. Accordingly, we review the main algorithms used in MDG-based verification techniques. In particular, the Relational Product and Pruning by Subsumption algorithms are defined uniformly through this single GITE operator, which leads to a more efficient implementation, and we provide their correctness proofs. This work can be viewed as a way of adapting ROBDD algorithms to the realm of abstract sorts and uninterpreted functions. The new tool, called NuMDG, accepts an extended SMV language supporting abstract data sorts. Finally, we present experimental results demonstrating the efficiency of the NuMDG tool and evaluating its performance using a set of benchmarks from the SMV package.
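    For readers unfamiliar with the ITE-centred style of BDD packages that GITE generalizes, here is a minimal ROBDD sketch in Python (our illustration, not NuMDG's code): a single recursive if-then-else operator, backed by a unique table for canonicity and a memo table for sharing work, from which every Boolean connective is derived.

    class BDD:
        def __init__(self):
            self.unique = {}                 # (var, low, high) -> node id
            self.nodes = {0: None, 1: None}  # ids 0 and 1 are the terminals
            self.var_of = {}

        def mk(self, var, low, high):
            # hash-consed node constructor: keeps the diagram reduced and canonical
            if low == high:
                return low
            key = (var, low, high)
            if key not in self.unique:
                nid = len(self.nodes)
                self.nodes[nid] = key
                self.var_of[nid] = var
                self.unique[key] = nid
            return self.unique[key]

        def top_var(self, *ns):
            return min(self.var_of[n] for n in ns if n not in (0, 1))

        def cofactor(self, n, var, val):
            if n in (0, 1) or self.var_of[n] != var:
                return n
            _, low, high = self.nodes[n]
            return high if val else low

        def ite(self, f, g, h, memo=None):
            memo = {} if memo is None else memo
            if f == 1: return g
            if f == 0: return h
            if g == h: return g
            key = (f, g, h)
            if key in memo: return memo[key]
            v = self.top_var(f, g, h)
            low  = self.ite(self.cofactor(f, v, 0), self.cofactor(g, v, 0), self.cofactor(h, v, 0), memo)
            high = self.ite(self.cofactor(f, v, 1), self.cofactor(g, v, 1), self.cofactor(h, v, 1), memo)
            memo[key] = self.mk(v, low, high)
            return memo[key]

    # Every connective reduces to ITE, e.g. AND(f, g) = ite(f, g, 0),
    # OR(f, g) = ite(f, 1, g), NOT(f) = ite(f, 0, 1).
    bdd = BDD()
    x = bdd.mk(1, 0, 1)       # variable x1
    y = bdd.mk(2, 0, 1)       # variable x2
    x_and_y = bdd.ite(x, y, 0)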

    Multilevel Modeling, Formal Analysis, and Characterization of Single Event Transients Propagation in Digital Systems

    Get PDF
    The exponential growth in the number of transistors per chip has brought tremendous progress in the performance and functionality of semiconductor devices, together with reduced physical dimensions and higher speed. Electronic devices used in a wide range of applications such as personal entertainment systems, the automotive industry, medical electronic systems, and the financial sector have changed the way we live. However, recent studies reveal that further downscaling of the transistor size at nano-scale technology leads to major challenges. Reliability (i.e., the ability to provide the intended functionality) is one of them: a system designed in nano-scale nodes is expected to experience more failures in its lifetime than if it were designed using a larger technology node. Such failures can lead to serious consequences ranging from financial losses to loss of human life. Soft errors induced by radiation, which were initially considered a rather exotic failure mechanism causing anomalies in satellites, have become one of the most challenging issues impacting the reliability of modern microelectronic systems, including devices at terrestrial altitudes. For instance, in the medical industry, soft errors have been responsible for the failure and recall of many implantable cardiac pacemakers. Depending on the affected transistor in the design, a particle strike can manifest as a bit flip in a state element (i.e., a Single Event Upset (SEU)) or temporarily change the output of a combinational gate (i.e., a Single Event Transient (SET)). SEUs have been widely studied over the last three decades, as they were considered to be the main source of soft errors. However, recent experiments show that with further technology downscaling, the contribution of SETs to the overall soft error rate is significant and, in high-frequency systems, may exceed that of SEUs [1], [2]. In order to minimize the impact of soft errors, the effect of SETs needs to be modeled, predicted, and mitigated. However, despite considerable progress towards developing efficient methodologies for the functional verification of digital designs, advances in non-functional verification (e.g., soft error analysis) have been lagging. This is because the modeling and analysis of non-functional properties related to SETs is very challenging, owing to the random nature of these faults and the difficulty of modeling the variation in their characteristics as they propagate. Moreover, many details about the design structure and the SET characteristics may not be available at high abstraction levels; thus, in high-level analysis, many assumptions about SET behavior are usually made, which impacts the accuracy of the generated results. Consequently, the low-cost detection of soft errors due to SETs is very challenging and requires more sophisticated techniques.