
    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work on clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, together with a survey of its applications to networking. (30 pages; submitted to IEEE Communications Surveys and Tutorials.)

    AllSAT for Combinational Circuits

    Unbounded Scalable Hardware Verification

    Model checking is a formal verification method that has been successfully applied to real-world hardware and software designs. Model checking tools, however, encounter the so-called state-explosion problem, since the size of the state spaces of such designs is exponential in the number of their state elements. In this thesis, we address this problem by exploiting the power of two complementary approaches: (a) counterexample-guided abstraction and refinement (CEGAR) of the design's datapath; and (b) the recently-introduced incremental induction algorithms for approximate reachability. These approaches are well-suited for the verification of control-centric properties in hardware designs consisting of wide datapaths and complex control logic. They also handle most complex design errors in typical hardware designs.

    Datapath abstraction prunes irrelevant bit-level details of datapath elements, thus greatly reducing the size of the state space that must be analyzed and allowing the verification to be focused on the control logic, where most errors originate. The induction-based approximate reachability algorithms offer the potential of significantly reducing the number of iterations needed to prove/disprove given properties by avoiding the implicit or explicit enumeration of reachable states. Our implementation of this verification framework, which we call the Averroes system, extends the approximate reachability algorithms at the bit level to first-order logic with equality and uninterpreted functions. To facilitate this extension, we formally define the solution space and state space of the abstract transition system produced by datapath abstraction. In addition, we develop an efficient way to represent sets of abstract solutions involving present- and next-states and a systematic way to project such solutions onto the space of just the present-state variables.

    To further increase the scalability of the Averroes verification system, we introduce the notion of structural abstraction, which extends datapath abstraction with two optimizations for better classification of state variables as either datapath or control, and with efficient memory abstraction techniques. We demonstrate the scalability of this approach by showing that Averroes significantly outperforms bit-level verification on a number of industrial benchmarks.
    (PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/133375/1/suholee_1.pd)
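
    The datapath abstraction at the heart of Averroes replaces bit-level datapath operators with uninterpreted functions, keeping only congruence: equal operands must produce equal results. Below is a minimal sketch of that idea using the z3 SMT solver, with a hypothetical one-operation ALU standing in for a real datapath (the sort and function names are illustrative, not taken from Averroes).

        # Datapath abstraction sketch: model words as an abstract sort and the ALU
        # as an uninterpreted function, then prove a control-style property that
        # relies only on congruence. Requires: pip install z3-solver
        from z3 import DeclareSort, Function, Consts, Solver, And, Implies, Not, unsat

        Word = DeclareSort('Word')               # abstract sort replacing bit-vectors
        alu = Function('alu', Word, Word, Word)  # uninterpreted stand-in for the ALU
        a, b, x, y = Consts('a b x y', Word)

        # Functional consistency: equal operands give equal results, regardless of
        # what the concrete ALU computes.
        prop = Implies(And(a == x, b == y), alu(a, b) == alu(x, y))

        s = Solver()
        s.add(Not(prop))                         # search for a counterexample
        print('property holds' if s.check() == unsat else 'counterexample found')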

    Cell libraries and verification

    Digital electronic devices are often implemented using cell libraries to provide the basic logic elements, such as Boolean functions and on-chip memories. To be usable both during the development of chips, which is usually done in a hardware definition language, and for the final layout, which consists of lithographic masks, cells are described in multiple ways. Among these, there are multiple descriptions of the behavior of cells, for example one at the level of hardware definition languages, and another one in terms of the transistors that are ultimately produced. Thus, correct functioning of the device also depends on the correctness of the cell library, requiring all views of a cell to correspond with each other. In this thesis, techniques are presented to verify some of these correspondences in cell libraries.

    First, a technique is presented to check that the functional description in a hardware definition language and the transistor netlist description implement the same behavior. For this purpose, a semantics is defined for the commonly used subset of the hardware definition language Verilog. This semantics is encoded into Boolean equations, which can also be extracted from a transistor netlist. A model checker is then used to prove equivalence of these two descriptions, or to provide a counterexample showing that they are different.

    Even in basic elements such as cells, there exists non-determinism reflecting internal behavior that cannot be controlled from the outside. It is desirable that such internal behavior does not lead to different externally observable behavior, i.e., to different computation results. This thesis presents a technique to efficiently check, both for hardware definition language descriptions and for transistor netlist descriptions, whether non-determinism has an effect on the observable computation.

    Power consumption of chips has become a very important topic, especially since devices have become mobile and therefore battery powered. In order to predict and to maximize battery life, the power consumption of cells should be measured and reduced in an efficient way. To achieve these goals, this thesis also takes power consumption into account when analyzing non-deterministic behavior. On the one hand, behaviors consuming the same amount of power then have to be measured only once. On the other hand, functionally equivalent computations can be forced to consume the least amount of power without affecting the externally observable behavior of the cell, for example by introducing appropriate delays.

    A way to prevent externally observable non-deterministic behavior in practical hardware designs is to add timing checks. These checks rule out certain input patterns which must not be generated by the environment of a cell. If an input pattern can be found that is not forbidden by any of the timing checks, yet allows non-deterministic behavior, then the cell's environment is not sufficiently restricted, which usually indicates a forgotten timing check. Therefore, the check for non-determinism is extended to respect these timing checks and to consider only counterexamples that are not ruled out. If such a counterexample can be found, it indicates which timing checks need to be added.

    Because current hardware designs run at very high speeds, timing analysis of cells has become a very important issue. For this purpose, cell libraries include a description of the delay arcs present in a cell, giving the amount of time it takes for an input change to propagate to the outputs of a cell. For these descriptions, too, it is desired that they reflect the actual behavior of the cell. On the one hand, a delay arc that never manifests itself may result in a clock frequency that is lower than necessary. On the other hand, a forgotten delay arc can cause the clock frequency to be too high, impairing the functioning of the final chip. To relate the functional description of a cell with its timing specification, this thesis presents techniques to check whether delay arcs are consistent with the functionality, and to list all possible delay arcs.

    Computing new output values of a cell, given some new input values, requires all connections among the transistors in a cell to reach stable values. Hitherto it was assumed that such a stable situation will always be reached eventually. To actually check this, a wire is abstracted into a sequence of stable values. Using this abstraction, checking whether stable situations are always reached reduces to showing that an infinite sequence of such stable values exists. This is known in the term rewriting literature as productivity, the infinitary equivalent of termination. The final contributions of this thesis are techniques to automatically prove productivity. For this purpose, existing termination proving tools for term rewriting are re-used, benefiting from their tremendous strength and continuous improvements.
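
    The HDL-versus-netlist check described above amounts to encoding both views of a cell as Boolean equations and asking whether any input distinguishes them (a miter). Below is a minimal sketch of such a check using the z3 SMT solver in place of the thesis's model checker; the AOI21-style cell and both encodings are invented for illustration.

        # Equivalence check sketch: XOR the two Boolean encodings of a cell and
        # search for an input assignment where they differ.
        from z3 import Bools, Solver, And, Or, Not, Xor, sat

        a, b, c = Bools('a b c')
        hdl_view = Not(Or(And(a, b), c))            # behavior from the HDL description
        netlist_view = And(Not(And(a, b)), Not(c))  # behavior from the transistor netlist

        s = Solver()
        s.add(Xor(hdl_view, netlist_view))          # miter: satisfiable iff views differ
        if s.check() == sat:
            print('views differ, e.g. at', s.model())
        else:
            print('views are equivalent')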

    Computer Aided Verification

    This open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented, together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Abstraction and Refinement Techniques for Ternary Symbolic Simulation with Guard-value Encoding

    We propose a novel encoding, called guard-value encoding, for the ternary domain {0, 1, X}. Among the advantages it has over the more conventional dual-rail encoding, the flexibility of representing X with either value of the value bit is especially important. We develop data abstraction and memory abstraction techniques based on the guard-value encoding. Our data abstraction reduces much more of the state space than conventional ternary abstraction's approach of over-approximating a set of Boolean values with a smaller set of ternary values. We also show how our data abstraction enables bit-width reduction, which helps further simplify verification problems. Our memory abstraction is applicable to any array of elements, which makes it much more general than existing memory abstraction techniques. We show how our memory abstraction can effectively reduce an array to just a few elements even when existing approaches are not applicable. We make extensive use of symbolic indexing to construct symbolic ternary values, which are used in symbolic simulation.

    Lastly, we give a new perspective on refinement for ternary abstraction. Refinement is needed when too much information is lost through the use of the ternary domain, so that the property evaluates to the unknown X. We present a collection of new refinement approaches that distinguish themselves from existing ones by modifying the transition function instead of the initial ternary state and ternary stimulus. This way, our refinement either preserves the abstraction level or degrades it only slightly.

    We demonstrate our proposed techniques on a wide range of designs and properties. With data abstraction, we usually observe at least a 10X improvement in verification time compared to Boolean verification algorithms such as Boolean Bounded Model Checking (BMC), as well as usually at least a 2X, and often 10X, improvement over conventional ternary abstraction. Our memory abstraction significantly improves how the verification time scales with the design parameters and the depth (the number of cycles) of the verification. Our refinement approaches are also demonstrated to be much better than existing ones most of the time. For example, when verifying a property of a synthetic example based on a superscalar microprocessor's bypass paths, our data abstraction takes 505 seconds while both ternary abstraction and BMC time out at 1800 seconds. Bit-width reduction can further save 44 seconds, and our memory abstraction can save 237 seconds. This verification problem requires refinement; if we substitute our refinement with an existing approach, the verification time with data abstraction doubles.
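
    To make the encoding concrete, the following toy Python model uses one plausible guard-value convention (assumed here, not necessarily the thesis's): guard = 1 marks a known value, guard = 0 marks X, so the value bit of an X is free and X has two representations.

        # Toy guard-value encoding of the ternary domain {0, 1, X}.
        from typing import NamedTuple

        class GV(NamedTuple):
            guard: int  # 1 = known, 0 = unknown (X)
            value: int  # meaningful only when guard == 1

        def t_and(p: GV, q: GV) -> GV:
            # A controlling 0 forces the output to 0 even if the other input is X.
            if (p.guard and not p.value) or (q.guard and not q.value):
                return GV(1, 0)
            if p.guard and q.guard:       # both inputs known, both 1
                return GV(1, 1)
            return GV(0, 0)               # otherwise the output is unknown

        X0, X1 = GV(0, 0), GV(0, 1)       # two encodings of the same X
        ZERO, ONE = GV(1, 0), GV(1, 1)
        assert t_and(ZERO, X1) == ZERO    # 0 AND X = 0
        assert t_and(ONE, X0).guard == 0  # 1 AND X = X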

    Robust recognition and exploratory analysis of crystal structures using machine learning

    In materials science, artificial-intelligence tools are driving a paradigm shift towards big-data-centric research. Large computational databases with millions of entries, as well as high-resolution experiments such as electron microscopy, contain a large and growing amount of information. To leverage this under-utilized, yet very valuable, data, automatic analytical methods need to be developed. The classification of the crystal structure of a material is essential for its characterization. The available data is structurally diverse but often defective and incomplete. A suitable method should therefore be robust with respect to sources of inaccuracy while being able to treat multiple systems; available methods do not fulfill both criteria at the same time. In this work, we introduce ARISE, a Bayesian-deep-learning-based framework that can treat more than 100 structural classes in a robust fashion, without any predefined threshold. The selection of structural classes, which can be easily extended on demand, encompasses a wide range of materials, in particular not only bulk but also two- and one-dimensional systems. For the local study of large, polycrystalline samples, we extend ARISE by introducing so-called strided pattern matching. While being trained on ideal structures only, ARISE correctly characterizes strongly perturbed single- and polycrystalline systems, from both synthetic and experimental sources. The probabilistic nature of the Bayesian-deep-learning model allows us to obtain principled uncertainty estimates, which are found to be correlated with the crystalline order of metallic nanoparticles in electron-tomography experiments. Applying unsupervised learning to the internal neural-network representations reveals grain boundaries and otherwise unapparent structural regions sharing easily interpretable geometrical properties. This work enables the hitherto hindered analysis of noisy atomic structural data.
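
    As a concrete illustration of how a Bayesian deep-learning classifier yields such uncertainty estimates, the numpy sketch below computes the predictive distribution, its entropy, and the mutual information from repeated stochastic forward passes (as in Monte Carlo dropout); the softmax samples are simulated rather than produced by ARISE's actual network.

        # Uncertainty from T stochastic forward passes of a probabilistic classifier.
        import numpy as np

        def predictive_stats(softmax_samples):
            """softmax_samples: array of shape (T, n_classes)."""
            mean = softmax_samples.mean(axis=0)              # predictive distribution
            entropy = -(mean * np.log(mean + 1e-12)).sum()   # total uncertainty
            expected_entropy = -(softmax_samples
                                 * np.log(softmax_samples + 1e-12)).sum(axis=1).mean()
            mutual_info = entropy - expected_entropy         # epistemic (model) part
            return mean.argmax(), entropy, mutual_info

        rng = np.random.default_rng(0)
        samples = rng.dirichlet([8.0, 1.0, 1.0], size=20)    # 20 simulated passes, 3 classes
        label, H, MI = predictive_stats(samples)
        print(label, round(float(H), 3), round(float(MI), 3))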

    Automated incremental software verification

    Software continuously evolves to meet rapidly changing human needs. Each evolved transformation of a program is expected to preserve important correctness and security properties. Aiming to assure program correctness after a change, formal verification techniques, such as Software Model Checking, have recently benefited from fully automated solutions based on symbolic reasoning and abstraction. However, the majority of state-of-the-art model checkers are designed so that each new software version has to be verified from scratch. In this dissertation, we investigate new Formal Incremental Verification (FIV) techniques that aim at making software analysis more efficient by reusing invested efforts between verification runs. In order to show that FIV can be built on top of different verification techniques, we focus on three complementary approaches to automated formal verification.

    First, we contribute an FIV technique for SAT-based Bounded Model Checking, developed to verify programs with (possibly recursive) functions with respect to a set of pre-defined assertions. We present a function-summarization framework based on Craig interpolation that allows extracting and reusing over-approximations of function behaviors. We introduce an algorithm to revalidate the summaries of one program locally in order to prevent re-verification of another program from scratch.

    Second, we contribute a technique for simulation-relation synthesis for loop-free programs that do not necessarily contain assertions. We introduce an SMT-based abstraction-refinement algorithm that proceeds by guessing a relation and checking whether it is a simulation relation. We present a novel algorithm for discovering simulations symbolically, by means of solving ∀∃-formulas and extracting witnessing Skolem relations.

    Third, we contribute an FIV technique for SMT-based Unbounded Model Checking, developed to verify programs with (possibly nested) loops. We present an algorithm that automatically derives simulations between programs with different loop structures. The automatically synthesized simulation relation is then used to migrate safe inductive invariants across evolution boundaries.

    Finally, we contribute the implementation and evaluation of all our algorithmic contributions, and confirm that state-of-the-art model checking tools can successfully be extended with FIV capabilities.
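
    The summary-revalidation step described above can be pictured as a single solver query: a summary (an over-approximation of a function's behavior) proved for the old program version is rechecked against the new function body, and only if the check fails does full re-verification run. Below is a minimal z3 sketch with an invented integer function and summary; it illustrates the idea, not the dissertation's actual framework.

        # Summary revalidation sketch: does the new function body still satisfy the
        # over-approximating summary proved for the previous version?
        from z3 import Ints, Implies, Not, Solver, If, unsat

        x, r = Ints('x r')
        summary = Implies(x >= 0, r >= x)         # proved for version 1
        new_body = r == If(x > 10, x + x, x + 1)  # version 2 of the function

        s = Solver()
        s.add(new_body, Not(summary))             # any behavior violating the summary?
        if s.check() == unsat:
            print('summary revalidated; reuse it')
        else:
            print('summary broken; re-verify this function')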