64 research outputs found

    Taylor Expansion Diagrams: A Canonical Representation for Verification of Data Flow Designs


    Highly Automated Formal Verification of Arithmetic Circuits

    This dissertation investigates the problems of two distinct formal verification techniques for verifying large-scale multiplier circuits and proposes two approaches to overcome some of these problems. The first technique is equivalence checking based on recurrence relations, while the second is symbolic computation based on the theory of Gröbner bases. The investigation demonstrates that approaches based on symbolic computation scale better and are more robust than state-of-the-art equivalence checking techniques for the verification of arithmetic circuits. Building on this conclusion, the thesis leverages symbolic computation to verify floating-point designs. It proposes a new algebraic equivalence checking technique: in contrast to classical combinational equivalence checking, the proposed technique can check the equivalence of two circuits that differ in the architecture of both their arithmetic units and their control logic, e.g., floating-point multipliers.
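The following sketch (my own toy, not code from the dissertation) illustrates the basic idea behind Gröbner-basis verification of arithmetic circuits: each gate is modelled as a polynomial, the gate polynomials are substituted backwards from the outputs, and the word-level specification must reduce to zero modulo the Boolean constraints x^2 - x = 0. The full-adder netlist and the use of sympy are assumptions made purely for illustration.

```python
# Minimal sketch of algebraic circuit verification on a full adder.
from sympy import symbols, expand, rem

a, b, cin = symbols('a b cin')

# Gate-level model, each gate written as an integer polynomial.
t    = a + b - 2*a*b                    # t    = a XOR b
s    = t + cin - 2*t*cin                # sum  = t XOR cin
cout = a*b + cin*t - a*b*cin*t          # cout = (a AND b) OR (cin AND t)

# Word-level specification: a + b + cin = sum + 2*cout.
residue = expand(s + 2*cout - (a + b + cin))

# Reduce modulo x^2 - x for every Boolean input; these polynomials play the
# role of the field polynomials in the Groebner-basis formulation.
for v in (a, b, cin):
    residue = expand(rem(residue, v**2 - v, v))

print("circuit implements the adder specification:", residue == 0)
```

The remainder is zero exactly when the netlist computes the specified sum; only polynomial reduction is involved, rather than an explicit enumeration of input or state combinations.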

    Symbolic Supervisory Control of Resource Allocation Systems

    Supervisory control theory (SCT) is a formal model-based methodology for verification and synthesis of supervisors for discrete event systems (DES). The main goal is to guarantee that the closed-loop system fulfills given specifications. SCT holds great promise for assisting engineers with the generation of reliable control functions. This is, for instance, beneficial for manufacturing systems where both products and production equipment may change frequently.

    The industrial acceptance of SCT, however, has been limited for at least two reasons: (i) the analysis of DES involves an intrinsic difficulty known as the state-space explosion problem, which makes the explicit enumeration of the enormous state spaces of industrial systems intractable; (ii) the synthesized supervisor, represented as a deterministic finite automaton (FA) or an extended finite automaton (EFA), is not straightforward to implement in an industrial controller.

    To address these issues, this thesis studies the modeling, synthesis and supervisor representation of DES using binary decision diagrams (BDDs), a compact data structure for representing DES models symbolically. We propose several BDD-based algorithms for exploring the symbolically represented state spaces, improving the ability of existing supervisor synthesis approaches to handle large-scale DES and to represent the obtained supervisors appropriately.

    In the same spirit, we bring the efficiency of BDDs into a particular DES application domain -- deadlock avoidance for resource allocation systems (RAS) -- a problem that arises in many technological systems, including flexible manufacturing systems and multi-threaded software. We propose a framework for the effective and computationally efficient development of the maximally permissive deadlock avoidance policy (DAP) for various RAS classes. Besides the employment of symbolic computation, special structural properties of RAS are exploited by the symbolic algorithms to gain additional efficiency in the computation of the sought DAP. Furthermore, to bridge the gap between the BDD-based representation of the target DAP and its actual industrial realization, we extend this work with a procedure that generates a set of "guard" predicates representing the resulting DAP.

    The work presented in this thesis has been implemented in the SCT tool Supremica. Computational benchmarks demonstrate the superiority of the proposed algorithms over previously published results. The work therefore holds strong potential for providing robust, practical and efficient solutions to a broad range of supervisory control and deadlock avoidance problems in the considered DES application domain.
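The set-based sketch below (a toy of my own, not Supremica's implementation) illustrates the synthesis idea described above: compute the "bad" states by a backward fixed point over deadlocks and uncontrollable transitions, then derive for each controllable event a guard predicate characterising the states in which the event may remain enabled. The transition relation, the event names and the use of plain Python sets in place of BDDs are all assumptions for illustration.

```python
# Toy supervisor synthesis with explicit sets standing in for BDDs.
# Hypothetical model: states are integers, transitions are (source, event,
# target) triples, and 'fail' is the only uncontrollable event.
states         = {0, 1, 2, 3, 4}
transitions    = {(0, 'acquire_a', 1), (1, 'acquire_b', 2), (2, 'release', 0),
                  (1, 'acquire_c', 3), (3, 'fail', 4)}    # state 4 is a deadlock
uncontrollable = {'fail'}

def synthesize(states, transitions, uncontrollable):
    # Deadlock states have no outgoing transitions.
    deadlocks = {s for s in states
                 if not any(src == s for src, _, _ in transitions)}
    # Backward fixed point: a state is bad if it is a deadlock or if an
    # uncontrollable event can take it into a bad state.
    bad = set(deadlocks)
    while True:
        newly_bad = {src for (src, ev, dst) in transitions
                     if ev in uncontrollable and dst in bad}
        if newly_bad <= bad:
            break
        bad |= newly_bad
    safe = states - bad
    # Guard for a controllable event: the safe states where the event is
    # defined and every state it can reach is still safe.
    guards = {}
    for ev in {e for (_, e, _) in transitions} - uncontrollable:
        succ = {}
        for (src, e, dst) in transitions:
            if e == ev:
                succ.setdefault(src, set()).add(dst)
        guards[ev] = {s for s in succ if s in safe and succ[s] <= safe}
    return safe, guards

safe, guards = synthesize(states, transitions, uncontrollable)
print("safe states:", sorted(safe))                   # [0, 1, 2]
print("guard for acquire_b:", guards['acquire_b'])    # {1}
print("guard for acquire_c:", guards['acquire_c'])    # set(): always disabled
```

In the thesis the same fixed point and the guard extraction are performed symbolically on BDDs, which is what makes the approach applicable to industrially sized RAS models.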

    Coverage of Compositional Property Sets for Hardware and Hardware-dependent Software in Formal System-on-Chip Verification

    Divide-and-conquer is a common strategy for managing the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC is decomposed into several modules and every module is verified separately. An SoC module is usually reactive: it interacts with the modules in its environment. This interaction is normally modeled by environment constraints, which are applied when verifying the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system, so their correctness is very important for module verification. Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the property set is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed. However, state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified in an industry-standard property language such as SystemVerilog Assertions (SVA). This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified in a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment. Furthermore, this thesis provides a compositional reasoning framework that establishes that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.

    At present, there is a trend of shifting more of the functionality in SoCs from the hardware to the hardware-dependent software (HWDS), which is a crucial component of an SoC, since other software layers, such as the operating system, are built on it. There is therefore an increasing need to apply formal verification to HWDS, especially for safety-critical systems. The interactions between HW and HWDS are often reactive and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW/SW interfaces. This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS. Furthermore, a method for checking the completeness of software properties specified in RSPL is presented; it is motivated by the approach for checking the completeness of hardware properties.
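To make the notion of a complete property set concrete, here is a small toy check of my own (not the C-IPC completeness check from the thesis): for a purely combinational design, a property set is complete if, for every input assignment, the properties admit exactly one output value. The majority-function design, the two example properties and the brute-force enumeration are assumptions made purely for illustration.

```python
# Toy completeness check for a property set over a combinational design.
from itertools import product

def duv(a, b, c):
    """Toy design under verification: a 1-bit majority function."""
    return (a & b) | (a & c) | (b & c)

# Hypothetical property set; each property is a predicate over (inputs, output).
properties = [
    lambda a, b, c, y: y == 1 if (a, b, c) == (1, 1, 1) else True,   # all ones -> 1
    lambda a, b, c, y: y == 0 if a + b + c <= 1 else True,           # at most one 1 -> 0
]

def check_completeness(props):
    undetermined = []
    for a, b, c in product((0, 1), repeat=3):
        admitted = [y for y in (0, 1) if all(p(a, b, c, y) for p in props)]
        assert duv(a, b, c) in admitted, "a property is violated by the design"
        if len(admitted) > 1:          # output not uniquely determined
            undetermined.append((a, b, c))
    return undetermined

gaps = check_completeness(properties)
print("inputs whose output is not determined by the properties:", gaps)
# The two properties leave the "exactly two ones" inputs unconstrained, so the
# set is incomplete; adding a property for that case would close the gap.
```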

    A uniform approach to the complexity and analysis of succinct systems

    This thesis provides a unifying view on the succinctness of systems: the capability of a modeling formalism to describe the behavior of a system of exponential size using a polynomial syntax. The key theoretical contribution is the introduction of sequential circuit machines as a new universal computation model that focuses on succinctness as the central aspect. The thesis demonstrates that many well-known modeling formalisms such as communicating state machines, linear-time temporal logic, or timed automata exhibit an immediate connection to this machine model. Once a (syntactic) connection is established, many complexity bounds for structurally restricted sequential circuit machines can be transferred to a given formalism in a uniform manner. As a consequence, besides a far-reaching unification of independent lines of research, we are also able to provide matching complexity bounds for various analysis problems whose complexities were not known so far. For example, we establish matching lower and upper bounds for the small witness problem and several variants of the bounded synthesis problem for timed automata, a particularly important succinct modeling formalism. Also for timed automata, our complexity-theoretic analysis leads to the identification of tractable fragments of the timed synthesis problem under partial observability. Specifically, we identify timed controller synthesis based on discrete or template-based controllers to be equivalent to model checking. Based on this discovery, we develop a new model checking-based algorithm to efficiently find feasible template instantiations. From a more practical perspective, this thesis also studies the preservation of succinctness in analysis algorithms using symbolic data structures. While efficient techniques exist for specific forms of succinctness considered in isolation, we present a general approach based on abstraction refinement to combine off-the-shelf symbolic data structures. In particular, for handling the combination of concurrency and quantitative timing behavior in networks of timed automata, we report on the tool Synthia, which combines binary decision diagrams with difference bound matrices. In a comparison with the timed model checker Uppaal and the timed game solver Tiga on standard benchmarks from the timed model checking and synthesis domains, respectively, the experimental results clearly demonstrate the effectiveness of our new approach.
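Since the abstract highlights the combination of binary decision diagrams with difference bound matrices (DBMs), the minimal sketch below shows how a DBM represents a clock zone and how it is brought into canonical form by Floyd-Warshall tightening. This is a toy of my own with non-strict bounds only; it is not the data structure used in Synthia, and the clock names and constraints are assumptions for illustration.

```python
# Minimal difference bound matrix (DBM) for clock zones of timed automata.
import math

class DBM:
    """d[i][j] is an upper bound on x_i - x_j; clock x_0 is the constant 0."""

    def __init__(self, num_clocks):
        n = num_clocks + 1                       # +1 for the reference clock x_0
        self.n = n
        self.d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
        for i in range(1, n):                    # clocks are non-negative: 0 - x_i <= 0
            self.d[0][i] = 0

    def constrain(self, i, j, bound):
        """Add the constraint x_i - x_j <= bound."""
        self.d[i][j] = min(self.d[i][j], bound)

    def canonicalize(self):
        """Floyd-Warshall tightening: make every bound as tight as implied."""
        d, n = self.d, self.n
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    d[i][j] = min(d[i][j], d[i][k] + d[k][j])

    def is_empty(self):
        return any(self.d[i][i] < 0 for i in range(self.n))

# Example: clocks x (index 1) and y (index 2) with x <= 3 and y - x <= 1.
z = DBM(2)
z.constrain(1, 0, 3)      # x - 0 <= 3
z.constrain(2, 1, 1)      # y - x <= 1
z.canonicalize()
print("zone empty:", z.is_empty())        # False
print("derived bound y <=", z.d[2][0])    # 4, implied by the two constraints
```

In the combined data structure, a symbolic state pairs a BDD over the discrete variables with a DBM like the one above for the timing part.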

    Advances in Functional Decomposition: Theory and Applications

    Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a reduced ordered binary decision diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, in particular BDD reordering. The third algorithm also finds disjoint-support decompositions, but it is based on a technique that integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm that is more robust than a purely BDD-based one and that improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial-time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level by using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally, we present two publications that resulted from the many detours we have taken along the winding path of our research.
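As a much simplified illustration of the graph-analysis ingredient mentioned above, the sketch below computes single-vertex dominators of a small circuit DAG with one primary output (the dissertation works with multiple-vertex dominators; this toy handles only the single-vertex case). The gate names and netlist are assumptions for illustration.

```python
# Single-vertex dominators of a circuit DAG: gate g dominates node v if every
# path from v to the primary output passes through g -- the structural condition
# that allows the logic between v and g to be split off as a decomposition block.
from graphlib import TopologicalSorter

# Hypothetical tiny netlist: node -> list of fanouts (nodes it feeds).
fanout = {
    'x1': ['a'], 'x2': ['a', 'b'], 'x3': ['b'],
    'a': ['out'], 'b': ['out'], 'out': [],
}

def dominators(fanout, output):
    doms = {}
    # graphlib treats the dict values as "must come first" dependencies, so
    # feeding it the fanout map yields an order where fanouts precede fanins.
    for v in TopologicalSorter(fanout).static_order():
        if v == output:
            doms[v] = {v}
        else:
            doms[v] = {v} | set.intersection(*(doms[s] for s in fanout[v]))
    return doms

doms = dominators(fanout, 'out')
print(doms['x1'])   # {'x1', 'a', 'out'}: every path from x1 runs through gate a
print(doms['x2'])   # {'x2', 'out'}: x2 feeds both a and b, so only the output dominates it
```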

    A survey of DA techniques for PLD and FPGA based systems

    Programmable logic devices (PLDs) are gaining acceptance, of late, for designing systems of all complexities, ranging from glue logic to special-purpose parallel machines. Higher densities and integration levels are made possible by the new breed of complex PLDs and FPGAs. The added complexity of these devices makes automatic computer-aided tools indispensable for achieving good performance and a high usable gate count. In this article, we attempt to present, in a unified manner, the different tools and their underlying algorithms, using a vending machine controller as an illustrative example. Topics covered include logic synthesis for PLDs and FPGAs, along with an in-depth survey of important technology mapping, partitioning, and place-and-route algorithms for different FPGA architectures.