
    New developments in the theory of Groebner bases and applications to formal verification

    We present foundational work on standard bases over rings and on Boolean Groebner bases in the framework of Boolean functions. The research was motivated by our collaboration with electrical engineers and computer scientists on problems arising from formal verification of digital circuits. Algebraic modelling of formal verification problems is developed on the word level as well as on the bit level. The word-level model leads to Groebner bases in the polynomial ring over Z/2^n, while the bit-level model leads to Boolean Groebner bases. In addition to the theoretical foundations of both approaches, the algorithms have been implemented. Using these implementations we show that special data structures and the exploitation of symmetries make Groebner bases competitive with state-of-the-art tools from formal verification, while having the advantage of being systematic and more flexible. Comment: 44 pages, 8 figures, submitted to the Special Issue of the Journal of Pure and Applied Algebra.
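
    A minimal sketch of the bit-level modelling described above, assuming SymPy as a stand-in for the authors' dedicated implementation: each gate of a 1-bit full adder contributes one polynomial over Z/2, the Boolean field equations x^2 + x are added, and the specification polynomials are checked for membership in the resulting ideal via a Groebner basis. The circuit and variable names are invented for illustration.

```python
# Hypothetical illustration: bit-level algebraic model of a full adder over GF(2).
from sympy import symbols, groebner

a, b, cin, t1, t2, t3, s, cout = symbols("a b cin t1 t2 t3 s cout")

gate_polys = [
    t1 + a + b,                # t1   = a XOR b      (half adder 1)
    t2 + a * b,                # t2   = a AND b
    s + t1 + cin,              # s    = t1 XOR cin   (half adder 2)
    t3 + t1 * cin,             # t3   = t1 AND cin
    cout + t2 + t3 + t2 * t3,  # cout = t2 OR t3
]
field_polys = [v**2 + v for v in (a, b, cin, t1, t2, t3, s, cout)]  # x^2 = x

G = groebner(gate_polys + field_polys,
             cout, s, t3, t2, t1, a, b, cin, order="lex", modulus=2)

# Specification of a full adder: sum and carry bits as polynomials over GF(2).
spec_sum = s + a + b + cin
spec_carry = cout + a * b + a * cin + b * cin

print(G.contains(spec_sum))    # True: the netlist implements the sum bit
print(G.contains(spec_carry))  # True: the netlist implements the carry bit
```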

    Highly Automated Formal Verification of Arithmetic Circuits

    This dissertation investigates the problems of two distinct formal verification techniques for verifying large-scale multiplier circuits and proposes two approaches to overcome some of these problems. The first technique is equivalence checking based on recurrence relations, while the second is symbolic computation based on the theory of Gröbner bases. The investigation demonstrates that approaches based on symbolic computation have better scalability and robustness than state-of-the-art equivalence checking techniques for the verification of arithmetic circuits. Building on this conclusion, the thesis leverages symbolic computation to verify floating-point designs. It proposes a new algebraic equivalence checking technique; in contrast to classical combinational equivalence checking, the proposed technique is capable of checking the equivalence of two circuits which have different architectures of arithmetic units as well as control logic parts, e.g., floating-point multipliers.
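
    A sketch of algebraic equivalence checking in the same spirit as the example above, again assuming SymPy rather than the dissertation's tool: two implementations of the same function (a direct XOR gate and a four-NAND network) are modelled as polynomials over Z/2, and equivalence is established by showing that the "miter" polynomial, the sum of the two outputs, lies in the combined ideal. The netlists are invented for illustration.

```python
# Hypothetical illustration: algebraic equivalence check of two XOR implementations.
from sympy import symbols, groebner

a, b, o1, n1, n2, n3, o2 = symbols("a b o1 n1 n2 n3 o2")

# Implementation 1: a single XOR gate.
impl1 = [o1 + a + b]

# Implementation 2: XOR built from four NAND gates (NAND(x, y) = 1 + x*y over GF(2)).
impl2 = [
    n1 + 1 + a * b,
    n2 + 1 + a * n1,
    n3 + 1 + b * n1,
    o2 + 1 + n2 * n3,
]
field_polys = [v**2 + v for v in (a, b, o1, n1, n2, n3, o2)]

G = groebner(impl1 + impl2 + field_polys,
             o1, o2, n1, n2, n3, a, b, order="lex", modulus=2)

# The circuits are equivalent iff o1 + o2 reduces to zero, i.e. lies in the ideal.
print(G.contains(o1 + o2))  # True
```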

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference, an author-supplied abstract, a number of keywords and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Speeding up hardware verification by automated data path scaling

    The main obstacle to formal hardware verification of digital circuits is ever-increasing design size and circuit complexity. Therefore, reducing runtimes and the amount of memory needed for verification computations is a major requirement for successfully applying formal verification techniques to industrial circuit designs. This thesis presents a new abstraction technique for Bounded Model Checking of digital circuits. The proposed technique reduces the sizes of design models before verification by implementing a fully automated scaling of data path widths. Digital circuit designs are usually given as Register-Transfer-Level (RTL) specifications, but most industrial hardware verification tools are based on bit-level methods such as SAT or BDD techniques. RTL specifications contain explicit structural information about high-level data flow and about the widths of data path signals. We introduce a one-to-one abstraction technique for RTL property checking which exploits such high-level information. Given an RTL circuit description, a scaled-down abstract model is computed in which signal widths are reduced with respect to an additionally given formal property. For each data path and each signal accessing the data path, the original width n is shrunk to the minimum width m <= n such that the property which is to be checked holds for the scaled model if and only if it holds for the original design. We furthermore provide a technique which, if the property does not hold, computes counterexamples for the original design from counterexamples found on the reduced model. Thus, the verification task can be completely carried out on the scaled-down version of the design; false negatives cannot occur. The complexity of SAT and BDD based model checking often depends on the number of bits occurring in a design and therefore on the widths of the data path signals. Linear signal width reductions result in exponentially smaller state spaces. Hence, automated data path scaling can significantly speed up the runtimes of verification tools and allows larger design sizes to be handled. Experimental results on large industrial circuits demonstrate the applicability and efficiency of the proposed method.
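
    A toy illustration of the reduction idea, not the thesis' algorithm: a property of a small RTL-style design is checked exhaustively on a scaled 1-bit data path, and the counterexample found there is lifted to the original 8-bit width, where it still refutes the property. The design, the property and the widths are invented for the example.

```python
# Hypothetical illustration: checking a property on a scaled-down data path
# and lifting the counterexample back to the original width.
from itertools import product

def design_out(data, enable, width):
    # RTL-style behaviour: out <= enable ? data : 0
    mask = (1 << width) - 1
    return (data & mask) if enable else 0

def find_counterexample(width):
    # Property under check (intentionally too strong): out always equals data.
    for data, enable in product(range(1 << width), (0, 1)):
        if design_out(data, enable, width) != data:
            return data, enable
    return None

# Check on the scaled 1-bit model instead of the original 8-bit data path.
cex = find_counterexample(width=1)
print("counterexample on scaled model:", cex)        # (1, 0)

# Lift it to the original width by replicating the single data bit across 8 bits.
data1, enable = cex
data8 = 0xFF if data1 else 0x00
print("still fails at full width:",
      design_out(data8, enable, width=8) != data8)   # True
```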

    A survey of DA techniques for PLD and FPGA based systems

    Programmable logic devices (PLDs) are gaining in acceptance, of late, for designing systems of all complexities, ranging from glue logic to special-purpose parallel machines. Higher densities and integration levels are made possible by the new breed of complex PLDs and FPGAs. The added complexity of these devices makes automated computer-aided tools indispensable for achieving good performance and a high usable gate count. In this article, we attempt to present, in a unified manner, the different tools and their underlying algorithms, using a vending machine controller as an illustrative example. Topics covered include logic synthesis for PLDs and FPGAs, along with an in-depth survey of important technology mapping, partitioning, and place-and-route algorithms for different FPGA architectures.

    Automated synthesis and optimization of multilevel logic circuits.

    With the increased complexity of Very Large Scale Integrated (VLSI) circuits, multilevel logic synthesis plays an even more important role due to its flexibility and compactness. The history of symbolic logic and some typical techniques for multilevel logic synthesis are reviewed. These methods include the algorithmic approach, the rule-based approach, the Binary Decision Diagram (BDD) approach, the Field Programmable Gate Array (FPGA) approach and several perturbation applications.

    One new kind of don't cares (DCs), called functional DCs, has been proposed for multilevel logic synthesis. The conventional two-level cubes are generalized to multilevel cubes. Then functional DCs are generated based on the properties of containment. The concept of containment is more general than unateness, which leads to the generation of new DCs. A separate C program has been developed to utilize the functional DCs generated as a Boolean function is decomposed, for both single-output and multiple-output functions. The program can produce better results than script.rugged of SIS, developed by UC Berkeley, both in area and speed, in less CPU time, for a number of test cases from the MCNC and IWLS'93 benchmarks.

    In certain applications AND/XOR (Reed-Muller) logic has shown some attractive advantages over the standard Boolean logic based on AND/OR operations. A bidirectional conversion algorithm between these two paradigms is presented based on the concept of polarity for sum-of-products (SOP) Boolean functions, multiple segment and multiple pointer facilities. Experimental results show that the algorithm is much faster than previously published programs for any fixed polarity. Based on this algorithm, a new technique called redundancy-removal is applied to generalize the idea to very large multiple-output Boolean functions. Results for benchmarks with up to 199 inputs and 99 outputs are presented.

    Applying the preceding conversion program, any Boolean function can be expressed in fixed-polarity Reed-Muller form. There are 2^n polarities for an n-variable function, and the number of product terms depends on these polarities. The problem of exact polarity minimization is computationally expensive, and current programs are only suitable when n <= 15. Based on a comparison of the concepts of polarity in standard Boolean logic and Reed-Muller logic, a fast algorithm is developed and implemented in C which can find the best polarity for multiple-output functions. Benchmark examples of up to 25 inputs and 29 outputs run on a personal computer are given.

    After the best polarity for a Boolean function is calculated, the function can be further simplified using mixed-polarity methods by combining adjacent product terms. Hence, an efficient program is developed based on a decomposition strategy to implement mixed-polarity minimization for both single-output and very large multiple-output Boolean functions. Experimental results show that the numbers of product terms are much smaller than the results produced by ESPRESSO for some categories of functions.
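
    A minimal sketch of one direction of the SOP/Reed-Muller conversion discussed above, not the thesis' program: the coefficients of the positive-polarity (all variables uncomplemented) Reed-Muller form are computed from a truth table with the standard butterfly transform over GF(2). The function name and the two-variable examples are invented for illustration.

```python
# Hypothetical illustration: truth table -> positive-polarity Reed-Muller (AND/XOR) form.
def pprm_coefficients(truth_table, n):
    """truth_table[i] is f evaluated at the input whose variable values are the bits of i.
    Returns the 2^n coefficients of the positive-polarity Reed-Muller expansion;
    the coefficient at index m belongs to the product of the variables set in m."""
    coeffs = list(truth_table)
    for bit in range(n):                       # one butterfly pass per variable
        for i in range(1 << n):
            if i & (1 << bit):
                coeffs[i] ^= coeffs[i ^ (1 << bit)]
    return coeffs

# Example: f(x1, x0) = x0 OR x1 has truth table [0, 1, 1, 1] and the
# Reed-Muller form x0 XOR x1 XOR x0*x1, i.e. coefficients [0, 1, 1, 1].
print(pprm_coefficients([0, 1, 1, 1], 2))      # [0, 1, 1, 1]

# Example: f(x1, x0) = x0 AND x1 -> only the x0*x1 term survives.
print(pprm_coefficients([0, 0, 0, 1], 2))      # [0, 0, 0, 1]
```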