Formal Analysis of Arithmetic Circuits using Computer Algebra - Verification, Abstraction and Reverse Engineering
Despite considerable progress in the verification and abstraction of random and control logic, advances in formal verification of arithmetic designs have been lagging. This can be attributed mostly to the difficulty of efficiently modeling arithmetic circuits and datapaths without resorting to computationally expensive Boolean methods, such as Binary Decision Diagrams (BDDs) and Boolean Satisfiability (SAT), that require “bit blasting”, i.e., flattening the design to a bit-level netlist. Approaches that rely on computer algebra and Satisfiability Modulo Theories (SMT) methods are either too abstract to handle the bit-level nature of arithmetic designs or require solving computationally expensive decision or satisfiability problems. The work proposed in this thesis aims at overcoming the limitations of analyzing arithmetic circuits, specifically at the post-synthesis phase. It addresses the verification, abstraction, and reverse engineering problems of arithmetic circuits at an algebraic level, treating an arithmetic circuit and its specification as a properly constructed algebraic system. The proposed technique solves these problems by function extraction, i.e., by deriving the arithmetic function computed by the circuit from its low-level implementation using a computer algebra rewriting technique. The proposed techniques work on large integer arithmetic circuits and finite field arithmetic circuits, up to 512 bits wide and containing millions of logic gates.
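As a toy illustration of function extraction by algebraic rewriting (a single half adder; not an example taken from the thesis), the output signature, a weighted sum of the output bits, is rewritten backward through the gate equations into a polynomial in the primary inputs:

\[
\mathrm{Sig}_{\mathrm{out}} = 2C + S, \qquad C = ab \ (\text{AND}), \qquad S = a + b - 2ab \ (\text{XOR}),
\]
\[
\mathrm{Sig}_{\mathrm{out}} \;\longrightarrow\; 2ab + (a + b - 2ab) \;=\; a + b \;=\; \mathrm{Sig}_{\mathrm{in}},
\]

so the extracted word-level function is integer addition.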
Bit-Vector Model Counting using Statistical Estimation
Approximate model counting for bit-vector SMT formulas (generalizing #SAT) has many applications, such as probabilistic inference and quantitative information-flow security, but it is computationally difficult. Adding random parity constraints (XOR streamlining) and then checking satisfiability is an effective approximation technique, but it requires a prior hypothesis about the model count to produce useful results. We propose an approach inspired by statistical estimation to continually refine a probabilistic estimate of the model count for a formula, so that each XOR-streamlined query yields as much information as possible. We implement this approach, with an approximate probability model, as a wrapper around an off-the-shelf SMT solver or SAT solver. Experimental results show that the implementation is faster than the most similar previous approaches, which used simpler refinement strategies. The technique also lets us model count formulas over floating-point constraints, which we demonstrate with an application to a vulnerability in differential privacy mechanisms.
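A minimal, self-contained sketch of the underlying XOR-streamlining idea (not the paper's refinement strategy or implementation): each random parity constraint roughly halves the expected number of models, so the number of constraints a formula can absorb while remaining satisfiable indicates approximately log2 of its model count. The toy formula, variable count, and brute-force oracle below are illustrative stand-ins for a real SAT/SMT query.

import random
from itertools import product

N = 12                                   # number of Boolean variables

def phi(x):                              # toy formula: about 1920 (~2^11) models out of 2^12
    return (x[0] ^ x[1]) and (sum(x[2:6]) >= 1)

def sat_with_xors(xors):
    """Is phi AND all parity constraints satisfiable? (brute-force oracle stand-in)"""
    for x in product((0, 1), repeat=N):
        if phi(x) and all(sum(x[i] for i in s) % 2 == b for s, b in xors):
            return True
    return False

def estimate_log2_count(trials=10):
    """Median of the largest k for which k random XOR constraints keep phi satisfiable."""
    results = []
    for _ in range(trials):
        k = 0
        xors = []
        while True:
            # each XOR constraint picks ~half the variables and a random parity bit
            s = [i for i in range(N) if random.random() < 0.5]
            xors.append((s, random.randint(0, 1)))
            if not sat_with_xors(xors):
                break
            k += 1
        results.append(k)
    results.sort()
    return results[len(results) // 2]

print("estimated log2(model count) ~", estimate_log2_count())

Real implementations replace the brute-force oracle with XOR-augmented SAT/SMT queries and, as in the paper, choose each query to extract as much information as possible about the count.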
Function Verification of Combinational Arithmetic Circuits
Hardware design verification is the most challenging part of the overall hardware design process, because design size and complexity are growing rapidly while performance requirements keep rising. Conventional simulation-based verification methods cannot keep up with the rapid increase in design size, since it is impossible to exhaustively test all input vectors of a complex design. An important part of hardware verification is combinational arithmetic circuit verification. It draws a lot of attention because flattening the design to the bit level, known as the bit-blasting problem, hinders the efficiency of many current formal techniques. The goal of this thesis is to introduce a robust and efficient formal verification method for combinational integer arithmetic circuits, based on an in-depth analysis of recent advances in computer algebra. The method proposed here solves the verification problem at the bit level while avoiding the bit-blasting problem. It also avoids the expensive Groebner basis computation typically employed by symbolic computer algebra methods. The proposed method verifies the gate-level implementation of the design by representing the design components (logic gates and arithmetic modules) as polynomials in Z_{2^n}. It then transforms the polynomial representing the output bits (called the “output signature”) into a unique polynomial in the input signals (called the “input signature”) using gate-level information of the design. The computed input signature is then compared with the reference input signature (golden model) to determine whether the circuit behaves as anticipated. If the reference input signature is not given, our method can be used to compute (or extract) the arithmetic function of the design by computing its input signature. Additional tools, based on canonical word-level design representations (such as TED or BMD), can be used to determine the function that the computed input signature represents. We demonstrate the applicability of the proposed method to arithmetic circuit verification on a large number of designs.
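As a hedged illustration of the signature-rewriting idea (assuming sympy; a toy 2-bit ripple-carry adder, not one of the thesis benchmarks): the output signature, a weighted sum of the output bits, is rewritten through algebraic gate models into a polynomial in the primary inputs and compared against the golden-model input signature.

from sympy import symbols, expand

a0, a1, b0, b1 = symbols('a0 a1 b0 b1')

def XOR(x, y): return x + y - 2*x*y      # algebraic model of XOR over {0,1}
def AND(x, y): return x * y              # algebraic model of AND over {0,1}

# Gate-level description (the two carry-out terms are mutually exclusive, so OR = +).
s0 = XOR(a0, b0)
c1 = AND(a0, b0)
x1 = XOR(a1, b1)
s1 = XOR(x1, c1)
c2 = AND(a1, b1) + AND(c1, x1)

# Output signature: weighted sum of the output bits.
out_sig = 4*c2 + 2*s1 + s0

# Reference input signature (golden model): A + B.
in_sig = (a0 + 2*a1) + (b0 + 2*b1)

print(expand(out_sig - in_sig) == 0)     # expected: True -> the circuit is a correct adder

On larger designs the order in which gates are substituted matters; this naive global substitution is only meant to show the signature idea, not the thesis's rewriting strategy.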
Design, Verification, Test and In-Field Implications of Approximate Computing Systems
Today, the concept of approximation in computing is becoming more and more of a “hot topic” for investigating how computing systems can be made more energy efficient, faster, and less complex. Intuitively, instead of performing exact computations and, consequently, requiring a large amount of resources, Approximate Computing aims at selectively relaxing the specifications, trading accuracy off for efficiency. While Approximate Computing holds several promises with respect to systems’ performance, energy efficiency, and complexity, it poses significant challenges regarding the design, verification, test, and in-field reliability of Approximate Computing systems. This tutorial paper covers these aspects, leveraging the authors’ experience in the field to present state-of-the-art solutions to apply during the different development phases of an Approximate Computing system.
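As a hedged, minimal illustration of the accuracy-versus-cost trade-off (not an example from the paper), a truncated adder that discards the k low-order bits, and with them the corresponding carry chain:

def approx_add(a, b, k=4):
    # drop the k low-order bits of each operand before adding,
    # so no carry logic is needed for those bit positions
    return ((a >> k) + (b >> k)) << k

# The result underestimates the exact sum by at most 2*(2**k - 1).
print(approx_add(1000, 999), "vs exact", 1000 + 999)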
Doctor of Philosophy dissertation
Formal verification of hardware designs has become an essential component of the overall system design flow. The designs are generally modeled as finite state machines, on which property and equivalence checking problems are solved for verification. Reachability analysis forms the core of these techniques. However, the increasing size and complexity of the circuits cause the state explosion problem. Abstraction is the key to tackling the scalability challenges. This dissertation presents new techniques for word-level abstraction with applications in sequential design verification. By bundling together k bit-level state variables into one word-level constraint expression, the state space is construed as the solutions (variety) to a set of polynomial constraints (ideal), modeled over the finite (Galois) field of 2^k elements. Subsequently, techniques from algebraic geometry -- notably, Groebner basis theory and technology -- are researched to perform reachability analysis and verification of sequential circuits. This approach adds a "word-level dimension" to state-space abstraction and verification to make the process more efficient. While algebraic geometry provides powerful abstraction and reasoning capabilities, the algorithms exhibit high computational complexity. In the dissertation, we show that by analyzing the constraints, it is possible to obtain more insight into the polynomial ideals, which can be exploited to overcome the complexity. Using our algorithm design and implementations, we demonstrate how to perform reachability analysis of finite-state machines purely at the word level. Using this concept, we perform scalable verification of sequential arithmetic circuits. As contemporary approaches make use of resolution proofs and unsatisfiable cores for state-space abstraction, we introduce the algebraic geometry analog of unsatisfiable cores, and present algorithms to extract and refine unsatisfiable cores of polynomial ideals. Experiments are performed to demonstrate the efficacy of our approaches.
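As an illustrative sketch of the word-level bundling (not taken from the dissertation), a 2-bit state register (s_1, s_0) can be abstracted into a single word-level variable over the field of four elements:

\[
\mathbb{F}_4 \;=\; \mathbb{F}_2[\alpha]/(\alpha^2+\alpha+1), \qquad S \;=\; s_0 + s_1\alpha \;\in\; \mathbb{F}_4 .
\]

Every word-level signal x then satisfies the vanishing polynomial x^{2^k} - x (here x^4 - x), and the reachable state space appears as the variety of the ideal generated by the word-level transition-relation polynomials together with these vanishing polynomials, which is the object the Groebner basis machinery manipulates.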
Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis
Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of the underlying decision problems.
This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses, and actively combines inductive and deductive reasoning: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines.
We illustrate this approach with three applications: (i) timing analysis of software; (ii) synthesis of loop-free programs; and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
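As an illustrative sketch of the inductive/deductive interplay (in the spirit of counterexample-guided synthesis, not code from the paper): the structure hypothesis is a linear template f(x) = a*x + b, the inductive learner generalizes from examples, and a brute-force verifier stands in for a deductive engine that returns counterexamples. The specification, ranges, and names below are made up for illustration.

from itertools import product

DOMAIN = range(-8, 8)                    # bounded input domain
def spec(x):                             # specification the synthesized program must match
    return 3 * x + 1

def learner(examples):
    """Inductive step: find (a, b) consistent with all examples seen so far."""
    for a, b in product(range(-4, 5), repeat=2):
        if all(a * x + b == y for x, y in examples):
            return a, b
    return None

def verifier(a, b):
    """Deductive step (stand-in): exhaustively check the candidate; return a counterexample."""
    for x in DOMAIN:
        if a * x + b != spec(x):
            return x
    return None

examples = []
while True:
    cand = learner(examples)
    if cand is None:
        print("no program in the hypothesis class"); break
    cex = verifier(*cand)
    if cex is None:
        print("synthesized f(x) = %d*x + %d" % cand); break
    examples.append((cex, spec(cex)))    # counterexample feeds the learner

The loop terminates either with a program matching the specification on the whole bounded domain or with evidence that no instance of the template works.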
Doctor of Philosophy dissertation
With the spread of the internet and mobile devices, transferring information safely and securely has become more important than ever. Finite fields have widespread applications in such domains, including cryptography and error-correcting codes, among many others. In most finite field applications, the field size - and therefore the bit-width of the operands - can be very large. The high complexity of arithmetic operations over such large fields requires circuits to be (semi-)custom designed. This raises the potential for errors/bugs in the implementation, which can be maliciously exploited and can compromise the security of such systems. Formal verification of finite field arithmetic circuits has therefore become an imperative. This dissertation targets the problem of formal verification of hardware implementations of combinational arithmetic circuits over finite fields of the type F_{2^k}. Two specific problems are addressed: i) verifying the correctness of a custom-designed arithmetic circuit implementation against a given word-level polynomial specification over F_{2^k}; and ii) gate-level equivalence checking of two different arithmetic circuit implementations. This dissertation proposes polynomial abstractions over finite fields to model and represent the circuit constraints. Subsequently, decision procedures based on modern computer algebra techniques - notably, Groebner basis-related theory and technology - are engineered to solve the verification problem efficiently. The arithmetic circuit is modeled as a polynomial system in the ring F_{2^k}[x_1, x_2, ..., x_d], and computer algebra-based results (Hilbert's Nullstellensatz) over finite fields are exploited for verification. Using our approach, experiments are performed on a variety of custom-designed finite field arithmetic benchmark circuits. The results are also compared against contemporary methods based on SAT and SMT solvers, BDDs, and AIG-based methods. Our tools can verify the correctness of, and detect bugs in, up to 163-bit circuits over F_{2^163}, whereas contemporary approaches are infeasible beyond 48-bit circuits.
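A minimal sketch (assuming sympy; the toy gate-level circuit below is illustrative, not one of the dissertation's benchmarks) of a Nullstellensatz-style check over F_2: the gate polynomials and the Boolean field polynomials x^2 - x generate an ideal, and the implementation matches its specification exactly when the "miter" polynomial lies in that ideal, tested here via a Groebner basis.

from sympy import symbols, groebner

a, b, t1, t2, z = symbols('a b t1 t2 z')

# Gate polynomials over F_2 (NOT x -> 1+x, AND -> x*y, OR -> x+y+x*y):
gates = [
    t1 + a*(1 + b),          # t1 = a AND (NOT b)
    t2 + (1 + a)*b,          # t2 = (NOT a) AND b
    z + t1 + t2 + t1*t2,     # z  = t1 OR t2
]
# Field polynomials x^2 - x force every signal to be Boolean.
field = [v**2 + v for v in (a, b, t1, t2, z)]

G = groebner(gates + field, z, t1, t2, a, b, modulus=2, order='lex')

# The circuit implements the specification z = a + b (XOR over F_2)
# iff the miter polynomial z + a + b lies in the circuit ideal.
spec = z + a + b
print("circuit correct:", G.contains(spec))   # expected: True

With the field polynomials included, the ideal is radical, so membership of the miter polynomial is equivalent to the circuit agreeing with the specification on every Boolean assignment.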
Special section on advances in reachability analysis and decision procedures: contributions to abstraction-based system verification
Reachability analysis asks whether a system can evolve from legitimate initial states to unsafe states. It is thus a fundamental tool in the validation of computational systems - be they software, hardware, or a combination thereof. We recall a standard approach for reachability analysis, which captures the system in a transition system, forms another transition system as an over-approximation, and performs an incremental fixed-point computation on that over-approximation to determine whether unsafe states can be reached. We show this method to be sound for proving the absence of errors, and discuss its limitations for proving the presence of errors, as well as some means of addressing this limitation. We then sketch how program annotations for data integrity constraints and interface specifications - as in Bertrand Meyer's paradigm of Design by Contract - can facilitate the validation of modular programs, e.g., by obtaining more precise verification conditions for software verification supported by automated theorem proving. Then we recap how the decision problem of satisfiability for formulae of logics with theories - e.g., bit-vector arithmetic - can be used to construct an over-approximating transition system for a program. Programs with data types composed of bit-vectors of finite width require bespoke decision procedures for satisfiability. Finite-width data types challenge the reduction of that decision problem to one that off-the-shelf tools can solve effectively, e.g., SAT solvers for propositional logic. In that context, we recall the Tseitin encoding, which converts formulae from that logic into conjunctive normal form - the standard format for most SAT solvers - with only a linear blow-up in the size of the formula, but a linear increase in the number of variables. Finally, we discuss the contributions that the three papers in this special section make in the areas sketched above. © Springer-Verlag 2009
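As a standard textbook illustration (not specific to the papers in this section), the Tseitin encoding of a single AND gate z = a AND b introduces the fresh variable z and three clauses:

\[
z \leftrightarrow (a \wedge b) \quad\Longrightarrow\quad (\lnot z \vee a) \;\wedge\; (\lnot z \vee b) \;\wedge\; (z \vee \lnot a \vee \lnot b).
\]

Applied gate by gate, this yields a CNF whose size and number of fresh variables both grow only linearly in the size of the original formula.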