189 research outputs found
Synthesis and Optimization of Reversible Circuits - A Survey
Reversible logic circuits have been historically motivated by theoretical
research in low-power electronics as well as practical improvement of
bit-manipulation transforms in cryptography and computer graphics. Recently,
reversible circuits have attracted interest as components of quantum
algorithms, as well as in photonic and nano-computing technologies where some
switching devices offer no signal gain. Research in generating reversible logic
distinguishes between circuit synthesis, post-synthesis optimization, and
technology mapping. In this survey, we review algorithmic paradigms ---
search-based, cycle-based, transformation-based, and BDD-based --- as well as
specific algorithms for reversible synthesis, both exact and heuristic. We
conclude the survey by outlining key open challenges in the synthesis of reversible
and quantum logic, as well as the most common misconceptions.
Comment: 34 pages, 15 figures, 2 tables
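To make the notion of reversibility concrete, here is a minimal sketch (our own illustration, not taken from the survey) of the Toffoli (CCNOT) gate, a standard universal reversible gate: it permutes the 3-bit state space bijectively and is its own inverse.

```python
# Illustrative sketch: the Toffoli (CCNOT) gate flips the target
# bit c only when both control bits a and b are 1. As a map on
# 3-bit states it is a bijection and its own inverse.
def toffoli(a, b, c):
    """Return (a, b, c XOR (a AND b))."""
    return (a, b, c ^ (a & b))

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# Reversibility: applying the gate twice restores every state.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c) for a, b, c in states)
# Bijectivity: all 8 outputs are distinct.
assert len({toffoli(*s) for s in states}) == 8
```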
NANOCONTROLLER PROGRAM OPTIMIZATION USING ITE DAGS
Kentucky Architecture nanocontrollers employ a bit-serial SIMD-parallel hardware design to execute MIMD control programs. A MIMD program is transformed into equivalent SIMD code by a process called Meta-State Conversion (MSC), which makes heavy use of enable masking to distinguish which code should be executed by each processing element. Both the bit-serial operations and the enable masking imposed on them are expressed in terms of if-then-else (ITE) operations implemented by a 1-of-2 multiplexor, greatly simplifying the hardware. However, it takes many ITEs to implement even a small program fragment. Traditionally, bit-serial SIMD machines have been programmed by expanding a fixed bit-serial pattern for each word-level operation. Instead, nanocontrollers can exploit the fact that ITEs are equivalent to the operations in Binary Decision Diagrams (BDDs), and can apply BDD analysis to optimize the ITEs. This thesis proposes and experimentally evaluates a number of techniques for minimizing the complexity of the BDDs, primarily by manipulating normalization ordering constraints. The best method found is a new approach in which a simple set of optimization transformations is followed by normalization using an ordering determined by a Genetic Algorithm (GA).
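The ITE-BDD connection mentioned above can be sketched in a few lines (our own toy illustration, not the thesis's tool): the 1-of-2 multiplexor ITE(f, g, h) = (f AND g) OR (NOT f AND h) can express the usual Boolean connectives, which is why ITE code is amenable to BDD-style analysis.

```python
# Toy illustration: ITE(f, g, h) as a 1-of-2 multiplexor over
# bits in {0, 1}, and the standard connectives expressed via ITE,
# exactly as in BDD node semantics.
def ite(f, g, h):
    return (f & g) | ((1 - f) & h)

AND = lambda x, y: ite(x, y, 0)   # ITE(x, y, 0)
OR  = lambda x, y: ite(x, 1, y)   # ITE(x, 1, y)
NOT = lambda x:    ite(x, 0, 1)   # ITE(x, 0, 1)

for x in (0, 1):
    assert NOT(x) == 1 - x
    for y in (0, 1):
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
```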
Function Verification of Combinational Arithmetic Circuits
Hardware design verification is the most challenging part of the overall hardware design process. This is because design size and complexity are growing rapidly while performance requirements keep rising. Conventional simulation-based verification methods cannot keep up with the rapid increase in design size, since it is impossible to exhaustively test all input vectors of a complex design. An important part of hardware verification is combinational arithmetic circuit verification. It draws a lot of attention because flattening the design to the bit level, known as the bit-blasting problem, hinders the efficiency of many current formal techniques. The goal of this thesis is to introduce a robust and efficient formal verification method for combinational integer arithmetic circuits based on an in-depth analysis of recent advances in computer algebra. The method proposed here solves the verification problem at the bit level while avoiding the bit-blasting problem. It also avoids the expensive Groebner basis computation typically employed by symbolic computer algebra methods. The proposed method verifies the gate-level implementation of the design by representing the design components (logic gates and arithmetic modules) as polynomials in Z_{2^n}. It then transforms the polynomial representing the output bits (called the “output signature”) into a unique polynomial in the input signals (called the “input signature”) using gate-level information of the design. The computed input signature is then compared with the reference input signature (golden model) to determine whether the circuit behaves as anticipated. If the reference input signature is not given, our method can be used to compute (or extract) the arithmetic function of the design by computing its input signature. Additional tools, based on canonical word-level design representations (such as TED or BMD), can be used to determine the function that the computed input signature represents.
We demonstrate the applicability of the proposed method to arithmetic circuit verification on a large number of designs.
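The signature-rewriting idea can be illustrated on a half adder (our own toy example, not the thesis's implementation): the output signature 2*C + S is rewritten, gate by gate, into a polynomial in the inputs, which here equals the reference input signature a + b.

```python
# Toy sketch of backward signature rewriting on a half adder.
# Gate polynomials: AND -> a*b, XOR -> a + b - 2*a*b.
def half_adder_gates(a, b):
    C = a & b           # carry: polynomial a*b
    S = a ^ b           # sum:   polynomial a + b - 2*a*b
    return C, S

def output_signature(a, b):
    C, S = half_adder_gates(a, b)
    return 2 * C + S    # weighted sum of the output bits

# Rewriting by hand: 2*(a*b) + (a + b - 2*a*b) = a + b,
# the reference input signature of an adder. Check exhaustively:
for a in (0, 1):
    for b in (0, 1):
        assert output_signature(a, b) == a + b
```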
Fault Tree Analysis: a survey of the state-of-the-art in modeling, analysis and tools
Fault tree analysis (FTA) is a very prominent method to analyze the risks related to safety and economically critical assets, like power plants, airplanes, data centers and web shops. FTA methods comprise a wide variety of modelling and analysis techniques, supported by a wide range of software tools. This paper surveys over 150 papers on fault tree analysis, providing an in-depth overview of the state-of-the-art in FTA. Concretely, we review standard fault trees, as well as extensions such as dynamic FT, repairable FT, and extended FT. For these models, we review both qualitative analysis methods, like cut sets and common cause failures, and quantitative techniques, including a wide variety of stochastic methods to compute failure probabilities. Numerous examples illustrate the various approaches, and tables present a quick overview of results.
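The cut-set analysis mentioned above can be sketched concretely (our own toy construction, not from the survey): for a small fault tree TOP = (A AND B) OR C, the minimal cut sets are computed by expanding gates and discarding non-minimal sets.

```python
# Toy minimal-cut-set computation. A tree node is either
# ("basic", name), ("and", child, ...), or ("or", child, ...).
def cut_sets(node):
    kind = node[0]
    if kind == "basic":
        return [frozenset([node[1]])]
    children = [cut_sets(c) for c in node[1:]]
    if kind == "or":                       # union of children's cut sets
        sets = [s for ch in children for s in ch]
    else:                                  # "and": cross-product union
        sets = children[0]
        for ch in children[1:]:
            sets = [a | b for a in sets for b in ch]
    # keep only the minimal sets
    return [s for s in sets if not any(t < s for t in sets)]

top = ("or", ("and", ("basic", "A"), ("basic", "B")), ("basic", "C"))
assert set(cut_sets(top)) == {frozenset({"A", "B"}), frozenset({"C"})}
```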
Extended Fault Trees Analysis supported by Stochastic Petri Nets
This work presents several extensions to the Fault Tree [90] formalism used to build models oriented to the Dependability [103] analysis of systems. In this way, we increase the modelling capacity of Fault Trees, which turn from simple combinatorial models into a high-level language for representing more complicated aspects of the behaviour and failure modes of systems. Together with the extensions to the Fault Tree formalism, this work proposes solution methods for extended Fault Trees in order to cope with the new modelling facilities. These methods are mainly based on the use of Stochastic Petri Nets. Some of the formalisms described in this work are already present in the literature; for them we propose alternative solution methods with respect to the existing ones. Other formalisms are instead part of the original contribution of this work.
Doctor of Philosophy dissertation
With the spread of the internet and mobile devices, transferring information safely and securely has become more important than ever. Finite fields have widespread applications in such domains, such as cryptography and error correction codes, among many others. In most finite field applications, the field size, and therefore the bit-width of the operands, can be very large. The high complexity of arithmetic operations over such large fields requires circuits to be (semi-)custom designed. This raises the potential for errors/bugs in the implementation, which can be maliciously exploited and can compromise the security of such systems. Formal verification of finite field arithmetic circuits has therefore become an imperative. This dissertation targets the problem of formal verification of hardware implementations of combinational arithmetic circuits over finite fields of the type F_{2^k}. Two specific problems are addressed: i) verifying the correctness of a custom-designed arithmetic circuit implementation against a given word-level polynomial specification over F_{2^k}; and ii) gate-level equivalence checking of two different arithmetic circuit implementations. This dissertation proposes polynomial abstractions over finite fields to model and represent the circuit constraints. Subsequently, decision procedures based on modern computer algebra techniques, notably Gröbner bases-related theory and technology, are engineered to solve the verification problem efficiently. The arithmetic circuit is modeled as a polynomial system in the ring F_{2^k}[x1, x2, ..., xd], and computer algebra-based results (Hilbert's Nullstellensatz) over finite fields are exploited for verification. Using our approach, experiments are performed on a variety of custom-designed finite field arithmetic benchmark circuits. The results are also compared against contemporary methods based on SAT and SMT solvers, BDDs, and AIG-based methods.
Our tools can verify the correctness of, and detect bugs in, up to 163-bit circuits in F_{2^163}, whereas contemporary approaches are infeasible beyond 48-bit circuits.
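The verification problem above can be made concrete on a tiny field (our own toy example; the dissertation's Gröbner-basis machinery is replaced here by exhaustive evaluation, which is only feasible because the field is so small): a gate-level GF(2^2) multiplier, modulo the irreducible polynomial x^2 + x + 1, checked against its word-level specification.

```python
# Toy verification of a gate-level GF(2^2) multiplier against its
# word-level polynomial spec. Elements are 2-bit ints b1*x + b0.
IRRED = 0b111  # irreducible polynomial x^2 + x + 1

def gf4_mul_spec(a, b):
    """Word-level spec: carry-less multiply, then reduce mod x^2+x+1."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if (p >> 2) & 1:        # reduce the single possible x^2 term
        p ^= IRRED
    return p

def gf4_mul_gates(a0, a1, b0, b1):
    """Gate-level implementation: an AND/XOR network."""
    c0 = (a0 & b0) ^ (a1 & b1)
    c1 = (a0 & b1) ^ (a1 & b0) ^ (a1 & b1)
    return c0, c1

# Exhaustive equivalence check over all 16 input pairs.
for a in range(4):
    for b in range(4):
        c0, c1 = gf4_mul_gates(a & 1, a >> 1, b & 1, b >> 1)
        assert c0 | (c1 << 1) == gf4_mul_spec(a, b)
```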
Safety system design optimisation
This thesis investigates the efficiency of a design optimisation scheme that is
appropriate for systems which require a high likelihood of functioning on demand.
Traditional approaches to the design of safety critical systems follow the preliminary
design, analysis, appraisal and redesign stages until what is regarded as an acceptable
design is achieved. For safety systems whose failure could result in loss of life it is
imperative that the best use of the available resources is made and a system which is
optimal, not just adequate, is produced.
The object of the design optimisation problem is to minimise system unavailability
through manipulation of the design variables, such that limitations placed on them by
constraints are not violated.
Commonly, with a mathematical optimisation problem, there will be an explicit
objective function which defines how the characteristic to be minimised is related to
the variables. As regards the safety system problem, an explicit objective function
cannot be formulated, and as such, system performance is assessed using the fault tree
method. By the use of house events, a single fault tree is constructed to represent
the failure causes of each potential design, overcoming the time-consuming task of
constructing a separate fault tree for each design investigated during the
optimisation procedure. Once the fault tree has been constructed for the design in question it is
converted to a BDD for analysis.
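The fault-tree-to-BDD evaluation step can be sketched as follows (our own simplified illustration, not the thesis's tool): the top event is evaluated by Shannon decomposition over the component variables, giving the exact unavailability under independent component failures.

```python
# Toy Shannon-decomposition evaluation of a fault tree, as a BDD
# traversal would do: condition on each variable in turn and take
# the probability-weighted sum of the two cofactors.
def unavailability(formula, probs, assignment=()):
    env = dict(assignment)
    vars_left = [v for v in probs if v not in env]
    if not vars_left:
        return 1.0 if formula(env) else 0.0
    v = vars_left[0]
    hi = unavailability(formula, probs, assignment + ((v, 1),))
    lo = unavailability(formula, probs, assignment + ((v, 0),))
    return probs[v] * hi + (1 - probs[v]) * lo

# TOP = (A AND B) OR C, with per-component unavailabilities q.
top = lambda e: (e["A"] and e["B"]) or e["C"]
q = {"A": 0.1, "B": 0.2, "C": 0.05}
# Exact value: qC + qA*qB - qA*qB*qC = 0.05 + 0.02 - 0.001 = 0.069
assert abs(unavailability(top, q) - 0.069) < 1e-12
```

A real BDD would share subgraphs and skip irrelevant variables; this sketch enumerates all assignments, which is only practical for toy trees.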
A genetic algorithm is first employed to perform the system optimisation. The
practicality of this approach is demonstrated initially through application to a
High-Integrity Protection System (HIPS) and subsequently to a more complex
Firewater Deluge System (FDS).
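A minimal genetic algorithm of the kind described above can be sketched as follows (our own toy; the fitness function is a stand-in for the fault-tree unavailability, not the HIPS or FDS model): design vectors are bit strings, and selection, crossover, and mutation evolve the population toward lower objective values.

```python
# Toy GA minimizing a stand-in objective over 8-bit design vectors.
import random

random.seed(0)
N_BITS = 8

def fitness(bits):                 # stand-in for system unavailability:
    return 1.0 / (1 + sum(bits))   # more set bits means a lower value

def evolve(pop_size=20, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_BITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # lower is better
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_BITS)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]           # bit-flip mutation
            children.append(child)
        pop = parents + children               # elitist replacement
    return min(pop, key=fitness)

best = evolve()
assert fitness(best) <= 1.0 / (1 + N_BITS / 2)
```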
An alternative optimisation scheme achieves the final design specification by solving
a sequence of optimisation problems. Each of these problems are defined by
assuming some form of the objective function and specifying a sub-region of the
design space over which this function will be representative of the system
unavailability.
The thesis concludes with attention to various optimisation techniques, which possess
features able to address difficulties in the optimisation of safety critical systems.
Specifically, consideration is given to the use of a statistically designed experiment
and a logical search approach.