
    Biconditional-BDD Ordering for Autosymmetric Functions

    Autosymmetric functions are particular "regular" Boolean functions that are exploited for logic optimization, since it is possible to reduce the number of variables and the number of points of the original autosymmetric function before its synthesis. In this paper we study this regularity in order to derive a suitable variable ordering for Biconditional Binary Decision Diagrams (BBDDs). BBDDs are a recent variant of BDDs whose nodes are labeled by the EXOR of two variables instead of a single variable. These diagrams are employed for logic synthesis in emerging technologies such as silicon nanowires and DG-SiNWFETs. We show that it is possible to find a useful variable ordering for these functions, and the experimental results validate our approach, showing that in 97% of the cases we obtain an ordering that yields a number of nodes lower than or equal to the one obtained with the standard ordering.
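
    To make the node semantics concrete, the Python sketch below evaluates a Boolean function through the biconditional expansion that underlies BBDD nodes, f = (v XOR w)·f[v := NOT w] + (v XNOR w)·f[v := w]. The helper names, the variable order, and the toy function are illustrative assumptions, not material from the paper.

        # Sketch of the biconditional expansion behind BBDD nodes (toy example only):
        #   f = (v XOR w) * f[v := NOT w]  +  (v XNOR w) * f[v := w]

        def biconditional_cofactors(f, v, w):
            """Cofactors of f w.r.t. the pair (v, w): v != w and v == w."""
            f_diff = lambda a: f({**a, v: not a[w]})   # branch taken when v != w
            f_same = lambda a: f({**a, v: a[w]})       # branch taken when v == w
            return f_diff, f_same

        def evaluate(f, order, assign):
            """Evaluate f by expanding along consecutive variable pairs in 'order'."""
            if len(order) < 2:
                return bool(f(assign))
            v, w = order[0], order[1]
            f_diff, f_same = biconditional_cofactors(f, v, w)
            branch = f_diff if assign[v] != assign[w] else f_same
            return evaluate(branch, order[1:], assign)

        # Toy 3-variable function: f = (x1 XOR x2) OR x3
        f = lambda a: (a["x1"] != a["x2"]) or a["x3"]
        print(evaluate(f, ["x1", "x2", "x3"], {"x1": 1, "x2": 0, "x3": 0}))  # True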

    Synthesis of Linear Reversible Circuits and EXOR-AND-based Circuits for Incompletely Specified Multi-Output Functions

    At this time, the synthesis of reversible circuits for quantum computing is an active area of research. In the most restrictive quantum computing models there are no ancilla lines and the quantum cost, or latency, of performing a reversible form of the AND gate, or Toffoli gate, increases exponentially with the number of input variables. In contrast, the quantum cost of performing any combination of reversible EXOR gates, or CNOT gates, on n input variables requires at most O(n²/log₂ n) gates. It was under these conditions that EXOR-AND-EXOR, or EPOE, synthesis was developed. In this work, the GF(2) logic theory used in EPOE is expanded and the concept of an EXOR-AND product transform is introduced. Because of the generality of this logic theory, it is adapted to EXOR-AND-OR, or SPOE, synthesis. Three heuristic spectral logic synthesis algorithms are introduced, implemented in a program called XAX, and compared with previous work in classical logic circuits of up to 26 inputs. Three linear reversible circuit methods are also introduced and compared with previous work in linear reversible logic circuits of up to 100 inputs.
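
    As a minimal illustration of the GF(2) view used here (the gate list and helper names are assumptions made for the example, not taken from the paper), a linear reversible circuit built only from CNOT gates acts on the wires as an invertible linear map over GF(2): each CNOT simply XORs its control bit into its target bit.

        # Hedged sketch: applying CNOT(control, target) XORs the control bit into the
        # target bit, i.e. a GF(2) row operation on the state vector.

        def apply_cnots(bits, gates):
            """Apply a list of CNOT gates (control, target) to a list of 0/1 bits."""
            bits = list(bits)
            for control, target in gates:
                bits[target] ^= bits[control]
            return bits

        # Example: swap wires 0 and 1 using three CNOTs (the classic CNOT-swap identity).
        gates = [(0, 1), (1, 0), (0, 1)]
        print(apply_cnots([1, 0, 0], gates))  # -> [0, 1, 0]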

    DESIGN AND SYNTHESIS OF HIGH DENSITY INTEGRATED CIRCUITS

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an Integrated Circuit would double every two years. Moore's prediction held for decades, which is why it is also called "Moore's law". The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, the interconnections do not scale optimally: at every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and, consequently, the dynamic power consumption. Moreover, due to this non-optimal scaling of the interconnections, signal routing becomes more and more challenging at every technology node. Highly scaled technologies make it possible to reach a very high transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two ways, corresponding to the two parts of the thesis: using ad-hoc design techniques on today's CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. The first part presents the design of an Associative Memory (AM), focusing on design and logic synthesis techniques that reduce power consumption. AMs are applied to real-time pattern-recognition tasks, ranging from scientific calculations to image processing for intelligent autonomous devices and image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but only the most interesting tracks must be selected and saved. Since the data are compared in parallel, AMs are synchronous ICs with a very peaked power consumption, which must therefore be minimized. This AM is designed within the IMPART and HTT projects in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a "kill signal" that, if one of the bits in a word does not match, inhibits the switching of the following cells. Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven to be functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the second part, nanocrossbars are considered: they are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve synthesis, several algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOP), Dimension-reducible (D-red) functions, and autosymmetric functions. The decomposed functions are implemented into lattices using internal and external decomposition methods. Experimental results show that these approaches reduce the complexity of the single synthesis problem and lead, on average, to a reduction of lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defectivity ratio; to cope with this limitation, some techniques to reduce the sensitivity to defects have been studied.
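
    As a rough illustration of the switching-lattice model mentioned in the second part (the layout, literal assignment, and helper names below are made up for the example), a lattice of four-terminal switches realizes a Boolean function that evaluates to 1 exactly when the conducting switches form a path from the top row to the bottom row.

        # Hedged sketch of four-terminal switching-lattice evaluation: each site holds
        # a literal; f(assign) = 1 iff a path of conducting sites joins top and bottom,
        # moving between edge-adjacent sites.

        def lattice_eval(lattice, assign):
            rows, cols = len(lattice), len(lattice[0])
            on = [[bool(lattice[r][c](assign)) for c in range(cols)] for r in range(rows)]
            stack = [(0, c) for c in range(cols) if on[0][c]]   # conducting sites in the top row
            seen = set(stack)
            while stack:
                r, c = stack.pop()
                if r == rows - 1:
                    return True                                  # reached the bottom row
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and on[nr][nc] and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            return False

        # Toy 2x2 lattice realizing f = a*b + a*c (each site holds one literal):
        a = lambda s: s["a"]; b = lambda s: s["b"]; c = lambda s: s["c"]
        lattice = [[a, a],
                   [b, c]]
        print(lattice_eval(lattice, {"a": 1, "b": 0, "c": 1}))   # True, via the path a - c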

    TOWARD LOWER COMMUNICATION IN GARBLED CIRCUIT EVALUATION

    Secure Multi-party Computation (SMC) is a classical problem in theoretical security. In an SMC problem, two or more parties must correctly compute a function f on their respective inputs x and y while preserving the privacy of their inputs and additional security properties. One of the approaches proposed for addressing the SMC problem relies on the design of Garbled Circuits (GCs). In a GC, the function to be computed is represented as a Boolean circuit composed of binary gates. The input and output wires of each gate are masked so that the party evaluating the Garbled Boolean Circuit (GBC) cannot gain any information about the inputs or the intermediate results that appear during the function evaluation. The complexity of today's most efficient GC protocol depends linearly on the size of the Boolean circuit representation of the evaluated function; the total cost and the run-time interaction between the parties increase linearly with the number of gates and can be huge for complex GBCs. Recently, interest has grown in the efficiency of this technique and in its applications to computation outsourcing in untrusted environments. A recent work shows that XOR gates in a Boolean circuit have no cost for the secure computation protocol. Therefore, circuits with a reduced number of non-XOR gates are more convenient, and one of the possible ways to reduce the complexity of the computation is to reduce the number of non-XOR gates in the Boolean circuit. Since the main aim of this work is to reduce the number of non-XOR gates, which directly results in fewer interactions between the parties and a lower transfer complexity at runtime, we present different approaches for reducing the communication cost of SMC and improving the overall computation time and efficiency of its execution.
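
    The "XOR gates have no cost" observation is usually the free-XOR technique: wire labels are chosen as W0 and W1 = W0 XOR delta for a global secret delta, so an XOR gate needs no garbled table and no extra communication. The Python sketch below only checks that algebraic identity; it is an illustration under those assumptions, not the protocol studied in this work.

        # Hedged free-XOR sketch: A_x XOR B_y is exactly the output label for bit x XOR y,
        # so the evaluator computes an XOR gate locally, with no garbled table.
        import secrets

        KEY_LEN = 16
        delta = secrets.token_bytes(KEY_LEN)          # global offset, known to the garbler only

        def xor_bytes(x, y):
            return bytes(a ^ b for a, b in zip(x, y))

        def new_wire():
            """Return (label_for_0, label_for_1) for a fresh wire."""
            w0 = secrets.token_bytes(KEY_LEN)
            return w0, xor_bytes(w0, delta)

        # Garbler picks labels for the two input wires; the output-0 label is A0 XOR B0.
        A = new_wire()
        B = new_wire()
        C0 = xor_bytes(A[0], B[0])

        # Evaluator holds one label per wire (for secret bits a=1, b=1) and just XORs them.
        a_bit, b_bit = 1, 1
        evaluated = xor_bytes(A[a_bit], B[b_bit])
        assert evaluated == (C0 if (a_bit ^ b_bit) == 0 else xor_bytes(C0, delta))
        print("free-XOR check passed")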

    Synthesis of Autosymmetric Functions in a New Three-Level Form

    Autosymmetric functions exhibit a special type of regularity that can speed up the minimization process. Based on this autosymmetry, we propose a three-level form of logic synthesis, called ORAX (EXOR-AND-OR), to be compared with the standard minimal SOP (Sum of Products) form. First, we provide a fast ORAX minimization algorithm for autosymmetric functions. The ORAX network for a function f has a first level of at most 2(n−k) EXOR gates, followed by the AND-OR levels, where n is the number of input variables and k is the “autosymmetry degree” of f. In general, a minimal ORAX form is smaller than a standard minimal SOP form for the same function. We show how the gain in area of ORAX over SOP can be measured without explicitly generating the latter. If preferred, a SOP expression can be directly derived from the corresponding ORAX. A set of experimental results confirms that the ORAX form is generally more compact than the SOP form, and its synthesis is much faster than classical three-level logic minimization. Indeed, ORAX and SOP minimization times are often comparable, and in some cases ORAX synthesis is even faster.
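
    For concreteness, the sketch below evaluates a toy EXOR-AND-OR (ORAX) network: a first level of EXOR gates over the inputs feeding an ordinary AND-OR (SOP) network. The data structures, gate choices, and example function are illustrative assumptions only, not the paper's algorithm.

        # Hedged sketch: evaluate an EXOR level followed by a SOP over the EXOR outputs.
        from functools import reduce
        from operator import xor

        def eval_orax(exor_level, sop_level, x):
            """exor_level: list of index tuples, each an EXOR gate over the inputs x.
               sop_level:  list of products, each a list of (exor_gate_index, polarity)."""
            e = [reduce(xor, (x[i] for i in gate)) for gate in exor_level]
            return int(any(all(e[g] == p for g, p in prod) for prod in sop_level))

        # Toy ORAX for f(x0..x3) = (x0 ^ x1) & (x2 ^ x3)  |  (x0 ^ x2)
        exor_level = [(0, 1), (2, 3), (0, 2)]
        sop_level = [[(0, 1), (1, 1)], [(2, 1)]]
        print(eval_orax(exor_level, sop_level, [1, 0, 0, 1]))   # 1: both x0^x1 and x2^x3 are 1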

    XOR-AND-XOR Logic Forms for Autosymmetric Functions and Applications to Quantum Computing

    We propose a new three-level XOR-AND-XOR form for autosymmetric functions, called XORAX expression. In general, a Boolean function f over n variables is k-autosymmetric if it can be projected onto a smaller function fk, which depends on n−k variables only. We show that XORAX expressions can ease the reversible synthesis of autosymmetric functions, producing compact reversible networks without inserting additional input lines. Autosymmetry occurs especially in functions that exhibit a regular structure, such as arithmetic functions. For this reason, compact reversible networks for autosymmetric functions might be interesting for quantum computing. Experimental results validate the proposed approach.

    Three-level logic minimization based on function regularities

    We exploit the "regularity" of Boolean functions with the purpose of decreasing the time for constructing minimal three-level expressions, in the sum of pseudoproducts (SPP) form recently developed. The regularity of a Boolean function f of n variables can be expressed by an autosymmetry degree k (with 0 64 k 64 n). k = 0 means no regularity, that is we are not able to provide any advantage over standard synthesis. For k 65 1 the function f is said to be autosymmetric, and a new function fk depending on n - k variables only, called the restriction of f, is identified in time polynomial in the number of points of f. The relation between f and fk is discussed in depth to show how a minimal SPP form for f can be build in linear time from a minimal SPP form for fk. The concept of autosymmetry is then extended to functions with don't care conditions, and the SPP minimization technique is duly extended to such functions. A large set of experimental results is presented, showing that 61% of the outputs for the functions in the classical ESPRESSO benchmark suite are autosymmetric. The minimization time for such functions is critically reduced, and cases otherwise intractable are solved. The quality of the corresponding circuits, measured with some well established cost functions, is also improved. Finally, we discuss the role and meaning of autosymmetric functions, and why a great amount of functions of practical interest fall in this class

    Fast Three-Level Logic Minimization Based on Autosymmetry

    In the framework of SPP minimization, a three-level logic synthesis technique developed in recent years, we exploit the "regularity" of Boolean functions to decrease minimization time. Our main results are: 1) the regularity of a Boolean function f of n variables is expressed by its autosymmetry degree k (with 0 ≤ k ≤ n), where k = 0 means no regularity (that is, we are not able to provide any advantage over standard synthesis); 2) for k ≥ 1 the function is autosymmetric, and a new function fk is identified in polynomial time; fk is "equivalent" to, but smaller than, f, and depends on n − k variables only; 3) given a minimal SPP form for fk, a minimal SPP form for f is built in linear time; 4) experimental results show that 61% of the functions in the classical ESPRESSO benchmark suite are autosymmetric, and the SPP minimization time for them is critically reduced; we can also solve cases otherwise practically intractable. We finally discuss the role and meaning of autosymmetry.