6 research outputs found

    Biconditional-BDD Ordering for Autosymmetric Functions

    Autosymmetric functions are particular "regular" Boolean functions that are exploited for logic optimization, since it is possible to reduce the number of variables and the number of points of the original autosymmetric function before its synthesis. In this paper we study this regularity in order to derive a suitable variable ordering for Biconditional Binary Decision Diagrams (BBDDs). BBDDs are a variant of BDDs whose nodes are labeled with the EXOR of two variables instead of a single variable. These diagrams are employed for logic synthesis in new technologies such as silicon nanowires and DG-SiNWFETs. We show that it is possible to find a useful variable ordering for these functions, and the experimental results validate our approach, showing that in 97% of the cases we obtain an ordering that gives a number of nodes lower than or equal to the one obtained with the standard ordering.
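
    The node semantics can be illustrated with a minimal sketch (an assumed Python illustration, not taken from the paper): a BBDD node pairs two variables (v, w) and branches on v == w versus v != w; on each branch v is replaced by w (or by NOT w) and disappears from the cofactor.

        def biconditional_cofactors(f, v, w):
            """f maps a tuple of bits to 0/1.  Return the two cofactors, defined over
            the remaining variables (v removed): first the v == w branch, then v != w."""
            def restrict(equal):
                def g(bits):
                    full = list(bits)
                    w_val = full[w - 1 if w > v else w]   # value of w in the reduced vector
                    full.insert(v, w_val if equal else 1 - w_val)
                    return f(tuple(full))
                return g
            return restrict(True), restrict(False)

        # Example: f = x0 XOR x1.  Both cofactors are constant, so a single BBDD node
        # over the pair (x0, x1) represents f.
        f = lambda bits: bits[0] ^ bits[1]
        f_eq, f_neq = biconditional_cofactors(f, v=0, w=1)
        print([f_eq((x1,)) for x1 in (0, 1)])    # [0, 0]  -> constant 0 on the x0 == x1 branch
        print([f_neq((x1,)) for x1 in (0, 1)])   # [1, 1]  -> constant 1 on the x0 != x1 branch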

    Synthesis of Linear Reversible Circuits and EXOR-AND-based Circuits for Incompletely Specified Multi-Output Functions

    The synthesis of reversible circuits for quantum computing is currently an active area of research. In the most restrictive quantum computing models there are no ancilla lines, and the quantum cost, or latency, of performing a reversible form of the AND gate, or Toffoli gate, increases exponentially with the number of input variables. In contrast, the quantum cost of performing any combination of reversible EXOR gates, or CNOT gates, on n input variables requires at most O(n^2 / log_2 n) gates. It was under these conditions that EXOR-AND-EXOR, or EPOE, synthesis was developed. In this work, the GF(2) logic theory used in EPOE is expanded and the concept of an EXOR-AND product transform is introduced. Because of the generality of this logic theory, it is adapted to EXOR-AND-OR, or SPOE, synthesis. Three heuristic spectral logic synthesis algorithms are introduced, implemented in a program called XAX, and compared with previous work on classical logic circuits of up to 26 inputs. Three linear reversible circuit methods are also introduced and compared with previous work on linear reversible logic circuits of up to 100 inputs.
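
    For the linear (CNOT-only) part, the standard starting point is Gaussian elimination over GF(2): an invertible 0/1 matrix A describes the map x -> Ax, and every row operation "row t ^= row c" is exactly a CNOT with control c and target t. The sketch below is an assumed illustration, not one of the methods introduced in the work above; plain Gauss-Jordan elimination yields O(n^2) CNOTs, while the O(n^2 / log_2 n) bound cited in the abstract requires a block-wise variant such as Patel-Markov-Hayes.

        import numpy as np

        def cnot_synthesis_gauss(A):
            """Reduce the invertible GF(2) matrix A to the identity with row operations;
            each operation 'row t ^= row c' is one CNOT(control=c, target=t).
            Returns the gate list (control, target).  The circuit implementing x -> Ax
            is the same gates applied in reverse order, since every CNOT is self-inverse."""
            A = A.copy() % 2
            n = A.shape[0]
            gates = []
            for col in range(n):
                if A[col, col] == 0:                       # put a 1 on the diagonal
                    pivot = next(r for r in range(col + 1, n) if A[r, col] == 1)
                    A[col] ^= A[pivot]
                    gates.append((pivot, col))
                for row in range(n):                       # clear the rest of the column
                    if row != col and A[row, col] == 1:
                        A[row] ^= A[col]
                        gates.append((col, row))
            return gates

        # Example: a 3x3 invertible GF(2) matrix needs a handful of CNOTs.
        A = np.array([[1, 1, 0],
                      [0, 1, 1],
                      [1, 1, 1]], dtype=np.uint8)
        print(cnot_synthesis_gauss(A))   # [(0, 2), (1, 0), (2, 0), (2, 1)]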

    DESIGN AND SYNTHESIS OF HIGH DENSITY INTEGRATED CIRCUITS

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an Integrated Circuit would double every two years. The prediction made by Moore held for decades, and for this reason it is also called "Moore's law". The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, the interconnection scaling is not optimal. At every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and consequently the dynamic power consumption. Moreover, due to the non-optimal scaling of the interconnections, signal routing is becoming more and more challenging at every technology node advancement. Highly scaled technologies make it possible to reach a very high transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two different ways: using ad-hoc design techniques on today's CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. The two approaches correspond to the two parts of this thesis. The first part presents the design of an Associative Memory (AM), focusing on design and logic synthesis techniques that reduce power consumption. The field of applicability of AMs is real-time pattern-recognition tasks; possible uses range from scientific calculations to image processing for intelligent autonomous devices and image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but it is necessary to select and save only the most interesting tracks. Since the data are compared in parallel, AMs are synchronous ICs with a very peaked power consumption, which must therefore be minimized. This AM is designed within the IMPART and HTT projects, in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a "kill signal" that, if one of the bits in a word does not match, inhibits the switching of the following cells (a behavioral sketch of this scheme follows the abstract). Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven to be functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the future, nanocrossbars are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve the synthesis, algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOP), Dimension-reducible (D-red) functions, and autosymmetric functions. The decomposed functions are implemented into lattices using internal and external decomposition methods. Experimental results show that these approaches reduce the complexity of the individual synthesis problems and lead, on average, to a reduction of lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defectivity ratio; to cope with this limitation, some techniques to reduce the sensitivity to defects have been studied.
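
    A behavioral model of the kill-signal scheme (an assumed sketch, not the 28 nm cell design) makes the power-saving mechanism concrete: the cells of a word are evaluated in order, the first mismatch raises the kill signal, and all following cells are inhibited and never toggle.

        def am_word_match(stored, incoming):
            """Behavioral sketch of kill-signal matching: returns whether the stored
            word matches the incoming pattern and how many cells actually switched,
            a rough proxy for dynamic energy."""
            kill = False
            switched = 0
            for s_bit, i_bit in zip(stored, incoming):
                if kill:
                    break               # following cells never toggle: no dynamic power
                switched += 1           # this cell evaluates (and may toggle)
                if s_bit != i_bit:
                    kill = True         # first mismatch kills the rest of the word
            return (not kill), switched

        # A mismatch in the second bit stops the comparison early.
        print(am_word_match([1, 0, 1, 1, 0, 1, 0, 1], [1, 1, 1, 1, 0, 1, 0, 1]))  # (False, 2)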

    Exploiting regularities for boolean function synthesis

    The "regularity" of a Boolean function can be exploited for decreasing its minimization time. It has already been shown that the notion of autosymmetry is a valid measure of regularity, however such a notion has been studied thus far either in the theoretical framework of self-dual Boolean functions, or for the synthesis of a particular family of three-level logic networks. In this paper we show that the degree of autosymmetry of an arbitrary function can be computed implicitly in a very efficient way, and autosymmetry can then be exploited in any logic minimization context. Our algorithms make crucial use of Binary Decision Diagrams. A set of experimental results on the synthesis of standard benchmark functions substantiates the practical relevance of our theoretical results

    Exploiting Regularities for Boolean Function Synthesis

    The "regularity" of a Boolean function can be exploited for decreasing its minimization time. It has already been shown that the notion of autosymmetry is a valid measure of regularity however such a notion has been studied thus far either in the theoretical framework of self-dual Boolean functions or for the synthesis of a particular family of three-level logic networks. In this paper we show that the degree of autosymmetry of an arbitrary function can be computed implicitly in a very efficient way and autosymmetry can then be exploited in any logic minimization context. Our algorithms make crucial use of Binary Decision Diagrams. A set of experimental results on the synthesis of standard benchmark functions substantiates the practical relevance of our theoretical results

    Exploiting Regularities for Boolean Function Synthesis
