
    New Data Structures and Algorithms for Logic Synthesis and Verification

    The strong interaction between Electronic Design Automation (EDA) tools and Complementary Metal-Oxide Semiconductor (CMOS) technology has contributed substantially to the advancement of modern digital electronics. The continuous downscaling of CMOS Field Effect Transistor (FET) dimensions enabled the semiconductor industry to fabricate digital systems with higher circuit density at reduced costs. To keep pace with technology, EDA tools are challenged to handle both digital designs with growing functionality and device models of increasing complexity. Nevertheless, while the downscaling of CMOS technology demands ever more complex physical design models, the logic abstraction of a transistor as a switch has not changed, even with the introduction of 3D FinFET technology. As a consequence, modern EDA tools are fine-tuned for CMOS technology, and the underlying design methodologies are based on CMOS logic primitives, i.e., negative unate logic functions. While it is clear that CMOS logic primitives will remain the building blocks for digital systems in the next ten years, there is no evidence that CMOS logic primitives are also the optimal basis for EDA software. In EDA, the efficiency of methods and tools is measured by different metrics, such as (i) the quality of results, for example the performance of a digital circuit, (ii) the runtime, and (iii) the memory footprint on the host computer. When optimizing these metrics, adherence to a specific logic model is no longer important. Indeed, the key to the success of an EDA technique is the expressive power of the logic primitives with which the problem is represented and solved, since this determines the quality of the attainable results. In this thesis, we investigate new logic primitives for electronic design automation tools. We improve the efficiency of logic representation, manipulation and optimization tasks by taking advantage of majority and biconditional logic primitives. We develop synthesis tools exploiting majority and biconditional expressiveness. Our tools show strong results compared with state-of-the-art academic and commercial synthesis tools; indeed, we produce the best results for several public benchmarks. Beyond the enhanced synthesis power, our methods provide the natural and native logic abstraction for circuit design in emerging nanotechnologies, where majority and biconditional logic are the primitive gates for physical implementation. We accelerate formal methods by (i) studying properties of logic circuits and (ii) developing new frameworks for logic reasoning engines. We prove non-trivial dualities for the property checking problem in logic circuits; our findings enable substantial speed-ups in solving circuit satisfiability. We develop an alternative Boolean satisfiability framework based on majority functions. We prove that the general problem remains intractable, but we identify practical restrictions that can be solved efficiently. Finally, we focus on reversible logic, where we propose a new equivalence checking approach that exploits the invertibility of computation and the functionality of reversible gates in the formulation of the problem. This enables a speed-up of one order of magnitude compared with the state-of-the-art solution. We argue that new approaches to solving EDA problems are necessary, as we have reached a point where keeping pace with design goals is tougher than ever.
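    As an aside for concreteness: one reason majority and biconditional primitives are more expressive than the usual CMOS primitives is that they subsume them. The sketch below (our own Python illustration, not code from the thesis) checks exhaustively that AND and OR are special cases of the three-input majority function, and that the biconditional is simply equality.

```python
from itertools import product

def maj(a, b, c):
    """Three-input majority: true when at least two inputs are true."""
    return (a and b) or (a and c) or (b and c)

def xnor(a, b):
    """Biconditional: true when both inputs are equal."""
    return a == b

# AND and OR fall out of majority by fixing one input to a constant,
# which is what makes majority a strictly more expressive primitive.
for a, b in product([False, True], repeat=2):
    assert maj(a, b, False) == (a and b)   # M(a, b, 0) = a AND b
    assert maj(a, b, True) == (a or b)     # M(a, b, 1) = a OR b
    assert xnor(a, b) == (not (a != b))    # XNOR is the negation of XOR

print("majority and biconditional identities hold")
```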

    New Logic Synthesis as Nanotechnology Enabler (Invited Paper)

    Nanoelectronics comprises a variety of devices whose electrical properties are more complex than those of CMOS, thus enabling new computational paradigms. The potentially large space for innovation has to be explored in the search for technologies that can support large-scale and high-performance circuit design. Within this space, we analyze a set of emerging technologies characterized by a similar computational abstraction at the design level, i.e., a binary comparator or a majority voter. We demonstrate that new logic synthesis techniques, natively supporting this abstraction, are the technology enablers. We describe models and data structures for logic design using emerging technologies, and we show results of applying new synthesis algorithms and tools. We conclude that new logic synthesis methods are required both to evaluate emerging technologies and to achieve the best results in terms of area, power and performance.
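    To make the claimed design-level abstraction concrete: in a technology whose primitive gate is a majority voter, the carry of a full adder is a single gate, and with a binary comparator (XNOR) primitive the sum needs only two gates plus an inversion. The following behavioral sketch is our own illustration (function names are ours), cross-checked against integer addition.

```python
from itertools import product

def maj(a, b, c):
    # Majority voter primitive (one gate in, e.g., quantum-dot cellular automata).
    return (a and b) or (a and c) or (b and c)

def xnor(a, b):
    # Binary comparator primitive (one gate in biconditional-style logic).
    return a == b

def full_adder(a, b, cin):
    """Full adder expressed directly in the native primitives:
    the carry is one majority gate; the sum is two comparators plus a NOT."""
    carry = maj(a, b, cin)
    s = not xnor(xnor(a, b), cin)   # XOR(a, b, cin) via two XNORs and a NOT
    return s, carry

# Cross-check against integer addition over all input combinations.
for a, b, cin in product([0, 1], repeat=3):
    s, carry = full_adder(bool(a), bool(b), bool(cin))
    assert int(s) + 2 * int(carry) == a + b + cin

print("full adder in majority/comparator primitives verified")
```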

    Exploiting Satisfiability Solvers for Efficient Logic Synthesis

    Logic synthesis is an important part of electronic design automation (EDA) flows, which enable the implementation of digital systems. As design size and complexity increase, the data structures and algorithms for logic synthesis must adapt and improve in order to keep pace and to maintain acceptable runtime and high-quality results. Large circuits were often represented using binary decision diagrams (BDDs), which were rapidly adopted by logic synthesis tools beginning in the 1980s. Nowadays, BDD-based algorithms are still being enhanced, but the possibilities for improvement are somewhat saturated after some 35 years of research. Alternatively, the first EDA applications that exploit Boolean satisfiability (SAT) were developed in the 1990s. Despite the worst-case exponential runtime of SAT solvers, rapid progress in their performance enabled the creation of efficient SAT-based algorithms. Yet, logic synthesis started using SAT solvers more widely only in the last decade; thorough research is therefore still required both for exploiting SAT solvers and for encoding logic synthesis problems into SAT. Our main goal in this thesis is to facilitate and promote the further integration of SAT solvers into EDA by proposing and evaluating novel SAT-based algorithms that can be used as building blocks in logic synthesis tools. First, we propose a rapid algorithm for LEXSAT, which generates satisfying assignments in lexicographic order. We show that LEXSAT can bring canonicity, which guarantees the generation of unique results, when using SAT solvers in EDA applications. Next, we present a new SAT-based algorithm that progressively generates irredundant sums of products (SOPs), which still play a crucial role in many logic synthesis tools. Using LEXSAT, for the first time, we can generate canonical SAT-based SOPs that, much like BDD-based SOPs, are unique for a given function and variable order, but that can relax canonicity in order to improve speed and scalability. Unlike BDDs, due to its progressive nature, our algorithm can generate partial SOPs for applications that can work with incomplete circuit functionality. It is noteworthy that both LEXSAT and the SAT-based SOPs are applicable beyond logic synthesis and EDA. Finally, we focus on resubstitution, which reimplements a given Boolean function as a new function that depends on a set of existing functions called divisors. We propose the carving interpolation algorithm that, unlike traditional Craig interpolation, forces the use of a specific divisor as an input of the new function. This is particularly useful for global circuit restructuring and for some synthesis-based engineering change order (ECO) algorithms. Furthermore, we compare two existing SAT-based methodologies for resubstitution, which are used for post-mapping logic optimisation. The first methodology combines SAT-based functional dependency checking and Craig interpolation, which are also used for our carving interpolation; the second methodology is based on cube enumeration and is similar to the SAT-based SOP generation. The initial implementations of our novel SAT-based algorithms offer either better performance or new features, or both, compared to their state-of-the-art versions. As the results indicate, further thorough development of SAT-based algorithms for logic synthesis, like that performed for BDDs in the past, can help overcome existing limitations and keep up with growing designs and design complexity.
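    The LEXSAT idea can be illustrated independently of the thesis's actual implementation: walk the variables in a fixed order and, for each one, ask whether the formula remains satisfiable with that variable forced to its smaller value; if not, fix the larger value. The sketch below is a deliberately naive, self-contained Python rendition in which an exhaustive search stands in for a real incremental SAT solver queried under assumptions.

```python
def satisfiable(clauses, assignment, variables):
    """Naive SAT check by exhaustive search; stands in for a real solver."""
    unassigned = [v for v in variables if v not in assignment]
    if not unassigned:
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
    v = unassigned[0]
    return any(
        satisfiable(clauses, {**assignment, v: val}, variables)
        for val in (False, True)
    )

def lexsat(clauses, variables):
    """Return the lexicographically smallest satisfying assignment
    (variables ordered as given, False < True), or None if UNSAT."""
    assignment = {}
    if not satisfiable(clauses, assignment, variables):
        return None
    for v in variables:
        # Prefer the smaller value; keep it only if still satisfiable.
        if satisfiable(clauses, {**assignment, v: False}, variables):
            assignment[v] = False
        else:
            assignment[v] = True
    return assignment

# Example: (x1 OR x2) AND (NOT x1 OR x3); literals are signed integers.
clauses = [[1, 2], [-1, 3]]
print(lexsat(clauses, [1, 2, 3]))  # {1: False, 2: True, 3: False}
```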

    Polarity Control at Runtime: From Circuit Concept to Device Fabrication

    Semiconductor device research for digital circuit design is currently facing increasing challenges to enhance miniaturization and performance. A huge economic push and the interest in novel applications are stimulating the development of new pathways to overcome the physical limitations affecting conventional CMOS technology. Here, we propose a novel Schottky-barrier device concept based on electrostatic polarity control. Specifically, this device can behave as p- or n-type simply by changing an electric input bias. It combines More-than-Moore and Beyond-CMOS elements to create an efficient technology with a viable path to Very Large Scale Integration (VLSI). This thesis proposes a device/circuit/architecture co-optimization methodology, considering aspects from device technology to logic circuit and system design. At the device level, a fully CMOS-compatible fabrication process is presented. In particular, devices are demonstrated using vertically stacked, top-down fabricated silicon nanowires with a gate-all-around electrode geometry. Source and drain contacts are implemented using nickel silicide to provide quasi-symmetric conduction of either electrons or holes, depending on the mode of operation. Electrical measurements confirm excellent performance, showing Ion/Ioff > 10^7 and subthreshold slopes approaching the thermal limit, SS ~ 60 mV/dec (~ 63 mV/dec) for n-type (p-type) operation in the same physical device. Moreover, the devices behave as p-type for a polarization bias (polarity gate voltage, Vpg) of 0 V and as n-type for Vpg = 1 V, confirming their compatibility with multi-level static logic circuit design. At the logic gate level, two- and four-transistor logic gates are fabricated and tested. In particular, the first fully functional two-transistor XOR logic gate is demonstrated through electrical characterization, confirming that polarity control enables more compact logic gate design with respect to conventional CMOS. Furthermore, we show for the first time fabricated four-transistor logic gates that can be reconfigured as NAND or XOR depending only on their external connectivity; in this case, logic gates with a full-swing output range are experimentally demonstrated. Finally, single-device and mixed-mode TCAD simulation results show that lower Vth and better-optimized polarization ranges can be expected in scaled devices implementing strain or high-k technologies. At the circuit and system level, a full semi-custom logic circuit design tool flow was defined and configured. Using this flow, novel logic libraries based on standard cells or regular gate fabrics were compared with standard CMOS. Results include a 40% reduction in normalized area-delay product for the analyzed standard-cell libraries, and improvements of over 2× in normalized delay for regular Controlled Polarity (CP)-based cells in the context of Structured ASICs. These results confirm the interest in further developing and optimizing CP devices as promising candidates for future digital circuit technology.
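    Behaviorally, a controllable-polarity device conducts exactly when its control gate agrees with its polarity gate (n-type at Vpg = 1 conducts on a high gate; p-type at Vpg = 0 conducts on a low gate), i.e., the channel computes an equality check. The sketch below is our own behavioral simplification, not the fabricated circuit's netlist, showing how two such devices can realize a full-swing XOR, assuming both polarities of one input are available.

```python
from itertools import product

def cp_conducts(gate, polarity_gate):
    """Behavioral controllable-polarity FET: with the polarity gate high it
    acts as n-type (conducts on gate high); with it low, as p-type (conducts
    on gate low). Net effect: the channel conducts iff gate == polarity_gate."""
    return gate == polarity_gate

def xor_2t(a, b):
    """Two-transistor XOR sketch, assuming dual-rail b (both b and not-b
    available). T1 pulls the output high, T2 pulls it low."""
    t1 = cp_conducts(gate=a, polarity_gate=not b)  # on iff a != b
    t2 = cp_conducts(gate=a, polarity_gate=b)      # on iff a == b
    assert t1 != t2  # exactly one path active: full swing, no contention
    return t1        # output is high exactly when a XOR b

for a, b in product([False, True], repeat=2):
    assert xor_2t(a, b) == (a != b)
print("two-transistor XOR behavior verified")
```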

    Truth and Meaning: The Dialectic of Theory and Practice

    Tarski's semantic conception of truth is arguably the most influential - certainly, the most discussed - modern conception of truth. It has provoked many different interpretations and reactions, some thinkers celebrating it for successfully explicating the notion of truth, whereas others have argued that it fails as a philosophical account of truth. The aim of the thesis is to offer a systematic and critical investigation of its nature and significance, based on a thorough explanation of its conceptual, technical and historical underpinnings. The methodological strategy adopted in the thesis reflects the author's belief that in order to evaluate the import of Tarski's conception we need to understand what logical, mathematical and philosophical aspects it has, what role they play in his project of theoretical semantics, which of them hang together, and which should be kept separate. Chapter 2 therefore starts with a detailed exposition of the conceptual and historical background of Tarski's semantic conception of truth and his method of truth definition for formalized languages, situating it within his project of theoretical semantics, and Chapter 3 explains the formal machinery of Tarski's truth definitions for increasingly complex languages. Chapters 4-7 form the core of the thesis and are devoted to the central question of the significance of Tarski's conception. Chapter 4 explains its logico-mathematical aspects and its contribution to mathematical logic, in connection with the results of Kurt...
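    For readers unfamiliar with the technical core under discussion, the standard textbook statement of Tarski's material adequacy condition (Convention T) and the satisfaction-based recursion reads as follows; this is generic background, not a quotation from the thesis.

```latex
% Convention T: an adequate truth definition for a language L must entail
% every instance of the schema "X is true in L iff p", where p translates
% the sentence named by X:
\[
  \mathrm{True}_L(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi
\]
% Truth is defined via satisfaction, by recursion on formula structure,
% for assignments s of objects to variables:
\begin{align*}
  s \models \neg\varphi
    &\iff s \not\models \varphi \\
  s \models \varphi \wedge \psi
    &\iff s \models \varphi \text{ and } s \models \psi \\
  s \models \forall x\, \varphi
    &\iff \text{every } x\text{-variant } s' \text{ of } s \text{ satisfies } \varphi
\end{align*}
% A sentence is true iff it is satisfied by every assignment.
```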

    Towards More Efficient Logic Blocks By Exploiting Biconditional Expansion (Abstract Only)


    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains, greatly expanding from the origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

    Context-based multimedia semantics modelling and representation

    The evolution of the World Wide Web, increases in processing power, and greater network bandwidth have contributed to the proliferation of digital multimedia data. Since multimedia data has become a critical resource in many organisations, there is an increasing need for efficient access to data, in order to share it, extract knowledge from it, and ultimately use that knowledge to inform business decisions. Existing methods for multimedia semantic understanding are limited to computable low-level features, which raises the question of how to identify and represent the high-level semantic knowledge in multimedia resources. In order to bridge the semantic gap between multimedia low-level features and high-level human perception, this thesis seeks to identify the possible contextual dimensions in multimedia resources that help in semantic understanding and organisation. It investigates the use of contextual knowledge to organise and represent the semantics of multimedia data, aimed at efficient and effective multimedia content-based semantic retrieval. A mixed-methods research approach incorporating both Design Science Research and Formal Methods for investigation and evaluation was adopted. A critical review of current approaches for multimedia semantic retrieval was undertaken and various shortcomings identified. The objectives for a solution were defined, which led to the design, development, and formalisation of a context-based model for multimedia semantic understanding and organisation. The model relies on the identification of different contextual dimensions in multimedia resources to aggregate meaning and facilitate semantic representation, knowledge sharing and reuse. A prototype system for multimedia annotation, CONMAN, was built to demonstrate aspects of the model and validate the research hypothesis, H₁. Towards providing richer and clearer semantic representation of multimedia content, the original contributions of this thesis to Information Science include: (a) a novel framework and formalised model for organising and representing the semantics of heterogeneous visual data; and (b) a novel S-Space model that is aimed at visual information semantic organisation and discovery, and forms the foundations for automatic video semantic understanding.
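    The abstract does not spell out the model's data structures, so purely as an illustration of the idea of attaching contextual dimensions to a media resource and querying them for retrieval, here is a minimal sketch; all class and field names are hypothetical, not taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class ContextAnnotation:
    """One contextual dimension attached to a multimedia resource."""
    dimension: str   # e.g. "spatial", "temporal", "event", "agent"
    value: str       # e.g. "Helsinki", "2016-05-01", "graduation"

@dataclass
class MediaResource:
    uri: str
    annotations: list[ContextAnnotation] = field(default_factory=list)

    def matches(self, dimension: str, value: str) -> bool:
        return any(a.dimension == dimension and a.value == value
                   for a in self.annotations)

video = MediaResource("http://example.org/video/42")
video.annotations.append(ContextAnnotation("event", "graduation"))
video.annotations.append(ContextAnnotation("spatial", "Helsinki"))

# Context-based retrieval: select resources whose context matches the query.
corpus = [video]
hits = [r.uri for r in corpus if r.matches("event", "graduation")]
print(hits)
```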

    Design and Synthesis of High Density Integrated Circuits

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an integrated circuit would double every two years. Moore's prediction held for decades, which is why it is also called "Moore's law". The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, interconnection scaling is not optimal: at every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and, consequently, the dynamic power consumption. Moreover, due to this non-optimal scaling of the interconnections, signal routing becomes more and more challenging with every technology node advancement. Highly scaled technologies make it possible to reach a very high transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two different ways: using ad-hoc design techniques in today's CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. The two approaches correspond to the two parts of this thesis. The first part presents the design of an Associative Memory (AM), focusing on design and logic synthesis techniques that reduce power consumption. The field of applicability of AMs is real-time pattern-recognition tasks; possible uses range from scientific calculations to image processing for intelligent autonomous devices and image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but only the most interesting tracks must be selected and saved. Because the data are compared in parallel, AMs are synchronous ICs with a strongly peaked power consumption, which must therefore be minimized. This AM is designed within the IMPART and HTT projects in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a "kill signal": if one of the bits in a word does not match, the signal inhibits the switching of the following cells. Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the future, nanocrossbars are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve synthesis, several algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOP), Dimension-reducible (D-red) functions and autosymmetric functions. The decomposed functions are implemented into lattices using internal and external decomposition methods. Experimental results show that these approaches reduce the complexity of the single synthesis problem and lead, on average, to a reduction of lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defect rate; to cope with this limitation, techniques to reduce sensitivity to defects have been studied.
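    The power saving from the kill signal comes from stopping comparison activity at the first mismatching bit. The following sketch is our own behavioral illustration, not the chip's logic: it contrasts a naive word comparison, in which every cell switches, with a kill-signal comparison that inhibits all cells after the first mismatch, using the count of active cells as a stand-in for dynamic power.

```python
def match_naive(stored, incoming):
    """Compare every bit; all cells switch regardless of early mismatches."""
    active_cells = len(stored)
    return stored == incoming, active_cells

def match_kill_signal(stored, incoming):
    """Propagate a kill signal: after the first mismatching bit, the
    following cells are inhibited and do not switch."""
    active_cells = 0
    for s, i in zip(stored, incoming):
        active_cells += 1              # this cell evaluates its bit
        if s != i:                     # mismatch: raise the kill signal
            return False, active_cells
    return True, active_cells

stored   = [1, 0, 1, 1, 0, 0, 1, 0]
incoming = [1, 1, 0, 0, 1, 1, 0, 1]        # first mismatch at bit 2
print(match_naive(stored, incoming))       # (False, 8)
print(match_kill_signal(stored, incoming)) # (False, 2)
```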

    Circuit Design, Architecture and CAD for RRAM-based FPGAs

    Field Programmable Gate Arrays (FPGAs) have been indispensable components of embedded systems and datacenter infrastructures. However, the energy efficiency of FPGAs has become a hard barrier preventing their expansion to more application contexts, due to two physical limitations: (1) the massive usage of routing multiplexers causes delay and power overheads compared to ASICs; to reduce their power consumption, FPGAs have to operate at low supply voltage but sacrifice performance, because transistor drive degrades as the working voltage decreases; (2) using volatile memory technology forces FPGAs to lose their configuration when powered off and to be reconfigured at each power-on. Resistive Random Access Memories (RRAMs) have strong potential for overcoming the physical limitations of conventional FPGAs. First of all, RRAMs grant FPGAs non-volatility, enabling FPGAs to be "normally powered off, instantly powered on". Second, by combining the functionality of memory and pass-gate logic in one unique device, RRAMs can greatly reduce the area and delay of routing elements. Third, when RRAMs are embedded into datapaths, the performance of circuits can be independent of the working voltage, beyond the limitations of CMOS circuits. However, research and development of RRAM-based FPGAs are in their infancy: most area and performance predictions were made without solid circuit-level simulations and sophisticated Computer Aided Design (CAD) tools, making the predicted improvements less convincing. In this thesis, we present high-performance and low-power RRAM-based FPGAs, from transistor-level circuit design to architecture-level optimizations and CAD tools, using theoretical analysis, industrial electrical simulators and novel CAD tools. We believe that this is the first systematic study in the field, covering circuit design, CAD, and architecture. From a circuit design perspective, we propose efficient RRAM-based programming circuits and routing multiplexers through both theoretical analysis and electrical simulations. The proposed 4T(ransistor)1R(RAM) programming structure demonstrates significant improvements in programming current compared to the most popular 2T1R programming structure. 4T1R-based routing multiplexer designs are proposed by considering various physical design parasitics, such as the intrinsic capacitance of RRAMs and well doping organization. The proposed 4T1R-based multiplexers significantly outperform the best CMOS implementations in area, delay and power, in both the nominal and near-Vt regimes. From a CAD perspective, we develop a generic FPGA architecture exploration tool, FPGA-SPICE, modeling a full FPGA fabric with SPICE and Verilog netlists. FPGA-SPICE provides different levels of testbenches and techniques to split large SPICE netlists, in order to obtain a better trade-off between simulation time and accuracy. FPGA-SPICE can capture area and power characteristics of SRAM-based and RRAM-based FPGAs more accurately than the currently best analytical models. From an architecture perspective, we propose architecture-level optimizations for RRAM-based FPGAs and quantify their minimum requirements for RRAM devices. Compared to the best SRAM-based FPGAs, an optimized RRAM-based FPGA architecture brings significant reductions in area, delay and power. In particular, RRAM-based FPGAs operating in the near-Vt regime demonstrate a 5x power improvement without delay overhead, compared to an optimized SRAM-based FPGA operating at nominal working voltage.
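    The claim that one RRAM merges the roles of configuration memory and pass-gate can be illustrated behaviorally: the programmed resistance state both stores the routing bit (and survives power-off) and directly gates the signal path. The sketch below is a minimal model under our own simplifying assumptions (ideal on/off states, no electrical modeling, hypothetical class names).

```python
class RRAM:
    """Behavioral RRAM model: the resistance state is both the stored
    configuration bit (non-volatile) and the pass-gate for the datapath."""
    def __init__(self):
        self.low_resistance = False  # high-resistance state: path off

    def program(self, on: bool):
        # In hardware this is done by a programming structure (e.g. 4T1R).
        self.low_resistance = on

    def passes(self, signal):
        # A low-resistance cell propagates the signal; otherwise high-Z.
        return signal if self.low_resistance else None

class RRAMRoutingMux:
    """N-input routing multiplexer: exactly one RRAM per input is programmed
    to low resistance, selecting which routing track drives the output."""
    def __init__(self, n_inputs: int):
        self.cells = [RRAM() for _ in range(n_inputs)]

    def configure(self, selected: int):
        for i, cell in enumerate(self.cells):
            cell.program(i == selected)

    def route(self, inputs):
        driven = [c.passes(x) for c, x in zip(self.cells, inputs)]
        outputs = [v for v in driven if v is not None]
        assert len(outputs) == 1, "exactly one path must be programmed on"
        return outputs[0]

mux = RRAMRoutingMux(4)
mux.configure(2)                # configuration survives power-off
print(mux.route([0, 1, 1, 0]))  # drives input 2 to the output -> 1
```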