    Digital and analog TFET circuits: Design and benchmark

    In this work, we investigate by means of simulations the performance of basic digital, analog, and mixed-signal circuits employing tunnel-FETs (TFETs). The analysis reviews and complements our previous papers on these topics. By considering the same devices for all the analyses, we are able to draw consistent conclusions for a wide variety of circuits. A virtual complementary TFET technology consisting of III-V heterojunction nanowires is considered. Technology Computer-Aided Design (TCAD) models are calibrated against the results of advanced full-quantum simulation tools and then used to generate look-up tables suited for circuit simulations. The virtual complementary TFET technology is benchmarked against predictive technology models (PTM) of complementary silicon FinFETs for the 10 nm node over a wide range of supply voltages (VDD) in the sub-threshold voltage domain, assuming the same footprint for the vertical TFETs and the lateral FinFETs and the same static power. In spite of the asymmetry between p- and n-type transistors, the results show clear advantages of TFET technology over FinFET for VDD lower than 0.4 V. Moreover, we highlight how differences in the I-V characteristics of FinFETs and TFETs suggest adapting the circuit topologies used to implement basic digital and analog blocks with respect to the most common CMOS solutions.
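    The look-up-table flow mentioned above, in which TCAD-calibrated I-V data drive the circuit simulations, can be pictured with a small table-driven device model. The sketch below is only an illustration of that idea, not the authors' simulator: the file name, grid, and bias ranges are hypothetical placeholders.

    # Minimal sketch of a look-up-table (LUT) device model for circuit simulation.
    # "tfet_nmos_id.csv" is a hypothetical file of TCAD-calibrated drain-current
    # samples on a regular (V_GS, V_DS) grid; this is not the authors' tool flow.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    vgs_grid = np.linspace(0.0, 0.4, 41)   # V, the sub-0.4 V range of interest
    vds_grid = np.linspace(0.0, 0.4, 41)   # V
    id_table = np.loadtxt("tfet_nmos_id.csv", delimiter=",")  # shape (41, 41), in A

    # Interpolate in log space: TFET currents span many decades due to the steep SS.
    log_id = RegularGridInterpolator((vgs_grid, vds_grid),
                                     np.log10(np.abs(id_table) + 1e-18))

    def tfet_drain_current(vgs, vds):
        """Interpolated drain current (A) at the requested bias point."""
        return 10.0 ** log_id([[vgs, vds]])[0]

    print(tfet_drain_current(0.3, 0.2))  # example operating-point query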

    Comparison of TFETs and CMOS using optimal design points for power-speed trade-offs

    Tunnel transistors are one of the most attractive steep subthreshold slope devices currently being investigated as a means of overcoming the power density and energy inefficiency limitations of CMOS technology. In this paper, the evaluation and comparison of the performance of distinct fan-in logic gates, using a set of widely accepted power-speed metrics, are addressed for five projected tunnel transistor (TFET) technologies and four MOSFET and FinFET transistors. The impact of logic depth, switching activity, and minimum supply voltage has also been included in our analysis. The results suggest that the benefits in terms of a given metric, depending on whether a higher weight is placed on power or on delay, are strongly determined by the selected device. In particular, the suitability of two of the explored TFET technologies for improving CMOS performance under different metrics is pointed out. A circuit-level benchmark is evaluated to validate our analysis. Ministerio de Economía y Competitividad TEC2013-40670-
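    The "widely accepted power-speed metrics" referred to above are typically the power-delay and energy-delay products (PDP, EDP, and variants that weight delay more heavily). The helper below is a hedged illustration of how such figures of merit trade power against delay; all numerical values are hypothetical placeholders, not data from the paper.

    # Common power-speed figures of merit; the exponent on delay controls whether
    # power or speed dominates the comparison between device technologies.
    def power_speed_metrics(avg_power_w, delay_s):
        """Return standard power-speed figures of merit for one gate/technology."""
        energy = avg_power_w * delay_s          # energy per operation (J), i.e. PDP
        return {
            "PDP": energy,                      # power-delay product
            "EDP": energy * delay_s,            # energy-delay product
            "ED2P": energy * delay_s ** 2,      # places even more weight on speed
        }

    # Hypothetical low-VDD gates: a slower but lower-power device vs. a faster one.
    print(power_speed_metrics(avg_power_w=2e-9, delay_s=5e-10))
    print(power_speed_metrics(avg_power_w=8e-9, delay_s=2e-10))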

    Steep-slope Devices for Power Efficient Adiabatic Logic Circuits

    Reducing the supply voltage is an effective way to reduce power consumption; however, it greatly reduces the speed of CMOS circuits. This translates into limitations on how far the supply voltage can be lowered in many applications due to frequency constraints. In particular, in the context of low-voltage adiabatic circuits, another well-known power-saving technique, it is not possible to obtain satisfactory power-speed trade-offs. Tunnel field-effect transistors (TFETs) have been shown to outperform CMOS at low supply voltage in static logic implementations thanks to their steep subthreshold slope (SS), and they have the potential to combine low-voltage and adiabatic operation. To the best of our knowledge, the adiabatic circuit topologies reported with TFETs do not take into account the problems associated with their reverse current due to the intrinsic p-i-n diode. In this paper, we propose a solution to this problem, demonstrating that the proposed modification significantly improves performance in terms of power/energy savings compared to the original topologies, especially at medium and low frequencies. In addition, we have evaluated the relative advantages of the proposed TFET adiabatic circuits, both at gate and architecture levels, with respect to their static implementations, demonstrating that these advantages are greater than for FinFET designs.
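    As a first-order reminder of why adiabatic operation pays off at the medium and low frequencies mentioned above, the textbook estimate for ramp-charging a load capacitance C through a resistance R over a ramp time T (a standard approximation, not a result of this paper) is

    E_{\text{adiabatic}} \approx \frac{RC}{T}\, C V_{DD}^{2} \qquad \text{versus} \qquad E_{\text{conventional}} = \tfrac{1}{2}\, C V_{DD}^{2},

    so the dissipation per cycle falls as the ramp is slowed, until static losses dominate; in TFET implementations the reverse current through the intrinsic p-i-n diode adds to those static losses, which is the issue the proposed topologies address.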

    NOVEL RESOURCE EFFICIENT CIRCUIT DESIGNS FOR REBOOTING COMPUTING

    CMOS-based computing is reaching its limits. Taking computation beyond Moore's law (the number of transistors, and hence the processing power on a chip, doubles every 18 months to 3 years) requires research explorations into (i) new materials, devices, and processes, (ii) new architectures and algorithms, and (iii) new paradigms of logic bit representation. The focus is on fundamentally new ways to compute under the umbrella of rebooting computing, such as spintronics, quantum computing, and adiabatic and reversible computing. This thesis therefore explicitly highlights quantum computing and adiabatic logic, two new computing paradigms that come under the umbrella of rebooting computing. Quantum computing is investigated for its promising applications in high-performance computing. The first contribution of this thesis is the design of two resource-efficient circuits for quantum integer division: the first is based on the non-restoring division algorithm and the second on the restoring division algorithm. Both designs are compared and shown to be superior to existing work in terms of T-count and T-depth. The proliferation of low-power IoT devices has also drawn interest to rebooting computing. Hence, the second contribution of this thesis is showing that adiabatic logic is a promising candidate for implementation in IoT devices. The adiabatic logic family called Symmetric Pass Gate Adiabatic Logic (SPGAL) is implemented for the PRESENT-80 lightweight algorithm, and adiabatic logic is extended to emerging transistor devices.
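    For reference, the restoring division algorithm on which the second quantum design is based proceeds as in the classical sketch below. This is only the classical algorithm, shown to clarify what the reversible circuit computes; it says nothing about the T-count/T-depth optimizations of the quantum designs themselves.

    # Classical restoring integer division (illustrative only): the quantum divider
    # additionally needs reversible adders/subtractors and uncomputation.
    def restoring_divide(dividend, divisor, n_bits):
        """Return (quotient, remainder) of dividend / divisor via restoring division."""
        assert divisor > 0
        remainder, quotient = 0, 0
        for i in reversed(range(n_bits)):
            # Shift in the next dividend bit, then tentatively subtract the divisor.
            remainder = (remainder << 1) | ((dividend >> i) & 1)
            remainder -= divisor
            if remainder < 0:
                remainder += divisor              # restore: the subtraction failed
                quotient = quotient << 1          # quotient bit is 0
            else:
                quotient = (quotient << 1) | 1    # quotient bit is 1
        return quotient, remainder

    print(restoring_divide(23, 5, n_bits=8))  # -> (4, 3)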

    Energy-Efficient In-Memory Architectures Leveraging Intrinsic Behaviors of Embedded MRAM Devices

    For decades, innovations to surmount the processor-versus-memory gap and move beyond conventional von Neumann architectures continue to be sought and explored. Recent machine learning models still expend orders of magnitude more time and energy to access data in memory in addition to merely performing the computation itself. This phenomenon, referred to as the memory-wall bottleneck, is addressed herein via a completely fresh perspective on logic and memory technology design. The specific solutions developed in this dissertation focus on utilizing intrinsic switching behaviors of embedded MRAM devices to design cross-layer and energy-efficient Compute-in-Memory (CiM) architectures, accelerate the computationally intensive operations in various Artificial Neural Networks (ANNs), achieve higher density, and reduce power consumption, which are crucial requirements in future Internet of Things (IoT) devices. The first cross-layer platform developed herein is an Approximate Generative Adversarial Network (ApGAN) designed to accelerate Generative Adversarial Networks from both the algorithm and hardware implementation perspectives. In addition to binarizing the weights, further reduction in storage and computation resources is achieved by leveraging an in-memory addition scheme. Moreover, a memristor-based CiM accelerator for ApGAN is developed. The second design is a biologically inspired memory architecture. The Short-Term Memory and Long-Term Memory features found in biology are realized in hardware via a beyond-CMOS learning approach derived from repeated input information and retrieval of the encoded data. The third cross-layer architecture is a programmable, energy-efficient hardware implementation for Recurrent Neural Networks with ultra-low-power, area-efficient spin-based activation functions. A novel CiM architecture is proposed to leverage data-level parallelism during the evaluation phase. Specifically, we employ an MRAM-based Adjustable Probabilistic Activation Function (APAF) via a low-power tunable activation mechanism, providing adjustable accuracy levels to mimic ideal sigmoid and tanh thresholding, along with a matching algorithm to regulate neuronal properties. Finally, the APAF design is utilized in a Long Short-Term Memory (LSTM) network to evaluate the network performance using binary and non-binary activation functions. The simulation results indicate up to 74.5× higher energy efficiency, a 35-fold speedup, and an approximately 11× area reduction compared with similar baseline designs. These can form the basis for future post-CMOS, non-von Neumann architectures suitable for intermittently powered, energy-harvesting devices capable of pushing intelligence toward the edge of the computing network.
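    At the algorithm level, the adjustable probabilistic activation described above can be pictured as a stochastic neuron that fires with a sigmoid-shaped probability, so that averaging repeated evaluations approximates the ideal analog activation. The sketch below is a software analogy under that assumption, not the MRAM/spintronic circuit of the dissertation; the tuning parameter beta is a hypothetical stand-in for the adjustable-accuracy knob.

    # Software analogy of an adjustable probabilistic activation function (APAF):
    # a binary neuron fires with probability sigmoid(beta * x). Models behavior
    # only, not the MRAM device physics.
    import numpy as np

    rng = np.random.default_rng(0)

    def probabilistic_activation(x, beta=1.0, samples=1):
        """Return 0/1 spikes that fire with probability sigmoid(beta * x)."""
        p = 1.0 / (1.0 + np.exp(-beta * np.asarray(x, dtype=float)))
        return rng.random((samples,) + p.shape) < p

    # Averaging many stochastic evaluations recovers an approximately sigmoidal output.
    x = np.linspace(-4.0, 4.0, 9)
    print(probabilistic_activation(x, beta=2.0, samples=1000).mean(axis=0))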

    Design of Resistive Synaptic Devices and Array Architectures for Neuromorphic Computing

    Over the past few decades, silicon complementary metal-oxide-semiconductor (CMOS) technology has been greatly scaled down to achieve higher performance and density and lower power consumption. As device dimensions approach their fundamental physical limits, there is an increasing demand for the exploration of emerging devices with operating principles distinct from conventional CMOS. In recent years, many efforts have been devoted to research on next-generation emerging non-volatile memory (eNVM) technologies, such as resistive random access memory (RRAM) and phase change memory (PCM), to replace conventional digital memories (e.g., SRAM) for the implementation of synapses in large-scale neuromorphic computing systems. Being essentially compact and “analog”, these eNVM devices in a crossbar array can compute vector-matrix multiplication in parallel, significantly speeding up machine/deep learning algorithms. However, non-ideal eNVM device and array properties may hamper the learning accuracy. To quantify their impact, the sparse coding algorithm was used as a starting point, strategies to remedy the accuracy loss were proposed, and the circuit-level design trade-offs were analyzed. At the architecture level, a parallel “pseudo-crossbar” array to prevent the write-disturbance issue was presented. The peripheral circuits to support various parallel array architectures were also designed. One key component is the read circuit, which employs the principle of the integrate-and-fire neuron model to convert the analog column current to a digital output. However, the read circuit is not area-efficient, and it was proposed to replace it with a compact two-terminal oscillation neuron device that exhibits a metal-insulator transition. To facilitate design exploration, a circuit-level macro simulator, NeuroSim, was developed in C++ to estimate the area, latency, energy, and leakage power of various neuromorphic architectures. NeuroSim provides a wide variety of design options at the circuit/device level and can be used alone or as a supporting module to provide circuit-level performance estimation in neural network algorithms. A two-layer multilayer perceptron (MLP) simulator integrating NeuroSim was demonstrated to evaluate both the learning accuracy and circuit-level performance metrics for online learning and offline classification, as well as to study the impact of eNVM reliability issues, such as data retention and write endurance, on the learning performance.
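    The parallel vector-matrix multiplication performed by an eNVM crossbar follows from Ohm's and Kirchhoff's laws: each column current is the dot product of the applied row voltages with that column's conductances. The snippet below is an idealized numerical illustration of that principle only, with hypothetical values and none of the non-ideal device and array properties discussed above.

    # Idealized eNVM crossbar computing a vector-matrix multiplication in one step:
    # column current I_j = sum_i V_i * G_ij (Ohm's law per cell, Kirchhoff per column).
    import numpy as np

    rows, cols = 4, 3
    G = np.random.uniform(1e-6, 1e-4, size=(rows, cols))  # cell conductances (S)
    V = np.array([0.2, 0.0, 0.1, 0.2])                    # row read voltages (V)

    I_columns = V @ G   # column currents (A): the analog multiply-accumulate result
    print(I_columns)

    # A read circuit (integrate-and-fire, or the compact oscillation neuron above)
    # would then convert each analog column current into a digital output code.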