    Leakage Power Reduction Techniques in Deep Submicron Technologies for VLSI Applications

    Leakage power dissipation has become one of the most challenging issues in low-power VLSI circuit design, especially for on-chip devices, as it doubles roughly every two years [4]-[5]. The scaling down of the threshold voltage has contributed enormously to the increase in subthreshold leakage current, making static (leakage) power dissipation very high. According to the International Technology Roadmap for Semiconductors (ITRS), leakage power may contribute a significant share of the total power dissipation [1]. Battery-operated devices that spend long periods in standby mode can be drained very quickly by leakage power, so in submicron CMOS technologies leakage power dissipation plays a significant role. Various low-power design techniques for efficient minimization of leakage power have been proposed in the literature. This paper presents a comprehensive study and analysis of these leakage power minimization techniques, focusing mainly on circuit performance parameters. It follows from the current literature that a VLSI circuit designer can make an effective choice of leakage power minimization technique for a specific application only through a systematic analytical approach.
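
    To see why threshold-voltage scaling is so damaging, it helps to recall the standard first-order subthreshold current model; the relation below is a textbook expression, not one taken from the paper, and the sensitivity figure after it assumes typical room-temperature values.

        I_{\mathrm{sub}} = I_0 \, e^{(V_{GS} - V_{th})/(n V_T)} \left(1 - e^{-V_{DS}/V_T}\right), \qquad P_{\mathrm{leak}} \approx V_{DD} \, I_{\mathrm{sub}}

    With a subthreshold swing of n V_T ln(10), roughly 80-90 mV per decade at room temperature, each ~85 mV reduction in V_th raises subthreshold leakage by about an order of magnitude, which is why V_th scaling dominates the leakage growth described above.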

    Addressing On-Chip Power Conversion and Dissipation Issues in Many-Core System-on-a-Chip based on Conventional Silicon and Emerging Nanotechnologies

    Title from PDF of title page viewed August 27, 2018. Dissertation advisor: Masud H. Chowdhury. Vita. Includes bibliographical references (pages 158-163). Thesis (Ph.D.)--School of Computing and Engineering and Department of Physics and Astronomy, University of Missouri--Kansas City, 2017.
    Integrated circuits (ICs) are moving towards system-on-a-chip (SOC) designs. SOC allows various small and large electronic systems to be implemented on a single chip. This approach enables the miniaturization of design blocks, which leads to high-density transistor integration, faster response time, and lower fabrication costs. To reap the benefits of SOC and uphold the miniaturization of transistors, innovative power delivery and power dissipation management schemes are paramount. This dissertation focuses on on-chip integration of power delivery systems and on managing power dissipation to increase the lifetime of energy storage elements. We explore this problem from two different angles: on-chip voltage regulators and power gating techniques. On-chip voltage regulators reduce parasitic effects and allow faster, more efficient power delivery for microprocessors. Power gating techniques, on the other hand, reduce the power loss incurred by circuit blocks during standby mode. Power dissipation in a complementary metal-oxide-semiconductor (CMOS) circuit comes from two sources, static and dynamic (Ptotal = Pstatic + Pdynamic). Dynamic switching power depends quadratically on the supply voltage, while static power, in the form of subthreshold leakage, grows more than linearly as technology scales. To reduce dynamic power loss, the supply voltage should be reduced; a significant reduction in power dissipation occurs when portions of a microprocessor operate at a lower voltage level. This reduction in supply voltage is achieved via voltage regulators or converters, which provide a stable power supply to the microprocessor. The conventional off-chip switching voltage regulator contains a passive floating inductor, which is difficult to implement inside the chip due to excessive power dissipation and parasitic effects. Additionally, the inductor occupies a very large chip area while hampering the scaling process. These limitations make passive-inductor-based on-chip regulator design very unattractive for SOC integration and multi-/many-core environments. To circumvent these challenges, three alternative techniques based on active circuit elements are developed to replace the passive LC filter of the buck converter. The first inductorless on-chip switching voltage regulator architecture is based on a cascaded 2nd-order multiple feedback (MFB) low-pass filter (LPF). This design can modulate to multiple voltage settings via pulse width modulation (PWM). The second approach is a supplementary design utilizing a hybrid low-dropout scheme to lower the output ripple of the switching regulator over a wider frequency range. The third design approach allows the integration of an entire power management system within a single chipset by combining a highly efficient switching regulator with an intermittently efficient, area-efficient linear regulator for robust and highly efficient on-chip regulation. The static power (Pstatic), or subthreshold leakage power (Pleak), increases with technology scaling. To mitigate static power dissipation, power gating techniques are implemented. Power gating is one of the popular methods to manage leakage power during standby periods in low-power, high-speed IC design. It works by using transistor-based switches to shut down parts of the circuit block and put them in idle mode. An efficient power gating scheme requires a sleep transistor with minimal off-current (Ioff) and high on-current (Ion). A conventional sleep transistor circuit design requires an additional header switch, footer switch, or both to turn off the logic block. This additional transistor causes signal delay and increases the chip area. We propose two innovative designs for next-generation sleep transistors. For above-threshold operation, we present a sleep transistor design based on a fully depleted silicon-on-insulator (FDSOI) device. For subthreshold circuit operation, we implement a sleep transistor utilizing the newly developed silicon-on-ferroelectric-insulator field-effect transistor (SOFFET). In both designs, the ability to control the threshold voltage via a bias voltage at the back gate makes these devices more flexible for sleep transistor design than a bulk MOSFET. The proposed approaches simplify the design complexity, reduce the chip area, eliminate the voltage drop across the sleep transistor, and improve power dissipation. In addition, the design provides a dynamically controlled Vt for times when the circuit needs to be in sleep or switching mode.
    Introduction -- Background and literature review -- Fully integrated on-chip switching voltage regulator -- Hybrid LDO voltage regulator based on cascaded second order multiple feedback loop -- Single and dual output two-stage on-chip power management system -- Sleep transistor design using double-gate FDSOI -- Subthreshold region sleep transistor design -- Conclusion
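
    As a rough illustration of the power budget described above (Ptotal = Pstatic + Pdynamic, with dynamic power depending quadratically on the supply voltage and power gating suppressing standby leakage), the short Python sketch below evaluates the standard first-order CMOS power model. All parameter values and the residual-leakage factor for the gated block are illustrative assumptions, not figures from the dissertation.

    # First-order CMOS power model (all values illustrative, not from the dissertation).
    def dynamic_power(alpha, c_load, vdd, freq):
        """Switching power: P_dyn = alpha * C * Vdd^2 * f."""
        return alpha * c_load * vdd**2 * freq

    def static_power(vdd, i_leak):
        """Leakage power: P_stat = Vdd * I_leak."""
        return vdd * i_leak

    # Assumed nominal operating point.
    alpha, c_load, freq = 0.1, 1e-9, 1e9   # activity factor, switched capacitance (F), clock (Hz)
    vdd, i_leak = 1.0, 20e-3               # supply (V), total leakage current (A)

    p_nominal = dynamic_power(alpha, c_load, vdd, freq) + static_power(vdd, i_leak)

    # Lowering Vdd from 1.0 V to 0.7 V cuts dynamic power quadratically (~0.49x).
    p_scaled = dynamic_power(alpha, c_load, 0.7, freq) + static_power(0.7, i_leak)

    # Power gating in standby: the sleep switch cuts the leakage path, leaving
    # only a small residual leakage through the gating transistor (assumed 2%).
    p_gated_standby = static_power(vdd, 0.02 * i_leak)

    print(f"nominal {p_nominal * 1e3:.1f} mW, "
          f"scaled Vdd {p_scaled * 1e3:.1f} mW, "
          f"gated standby {p_gated_standby * 1e3:.2f} mW")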

    Power Efficient Data-Aware SRAM Cell for SRAM-Based FPGA Architecture

    The design of a low-power SRAM cell has become a necessity in today's FPGAs, because SRAM is a critical component of FPGA design and consumes a large fraction of the total power. The present chapter provides an overview of the various factors responsible for power consumption in FPGAs and discusses design techniques for low-power SRAM-based FPGAs at the system, device, and architecture levels. Finally, the chapter proposes a data-aware dynamic SRAM cell to control the power consumption of the cell. The stack effect has been adopted in the design to reduce the leakage current, and the various peripheral circuits, such as the address decoder, the write/read enable circuits, and the sense amplifier, have been modified to implement a power-efficient SRAM-based FPGA.

    FORCED STACK SLEEP TRANSISTOR (FORTRAN): A NEW LEAKAGE CURRENT REDUCTION APPROACH IN CMOS BASED CIRCUIT DESIGNING

    Reduction of leakage current has become a significant concern in nanotechnology-based low-power, low-voltage, and high-performance VLSI applications. This research article presents a new low-power circuit design approach, FORTRAN (FORced stack sleep TRANsistor), which reduces leakage power in CMOS-based circuit design in the VLSI domain. The FORTRAN approach reduces leakage current in both the active and standby modes of operation. Furthermore, it is not time-intensive when the circuit transitions from active mode to standby mode and vice versa. To validate the proposed design approach, experiments are conducted in the Tanner EDA tool from the Mentor Graphics bundle on the proposed circuit designs for a full adder, a chain of 4 inverters, and a 4-bit multiplier, utilizing the 180nm, 130nm, and 90nm TSMC technology nodes. The results show a significant 95-98% reduction in leakage power as well as a 15-20% reduction in dynamic power with a minor increase in delay. The results are compared with notable design approaches available for both the active and standby modes of operation.
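
    The forced-stack part of the approach relies on the well-known stack effect: two series OFF transistors develop a small positive voltage at their shared node, which gives the upper device a negative gate-source voltage (and a higher effective threshold through DIBL) and so cuts subthreshold leakage sharply. The Python sketch below is only a first-order illustration of that mechanism; the device parameters are assumed round numbers, not the article's TSMC models, and body effect is ignored.

    import math

    # Simplified subthreshold model with a DIBL term (assumed parameters).
    I0, N, VT = 1e-7, 1.4, 0.026        # scale current (A), ideality factor, kT/q (V)
    VTH0, DIBL, VDD = 0.35, 0.10, 1.0   # zero-bias Vth (V), DIBL coefficient (V/V), supply (V)

    def i_sub(vgs, vds):
        """Subthreshold drain current; Vth drops with Vds through DIBL."""
        vth = VTH0 - DIBL * vds
        return I0 * math.exp((vgs - vth) / (N * VT)) * (1.0 - math.exp(-vds / VT))

    def stack_node_voltage():
        """Bisect for the intermediate node Vx at which the two stacked
        OFF transistors carry the same current."""
        lo, hi = 0.0, VDD / 2
        for _ in range(60):
            vx = 0.5 * (lo + hi)
            i_top = i_sub(-vx, VDD - vx)   # upper device: gate at 0 V, source at Vx
            i_bot = i_sub(0.0, vx)         # lower device: gate and source at 0 V
            if i_top > i_bot:
                lo = vx
            else:
                hi = vx
        return 0.5 * (lo + hi)

    vx = stack_node_voltage()
    single = i_sub(0.0, VDD)   # one OFF transistor directly between Vdd and ground
    stacked = i_sub(0.0, vx)   # current through the two-transistor stack
    print(f"intermediate node ~ {vx * 1e3:.0f} mV, "
          f"stack leakage ~ {single / stacked:.0f}x lower than a single OFF device")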

    Design and Analysis of Robust Low Voltage Static Random Access Memories.

    Static Random Access Memory (SRAM) is an indispensable part of most modern VLSI designs and dominates silicon area in many applications. In scaled technologies, maintaining high SRAM yield becomes more challenging, since SRAMs are particularly vulnerable to process variations due to 1) the minimum-sized devices used in SRAM bitcells and 2) the large array sizes. At the same time, low-power design is a key focus throughout the semiconductor industry. Since low-voltage operation is one of the most effective ways to reduce power consumption, owing to the quadratic relationship between supply voltage and energy, lowering the minimum operating voltage (Vmin) of SRAM has gained significant interest. This thesis presents four different approaches to designing and analyzing robust low-voltage SRAM: SRAM analysis method improvement, SRAM bitcell development, SRAM peripheral optimization, and advanced device selection. We first describe a novel yield estimation method for bit-interleaved voltage-scaled 8-T SRAMs. Instead of the traditional trade-off between write and read, the trade-off between write and half-select disturb is analyzed. In addition, this analysis proposes a method to find an appropriate Write Word-Line (WWL) pulse width to maximize yield. Second, a low-leakage 10-T SRAM with a speed compensation scheme is proposed. During the sleep mode of a sensor application, SRAM retaining data cannot be shut down, so it is important to minimize leakage in the SRAM. This work adopts several leakage reduction techniques while compensating for the loss in performance. Third, an adaptive write architecture for low-voltage 8-T SRAMs is proposed. By adaptively modulating the WWL pulse width and voltage level, it is possible to achieve low power consumption while maintaining high yield without excessive performance degradation. Finally, low-power circuit design based on heterojunction tunneling transistors (HETTs) is discussed. HETTs have a steep subthreshold swing beneficial for low-voltage operation. Device modeling and the design of logic and SRAM are proposed.
    Ph.D. Electrical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91569/1/daeyeonk_1.pd
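
    The yield analysis described first, trading write completion against half-select disturb as a function of the WWL pulse width, can be illustrated with a tiny Monte Carlo experiment. The sketch below is not the thesis's statistical method; the failure model and the Gaussian parameters are invented solely to show how a pulse width that maximizes yield emerges from the two competing failure modes.

    import random

    random.seed(1)

    # Toy model: each bitcell needs at least t_write of WWL pulse to flip during a
    # write, but flips erroneously if a half-selected access lasts longer than
    # t_disturb. Both limits vary cell-to-cell (assumed Gaussian, normalized units).
    N_CELLS = 10_000
    cells = [(random.gauss(1.0, 0.15),    # t_write
              random.gauss(3.0, 0.40))    # t_disturb
             for _ in range(N_CELLS)]

    def yield_at(pulse_width):
        """Fraction of cells that both write correctly and survive half-select."""
        ok = sum(1 for t_write, t_disturb in cells
                 if t_write <= pulse_width < t_disturb)
        return ok / N_CELLS

    # Sweep the WWL pulse width and keep the value that maximizes yield.
    widths = [w / 100 for w in range(50, 400, 5)]
    best = max(widths, key=yield_at)
    print(f"best WWL pulse width ~ {best:.2f} (normalized), yield ~ {100 * yield_at(best):.2f}%")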

    Enhancing Power Efficient Design Techniques in Deep Submicron Era

    Excessive power dissipation has been one of the major bottlenecks for design and manufacturing over the past couple of decades. Power-efficient design has become more and more challenging as technology scales down to the deep submicron era, which features the dominance of leakage, manufacturing variation, on-chip temperature variation, and higher reliability requirements, among others. Most of the computer-aided design (CAD) tools and algorithms currently used in industry were developed in the pre-deep-submicron era and did not consider these new features explicitly and adequately. Recent research advances in deep submicron design, such as the mechanisms of leakage, the sources and characterization of manufacturing variation, and the causes and models of on-chip temperature variation, provide the opportunity to incorporate these important issues into power-efficient design. We explore this opportunity in this dissertation by demonstrating that significant power reduction can be achieved with only minor modification to existing CAD tools and algorithms. First, we consider peak current, which has become critical for circuit reliability in deep submicron design. Traditional low-power design techniques focus on the reduction of average power; we propose to reduce peak current while keeping the overhead on average power as small as possible. Second, the dual-Vt technique and gate sizing have been used simultaneously for leakage savings, but this approach becomes less effective in deep submicron design. We propose to use newly developed process-induced mechanical stress to enhance its performance. Finally, in deep submicron design, the impact of on-chip temperature variation on leakage and performance becomes more and more significant. We propose a temperature-aware dual-Vt approach to alleviate hot spots and achieve further leakage reduction. We also consider this leakage-temperature dependency in the dynamic voltage scaling approach and discover that a commonly accepted result is incorrect for the current technology. We conduct extensive experiments with popular design benchmarks, using the latest industry CAD tools and design libraries. The results show that our proposed enhancements are promising in power saving and are practical for solving the low-power design challenges of the deep submicron era.
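
    The leakage-temperature dependency that motivates the temperature-aware dual-Vt approach can be sketched with a first-order model: subthreshold leakage grows exponentially as the thermal voltage rises and the threshold voltage falls with temperature. The Python snippet below uses assumed coefficients for illustration only; it is not the dissertation's library model, and the printed ratios are sensitive to those assumptions.

    import math

    # First-order leakage-vs-temperature model (assumed coefficients).
    I0_REF, N = 1e-7, 1.4          # reference scale current (A), ideality factor
    VTH_REF, T_REF = 0.35, 300.0   # threshold voltage (V) and temperature (K) at reference
    K_VTH = -0.7e-3                # Vth temperature coefficient (V/K), assumed

    def leakage(temp_k):
        vt = 8.617e-5 * temp_k                       # thermal voltage kT/q (V)
        vth = VTH_REF + K_VTH * (temp_k - T_REF)     # Vth drops as temperature rises
        return I0_REF * (temp_k / T_REF) ** 2 * math.exp(-vth / (N * vt))

    for t in (300.0, 350.0, 400.0):                  # 27 C, 77 C, 127 C
        print(f"{t:.0f} K: subthreshold leakage ~ {leakage(t) / leakage(300.0):.0f}x the 300 K value")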

    Power-Gated Differential Logic Style Based on Double-Gate Controllable-Polarity Transistors

    This brief presents a novel power-gating technique for differential cascode voltage switch logic (DCVSL) based on double-gate (DG) controllable-polarity field-effect transistors (FETs). DG controllable-polarity FETs, commonly referred to as ambipolar transistors, are devices whose polarity can be reconfigured online by changing the second-gate bias. In this brief, we exploit the online control of ambipolar device polarity to achieve intrinsically power-gated DCVSL circuits, bypassing the use of series sleep transistors. We perform circuit-level simulations and comparisons at the 22-nm technology node, considering silicon-nanowire-based DG controllable-polarity FETs. Experimental results show that ambipolar DCVSL circuits power-gated by the proposed technique have on average 6x smaller standby power with only a 1.1x timing penalty with respect to their non-power-gated versions. Compared with unipolar FinFET-based realizations, our proposal can reduce the standby power consumption of a low-standby-power process by up to 1.9x and, at the same time, increase the performance of a high-performance process by up to 10%.

    Subthreshold and gate leakage current analysis and reduction in VLSI circuits

    CMOS technology has scaled aggressively over the past few decades in an effort to enhance functionality, speed, and packing density per chip. As feature sizes scale down to the sub-100nm regime, leakage power increases significantly and is becoming the dominant component of the total power dissipation. The major contributors to the total leakage current in the deep submicron regime are subthreshold and gate tunneling leakage currents. The leakage reduction techniques developed so far have mostly been devoted to reducing subthreshold leakage. However, at sub-65nm feature sizes, gate leakage current grows faster and is expected to surpass subthreshold leakage current. In this work, an extensive analysis of the circuit-level characteristics of subthreshold and gate leakage currents is performed at the 45nm and 32nm feature sizes. The analysis provides several key observations on the interdependency of gate and subthreshold leakage currents. Based on these observations, a new leakage reduction technique is proposed that optimizes both leakage currents. This technique identifies minimum leakage vectors for a given circuit based on the number of transistors in the OFF state and their position in the stack. The effectiveness of the proposed technique is compared to most of the mainstream leakage reduction techniques by implementing them on ISCAS89 benchmark circuits. The proposed leakage reduction technique proved to be more effective in reducing gate leakage current than subthreshold leakage current. However, when combined with the dual-threshold and variable-threshold CMOS techniques, substantial subthreshold leakage current reduction was also achieved. Total savings of 53% for subthreshold leakage current and 26% for gate leakage current are reported.
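
    The core of the proposed technique, selecting an input vector that puts as many transistors as possible in favorable OFF positions within their stacks, can be illustrated with a toy search. The Python sketch below enumerates input vectors for a three-gate netlist and picks the one with the lowest estimated leakage; the per-pattern leakage numbers and the netlist are invented for illustration and are not the paper's 45nm/32nm data or its actual vector-selection algorithm.

    from itertools import product

    # Relative leakage of a 2-input NAND for each input pattern (A, B), assumed values.
    # Patterns with more OFF NMOS devices in the series stack leak less, and the
    # lowest leakage occurs when both stacked transistors are OFF (0, 0).
    NAND2_LEAK = {(0, 0): 1.0, (0, 1): 2.1, (1, 0): 1.8, (1, 1): 4.3}

    def nand(a, b):
        return int(not (a and b))

    def circuit_leakage(a, b, c):
        """Three NAND2 gates: n1 = NAND(a, b), n2 = NAND(b, c), out = NAND(n1, n2)."""
        n1, n2 = nand(a, b), nand(b, c)
        return NAND2_LEAK[(a, b)] + NAND2_LEAK[(b, c)] + NAND2_LEAK[(n1, n2)]

    # Exhaustively search the primary-input space for the minimum leakage vector.
    best = min(product((0, 1), repeat=3), key=lambda v: circuit_leakage(*v))
    print(f"minimum leakage vector (a, b, c) = {best}, "
          f"relative leakage = {circuit_leakage(*best):.1f}")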

    Improved Techniques for High Performance Noise-Tolerant Domino CMOS Logic Circuits

    The domino CMOS logic circuit family finds a wide variety of applications in microprocessors, digital signal processors, and dynamic memory due to its high speed and low device count. However, there are inevitable problems that degrade the noise immunity of this family: leakage current and charge sharing. Added to these drawbacks is the relatively large power consumption, especially compared to the static complementary CMOS logic family. To make matters worse, these drawbacks become more pronounced with the scaling of CMOS technology. In this thesis, an introduction to domino logic, the impact of CMOS technology scaling on the performance of domino CMOS logic, the three-phase domino logic circuit, high-performance noise-tolerant circuit techniques for CMOS dynamic logic, and other domino logic techniques are studied, and the corresponding domino logic circuits have been designed and simulated. Specifically, the need to decrease dynamic power consumption forces the designer to use a lower power-supply voltage. This in turn necessitates a reduction of the threshold voltage to maintain performance, with an associated increase in subthreshold leakage current, so a properly sized PMOS keeper must be used to compensate for this leakage. It will be found that the speed, which is the major advantage of domino logic compared to other logic styles, degrades with CMOS technology scaling due to the contention current of the keeper. To assure high performance in noise-tolerant techniques, the inevitable effects of leakage currents and charge sharing have to be minimized. In this thesis, a few modifications have also been made to existing domino techniques, and different domino logic circuits are simulated in both the Cadence Virtuoso environment (implemented using the GPDK090 library of 90nm technology) and the Mentor Graphics environment (implemented at different technologies such as Tsmc 035.mod, Tsmc 025.mod, and Tsmc 018.mod). The performance parameters are also compared with other standard architectures of domino logic.
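
    The keeper-sizing tension mentioned above, where a larger keeper replenishes leakage from the dynamic node but its contention current slows evaluation, can be shown with a few lines of arithmetic. The Python sketch below uses assumed currents and capacitance purely for illustration; it is not derived from the thesis's simulated 90nm or TSMC designs.

    # Toy illustration of the domino keeper trade-off: a larger keeper tolerates
    # more leakage/noise at the dynamic node but fights the evaluation pull-down
    # network (contention), slowing the gate. All numbers are assumed.
    C_NODE, VDD = 5e-15, 1.0     # dynamic node capacitance (F), supply (V)
    I_PULLDOWN = 100e-6          # evaluation network on-current (A)
    I_LEAK = 2e-6                # worst-case leakage off the dynamic node (A)

    for keeper_ratio in (0.02, 0.05, 0.10, 0.20):
        i_keeper = keeper_ratio * I_PULLDOWN            # keeper current ~ proportional to its size
        margin = i_keeper / I_LEAK                      # how much leakage the keeper can replenish
        delay = C_NODE * VDD / (I_PULLDOWN - i_keeper)  # contention slows evaluation
        print(f"keeper {keeper_ratio:.2f}x: leakage margin {margin:.1f}x, "
              f"eval delay ~ {delay * 1e12:.0f} ps")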