
    A Low-Voltage, Low-Power 4-bit BCD Adder, designed using the Clock Gated Power Gating, and the DVT Scheme

    This paper proposes a Low-Power, Energy Efficient 4-bit Binary Coded Decimal (BCD) adder design where the conventional 4-bit BCD adder has been modified with the Clock Gated Power Gating Technique. Moreover, the concept of DVT (Dual-vth) scheme has been introduced while designing the full adder blocks to reduce the Leakage Power, as well as, to maintain the overall performance of the entire circuit. The reported architecture of 4-bit BCD adder is designed using 45 nm technology and it consumes 1.384 {\mu}Watt of Average Power while operating with a frequency of 200 MHz, and a Supply Voltage (Vdd) of 1 Volt. The results obtained from different simulation runs on SPICE, indicate the superiority of the proposed design compared to the conventional 4-bit BCD adder. Considering the product of Average Power and Delay, for the operating frequency of 200 MHz, a fair 47.41 % reduction compared to the conventional design has been achieved with this proposed scheme.Comment: To appear in the proceedings of 2013 IEEE International Conference on Signal Processing, Computing and Control (ISPCC,13

    Low Power Processor Architectures and Contemporary Techniques for Power Optimization – A Review

    Technological evolution has significantly increased the number of transistors for a given die area and raised switching speeds from a few MHz to the GHz range. This decline in feature size coupled with the boost in performance demands a lower supply voltage and effective management of power dissipation in chips with millions of transistors. It has triggered a substantial amount of research into power reduction techniques for almost every aspect of the chip, particularly the processor cores it contains. This paper presents an overview of techniques for achieving power efficiency mainly at the processor core level, but also visits related domains such as buses and memories. Various processor parameters and features, such as supply voltage, clock frequency, caches, and pipelining, can be optimized to reduce the power consumption of the processor, and this paper discusses the ways in which they can be optimized. Emerging power-efficient processor architectures are also overviewed and research activities are discussed, which should help the reader identify how these factors contribute to a processor's power consumption. Some of these concepts are already established, whereas others are still active research areas. © 2009 ACADEMY PUBLISHER
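    As background (a standard first-order CMOS model, not taken from the paper itself), the knobs surveyed here act on the two familiar components of chip power: dynamic power, which scales with switching activity, effective capacitance, the square of the supply voltage, and clock frequency, and static power, which is dominated by leakage:

        P_{\mathrm{total}} \approx \underbrace{\alpha\, C_{\mathrm{eff}}\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}} + \underbrace{I_{\mathrm{leak}}\, V_{dd}}_{\text{static (leakage)}}

    This is why voltage scaling (quadratic benefit), frequency scaling, clock gating (reducing α), and cache or pipeline tuning (reducing activity and effective capacitance) recur throughout the survey.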

    Modeling DVFS and Power-Gating Actuators for Cycle-Accurate NoC-Based Simulators

    Networks-on-chip (NoCs) are a widely recognized, viable interconnection paradigm to support the multi-core revolution. One of the major design issues of multicore architectures is still power, which can no longer be attributed mainly to the cores, since the NoC contribution to the overall energy budget is significant. To address both static and dynamic power while preserving NoC performance, different actuators have been exploited in the literature, mainly dynamic voltage and frequency scaling (DVFS) and power gating. Typically, simulation-based tools are employed to explore the huge design space, adopting simplified models of the components. As a consequence, the majority of state-of-the-art work on NoC power-performance optimization does not accurately consider the timing and power overheads of the actuators, or (even worse) does not consider them at all, with the risk of overestimating the benefits of the proposed methodologies. This article presents a simulation framework for power-performance analysis of multicore architectures with a specific focus on the NoC. It integrates accurate power-gating and DVFS models that also encompass their timing and power overheads. The value added by our proposal is manifold: (i) the DVFS and power-gating actuators are modeled starting from SPICE-level simulations; (ii) these models are integrated in the simulation environment; (iii) policy analysis support is plugged into the framework to enable the assessment of different policies; (iv) flexible GALS (globally asynchronous, locally synchronous) support is provided, covering both handshake and FIFO re-synchronization schemes. To demonstrate both the flexibility and extensibility of our proposal, two simple policies exploiting the modeled actuators are discussed in the article.
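    To make the overhead argument concrete, the sketch below (a minimal illustration, not the authors' framework; all numeric values are assumptions) shows how a cycle-level simulator can charge a power-gating actuator for its wake-up energy and latency instead of treating sleep transitions as free, which yields a break-even idle time below which gating is counterproductive:

        from dataclasses import dataclass

        @dataclass
        class PowerGateActuator:
            """Toy power-gating actuator model for a cycle-level NoC simulator."""
            p_active: float       # W, router power while powered on (assumed)
            p_sleep: float        # W, residual power while gated (assumed)
            e_wakeup: float       # J, energy to re-charge the gated domain (assumed)
            t_wakeup_cycles: int  # cycles during which no flit can be accepted

            def breakeven_cycles(self, t_clk: float) -> float:
                """Idle cycles needed before gating saves net energy."""
                saved_per_cycle = (self.p_active - self.p_sleep) * t_clk
                return self.e_wakeup / saved_per_cycle

            def sleep_energy(self, idle_cycles: int, t_clk: float) -> float:
                """Energy of one sleep episode, wake-up overhead included."""
                return self.p_sleep * idle_cycles * t_clk + self.e_wakeup

        # Example: with these assumed parameters, an idle router port must stay
        # idle for ~417 cycles at 1 GHz before gating it actually pays off.
        gate = PowerGateActuator(p_active=5e-3, p_sleep=0.2e-3,
                                 e_wakeup=2e-9, t_wakeup_cycles=10)
        print(gate.breakeven_cycles(t_clk=1e-9))

    A DVFS actuator can be modeled in the same spirit, charging a transition latency and energy to every voltage-frequency change.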

    Embracing Low-Power Systems with Improvement in Security and Energy-Efficiency

    As economies around the world align more toward the use of computing systems, the global energy demand for computing is increasing rapidly. Additionally, the boom in AI-based applications and services has already driven the spread of specialized computing hardware architectures (accelerators) for AI. A large share of research in industry and academia is focused on making all kinds of power-hungry computing architectures more energy efficient, and this dissertation adds to these efforts. Aggressive voltage underscaling of chips is one of the effective low-power paradigms for providing energy efficiency. This dissertation identifies and addresses the reliability and performance problems associated with this paradigm and develops novel energy-efficient approaches. Specifically, the properties of a low-power security primitive are improved, and higher performance is unlocked in an AI accelerator (the Google TPU) in an aggressively voltage-underscaled environment. In addition, novel power-saving opportunities are identified by characterizing the usage pattern of a baseline TPU with rigorous mathematical analysis.

    Scalable Analysis, Verification and Design of IC Power Delivery

    Due to recent aggressive process scaling into the nanometer regime, power delivery network design faces many challenges that place more stringent and specific requirements on EDA tools. From the perspective of analysis, for example, simulation efficiency for large grids must be improved, and the entire network, including off-chip models and nonlinear devices, must be analyzable. Gated power delivery networks have multiple on/off operating conditions that need to be fully verified against the design requirements. Good power delivery network designs not only have to save wiring resources for signal routing, but also need optimal parameters for system components such as decoupling capacitors, voltage regulators, and converters. This dissertation presents new methodologies to address these challenging problems. First, a novel parallel partitioning-based approach that provides a flexible, locality-based network partitioning scheme is proposed for power grid static analysis. In addition, a fast combined CPU-GPU analysis engine that adopts a boundary-relaxation method to encompass several simulation strategies is developed to simulate power delivery networks with off-chip models and active circuits. These two analysis approaches achieve scalable simulation runtime. Then, for gated power delivery networks, the challenge posed by the large verification space is addressed by developing a strategy that efficiently identifies a small number of candidates for the worst-case operating condition, reducing the computational complexity from O(2^N) to O(N). Finally, motivated by a proposed two-level hierarchical optimization, this dissertation presents a novel locality-driven partitioning scheme to facilitate divide-and-conquer-based scalable wire sizing for large power delivery networks; simultaneous sizing of multiple partitions is allowed, which leads to substantial runtime improvement. Moreover, the electrical interactions between active regulators/converters and passive networks, and their influence on key system design specifications, are analyzed comprehensively. With the derived design insights, the system-level co-design of a complete power delivery network is facilitated by an automatic optimization flow. Results show significant performance enhancement from the co-design.
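    For context, the core computation behind power grid static analysis is a nodal-analysis solve of G v = i over the resistive grid. The sketch below (a small dense illustration with assumed element values, not the dissertation's parallel partitioning solver) builds a toy on-chip mesh with corner Vdd pads and per-node block current sinks and reports the resulting IR drop:

        import numpy as np

        def ir_drop(n=8, r_seg=0.05, r_pkg=0.5, vdd=1.0, i_block=1e-3):
            """Smallest and largest IR drop on an n x n resistive power mesh (toy model)."""
            N = n * n
            idx = lambda r, c: r * n + c
            G = np.zeros((N, N))
            g = 1.0 / r_seg
            # Stamp the grid segment conductances (standard nodal analysis).
            for r in range(n):
                for c in range(n):
                    for dr, dc in ((0, 1), (1, 0)):
                        rr, cc = r + dr, c + dc
                        if rr < n and cc < n:
                            a, b = idx(r, c), idx(rr, cc)
                            G[a, a] += g; G[b, b] += g
                            G[a, b] -= g; G[b, a] -= g
            # Every node sinks a block current; the four corner pads connect to
            # Vdd through the package resistance (stamped as a Norton equivalent).
            i = np.full(N, -i_block)
            for pad in (idx(0, 0), idx(0, n - 1), idx(n - 1, 0), idx(n - 1, n - 1)):
                G[pad, pad] += 1.0 / r_pkg
                i[pad] += vdd / r_pkg
            v = np.linalg.solve(G, i)
            return vdd - v.max(), vdd - v.min()

        print(ir_drop())   # prints the minimum and maximum IR drop across the mesh

    Real grids have millions of nodes and require sparse, partitioned, or GPU-accelerated solvers, which is precisely the scalability gap the dissertation targets.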

    Power Efficient Data-Aware SRAM Cell for SRAM-Based FPGA Architecture

    The design of a low-power SRAM cell has become a necessity in today's FPGAs, because SRAM is a critical component of FPGA design and consumes a large fraction of the total power. This chapter provides an overview of the factors responsible for power consumption in FPGAs and discusses design techniques for low-power SRAM-based FPGAs at the system, device, and architecture levels. Finally, the chapter proposes a data-aware dynamic SRAM cell to control the power consumption of the cell. The stack effect is adopted in the design to reduce leakage current, and the peripheral circuits, such as the address decoder, write/read enable circuits, and sense amplifier, are modified to implement a power-efficient SRAM-based FPGA.
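    The stack effect invoked here follows from the standard subthreshold current model (a general device equation, not specific to the proposed cell):

        I_{\mathrm{sub}} = I_{0}\, e^{\,(V_{GS}-V_{th})/(n V_{T})}\left(1 - e^{-V_{DS}/V_{T}}\right)

    When two series transistors are both off, the intermediate node settles at a small positive voltage V_x, so the upper device sees V_GS = -V_x < 0 and a body-effect-raised V_th, while the lower device sees a reduced V_DS; together these effects cut the stack's leakage to a small fraction of that of a single off transistor.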

    Design of Low Power Data Preserving Flip Flop Using MTCMOS Technique

    In order to reduce overall power consumption, a well-known technique is to scale supply voltages. However, to maintain performance, device threshold voltages must scale as well, which causes subthreshold leakage currents to increase exponentially. The threshold voltage affects two parameters, the delay and the subthreshold leakage current: the smaller the threshold voltage, the smaller the delay, but the larger the subthreshold leakage current. Controlling subthreshold leakage has been explored extensively in the literature, especially in the context of reducing leakage in burst-mode circuits, where the system spends the majority of its time in an idle standby, or sleep, state in which no computation takes place. MTCMOS, or multi-threshold CMOS, has been proposed as a very effective technique for reducing leakage currents during standby by utilizing high-Vth sleep devices to gate the power supplies of a low-Vth logic block. Although MTCMOS circuit techniques are effective for controlling leakage currents in combinational logic, a drawback is that they can cause internal nodes to float and cannot be used directly in standard memory cells without corrupting stored data. As a result, several researchers have explored MTCMOS latch designs that reduce leakage currents yet maintain state during standby modes. In this work, a data-preserving flip-flop with reduced leakage power is designed using the MTCMOS technique in 90 nm technology with the Cadence tool. Simulation results show that the leakage power is reduced by 25.70% compared to a CMOS flip-flop.
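    The delay/leakage trade-off described above is commonly captured by two first-order relations (general models, not taken from this paper): the alpha-power-law gate delay and the exponential dependence of subthreshold leakage on the threshold voltage,

        t_{d} \propto \frac{V_{dd}}{(V_{dd}-V_{th})^{\alpha}}, \qquad I_{\mathrm{leak}} \propto e^{-V_{th}/(n V_{T})}

    so lowering V_th speeds up the logic but raises leakage exponentially, which is exactly the gap the high-Vth sleep transistors in MTCMOS close during standby.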

    Research on low power technology by AC power supply circuits

    Degree system: new; Report number: Kou No. 3692; Degree type: Doctor of Engineering; Date conferred: 2012/9/15; Waseda degree number: Shin 6060. Waseda University