7,701 research outputs found

    A Combined Gate Replacement and Input Vector Control Approach for Leakage Current Reduction

    Input vector control (IVC) is a popular technique for leakage power reduction. It utilizes the transistor stack effect in CMOS gates by applying a minimum leakage vector (MLV) to the primary inputs of combinational circuits during the standby mode. However, the IVC technique becomes less effective for circuits of large logic depth because the input vector at the primary inputs has little impact on the leakage of internal gates at high logic levels. In this paper, we propose a technique to overcome this limitation by replacing internal gates that are in their worst leakage states with other library gates while maintaining the circuit's correct functionality during the active mode. This modification of the circuit does not require changes to the design flow, but it opens the door for further leakage reduction when the MLV is not effective. We then present a divide-and-conquer approach that integrates gate replacement, an optimal MLV searching algorithm for tree circuits, and a genetic algorithm to connect the tree circuits. Our experimental results on all the MCNC91 benchmark circuits reveal that 1) the gate replacement technique alone can achieve a 10% leakage current reduction over the best known IVC methods with no delay penalty and little area increase; 2) the divide-and-conquer approach outperforms the best pure IVC method by 24% and the existing control point insertion method by 12%; and 3) compared with the leakage achieved by the optimal MLV in small circuits, the gate replacement heuristic and the divide-and-conquer approach reduce leakage by 13% and 17% on average, respectively.
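    As a rough illustration of the input vector control idea described above, the sketch below exhaustively evaluates every primary-input vector of a tiny gate-level netlist and keeps the one with the lowest total leakage. The gate set, the per-state leakage numbers, and the example circuit are hypothetical; a real flow would use characterized library leakage data and the paper's heuristics rather than brute force, which is only feasible for circuits with very few primary inputs.

    # Minimal sketch of input vector control (IVC): exhaustive search for the
    # minimum leakage vector (MLV) of a small combinational netlist.
    # The leakage table (arbitrary units) and the example circuit are made up.
    from itertools import product

    # Hypothetical per-gate leakage indexed by the gate's input state.
    LEAKAGE = {
        ("NAND2", (0, 0)): 1.0, ("NAND2", (0, 1)): 1.8,
        ("NAND2", (1, 0)): 2.1, ("NAND2", (1, 1)): 4.0,   # worst state for NAND2
        ("NOR2",  (0, 0)): 3.9, ("NOR2",  (0, 1)): 1.9,
        ("NOR2",  (1, 0)): 1.7, ("NOR2",  (1, 1)): 1.1,
    }

    LOGIC = {
        "NAND2": lambda a, b: 1 - (a & b),
        "NOR2":  lambda a, b: 1 - (a | b),
    }

    # Netlist in topological order: output net -> (gate type, input nets).
    NETLIST = {
        "n1": ("NAND2", ("a", "b")),
        "n2": ("NOR2",  ("b", "c")),
        "y":  ("NAND2", ("n1", "n2")),
    }
    PRIMARY_INPUTS = ("a", "b", "c")

    def total_leakage(vector):
        """Simulate the netlist for one input vector and sum per-gate leakage."""
        values = dict(zip(PRIMARY_INPUTS, vector))
        leak = 0.0
        for net, (gate, ins) in NETLIST.items():
            in_vals = tuple(values[i] for i in ins)
            values[net] = LOGIC[gate](*in_vals)
            leak += LEAKAGE[(gate, in_vals)]
        return leak

    # Exhaustive MLV search over all primary-input combinations.
    mlv = min(product((0, 1), repeat=len(PRIMARY_INPUTS)), key=total_leakage)
    print("MLV:", mlv, "leakage:", total_leakage(mlv))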

    A Combined Gate Replacement and Input Vector Control Approach

    Due to the increasing role of leakage power in a CMOS circuit's total power dissipation, leakage reduction has attracted a lot of attention recently. Input vector control (IVC) takes advantage of the transistor stack effect to apply the minimum leakage vector (MLV) to the primary inputs of the circuit during the standby mode. However, IVC techniques become less effective for circuits of large logic depth because the MLV at the primary inputs has little impact on internal gates at high logic levels. In this paper, we propose a technique to overcome this limitation by directly controlling the inputs to the internal gates that are in their worst leakage states. Specifically, we propose a gate replacement technique that replaces such gates with other library gates while maintaining the circuit's correct functionality in the active mode. This modification of the circuit does not require changes to the design flow, but it opens the door for further leakage reduction when the MLV is not effective. We then describe a divide-and-conquer approach that combines the gate replacement and input vector control techniques. It integrates an algorithm that finds the optimal MLV for tree circuits, a fast gate replacement heuristic, and a genetic algorithm that connects the tree circuits. We have conducted experiments on all the MCNC91 benchmark circuits. The results reveal that 1) the gate replacement technique itself can provide 10% more leakage current reduction than the best known IVC methods with no delay penalty and little area increase; 2) the divide-and-conquer approach outperforms the best pure IVC method by 24% and the existing control point insertion method by 12%; and 3) when we obtain the optimal MLV for small circuits from exhaustive search, the proposed gate replacement alone can still reduce leakage current by 13%, while the divide-and-conquer approach reduces it by 17%.
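    To complement the MLV search sketch above, the following is a loose illustration of the gate replacement idea described in this abstract: gates whose standby input state is the worst-leakage state are swapped for a wider gate that takes a sleep signal as an extra input, so the active-mode function is preserved while the standby state improves. The netlist representation, the worst-state table, and the NAND2-to-NAND3 substitution are illustrative assumptions, not the paper's exact library mapping.

    # Rough sketch of gate replacement for leakage reduction (illustrative only).
    # A NAND2 stuck in its worst-leakage standby state (both inputs high) is
    # replaced by a NAND3 whose extra input is the sleep signal: SLEEP = 1 in
    # active mode keeps the original function, SLEEP = 0 in standby forces the
    # output high and breaks the worst-case stack condition.

    WORST_STATE = {"NAND2": (1, 1)}   # hypothetical worst-leakage input state

    def replace_worst_gates(netlist, standby_values):
        """Return a new netlist where worst-state NAND2 gates become NAND3(.., SLEEP)."""
        new_netlist = {}
        for net, (gate, ins) in netlist.items():
            state = tuple(standby_values[i] for i in ins)
            if gate == "NAND2" and state == WORST_STATE["NAND2"]:
                # Functionally equivalent in active mode because SLEEP is tied to 1.
                new_netlist[net] = ("NAND3", ins + ("SLEEP",))
            else:
                new_netlist[net] = (gate, ins)
        return new_netlist

    # Standby net values after some MLV has been applied to the primary inputs.
    netlist = {"n1": ("NAND2", ("a", "b")), "y": ("NAND2", ("n1", "c"))}
    standby = {"a": 1, "b": 1, "n1": 0, "c": 1}   # only n1's gate sees the worst state
    print(replace_worst_gates(netlist, standby))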

    Leakage Power Reduction Techniques in Deep Submicron Technologies for VLSI Applications

    The leakage power dissipation has become one of the most challenging issues in low-power VLSI circuit design, especially for on-chip devices, as it doubles every two years [4]-[5]. The scaling down of the threshold voltage has contributed enormously to the increase in subthreshold leakage current, making the static (leakage) power dissipation very high. According to the International Technology Roadmap for Semiconductors (ITRS), leakage power may contribute significantly to the total power dissipation [1]. Battery-operated devices that spend long periods in standby mode may be drained very quickly by leakage power. In submicron CMOS technologies, leakage power dissipation therefore plays a significant role. Various low-power design techniques for efficient minimization of leakage power have been proposed in the literature. A comprehensive study and analysis of these leakage power minimization techniques is presented in this paper, focusing mainly on circuit performance parameters. The literature implies that a VLSI circuit designer can effectively choose an appropriate leakage power minimization technique for a specific application only through a sequential analytical approach.

    Particle Swarm Optimization Algorithm for Leakage Power Reduction in VLSI Circuits

    Leakage power is the dominant source of power dissipation in nanometer technology. As per the International Technology Roadmap for Semiconductors (ITRS), static power dominates dynamic power as technology advances. One of the well-known techniques used for leakage reduction is Input Vector Control (IVC). Owing to the stacking effect exploited by IVC, leakage is lowest when the Minimum Leakage Vector (MLV) is applied at the inputs of the test circuit. This paper introduces the Particle Swarm Optimization (PSO) algorithm to the field of VLSI to find the minimum leakage vector. Another optimization algorithm, the Genetic Algorithm (GA), is also implemented to search for the MLV and is compared with PSO in terms of the number of iterations. The proposed approach is validated by simulating a few test circuits. Both the GA and PSO algorithms are implemented in Verilog HDL, and the simulations are carried out using Xilinx 9.2i. The simulation results show that the PSO-based approach is better at finding the MLV than the GA-based implementation, as the PSO technique uses less runtime than the GA. To the best of the author's knowledge, this is the first time the PSO algorithm has been used in the IVC technique to optimize power, and it is quite successful in searching for the MLV.
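    As a hedged illustration of how a particle swarm could search for an MLV, the sketch below runs a basic binary PSO over the input space of a stand-in leakage cost function. The cost function, particle count, and PSO constants are placeholders rather than the parameters used in the paper, which evaluates leakage through Verilog HDL simulation of the circuit for each candidate vector.

    # Minimal binary PSO sketch for searching a minimum leakage vector (MLV).
    # leakage() is a stand-in cost; a real flow would obtain it from circuit
    # simulation of each candidate input vector.
    import math
    import random

    N_INPUTS, N_PARTICLES, N_ITER = 8, 10, 50
    W, C1, C2 = 0.7, 1.5, 1.5                 # inertia and acceleration constants

    def leakage(vector):
        """Hypothetical leakage estimate (arbitrary units) for an input vector."""
        return sum((i % 3 + 1) * bit for i, bit in enumerate(vector)) + 1.0

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    particles = [[random.randint(0, 1) for _ in range(N_INPUTS)] for _ in range(N_PARTICLES)]
    velocities = [[0.0] * N_INPUTS for _ in range(N_PARTICLES)]
    pbest = [p[:] for p in particles]
    gbest = min(pbest, key=leakage)[:]

    for _ in range(N_ITER):
        for p in range(N_PARTICLES):
            for d in range(N_INPUTS):
                r1, r2 = random.random(), random.random()
                velocities[p][d] = (W * velocities[p][d]
                                    + C1 * r1 * (pbest[p][d] - particles[p][d])
                                    + C2 * r2 * (gbest[d] - particles[p][d]))
                # Binary PSO: sigmoid of the velocity gives the probability of a 1 bit.
                particles[p][d] = 1 if random.random() < sigmoid(velocities[p][d]) else 0
            if leakage(particles[p]) < leakage(pbest[p]):
                pbest[p] = particles[p][:]
                if leakage(pbest[p]) < leakage(gbest):
                    gbest = pbest[p][:]

    print("best vector:", gbest, "estimated leakage:", leakage(gbest))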

    The impact of design techniques on the reduction of power consumption of multimedia SoCs

    Advisor: Guido Costa Souza de Araújo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Abstract: The semiconductor industry has always faced strong demands to solve the problem of heat dissipation and to reduce the power consumption of electronic devices. This trend has intensified in recent years with the push for environmental sustainability. Correctly designing a low-power electronic system is a problem with multiple levels of complexity and requires systematic strategies in its construction. Moreover, the adoption of any power reduction technique is always tied to specific goals and causes some impacts on the project. Although designers understand these impacts qualitatively, the quantitative details are still unknown or kept only within companies' know-how. In this work, based on experimental results from an industrial SoC platform, we try to quantify the impacts of using low-power techniques. We relate the power reduction factor of each technique to its impact in terms of area, performance, and implementation and verification effort. In the absence of such data, which relate engineering effort to power consumption goals, uncertainties and delays are frequent in the project schedule. We hope that such guidelines can help project architects select the appropriate techniques to reduce power consumption within the project's budget and schedule. Master's degree in Computer Science.

    Design Space Re-Engineering for Power Minimization in Modern Embedded Systems

    Power minimization is a critical challenge for modern embedded system design. Recently, due to the rapid increase in system complexity and power density, there has been a growing need for power control techniques at various design levels. Meanwhile, due to technology scaling, leakage power has become a significant part of the power dissipation in CMOS circuits, and new techniques are needed to reduce it. As a result, many new power minimization techniques have been proposed, such as voltage islands, gate sizing, multiple supply and threshold voltages, power gating, and input vector control. These design options further enlarge the design space and make it prohibitively expensive to explore for the most energy-efficient design solution. Consequently, heuristic and randomized algorithms are frequently used to explore the design space, seeking sub-optimal solutions that meet time-to-market requirements. These algorithms are based on the idea of truncating the design space and restricting the search to a subset of the original design space. While this approach can effectively reduce the runtime of the search, it may also exclude high-quality design solutions and degrade design quality. When the solution to one problem is used as the base for another problem, such quality degradation accumulates. In modern electronic system design, when several such algorithms are used in series to solve problems at different design levels, the final solution can be far from the optimal one. In my Ph.D. work, I develop a re-engineering methodology to facilitate exploring the design space of power-efficient embedded system design. The direct goal is to enhance the performance of existing low-power techniques. The methodology is based on the idea that design quality can be improved by iteratively "re-shaping" the design space around the "bad" structure in the obtained design solutions, while the search runtime can be reduced by guidance from previous explorations. The approach can be described in three phases: (1) apply the existing techniques to obtain a sub-optimal solution; (2) analyze the solution and expand the design space accordingly; and (3) re-apply the technique to re-explore the enlarged design space. We apply this methodology at different levels of embedded system design to minimize power: (i) switching power reduction in sequential logic synthesis; (ii) gate-level static leakage current reduction; (iii) dual-threshold-voltage CMOS circuit design; and (iv) a system-level energy-efficient detection scheme for wireless sensor networks. Extensive experiments have been conducted, and the results show that this methodology can effectively enhance the power efficiency of existing embedded system design flows with very little overhead.
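    A highly simplified sketch of the three-phase loop summarized above, assuming generic solve/analyze/expand callbacks; the function names, the toy cost function, and the stopping criterion are placeholders rather than the dissertation's actual algorithms.

    # Skeleton of the re-engineering loop described in the abstract:
    # (1) solve with an existing heuristic, (2) analyze the "bad" structure of
    # the solution and expand the design space around it, (3) re-solve in the
    # enlarged space. All callbacks are placeholders for the real techniques.

    def re_engineer(design_space, solve, analyze, expand, cost, max_rounds=5):
        best = solve(design_space)
        for _ in range(max_rounds):
            bad_structure = analyze(best)        # e.g. gates stuck in worst leakage states
            if not bad_structure:
                break                            # nothing left to re-shape
            design_space = expand(design_space, bad_structure)
            candidate = solve(design_space)      # re-explore the enlarged space
            if cost(candidate) < cost(best):
                best = candidate
        return best

    # Toy usage: minimize a quadratic cost over a small integer design space.
    cost = lambda x: (x - 7) ** 2
    result = re_engineer(
        design_space=list(range(5)),
        solve=lambda s: min(s, key=cost),
        analyze=lambda best: [best] if cost(best) > 0 else [],   # "bad" while not optimal
        expand=lambda s, bad: s + [max(s) + 1],                  # enlarge the space
        cost=cost,
    )
    print(result)   # moves toward 7 as the space is enlarged round by round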

    CAD Tools for Synthesis of Sleep Convention Logic

    This dissertation proposes an automated design flow for the Sleep Convention Logic (SCL) asynchronous design style. The proposed flow synthesizes synchronous RTL into an SCL netlist. The flow utilizes commercial design tools, while supplementing missing functionality with custom tools. A method for determining the performance bottleneck in an SCL design is proposed, along with a constraint-driven method to increase the performance of linear SCL pipelines. Several enhancements to SCL are also proposed, including techniques to reduce the number of registers and the total sleep capacitance in an SCL design.

    Simulation study of scaling design, performance characterization, statistical variability and reliability of decananometer MOSFETs

    This thesis describes a comprehensive, simulation-based scaling study – including device design, performance characterization, and the impact of statistical variability – on decananometer bulk MOSFETs. After careful calibration of the fabrication processes and electrical characteristics for n- and p-MOSFETs with 35 nm physical gate length, 1 nm EOT and stress engineering, the simulated devices closely match the performance of contemporary 45 nm CMOS technologies. Scaling to 25 nm, 18 nm and 13 nm gate length n- and p-devices follows generalized scaling rules, augmented by physically realistic constraints and the introduction of high-k/metal-gate stacks. The scaled devices attain the performance stipulated by the ITRS. Device a.c. performance is analyzed at device and circuit level. Extrinsic parasitics become critical to nano-CMOS device performance. The thesis describes the device capacitance components, analyzes the CMOS inverter, and obtains new insights into inverter propagation delay in nano-CMOS. A projection of the a.c. performance of the scaled devices is obtained. The statistical variability of electrical characteristics, due to intrinsic parameter fluctuation sources, in contemporary and scaled decananometer MOSFETs is systematically investigated for the first time. The statistical variability sources: random discrete dopants, gate line edge roughness and poly-silicon granularity are simulated, in combination, in an ensemble of microscopically different devices. An increasing trend in the standard deviation of the threshold voltage as a function of scaling is observed. The introduction of high-k/metal gates improves electrostatic integrity and slows this trend. Statistical evaluations of the variability in Ion and Ioff as a function of scaling are also performed. For the first time, the impact of strain on statistical variability is studied. Gate line edge roughness results in areas of local channel shortening, accompanied by locally increased strain, both effects increasing the local current. Variations are observed both in the drive current and in the drive current enhancement normally expected from the application of strain. In addition, the effects of shallow trench isolation (STI) on MOSFET performance and on its statistical variability are investigated for the first time. The inverse narrow-width effect of STI enhances the current density adjacent to it. This leads to a local enhancement of the influence of junction shapes adjacent to the STI. There is also a statistical impact on the threshold voltage due to random STI-induced traps at the silicon/oxide interface.

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. The computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. At the same time, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as a promising low-power computing platform. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC has yet to see wide-scale commercial adoption. This is because circuits and systems operating with NTC suffer from several problems, including increased sensitivity to process variation, reliability problems, performance degradation, and security vulnerabilities, to name a few. To realize its potential, we need designs, techniques, and solutions to overcome the challenges associated with NTC circuits and systems. The readers of this book will be able to familiarize themselves with recent advances in electronic systems, focusing on near-threshold computing.