Variability-aware low-power techniques for nanoscale mixed-signal circuits
New circuit design techniques that accommodate the lower supply voltages necessary for portable systems need to be integrated into semiconductor intellectual property (IP) cores. Systems that once worked at 3.3 V or 2.5 V now need to work at 1.8 V or lower without performance degradation. In addition, the fluctuation of device characteristics caused by process variation in nanometer technologies manifests as design yield loss. The numerous parasitic effects induced by layouts, especially in high-performance and high-speed circuits, pose a further problem for IC design: the lack of exact layout information during circuit sizing leads to long design iterations involving time-consuming runs of complex tools. There is therefore a strong need for low-power, high-performance, parasitic-aware, and process-variation-tolerant circuit design. This dissertation proposes methodologies and techniques to achieve variability-, power-, performance-, and parasitic-aware circuit designs. Three approaches are proposed: a single-iteration automatic approach, a hybrid Monte Carlo and design-of-experiments (DOE) approach, and a corner-based approach. Widely used mixed-signal circuits such as the analog-to-digital converter (ADC), voltage-controlled oscillator (VCO), voltage level converter, and active pixel sensor (APS) have been designed at nanoscale complementary metal oxide semiconductor (CMOS) nodes and subjected to the proposed methodologies. The effectiveness of the proposed methodologies has been demonstrated through exhaustive simulations. Beyond these methodologies, the application of dual-oxide and dual-threshold techniques at the circuit level to minimize power and leakage is also explored.
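As a rough illustration of how a hybrid corner/statistical flow can work, the sketch below contrasts a two-level design-of-experiments corner enumeration with Monte Carlo sampling on a toy gate-delay model. The parameter set, sigmas, and delay model are invented for illustration and are not the dissertation's actual circuits or models.

```python
# Sketch of the hybrid corner/statistical idea behind a DOE + Monte Carlo
# flow: enumerate two-level DOE corners for worst-case screening, then use
# Monte Carlo for the statistical spread. All numbers are illustrative.
import itertools
import random

NOMINAL = {"vth": 0.35, "tox": 1.2e-9, "leff": 45e-9}   # assumed process params
SIGMA   = {"vth": 0.02, "tox": 0.05e-9, "leff": 2e-9}    # assumed 1-sigma spreads

def delay(p):
    """Toy gate-delay model: slower with higher vth/leff, faster with thin tox."""
    return 1e-12 * (p["vth"] / 0.35) * (p["leff"] / 45e-9) * (p["tox"] / 1.2e-9)

# DOE: full-factorial +/-3-sigma corners (2**3 = 8 runs).
corners = [
    {k: NOMINAL[k] + s * 3 * SIGMA[k] for k, s in zip(NOMINAL, signs)}
    for signs in itertools.product((-1, 1), repeat=len(NOMINAL))
]
worst_corner = max(delay(c) for c in corners)

# Monte Carlo: sample the same parameters as independent Gaussians.
samples = [
    {k: random.gauss(NOMINAL[k], SIGMA[k]) for k in NOMINAL} for _ in range(10000)
]
delays = sorted(delay(s) for s in samples)
print(f"DOE worst corner : {worst_corner:.3e} s")
print(f"MC 99.9th pct    : {delays[int(0.999 * len(delays))]:.3e} s")
```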
ILP-based Supply and Threshold Voltage Assignment For Total Power Minimization
In this paper we present an ILP-based method to simultaneously assign supply and threshold voltages to individual gates for dynamic and leakage power minimization. In our three-step approach, low-power min-flipflop (FF) retiming is first performed to reduce the clock period while taking FF delay and power into consideration. Next, the subsequent voltage assignment, formulated as an ILP, makes the best possible supply/threshold voltage assignment under the clock-period constraint set by the retiming. Finally, a post-process further refines the voltage assignment solution by exploiting the remaining timing slack in the circuit. Related experiments show that the min-FF retiming plus simultaneous Vdd/Vth assignment approach outperforms the existing max-FF retiming plus Vdd-only assignment approach.
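A minimal sketch of the kind of ILP this describes, using the open-source PuLP modeler: binary selection variables pick one (Vdd, Vth) option per gate, total power is minimized, and a single critical path must meet the retiming-imposed clock period. The gate list, power/delay numbers, and single-path constraint are invented simplifications of the paper's full timing-graph formulation.

```python
# Hedged sketch of simultaneous Vdd/Vth assignment as an ILP (PuLP modeler).
import pulp

# (power, delay) per (Vdd, Vth) option -- assumed numbers, not the paper's.
OPTIONS = {  # option: (power_uW, delay_ps)
    ("hiVdd", "loVth"): (10.0, 50.0),   # fast, power hungry
    ("hiVdd", "hiVth"): (6.0, 70.0),
    ("loVdd", "loVth"): (4.0, 90.0),
    ("loVdd", "hiVth"): (2.0, 130.0),   # slow, lowest leakage
}
GATES = ["g1", "g2", "g3", "g4"]        # one critical path through all gates
T_CLK = 360.0                           # clock period (ps) set by the retiming

prob = pulp.LpProblem("vdd_vth_assignment", pulp.LpMinimize)
x = {
    (g, o): pulp.LpVariable(f"x_{g}_{o[0]}_{o[1]}", cat="Binary")
    for g in GATES for o in OPTIONS
}
# Objective: total power over all gates.
prob += pulp.lpSum(OPTIONS[o][0] * x[g, o] for g in GATES for o in OPTIONS)
# Exactly one (Vdd, Vth) option per gate.
for g in GATES:
    prob += pulp.lpSum(x[g, o] for o in OPTIONS) == 1
# Path delay must meet the retiming-imposed clock period.
prob += pulp.lpSum(OPTIONS[o][1] * x[g, o] for g in GATES for o in OPTIONS) <= T_CLK

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for g in GATES:
    choice = next(o for o in OPTIONS if x[g, o].value() == 1)
    print(g, "->", choice)
```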
Design Space Re-Engineering for Power Minimization in Modern Embedded Systems
Power minimization is a critical challenge for modern embedded system design. Due to the rapid increase in system complexity and power density, there is a growing need for power control techniques at various design levels. Meanwhile, with technology scaling, leakage power has become a significant part of power dissipation in CMOS circuits, and new techniques are needed to reduce it. As a result, many new power minimization techniques have been proposed, such as voltage islands, gate sizing, multiple supply and threshold voltages, power gating, and input vector control. These design options further enlarge the design space and make it prohibitively expensive to explore for the most energy-efficient design solution. Consequently, heuristic and randomized algorithms are frequently used to explore the design space, seeking sub-optimal solutions that meet time-to-market requirements. These algorithms are based on the idea of truncating the design space and restricting the search to a subset of the original space. While this approach can effectively reduce search runtime, it may also exclude high-quality design solutions and degrade design quality. When the solution to one problem is used as the base for another, such quality degradation accumulates; in modern electronic system design, when several such algorithms are used in series across different design levels, the final solution can be far from optimal.
In my Ph.D. work, I develop a re-engineering methodology to facilitate exploring the design space of power-efficient embedded systems. The direct goal is to enhance the performance of existing low-power techniques. The methodology is based on the idea that design quality can be improved by iteratively "re-shaping" the design space around the "bad" structure in the obtained design solutions, while search runtime is reduced by guidance from previous explorations. The approach proceeds in three phases, sketched in code below: (1) apply the existing techniques to obtain a sub-optimal solution; (2) analyze the solution and expand the design space accordingly; and (3) re-apply the technique to re-explore the enlarged design space.
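A toy sketch of this explore/analyze/re-explore loop, with an invented integer design space and cost function standing in for a real design problem:

```python
# Toy sketch of the three-phase re-engineering loop: explore a truncated
# design space, expand the space around the best solution found, re-explore.
# The integer points and bumpy objective are invented stand-ins.
import random

def objective(p):
    # Invented multimodal cost: heuristics in a truncated space can get stuck.
    return (p - 37) ** 2 % 101

def explore(space):
    """Phase 1: randomized search restricted to the current design space."""
    return min(random.sample(sorted(space), min(64, len(space))), key=objective)

def expand(space, best, radius=10):
    """Phase 2: 're-shape' the space by re-admitting neighbours of the best
    solution that the initial truncation had excluded."""
    return space | set(range(best - radius, best + radius + 1))

space = set(range(0, 1000, 17))           # aggressively truncated initial space
best = explore(space)
for _ in range(3):                        # Phase 3: re-explore the enlarged space
    space = expand(space, best)
    best = min(best, explore(space), key=objective)
print("best point:", best, "cost:", objective(best))
```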
We apply this methodology at different levels of embedded system design to minimize power: (i) switching power reduction in sequential logic synthesis; (ii) gate-level static leakage current reduction; (iii) dual-threshold-voltage CMOS circuit design; and (iv) a system-level energy-efficient detection scheme for wireless sensor networks. Extensive experiments have been conducted, and the results show that this methodology can effectively enhance the power efficiency of existing embedded system design flows with very little overhead.
Reliability in the face of variability in nanometer embedded memories
In this thesis, we have investigated the impact of parametric variations on the behaviour of one performance-critical processor structure: embedded memories. As variations manifest as a spread in power and performance, we first propose a novel modeling methodology that helps evaluate the impact of circuit-level optimizations on architecture-level design choices. Choices made at the design stage ensure that conflicting requirements from higher levels are decoupled. We then complement such design-time optimizations with a runtime mechanism that takes advantage of adaptive body-biasing to lower power whilst improving performance in the presence of variability. Our proposal uses novel fully-digital variation-tracking hardware based on embedded DRAM (eDRAM) cells to monitor run-time changes in cache latency and leakage. A special fine-grain body-bias generator uses the measurements to generate the optimal body bias needed to meet the required yield targets. A novel variation-tolerant and soft-error-hardened eDRAM cell is also proposed as an alternative candidate for replacing existing SRAM-based designs in latency-critical memory structures. In the ultra-low-power domain, where reliable operation is limited by the minimum voltage of operation (Vddmin), we analyse the impact of failures on cache functional margin and functional yield. Towards this end, we have developed a fully automated tool (INFORMER) capable of estimating memory-wide metrics such as power, performance, and yield accurately and rapidly. Using the tool, we then evaluate the effectiveness of a new class of hybrid techniques in improving cache yield through failure prevention and correction. A holistic perspective of memory-wide metrics helps us arrive at design choices optimized simultaneously for the multiple metrics needed to maintain lifetime requirements.
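A hedged sketch of the run-time control idea: measured cache latency drives a body-bias update, trading leakage against speed. The bang-bang update rule, step size, and limits are invented simplifications of the thesis's fine-grain body-bias generator.

```python
# Simplified body-bias controller sketch (invented rule, not the thesis's).
def update_body_bias(vbb, meas_latency, target_latency,
                     step=0.025, vbb_min=-0.5, vbb_max=0.5):
    """Return the next body-bias setting (volts).

    Forward body bias (vbb > 0) lowers Vt: faster but leakier.
    Reverse body bias (vbb < 0) raises Vt: slower but lower leakage.
    """
    if meas_latency > target_latency:
        return min(vbb_max, vbb + step)   # too slow: more forward bias
    return max(vbb_min, vbb - step)       # meets timing: back off, save leakage

# Example: a slow die gradually receives forward bias until timing is met.
vbb, latency = 0.0, 1.30
for _ in range(8):
    vbb = update_body_bias(vbb, latency, target_latency=1.0)
    latency -= 0.05                       # pretend each step speeds the cache up
print(f"settled body bias: {vbb:+.3f} V")
```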
Leakage Power Modeling and Reduction Techniques for Field Programmable Gate Arrays
FPGAs have become quite popular for implementing digital circuits and systems because of reduced costs and fast design cycles. This has led to increased complexity of FPGAs, and with technology scaling, many new challenges have come up for the FPGA industry, leakage power being one of the key ones. Current-generation FPGAs are implemented in 90nm technology; managing leakage power in deep-submicron FPGAs has therefore become critical for the FPGA industry to remain competitive in the semiconductor market and to enter the mobile applications domain. In this work, an analytical state-dependent leakage power model for FPGAs is developed, followed by dual-Vt designs of the FPGA architecture for reducing leakage power. The model computes subthreshold and gate leakage, since these are the two dominant components of total leakage power in scaled nanometer technologies, and accounts for the dependency of both on the state of the circuit inputs. It has two main components: one computes the probability of a state for a particular FPGA circuit element, and the other computes the leakage of that element for a given input using analytical equations. This FPGA power model is particularly useful for rapidly analyzing various FPGA architectures across different technology nodes. Dual-Vt designs of the FPGA architecture are then proposed, developed, and evaluated for reducing leakage power using a CAD framework, with both the logic and the routing resources of the FPGA considered for dual-Vt assignment. The number of logic elements that can be assigned high Vt in the ideal case by a dual-Vt assignment algorithm in the CAD framework is estimated; based on this estimate, two kinds of architectures are developed and evaluated, homogeneous and heterogeneous. Results indicate that leakage power savings of up to 50% can be obtained from these architectures. The analytical state-dependent leakage power model is used to estimate the leakage power savings of the dual-Vt FPGA architectures, and the CAD framework can also be used to develop and evaluate dual-Vt FPGA architectures other than the ones proposed in this work.
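The state-dependent expectation at the core of such a model can be sketched in a few lines: expected leakage is the sum, over input states, of the state probability times the per-state leakage. The NAND2 leakage table and input probabilities below are invented placeholders, not the thesis's characterized values.

```python
# Sketch of state-dependent leakage estimation:
# expected leakage = sum over input states of P(state) * I_leak(state).
from itertools import product

# Per-state leakage of a 2-input NAND (nA) -- assumed numbers; leakage is
# lowest when the stacked NMOS transistors are both off (input "00").
I_LEAK = {"00": 1.0, "01": 4.0, "10": 3.0, "11": 9.0}

def expected_leakage(p_one):
    """p_one[i]: probability that input i is logic 1 (assumed independent)."""
    total = 0.0
    for bits in product("01", repeat=len(p_one)):
        p_state = 1.0
        for b, p1 in zip(bits, p_one):
            p_state *= p1 if b == "1" else (1.0 - p1)
        total += p_state * I_LEAK["".join(bits)]
    return total

print(f"E[I_leak] = {expected_leakage([0.5, 0.5]):.2f} nA")   # uniform inputs
print(f"E[I_leak] = {expected_leakage([0.1, 0.1]):.2f} nA")   # mostly-0 inputs
```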
Power Management for Deep Submicron Microprocessors
As VLSI technology scales, the enhanced performance of smaller transistors comes at the expense of increased power consumption. In addition to the dynamic power consumed by the circuits, there is a tremendous increase in leakage power consumption, which is further exacerbated by increasing operating temperatures. The total power consumption of modern processors is distributed among the processor core, memory, and interconnects. In this research, two novel power management techniques are presented, targeting the functional units and the global interconnects.
First, since most leakage control schemes for processor functional units are based on circuit-level techniques, such schemes inherently lack information about the operational profile of higher-level components of the system. This is a barrier to the pivotal task of predicting standby time; without this prediction, it is extremely difficult to assess the value of any leakage control scheme. Consequently, a methodology that can predict the standby time is highly beneficial in bridging the gap between the information available at the application level and the circuit implementations.
In this work, a novel Dynamic Sleep Signal Generator (DSSG) is presented. It utilizes usage traces extracted from cycle-accurate simulations of benchmark programs to predict the long standby periods associated with the various functional units. The DSSG bases its decisions on the current and previous standby states of the functional units to accurately predict the length of the next standby period. The DSSG presents an alternative to Static Sleep Signal Generation (SSSG), which is based on static counters that trigger the sleep signal when a functional unit has idled for a prespecified number of cycles.
The test results of the DSSG are obtained using a modified RISC superscalar processor implemented in SimpleScalar, the most widely accepted open-source vehicle for architectural analysis. The results are further verified with a Simultaneous Multithreading simulator implemented in SMTSIM. Results show an increase of up to 146% in leakage savings using the DSSG versus the SSSG, with an accuracy of 60-80% in predicting long standby periods.
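A toy simulation of the two policies on a synthetic idle trace, purely to illustrate the mechanism: the trace statistics, counter threshold, and two-period prediction rule are invented, whereas the thesis drives its generators with cycle-accurate benchmark traces.

```python
# Toy comparison of static (counter) vs dynamic (history-based) sleep signals.
import random

random.seed(1)
# Idle-period lengths drawn from an invented mixed short/long distribution.
idle_periods = [random.choice((2, 3, 40, 60)) for _ in range(2000)]

def sssg_sleep_cycles(periods, threshold=10):
    """Static policy: sleep only after `threshold` idle cycles have elapsed."""
    return sum(max(0, p - threshold) for p in periods)

def dssg_sleep_cycles(periods, threshold=10):
    """Dynamic policy: if the previous two standby periods were long, predict
    a long one and sleep immediately; otherwise fall back to the counter."""
    asleep, history = 0, [0, 0]
    for p in periods:
        if min(history) > threshold:
            asleep += p                   # predicted long: sleep from cycle 0
        else:
            asleep += max(0, p - threshold)
        history = [history[1], p]
    return asleep

total = sum(idle_periods)
print(f"SSSG sleep fraction: {sssg_sleep_cycles(idle_periods) / total:.2%}")
print(f"DSSG sleep fraction: {dssg_sleep_cycles(idle_periods) / total:.2%}")
```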
Second, chip designers, in their effort to achieve timing closure, have focused on achieving the lowest possible interconnect delay through buffer insertion and routing techniques. This approach, though, taxes the power budget of modern ICs, especially those intended for wireless applications. Also, in order to achieve more functionality, die sizes are constantly increasing. This trend leads to an increase in the average global interconnect length, which in turn requires more buffers to achieve timing closure. Unconstrained buffering is bound to adversely affect overall chip performance if power consumption is counted as a major performance metric. In fact, the number of global interconnect buffers is expected to reach hundreds of thousands to achieve an appropriate timing closure.
To mitigate the impact of the power consumed by interconnect buffers, a power-efficient multi-pin routing technique is proposed in this research. The problem is cast on a graph representation of the routing possibilities, including buffer insertion, and consists of identifying the least-power path between the interconnect source and a set of sinks.
The novel multi-pin routing technique is tested on the ISPD and IBM benchmarks to verify its accuracy, complexity, and solution quality. Results indicate that average power savings as high as 32% are achieved for the 130-nm technology with no impact on the maximum chip frequency.
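The graph formulation can be sketched as a least-power shortest-path search, with edge weights carrying wire power and, on buffered edges, the buffer's own power. The tiny example graph and costs are invented; the research's multi-pin and buffer-position handling sits on top of this core.

```python
# Sketch of least-power routing as Dijkstra over a power-weighted graph.
import heapq

def least_power_path(adj, src, dst):
    """adj: {node: [(neighbor, power_cost), ...]}; returns minimal total power."""
    best = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        cost, u = heapq.heappop(pq)
        if u == dst:
            return cost
        if cost > best.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in adj.get(u, ()):
            nxt = cost + w
            if nxt < best.get(v, float("inf")):
                best[v] = nxt
                heapq.heappush(pq, (nxt, v))
    return float("inf")

# Two routes from source S to sink T: a short, heavily buffered route and a
# longer route needing fewer buffers (numbers are arbitrary power units).
grid = {
    "S": [("a", 4.0), ("b", 2.0)],
    "a": [("T", 4.0)],                    # buffered: wire + buffer power
    "b": [("c", 2.0)],
    "c": [("T", 3.0)],
}
print("least-power route cost:", least_power_path(grid, "S", "T"))  # 7.0
```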
Minimizing and exploiting leakage in VLSI
Power consumption of VLSI (Very Large Scale Integrated) circuits has been growing at an alarmingly rapid rate. This increase in power consumption, coupled with the increasing demand for portable/hand-held electronics, has made power consumption a dominant concern in the design of VLSI circuits today. Traditionally, dynamic (switching) power has dominated the total power consumption of VLSI circuits. However, due to process scaling trends, leakage power has now become a major component of the total power consumption in VLSI circuits. This dissertation explores techniques to reduce leakage, as well as techniques to exploit leakage currents through the use of sub-threshold circuits.
This dissertation consists of two studies. In the first study, techniques to reduce leakage are presented. These include a low-leakage ASIC design methodology that uses high-VT sleep transistors selectively, a methodology that combines input vector control and circuit modification, and a scheme to find the optimum reverse body bias voltage to minimize leakage.
As the minimum feature size of VLSI fabrication processes continues to shrink with each successive process generation (along with the supply voltage and therefore the threshold voltage of the devices), leakage currents increase exponentially. Leakage currents are hence seen as a necessary evil in traditional VLSI design methodologies. We present an approach that turns this problem into an opportunity: in the second study of this dissertation, we exploit leakage currents to perform computation. We use sub-threshold digital circuits and develop techniques to circumvent some of the pitfalls associated with sub-threshold circuit design. These include a technique that uses body biasing adaptively to compensate for process, voltage, and temperature (PVT) variations, a design approach that uses asynchronous micro-pipelined Networks of Programmable Logic Arrays (NPLAs) to help improve the throughput of sub-threshold designs, and a method to find the optimum supply voltage that minimizes energy consumption in a circuit.
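The last item can be illustrated with a simple sweep: dynamic energy per cycle falls quadratically with Vdd, while leakage energy grows because the exponentially longer cycle time lets the circuit leak for longer, so total energy has an interior minimum. All device constants and circuit parameters below are illustrative assumptions, not the dissertation's.

```python
# Sketch of the energy-minimum supply-voltage search for a subthreshold circuit.
import math

VT, N_SS, V_THERM = 0.35, 1.5, 0.026      # assumed Vt, subthreshold slope, kT/q
I0, C_G = 1e-7, 1e-15                     # assumed per-gate current scale, cap
N_GATES, DEPTH, ALPHA = 10_000, 20, 0.05  # circuit size, logic depth, activity

def energy_per_cycle(vdd):
    i_on = I0 * math.exp((vdd - VT) / (N_SS * V_THERM))   # subthreshold drive
    i_leak = I0 * math.exp(-VT / (N_SS * V_THERM))        # per-gate leakage
    t_cycle = DEPTH * C_G * vdd / i_on                    # gate delay * depth
    e_dyn = ALPHA * N_GATES * C_G * vdd ** 2
    e_leak = N_GATES * i_leak * vdd * t_cycle
    return e_dyn + e_leak

vdds = [0.10 + 0.01 * i for i in range(41)]               # sweep 0.10-0.50 V
v_opt = min(vdds, key=energy_per_cycle)
print(f"energy-optimal Vdd ~ {v_opt:.2f} V, "
      f"E = {energy_per_cycle(v_opt):.2e} J/cycle")
```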
Parametric Yield of VLSI Systems under Variability: Analysis and Design Solutions
Variability has become one of the vital challenges that designers of integrated circuits encounter. Imperfect manufacturing processes manifest themselves as variations in the design parameters. These variations, together with those in the operating environment of VLSI circuits, result in unexpected changes in the timing, power, and reliability of the circuits. With scaling transistor dimensions, process and environmental variations have become significantly more important in modern VLSI design: a smaller feature size means that the physical characteristics of a device are more prone to these unaccounted-for changes. To achieve a robust design, the random and systematic fluctuations in the manufacturing process and the variations in the environmental parameters should be analyzed, and their impact on the parametric yield should be addressed.
This thesis studies these challenges and presents solutions for designing robust VLSI systems in the presence of variations. Initially, to gain insight into system design under variability, the parametric yield is examined for a small circuit. Understanding the impact of variations on yield at the circuit level is vital to accurately estimating and optimizing yield at the system granularity. Motivated by the observations and results found at the circuit level, statistical analyses are then performed, and solutions proposed, at the system level of abstraction to reduce the impact of variations and increase the parametric yield.
At the circuit level, the impact of supply and threshold voltage variations on the parametric yield is discussed, and a design centering methodology is proposed to maximize the parametric yield and optimize the power-performance trade-off under variations. In addition, the scaling trend in the yield loss is studied, and some considerations for design centering in current and future CMOS technologies are explored.
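A Monte Carlo sketch of design centering under these two variation sources: candidate (Vdd, Vt) centers are scored by sampled yield against delay and power specs, and the best center is kept. The toy delay/power models, spec limits, and sigmas are invented, not the thesis's.

```python
# Sketch of Monte Carlo design centering over (Vdd, Vt) nominal points.
import random

random.seed(7)
SIGMA_VDD, SIGMA_VT = 0.03, 0.02          # assumed variation magnitudes (V)
DELAY_SPEC, POWER_SPEC = 1.15, 1.45       # normalized spec limits

def delay(vdd, vt):                       # toy model, normalized to 1 at
    return (1.0 - 0.35) / (vdd - vt)      # vdd = 1.0 V, vt = 0.35 V

def power(vdd, vt):                       # toy dynamic + leakage model
    return vdd ** 2 + 0.05 * pow(2.0, (0.35 - vt) / 0.05)

def yield_at(vdd_nom, vt_nom, n=5000):
    ok = 0
    for _ in range(n):
        vdd = random.gauss(vdd_nom, SIGMA_VDD)
        vt = random.gauss(vt_nom, SIGMA_VT)
        ok += delay(vdd, vt) <= DELAY_SPEC and power(vdd, vt) <= POWER_SPEC
    return ok / n

centers = [(1.0 + 0.02 * i, 0.30 + 0.01 * j) for i in range(5) for j in range(6)]
best = max(centers, key=lambda c: yield_at(*c))
print(f"best center: Vdd = {best[0]:.2f} V, Vt = {best[1]:.2f} V, "
      f"yield ~ {yield_at(*best):.1%}")
```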
The investigation at the circuit level suggests that the operating temperature significantly affects the parametric yield, and that the yield is very sensitive to the magnitude of the variations in supply and threshold voltage. The spatial nature of process and environmental variations therefore makes it necessary to analyze the yield at a higher granularity. Here, temperature and voltage variations are mapped across the chip to accurately estimate the yield loss at the system level.
At the system level, the impact of process-induced temperature variations on the power grid design is analyzed first, and an efficient verification method is provided that ensures the robustness of the power grid in the presence of variations. Then, a statistical analysis of the timing yield is conducted that takes into account both process and environmental variations. By considering the statistical profile of the temperature and supply voltage, the process variations are mapped to delay variations across a die, ensuring an accurate estimation of the timing yield. In addition, a method is proposed to accurately estimate the power yield considering process-induced temperature and supply voltage variations, which helps check the robustness of circuits early in the design process.
Lastly, design solutions are presented to reduce power consumption and increase the timing yield under variations. The first solution offers a guideline for floorplanning optimization in the presence of temperature variations. Non-uniformity in the thermal profiles of integrated circuits is an issue that impacts the parametric yield and threatens chip reliability. Therefore, the correlation between total power consumption and the temperature variations across a chip is examined, and floorplanning guidelines are proposed that use this correlation to efficiently optimize the chip's total power while taking thermal uniformity into account.
The second design solution provides an optimization methodology for assigning power supply pads across the chip to maximize the timing yield. A mixed-integer nonlinear programming (MINLP) optimization problem, subject to voltage drop and current constraints, is efficiently solved to find the optimum number and locations of the pads.
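As a loose illustration only: the sketch below replaces the MINLP with a greedy, k-center-style heuristic on an invented drop model (drop proportional to Manhattan distance to the nearest pad). It shows the flavor of the pad-assignment problem, not the thesis's formulation or solver.

```python
# Greedy pad-placement sketch (a simplification, not the thesis's MINLP):
# place each new pad at the grid node with the worst modeled IR drop until
# the drop constraint is met everywhere. The drop model is invented.
ROWS, COLS = 8, 8
DROP_PER_HOP, MAX_DROP = 0.012, 0.05      # assumed volts per hop and limit
nodes = [(r, c) for r in range(ROWS) for c in range(COLS)]

def drop(node, pads):
    d = min(abs(node[0] - p[0]) + abs(node[1] - p[1]) for p in pads)
    return d * DROP_PER_HOP

pads = [(0, 0)]                            # an assumed mandatory corner pad
while True:
    worst = max(nodes, key=lambda n: drop(n, pads))
    if drop(worst, pads) <= MAX_DROP:
        break
    pads.append(worst)                     # place the next pad at the hot spot

print(f"{len(pads)} pads meet the {MAX_DROP} V drop limit: {pads}")
```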
Characterization and mitigation of process variation in digital circuits and systems
Process variation threatens to negate a whole generation of scaling in advanced process technologies due to performance and power spreads of greater than 30-50%. Mitigating this impact requires a thorough understanding of the variation sources, magnitudes, and spatial components at the device, circuit, and architectural levels. This thesis explores the impacts of variation at each of these levels and evaluates techniques to alleviate them in the context of digital circuits and systems. At the device level, we propose isolation and measurement of variation in the intrinsic threshold voltage of a MOSFET using sub-threshold leakage currents. Analysis of the measured data, from a test-chip implemented in a 0.18 µm CMOS process, indicates that variation in MOSFET threshold voltage is a truly random process dependent only on device dimensions. Further decomposition of the observed variation reveals no systematic within-die variation components nor any spatial correlation. A second test-chip, capable of characterizing spatial variation in digital circuits, is developed and implemented in a 90nm triple-well CMOS process. Measured variation results show that the within-die component of variation is small at high voltages but becomes an increasing fraction of the total variation as the power-supply voltage decreases. Once again, the data shows no evidence of within-die spatial correlation and only weak systematic components. Evaluation of adaptive body-biasing and voltage scaling as variation mitigation techniques shows that voltage scaling is more effective in performance modification, with reduced impact on idle power compared to body-biasing. Finally, the addition of power-supply voltages in a massively parallel multicore processor is explored to reduce the energy required to cope with process variation. An analytic optimization framework is developed and analyzed; using a custom simulation methodology, the total energy of a hypothetical 1K-core processor based on the RAW core is reduced by 6-16% with the addition of only a single voltage. Analysis of yield versus required energy demonstrates that a combination of disabling poor-performing cores and additional power-supply voltages results in an optimal trade-off between performance and energy.
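The device-level extraction idea can be sketched as follows: in weak inversion Id ≈ I0·exp((Vgs − Vt)/(n·kT/q)), so a line fit to ln(Id) versus Vgs yields the ideality factor, and extrapolation to a constant-current criterion gives Vt. The measurement points and the 100 nA criterion below are invented examples, not the thesis's data.

```python
# Sketch of Vt extraction from subthreshold leakage measurements.
import math

V_THERM = 0.026                            # kT/q at room temperature (V)
I_CRIT = 100e-9                            # assumed constant-current criterion

# Invented (Vgs, Id) measurements in the subthreshold region.
meas = [(0.10, 2.1e-12), (0.15, 8.0e-12), (0.20, 3.1e-11), (0.25, 1.2e-10)]

# Least-squares line ln(Id) = a * Vgs + b.
xs = [v for v, _ in meas]
ys = [math.log(i) for _, i in meas]
n = len(meas)
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n

ideality = 1.0 / (a * V_THERM)             # subthreshold ideality factor n
vt = (math.log(I_CRIT) - b) / a            # Vgs where Id hits the criterion
print(f"ideality n ~ {ideality:.2f}, extracted Vt ~ {vt:.3f} V")
```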
Predicting power scalability in a reconfigurable platform
This thesis focuses on the evolution of digital hardware systems. A reconfigurable platform is proposed and analysed based on thin-body, fully-depleted silicon-on-insulator Schottky-barrier transistors with metal gates and silicide source/drain (TBFDSBSOI). These offer the potential for simplified processing that will allow them to reach ultimate nanoscale gate dimensions. Technology CAD was used to show that the threshold voltage in TBFDSBSOI devices will be controllable by gate potentials that scale down with the channel dimensions while remaining within appropriate gate reliability limits. SPICE simulations determined that the magnitude of the threshold shift predicted by TCAD software would be sufficient to control the logic configuration of a simple, regular array of these TBFDSBSOI transistors as well as to constrain its overall subthreshold power growth. Using these devices, a reconfigurable platform is proposed based on a regular 6-input, 6-output NOR LUT block in which the logic and configuration functions of the array are mapped onto separate gates of the double-gate device. A new analytic model of the relationship between power (P), area (A), and performance (T) has been developed based on a simple VLSI complexity metric of the form AT^σ = constant. As σ defines the performance "return" gained as a result of an increase in area, it also represents a bound on the architectural options available in power-scalable digital systems. This analytic model was used to determine that simple computing functions mapped to the reconfigurable platform will exhibit continuous power-area-performance scaling behavior. A number of simple arithmetic circuits were mapped to the array and their delay and subthreshold leakage analysed over a representative range of supply and threshold voltages, thus determining a worst-case range for the device/circuit-level parameters of the model. Finally, an architectural simulation was built in VHDL-AMS. The frequency scaling described by σ, combined with the device/circuit-level parameters, predicts the overall power and performance scaling of parallel architectures mapped to the array.
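A small numeric illustration of the AT^σ = constant metric: solving A·T^σ = A0·T0^σ for T shows how much delay (and hence frequency) a given area increase buys for different σ. The baseline values are normalized and illustrative, not fitted to the thesis's platform.

```python
# Sketch of the AT^sigma = constant complexity model.
def delay_after_scaling(a0, t0, a_new, sigma):
    """T such that a_new * T**sigma == a0 * t0**sigma."""
    return t0 * (a0 / a_new) ** (1.0 / sigma)

A0, T0 = 1.0, 1.0                          # normalized baseline area and delay
for sigma in (0.5, 1.0, 2.0):
    t = delay_after_scaling(A0, T0, 2.0, sigma)   # double the area
    print(f"sigma = {sigma}: 2x area -> delay x{t:.2f}, frequency x{1/t:.2f}")
```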