
    Yield-driven power-delay-optimal CMOS full-adder design complying with automotive product specifications of PVT variations and NBTI degradations

    We present the detailed results of the application of mathematical optimization algorithms to transistor sizing in a full-adder cell design, with the goal of obtaining the maximum expected fabrication yield. The approach takes into account all the fabrication process parameter variations specified in an industrial PDK, in addition to the operating-condition range and NBTI aging. The final design solutions exhibit transistor sizings that depart from intuitive sizing criteria and show dramatic yield improvements, which have been verified by Monte Carlo SPICE analysis.
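    As a rough illustration of the kind of Monte Carlo yield estimation described above, the sketch below replaces SPICE with a made-up analytic delay/power surrogate for a full-adder cell; the sizing vector, specification limits, and variation distributions are illustrative assumptions, not the authors' PDK values.

    import numpy as np

    rng = np.random.default_rng(0)

    def full_adder_metrics(widths, vth_shift, vdd):
        # Placeholder surrogate for a SPICE evaluation of the full-adder cell:
        # crude analytic delay (ps) and power (uW) vs. sizing, Vth shift, Vdd.
        drive = widths.sum()
        delay = 40.0 / drive * (1.0 + 2.5 * vth_shift) * (1.05 / vdd)
        power = 0.8 * drive * vdd ** 2 * (1.0 - 1.5 * vth_shift)
        return delay, power

    def yield_estimate(widths, n=10_000, delay_spec=9.5, power_spec=4.2):
        # Fraction of Monte Carlo samples meeting both delay and power specs.
        vth_shift = rng.normal(0.0, 0.03, n)   # assumed process variation
        vdd = rng.uniform(1.0, 1.1, n)         # assumed operating-condition range
        delay, power = full_adder_metrics(widths, vth_shift, vdd)
        return np.mean((delay <= delay_spec) & (power <= power_spec))

    print(yield_estimate(np.array([2.0, 1.5, 1.0])))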

    Parametric Yield of VLSI Systems under Variability: Analysis and Design Solutions

    Variability has become one of the vital challenges that designers of integrated circuits encounter. Imperfections in the manufacturing process manifest themselves as variations in the design parameters. These variations, together with those in the operating environment of VLSI circuits, result in unexpected changes in the timing, power, and reliability of the circuits. With scaling transistor dimensions, process and environmental variations become increasingly important in modern VLSI design: a smaller feature size means that the physical characteristics of a device are more prone to these unaccounted-for changes. To achieve a robust design, the random and systematic fluctuations in the manufacturing process and the variations in the environmental parameters should be analyzed, and their impact on the parametric yield should be addressed. This thesis studies these challenges and offers solutions for designing robust VLSI systems in the presence of variations. Initially, to gain insight into system design under variability, the parametric yield is examined for a small circuit. Understanding the impact of variations on the yield at the circuit level is vital to accurately estimating and optimizing the yield at the system granularity. Motivated by the observations and results found at the circuit level, statistical analyses are performed, and solutions are proposed, at the system level of abstraction to reduce the impact of the variations and increase the parametric yield. At the circuit level, the impact of supply and threshold voltage variations on the parametric yield is discussed, and a design centering methodology is proposed to maximize the parametric yield and optimize the power-performance trade-off under variations. In addition, the scaling trend in the yield loss is studied, and some considerations for design centering in current and future CMOS technologies are explored. The investigation at the circuit level suggests that the operating temperature significantly affects the parametric yield and that the yield is very sensitive to the magnitude of the variations in supply and threshold voltage. Therefore, the spatial nature of process and environmental variations makes it necessary to analyze the yield at a finer granularity; here, temperature and voltage variations are mapped across the chip to accurately estimate the yield loss at the system level. At the system level, the impact of process-induced temperature variations on power grid design is analyzed first, and an efficient verification method is provided that ensures the robustness of the power grid in the presence of variations. Then, a statistical analysis of the timing yield is conducted, taking into account both the process and environmental variations. By considering the statistical profiles of temperature and supply voltage, the process variations are mapped to delay variations across a die, which ensures an accurate estimation of the timing yield. In addition, a method is proposed to accurately estimate the power yield considering process-induced temperature and supply voltage variations, which helps check the robustness of the circuits early in the design process. Lastly, design solutions are presented to reduce the power consumption and increase the timing yield under the variations. In the first solution, a guideline for floorplanning optimization in the presence of temperature variations is offered.
    Non-uniformity in the thermal profile of an integrated circuit impacts the parametric yield and threatens chip reliability. Therefore, the correlation between the total power consumption and the temperature variations across a chip is examined, and floorplanning guidelines are proposed that use this correlation to efficiently optimize the chip's total power while taking thermal uniformity into account. The second design solution provides an optimization methodology for assigning the power supply pads across the chip to maximize the timing yield. A mixed-integer nonlinear programming (MINLP) optimization problem, subject to voltage-drop and current constraints, is efficiently solved to find the optimum number and locations of the pads.
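    To make the circuit-level design-centering idea concrete, the following sketch grid-searches a nominal supply/threshold point that maximizes the fraction of Gaussian-perturbed samples meeting delay and power limits; the alpha-power delay model, variances, and specification bounds are invented for illustration, not values from the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    def delay(vdd, vth):
        # Alpha-power-law style delay model (illustrative constants).
        return vdd / (vdd - vth) ** 1.3

    def power(vdd, vth):
        # Dynamic term plus an exponential subthreshold-leakage term.
        return vdd ** 2 + 0.05 * np.exp(-vth / 0.035)

    def yield_at(vdd_nom, vth_nom, n=20_000, d_max=3.2, p_max=1.6):
        vdd = rng.normal(vdd_nom, 0.03, n)   # assumed supply variation
        vth = rng.normal(vth_nom, 0.02, n)   # assumed threshold variation
        ok = (delay(vdd, vth) <= d_max) & (power(vdd, vth) <= p_max)
        return ok.mean()

    # Crude design centering: grid-search the nominal point with the best yield.
    grid = [(vd, vt) for vd in np.linspace(0.9, 1.2, 7)
                     for vt in np.linspace(0.25, 0.45, 9)]
    best = max(grid, key=lambda c: yield_at(*c))
    print("centered design point:", best, "yield:", yield_at(*best))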

    Novel dual-threshold voltage FinFETs for circuit design and optimization

    A great research effort has been invested in finding alternatives to CMOS that offer better process-variation tolerance and lower subthreshold leakage. Among the possible candidates, the FinFET is the most compatible with CMOS and has shown promising leakage and speed performance. This thesis introduces the basic characteristics of FinFETs and quantitatively explains the effects of FinFET physical parameters on their performance. I show how dual-Vth independent-gate FinFETs can be fabricated by optimizing their physical parameters. Optimum values for these physical parameters are derived using the physics-based University of Florida SPICE model for double-gate devices, and the optimized FinFETs are simulated and validated using Sentaurus TCAD simulations. Dual-Vth FinFETs with independent gates enable series and parallel merge transformations in logic gates, realizing compact, low-power alternative gates with competitive performance and reduced input capacitance compared to conventional FinFET gates. Furthermore, they also enable the design of a new class of compact logic gates with higher expressive power and flexibility than CMOS gates. Synthesis results for 16 benchmark circuits from the ISCAS and OpenSPARC suites indicate that, on average at 2 GHz and 75°C, a library containing the novel gates reduces total power and the number of fins by 36% and 37%, respectively, over a conventional library without the novel gates in a 32 nm technology.
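    The dual-Vth trade-off exploited here can be pictured with the textbook exponential dependence of subthreshold leakage on threshold voltage; the constants and the two Vth values in the sketch below are assumptions for illustration, not parameters of the University of Florida model.

    import math

    def subthreshold_leakage(vth, i0=100e-9, n=1.3, vt_thermal=0.026):
        # Relative subthreshold leakage: I ~ I0 * exp(-Vth / (n * kT/q)).
        return i0 * math.exp(-vth / (n * vt_thermal))

    def gate_delay(vdd, vth, k=1.0, alpha=1.3):
        # Alpha-power-law delay proxy: slower as Vth approaches Vdd.
        return k * vdd / (vdd - vth) ** alpha

    for label, vth in [("low-Vth fin", 0.25), ("high-Vth fin", 0.40)]:
        print(f"{label}: leakage {subthreshold_leakage(vth):.2e} A, "
              f"delay {gate_delay(1.0, vth):.2f} (a.u.)")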

    Optimization Schemes for Variability-Driven VLSI Design Automation

    Today's IC design is facing several challenges due to increasing circuit complexity and decreasing feature size, as the industry pushes to extend Moore's law into nano-scale dimensions. Apart from the challenges that arise directly as a result of feature scaling (e.g., increasing leakage power and reliability issues), imperfections in the manufacturing process have recently turned into a major design hurdle, due to the variations they cause in device and interconnect parameters relative to their target values. From an IC design automation perspective, a shift in paradigm, from deterministic to probabilistic, is needed to handle the unpredictable nature of these fabrication variations. In such a probabilistic paradigm, varying circuit parameters such as leakage power or delay should be accurately modeled, and their correlations due to common sources of variation or physical location on the chip should be well captured. Moreover, variability-driven (probabilistic) design automation needs to generate high-quality solutions efficiently. A particular challenge in variability-driven design automation is to define optimality measures among candidate solutions, which allow inferior solutions to be removed from the solution space, thus reducing the run-time complexity. In this dissertation, the superiority probability is introduced as such an optimality measure, and two methods are proposed to compute it: an accurate Conditional Monte Carlo simulation method and an efficient moment-matching approximation method. The effectiveness of using the superiority probability is shown in the context of two important design automation applications: 1) the buffer insertion problem, and 2) the dual-Vth leakage optimization problem. Another important task in variability-driven design automation is to develop optimization techniques that provably generate the optimal solution in an efficient way. In this dissertation, the gate sizing problem is explored as an application, optimally reducing the loss due to fabrication variations in the presence of a timing constraint. The presented formulation, in contrast with existing variability-driven approaches, which are all based on heuristics, is provably optimal. Moreover, unlike existing approaches, it is independent of any assumption on the source and nature of the variations.
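    One way to picture the superiority probability as a pruning criterion: if two candidate solutions have approximately jointly Gaussian delays, the probability that one beats the other follows from the first two moments of their difference. The means, variances, and correlation below are made-up numbers, and the Gaussian model stands in for the moment-matching approximation mentioned above.

    from math import sqrt
    from statistics import NormalDist

    def superiority_probability(mu_a, sig_a, mu_b, sig_b, rho=0.0):
        # P(delay_A < delay_B) when (A, B) are modeled as jointly Gaussian,
        # using the first two moments of the difference D = B - A.
        mu_d = mu_b - mu_a
        var_d = sig_a ** 2 + sig_b ** 2 - 2.0 * rho * sig_a * sig_b
        return NormalDist(0.0, 1.0).cdf(mu_d / sqrt(var_d))

    # Candidate A is faster on average but more variable; the two candidates
    # share some correlation through common process-variation sources.
    p = superiority_probability(mu_a=95.0, sig_a=8.0, mu_b=100.0, sig_b=5.0, rho=0.4)
    print(f"P(A beats B) = {p:.3f}")   # prune B only if this is high enough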

    Subthreshold and gate leakage current analysis and reduction in VLSI circuits

    CMOS technology has scaled aggressively over the past few decades in an effort to enhance functionality, speed, and packing density per chip. As feature sizes scale down to the sub-100 nm regime, leakage power is increasing significantly and is becoming the dominant component of the total power dissipation. The major contributors to the total leakage current in the deep-submicron regime are the subthreshold and gate tunneling leakage currents. The leakage reduction techniques developed so far have mostly been devoted to reducing subthreshold leakage. However, at sub-65 nm feature sizes, gate leakage current grows faster and is expected to surpass subthreshold leakage current. In this work, an extensive analysis of the circuit-level characteristics of subthreshold and gate leakage currents is performed at the 45 nm and 32 nm feature sizes. The analysis yields several key observations on the interdependency of gate and subthreshold leakage currents. Based on these observations, a new leakage reduction technique is proposed that optimizes both leakage currents. This technique identifies minimum leakage vectors for a given circuit based on the number of transistors in the OFF state and their position in the stack. The effectiveness of the proposed technique is compared to most of the mainstream leakage reduction techniques by implementing them on ISCAS89 benchmark circuits. The proposed technique proved to be more effective in reducing gate leakage current than subthreshold leakage current; however, when combined with dual-threshold and variable-threshold CMOS techniques, substantial subthreshold leakage reduction was also achieved. Total savings of 53% in subthreshold leakage current and 26% in gate leakage current are reported.
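    A toy version of the minimum-leakage-vector search: exhaustively enumerate primary-input vectors of a tiny NAND-only netlist and score each with a per-gate leakage table that rewards stacked OFF transistors. The leakage numbers and netlist are placeholders; a real flow would use characterized subthreshold and gate-leakage data and a smarter search.

    from itertools import product

    # Per-input-state leakage of a 2-input NAND (nA), illustrative values only:
    # two stacked OFF NMOS transistors (0, 0) leak least in this toy table.
    NAND_LEAKAGE = {(0, 0): 2.0, (0, 1): 8.0, (1, 0): 6.0, (1, 1): 15.0}

    def nand(a, b):
        return 1 - (a & b)

    # Tiny netlist in topological order: gate -> its two input nets.
    NETLIST = {"g1": ("a", "b"), "g2": ("b", "c"), "g3": ("g1", "g2")}

    def circuit_leakage(pi):
        values = dict(pi)
        total = 0.0
        for gate, (x, y) in NETLIST.items():
            ins = (values[x], values[y])
            total += NAND_LEAKAGE[ins]
            values[gate] = nand(*ins)
        return total

    vectors = [dict(zip("abc", bits)) for bits in product((0, 1), repeat=3)]
    best = min(vectors, key=circuit_leakage)
    print("minimum leakage vector:", best, "-> leakage:", circuit_leakage(best), "nA")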

    Robust Optimization of Nanometer SRAM Designs

    Technology scaling has been the most obvious choice of designers and chip manufacturing companies to improve the performance of analog and digital circuits. With ever-shrinking technology nodes, process variations can no longer be ignored and play a significant role in determining the performance of nanoscaled devices. By adopting a worst-case design methodology, circuit designers have been very generous with the chosen design parameters, often resulting in pessimistic designs with significant area overheads. Significant work has been done in estimating the impact of intra-die process variations on circuit performance, pertinently noise margin and standby leakage power, for fixed transistor channel dimensions. However, for an optimal, high-yield SRAM cell design, it is imperative to analyze the impact of process variations at every design point, especially since the distribution of process variations itself varies statistically and is inversely correlated with the area of the MOS transistor. Furthermore, the first-order analytical models used for optimizing SRAM memories are not very accurate, and the impact of voltage, and its inclusion as an input along with the other design parameters, is often ignored. In this thesis, the performance parameters of a nano-scaled 6-T SRAM cell are modeled with an accurate, yield-aware, empirical polynomial predictor in the presence of intra-die process variations. The estimated empirical models are used in a constrained nonlinear, robust optimization framework to design an SRAM cell, for a 45 nm CMOS technology, having optimal performance according to bounds specified for the circuit performance parameters, with the objective of minimizing on-chip area. This statistically aware technique provides a more realistic design methodology for studying the trade-off between performance parameters of the SRAM. Furthermore, a dual optimization approach is followed by considering the SRAM power supply and wordline voltages as additional input parameters, to simultaneously tune the design parameters, ensuring a high yield and considerable area reduction. In addition, the cell-level optimization framework is extended to the system-level optimization of caches, under both cell-level and system-level performance constraints.
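    In the spirit of the empirical-predictor-plus-optimizer flow described above, this sketch fits a quadratic response surface to samples from a stand-in noise-margin "simulator" and then minimizes cell area under a noise-margin constraint with SLSQP; the surrogate function, noise level, bounds, and constraint value are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    def snm_sim(w):
        # Stand-in for a SPICE/Monte Carlo measurement: "observed" static noise
        # margin (mV) of a 6-T cell vs. (pull-up, pull-down, access) widths.
        pu, pd, ax = w
        return 180.0 * pd / (pd + ax) + 40.0 * pu / (pu + pd) + rng.normal(0.0, 3.0)

    def features(w):
        pu, pd, ax = w
        return np.array([1.0, pu, pd, ax, pu * pd, pd * ax, pu * ax,
                         pu ** 2, pd ** 2, ax ** 2])

    # Sample the design space and fit the quadratic predictor by least squares.
    samples = rng.uniform(0.5, 3.0, size=(200, 3))
    X = np.array([features(w) for w in samples])
    y = np.array([snm_sim(w) for w in samples])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    def snm_pred(w):
        return features(w) @ coef

    def area(w):
        return float(np.sum(w))   # proxy for on-chip area

    res = minimize(area, x0=[1.5, 1.5, 1.5], method="SLSQP",
                   bounds=[(0.5, 3.0)] * 3,
                   constraints=[{"type": "ineq", "fun": lambda w: snm_pred(w) - 160.0}])
    print("widths:", res.x, "predicted SNM (mV):", snm_pred(res.x))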

    Adaptive Integrated Circuit Design for Variation Resilience and Security

    The past few decades have witnessed the burgeoning development of integrated circuits driven by process technology scaling. Along with the tremendous benefits of scaling, challenges arise at various stages. At design time, the complexity of developing a circuit with millions to billions of ever-smaller transistors grows further once variations are taken into account. The difficulty of analyzing these nondeterministic properties makes it hard for redundant-resource allocation schemes to work in a cost-efficient way. Besides fabrication variations, analog circuits suffer severe performance degradation because their physical attributes are vulnerable to aging effects. As such, post-silicon calibration has gained increasing attention as a way to compensate for the performance mismatch. For user-end applications, additional system failures result from pirated and counterfeit devices supplied through an untrusted semiconductor supply chain. Analog circuits are again vulnerable to this threat because of the shortage of piracy-avoidance techniques. In this dissertation, we propose three adaptive integrated circuit designs to address these challenges. The first investigates variability-aware gate implementation while controlling the overhead of adaptivity assignment; this design improves variation resilience, primarily for digital circuits, while optimizing power consumption and timing yield. The second design is a self-validation system for the calibration of diverse analog circuits, completely integrated on chip so that calibration requires no external assistance. In the last design, a classic analog component is further studied to establish a configurable locking mechanism for analog circuits. The use of Satisfiability Modulo Theories addresses the difficulty of searching for the unique unlocking pattern over non-Boolean variables.
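    As a toy illustration of using an SMT solver over non-Boolean variables, the sketch below uses the z3-solver Python bindings to search for a real-valued knob setting that restores a placeholder amplifier gain; the gain expression, tolerance, and knob ranges are made up and only hint at the configurable analog locking mechanism described above.

    # Requires the z3-solver package: pip install z3-solver
    from z3 import Real, Solver, And, sat

    # Two real-valued configuration knobs (e.g., bias voltages) act as the key.
    k1, k2 = Real("k1"), Real("k2")

    # Placeholder "locked amplifier" gain model: only a narrow (k1, k2) region
    # restores the nominal gain of 10 within a 2% tolerance.
    gain = 25 * k1 - 12 * k2 + 3

    s = Solver()
    s.add(And(k1 >= 0, k1 <= 1, k2 >= 0, k2 <= 1))   # legal knob range
    s.add(gain >= 9.8, gain <= 10.2)                 # unlock condition

    if s.check() == sat:
        m = s.model()
        print("candidate unlocking pattern:", m[k1], m[k2])
    else:
        print("no unlocking pattern in range")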

    Resource Management Algorithms for Computing Hardware Design and Operations: From Circuits to Systems

    The complexity of computing hardware has increased at an unprecedented rate over the last few decades. At the chip level, we have entered the era of multi-/many-core processors made of billions of transistors. With a transistor budget of this scale, many functions are integrated into a single chip, so chips today consist of many heterogeneous cores with intensive interaction among them. At the circuit level, with the end of Dennard scaling, continuously shrinking process technology has imposed a grand challenge on power density; circuit variation further exacerbates the problem by consuming a substantial timing margin. At the system level, the rise of Warehouse Scale Computers and Data Centers has put resource management in a new perspective: the ability to dynamically provision computation resources in these gigantic systems is crucial to their performance. In this thesis, three different resource management algorithms are discussed. The first algorithm assigns adaptivity resources to circuit blocks under a constraint on the overhead; the adaptivity improves the circuit's resilience to variation in a cost-effective way. The second algorithm manages link bandwidth in application-specific Networks-on-Chip, guaranteeing quality of service for time-critical traffic with an emphasis on power. The third algorithm manages the computation resources of a data center while guarding against ill states of the system; Q-learning is employed to handle the dynamic nature of the system, and Linear Temporal Logic is leveraged as a tool to describe temporal constraints. All three algorithms are evaluated through various experiments, and the experimental results are compared to several previous works, showing the advantage of our methods.
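    A minimal tabular Q-learning loop for an invented provisioning toy problem (choosing how many server groups to keep active for a discretized load level); the state and action spaces, reward, and load dynamics are assumptions, and the temporal-logic precautions discussed above are not modeled.

    import random

    random.seed(0)
    LOADS = range(4)        # discretized load level: 0 (idle) .. 3 (peak)
    ACTIONS = range(4)      # number of active server groups to provision
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    Q = {(s, a): 0.0 for s in LOADS for a in ACTIONS}

    def step(load, servers):
        # Toy environment: reward trades off SLA violations against power.
        reward = -5.0 * max(load - servers, 0) - 1.0 * servers
        next_load = min(max(load + random.choice((-1, 0, 1)), 0), 3)
        return reward, next_load

    load = 0
    for _ in range(50_000):
        if random.random() < EPS:                      # explore
            act = random.choice(list(ACTIONS))
        else:                                          # exploit
            act = max(ACTIONS, key=lambda a: Q[(load, a)])
        reward, nxt = step(load, act)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(load, act)] += ALPHA * (reward + GAMMA * best_next - Q[(load, act)])
        load = nxt

    # Greedy policy after training: server groups to keep active per load level.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in LOADS})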

    ULTRA ENERGY-EFFICIENT SUB-/NEAR-THRESHOLD COMPUTING: PLATFORM AND METHODOLOGY

    Ph.D. (Doctor of Philosophy)

    An Ultra-Low-Energy, Variation-Tolerant FPGA Architecture Using Component-Specific Mapping

    As feature sizes scale toward atomic limits, parameter variation continues to increase, leading to increased margins in both delay and energy. Parameter variation both slows down devices and causes devices to fail. For applications that require high performance, the possibility of very slow devices on critical paths forces designers to reduce clock speed in order to meet timing. For an important and emerging class of applications that target energy-minimal operation at the cost of delay, the impact of variation-induced defects at very low voltages mandates the sizing up of transistors and operation at higher voltages to maintain functionality. With post-fabrication configurability, FPGAs have the opportunity to self-measure the impact of variation, determining the speed and functionality of each individual resource. Given that information, a delay-aware router can use slow devices on non-critical paths, fast devices on critical paths, and avoid known defects. By mapping each component individually and customizing designs to a component's unique physical characteristics, we demonstrate that we can eliminate delay margins and reduce energy margins caused by variation. To quantify the potential benefit we might gain from component-specific mapping, we first measure the margins associated with parameter variation, and then focus primarily on the energy benefits of FPGA delay-aware routing over a wide range of predictive technologies (45 nm--12 nm) for the Toronto20 benchmark set. We show that relative to delay-oblivious routing, delay-aware routing without any significant optimizations can reduce minimum energy/operation by 1.72x at 22 nm. We demonstrate how to construct an FPGA architecture specifically tailored to further increase the minimum energy savings of component-specific mapping by using the following techniques: power gating, gate sizing, interconnect sparing, and LUT remapping. With all optimizations considered we show a minimum energy/operation savings of 2.66x at 22 nm, or 1.68--2.95x when considered across 45--12 nm. As there are many challenges to measuring resource delays and mapping per chip, we discuss methods that may make component-specific mapping more practical. We demonstrate that a simpler, defect-aware routing achieves 70% of the energy savings of delay-aware routing. Finally, we show that without variation tolerance, scaling from 16 nm to 12 nm results in a net increase in minimum energy/operation; component-specific mapping, however, can extend minimum energy/operation scaling to 12 nm and possibly beyond.
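    A stripped-down illustration of defect-avoiding, delay-aware routing: measured per-resource delays on a small routing graph, with one switch marked defective, drive a shortest-delay search. The graph, delays, and defect are invented; a real FPGA router must also handle congestion, fanout, and timing criticality.

    import heapq

    # Routing graph: node -> list of (neighbor, measured switch/wire delay, ps).
    # A delay of None marks a resource found defective during self-measurement.
    GRAPH = {
        "src": [("s1", 12.0), ("s2", 9.0)],
        "s1":  [("s3", 10.0), ("dst", 30.0)],
        "s2":  [("s3", None), ("s4", 11.0)],   # the s2 -> s3 switch is defective
        "s3":  [("dst", 8.0)],
        "s4":  [("dst", 14.0)],
        "dst": [],
    }

    def route(src, dst):
        # Dijkstra over measured delays, skipping defective resources.
        heap, seen = [(0.0, src, [src])], set()
        while heap:
            delay, node, path = heapq.heappop(heap)
            if node == dst:
                return delay, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, d in GRAPH[node]:
                if d is None or nxt in seen:   # avoid known defects
                    continue
                heapq.heappush(heap, (delay + d, nxt, path + [nxt]))
        return None

    print(route("src", "dst"))   # expected: lowest-delay defect-free path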