
    Electrical Design for Manufacturability Solutions: Fast Systematic Variation Analysis and Design Enhancement Techniques

    The primary objective of this research is to develop computer-aided design (CAD) tools for Design for Manufacturability (DFM) that enable designers to conduct faster and more accurate systematic variation analysis, together with different design enhancement techniques. Four main CAD tools are developed throughout this thesis. The first CAD tool facilitates a quantitative study of the impact of systematic variations on the electrical and geometrical behavior of different circuits. This is accomplished by automatically performing an extensive analysis of different process variations (lithography and stress) and their dependency on the design context. Such a tool helps to explore and evaluate the impact of systematic variation on any type of design. Secondly, solutions in the industry follow a "design and then fix" or a "fix during design" philosophy, whereas the next CAD tool adopts a "fix before design" philosophy. Here, the standard cell library is characterized in different design contexts, with different resolution enhancement techniques and different process conditions, generating a fully DFM-aware standard cell library using a newly developed methodology that dramatically reduces the required number of silicon simulations. Several experiments conducted on 65 nm and 45 nm designs demonstrate that more robust and manufacturable designs can be implemented using the DFM-aware standard cell library. Thirdly, a novel electrical-aware hotspot detection solution is developed using a device-parameter-based matching technique, since state-of-the-art hotspot detection solutions are all geometry based. This CAD tool proposes a new philosophy of detecting yield limiters, also known as hotspots, through the model parameters of the devices presented in the SPICE netlist. This novel hotspot detection methodology is tested and delivers extraordinarily fast and accurate results. Finally, existing DFM solutions mainly address digital designs, yet process variations play an increasingly important role in the success of analog circuits. Knowledge of the parameter variances and their contribution patterns is crucial for a successful design process, and this information is valuable for solving many problems in design, design automation, testing, and fault tolerance. The fourth CAD solution proposed in this thesis introduces a variability-aware DFM solution that detects, analyzes, and automatically corrects hotspots in analog circuits.
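    As a loose illustration of the device-parameter-based hotspot detection idea described above, the sketch below flags devices in an extracted SPICE netlist whose model parameters deviate strongly from nominal. The parameter names, nominal values, and thresholds are hypothetical and are not taken from the thesis.

```python
# Minimal sketch (hypothetical parameters and thresholds): flag devices in an
# extracted SPICE netlist whose model parameters deviate strongly from nominal,
# treating large electrical deviation as a potential hotspot indicator.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    params: dict  # extracted model parameters, e.g. {"leff": 46e-9, "vth0": 0.32}

NOMINAL = {"leff": 45e-9, "vth0": 0.30}      # assumed nominal values
THRESHOLDS = {"leff": 0.05, "vth0": 0.10}    # assumed relative deviation limits

def find_hotspots(devices):
    """Return (device, parameter, deviation) tuples exceeding the allowed threshold."""
    hotspots = []
    for dev in devices:
        for p, nominal in NOMINAL.items():
            if p in dev.params:
                deviation = abs(dev.params[p] - nominal) / abs(nominal)
                if deviation > THRESHOLDS[p]:
                    hotspots.append((dev.name, p, deviation))
    return hotspots

if __name__ == "__main__":
    netlist = [Device("M1", {"leff": 45.5e-9, "vth0": 0.31}),
               Device("M2", {"leff": 41.0e-9, "vth0": 0.36})]
    print(find_hotspots(netlist))   # M2 flagged on both parameters
```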

    Analog design for manufacturability: lithography-aware analog layout retargeting

    As transistor sizes shrink in advanced nanometer technologies, lithography effects have become a dominant contributor to integrated circuit (IC) yield degradation. Random manufacturing variations, such as photolithographic or spot defects, may cause fatal functional failures, while systematic process variations, such as dose fluctuation and defocus, can result in wafer pattern distortions that in turn ruin circuit performance. This dissertation focuses on yield optimization at the circuit design stage, or so-called design for manufacturability (DFM), for analog ICs, which have not yet been sufficiently addressed by traditional DFM solutions. On top of a graph-based analog layout retargeting framework, photolithographic defects and lithography process variations are alleviated in this dissertation by geometrical layout manipulation operations including wire widening, wire shifting, process variation band (PV-band) shifting, and optical proximity correction (OPC). The ultimate objective of this research is to develop efficient algorithms and methodologies to achieve lithography-robust analog IC layout design without circuit performance degradation.

    Characterization and analysis of process variability in deeply-scaled MOSFETs

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 137-147). Variability characterization and analysis in advanced technologies are needed to ensure robust performance as well as improved process capability. This thesis presents a framework for device variability characterization and analysis. Test structure and test circuit design, identification of significant effects in design of experiments, and decomposition approaches to quantify variation and its sources are explored. Two examples of transistor variability characterization are discussed: contact plug resistance variation within the context of a transistor, and AC, or short time-scale, variation in transistors. Results show that, with careful test structure and circuit design and ample measurement data, interesting trends can be observed. Among these trends are (1) a distinct within-die spatial signature of contact plug resistance and (2) a picosecond-accuracy delay measurement on transistors which reveals the presence of excessive external parasitic gate resistance. Measurement results obtained from these test vehicles can aid in both the understanding of variations in the fabrication process and in efforts to model variations in transistor behavior. by Karthik Balakrishnan. Ph.D.
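    One common way to carry out the kind of decomposition mentioned above is to separate the variance of die means (die-to-die) from the pooled variance around each die's mean (within-die). The sketch below illustrates that decomposition on synthetic data; it is a generic illustration, not the thesis's exact procedure.

```python
# Illustrative variance decomposition (generic, not the thesis's exact procedure):
# split measured device data into a die-to-die component (variance of die means)
# and a within-die component (pooled variance around each die's mean).
import numpy as np

def decompose_variation(measurements):
    """measurements: dict mapping die_id -> array of per-device values."""
    die_means = np.array([np.mean(v) for v in measurements.values()])
    within = np.mean([np.var(v, ddof=1) for v in measurements.values()])
    die_to_die = np.var(die_means, ddof=1)
    return die_to_die, within

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic data: per-die offset sigma=3, within-die sigma=5 (arbitrary units)
    data = {d: 100 + rng.normal(0, 3) + rng.normal(0, 5, size=200) for d in range(20)}
    d2d, wid = decompose_variation(data)
    print(f"die-to-die variance ~ {d2d:.1f}, within-die variance ~ {wid:.1f}")
```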

    Managing Variability in VLSI Circuits.

    Over the last two decades, Design for Manufacturing (DFM) has emerged as an essential field within the semiconductor industry. The main objective of DFM is to reduce and, if possible, eliminate variability in integrated circuits (ICs). Numerous techniques for managing variation have emerged throughout IC design: manufacturers design instruments with minute tolerances, process engineers calibrate and characterize a given process throughout its lifetime, and IC designers strive to model and characterize variability within their devices, libraries, and circuits. This dissertation focuses on the last of these three techniques and presents material relevant to managing variability within IC design. Since characterization and modeling are essential to the analysis and reduction of variation in modern-day designs, this dissertation begins by studying various correlation models used within Statistical Static Timing Analysis (SSTA). In the end, the study shows that using complex correlation models does not necessarily result in significant error reduction within SSTA, and that simple models (which only include die-to-die and random variation) can therefore be used to achieve similar accuracy with reduced overhead and run-time. Next, the variation models, themselves, are explored and a new critical dimension (CD) model is proposed which reduces standard deviation error in SSTA by ~3X. Finally, the focus changes from the timing analysis level and moves lower in the design hierarchy to the libraries and devices that comprise the backbone of IC design. The final three chapters study mechanical stress enhancement and discuss how to fully exploit the layout dependencies of mechanically stressed silicon. The first of these three chapters presents an optimization scheme that uses the layout dependencies of stress in conjunction with dual-threshold-voltage (Vth) assignment to decrease leakage power consumption by ~24%. Next, the second of the three chapters proposes a new standard cell library design methodology, called "STEEL." STEEL provides average delay improvements of 11% over equivalent single-Vth implementations, while consuming 2.5X less leakage than the dual-Vth alternative. Finally, the stress enhanced studies (and this document) are concluded by a new optimization scheme that combines stress enhancement with gate length biasing to achieve 2.9X leakage power savings in IC designs without modifying Vth. Ph.D. Electrical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75947/1/btcline_1.pd
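    For reference, a simple correlation model of the kind found adequate in the SSTA study above can be written, in generic canonical notation (an illustrative form, not the dissertation's exact model), as:

```latex
% Per-gate delay with only a shared die-to-die term and an independent random term
% (generic canonical form, illustrative only):
d_i = \mu_i + a_i\,\Delta Z_{\mathrm{d2d}} + b_i\,\varepsilon_i,
\qquad \Delta Z_{\mathrm{d2d}},\ \varepsilon_i \sim \mathcal{N}(0,1),
\qquad \sigma_{d_i}^2 = a_i^2 + b_i^2 .
```

    With only the shared die-to-die term and an independent random term, per-gate variances combine directly during timing propagation, without the overhead of maintaining a full spatial correlation model.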

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Both random and systematic within-die process variations (PV) are growing more severe with shrinking geometries and increasing die size. Escalation of the variations in delay and power with reductions in feature size places higher demands on the accuracy of variation models. Their availability can be used to improve yield, and the corresponding profitability and product quality of the fabricated integrated circuits (ICs). Sources of within-die variations include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of PV models and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling and are increasingly affected by "neighborhood" interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (RO). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlations). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion and whose architecture measures path delays more accurately. Design for manufacturability (DFM) analysis is performed on 90 nm ASIC chips and 28 nm Zynq-7000 series FPGA boards. I present ASIC results on within-die path delay variations in a floating-point unit (FPU) with 5 pipeline stages, fabricated in IBM's 90 nm technology and used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data have also been analyzed for path delay variations in short versus long paths. FPGA results on within-die and die-to-die variations for an Advanced Encryption Standard (AES) design using a single pipeline stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short versus long path variations, and mid-length path within-die variation. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. The experimental results establish that the proposed REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of the signal on the tested path, including any static and dynamic hazards that may occur. The ASIC results further show that path delays are correlated to the launch-capture (LC) interval used to time them; therefore, calibration as proposed in this work must be carried out in order to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary up to 35% on average, while long paths vary up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes are reduced to 20% and 5%, respectively. The magnitude of delay variations in both these analyses increases as temperature and voltage are changed to increase performance. High levels of within-die delay variation are undesirable from a design perspective, but they represent a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering, and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. The Zynq FPGA die-to-die and within-die variation study shows that, on average, there is 5% within-die variation, while die-to-die variation can range up to 3 ns. The die-to-die variations can be explored in further detail to study their spatial dependence. Additionally, I carried out research in the area of data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets requirements on execution time and minimal resource usage in terms of area. The motivation for this work is that analyzing larger data sets has created abundant opportunities for algorithmic and architectural developments and data-mining innovations, creating a great demand for faster execution of these algorithms and driving improvements in execution time and resource utilization. Decision trees (DT) have traditionally been implemented in software. Although software implementations of DTC are highly accurate, their execution times and resource utilization still require improvement to meet the computational demands of the ever-growing industry. On the other hand, hardware implementations of DT have not been thoroughly investigated or reported in detail. Therefore, I propose a hardware-accelerated pipelined architecture that acquires data in parallel, with parallel engines working independently on different partitions of the data. Each engine also processes the data in a pipelined fashion to utilize resources more efficiently and reduce the time needed to process all data records/tuples. Experimental results show that the proposed hardware acceleration of classification algorithms increases throughput, by reducing the number of clock cycles required to process the data and generate the results, while requiring minimal resources and hence being area efficient. This architecture also enables algorithms to scale with increasingly large and complex data sets. We developed the DTC algorithm in detail and explored techniques for successfully adapting it to a hardware implementation. This system is 3.5 times faster than the existing hardware implementation of classification.
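    As a software reference for the classification step that the proposed pipelined hardware accelerates, the sketch below performs axis-parallel binary decision tree classification and emulates parallel engines by splitting the records into independent partitions; the node layout, data, and partitioning scheme are illustrative assumptions, not the implemented architecture.

```python
# Software reference for axis-parallel binary decision tree classification
# (illustrative node layout and data; the hardware version pipelines this
# traversal and runs parallel engines on independent data partitions).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: int = -1            # index of the feature compared at this node
    threshold: float = 0.0       # axis-parallel split: go left if x[feature] <= threshold
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None  # set only on leaf nodes

def classify(root: Node, record) -> int:
    node = root
    while node.label is None:    # descend until a leaf is reached
        node = node.left if record[node.feature] <= node.threshold else node.right
    return node.label

def classify_partitioned(root: Node, records, n_engines=4):
    """Emulate parallel engines by classifying independent partitions of the data."""
    chunks = [records[i::n_engines] for i in range(n_engines)]
    return [[classify(root, r) for r in chunk] for chunk in chunks]

if __name__ == "__main__":
    tree = Node(feature=0, threshold=0.5,
                left=Node(label=0),
                right=Node(feature=1, threshold=2.0,
                           left=Node(label=1), right=Node(label=2)))
    data = [(0.2, 1.0), (0.9, 1.5), (0.8, 3.0)]
    print(classify_partitioned(tree, data, n_engines=2))   # [[0, 2], [1]]
```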

    Architectural level delay and leakage power modelling of manufacturing process variation

    PhD Thesis. The effect of manufacturing process variations has become a major issue in the estimation of circuit delay and power dissipation, and will gain more importance in the future as device scaling continues in order to satisfy market demands for circuits with greater performance and functionality per unit area. Statistical modelling and analysis approaches have been widely used to reflect the effects of a variety of variational process parameters on system performance factors, which are described as probability density functions (PDFs). At present, most investigations into statistical models have been limited to small circuits such as a logic gate. However, the massive size of present-day electronic systems precludes the use of design techniques which consider a system to comprise these basic gates, as this level of design is very inefficient and error prone. This thesis proposes a methodology to bring the effects of process variation from transistor level up to architectural level in terms of circuit delay and leakage power dissipation. Using a first-order canonical model and a statistical analysis approach, a statistical cell library has been built which comprises not only basic gate cell models but also more complex functional blocks such as registers, FIFOs, counters, and ALUs. Furthermore, other factors to which overall system performance is sensitive, such as input signal slope, output load capacitance, different signal switching cases, and transition types, are also taken into account for each cell in the library, which makes it adaptable to incremental circuit design. The proposed methodology enables an efficient analysis of process variation effects on system performance with significantly reduced computation time compared to the Monte Carlo simulation approach. As a demonstration vehicle for this technique, the delay and leakage power distributions of a 2-stage asynchronous micropipeline circuit have been simulated using this cell library. The experimental results show that the proposed method can predict the delay and leakage power distributions with less than 5% error and at least 50,000 times faster computation compared to a 5000-sample SPICE-based Monte Carlo simulation. The methodology presented here for modelling process variability plays a significant role in Design for Manufacturability (DFM) by quantifying the direct impact of process variations on system performance. The advantages of being able to undertake this analysis at a high level of abstraction, and thus early in the design cycle, are twofold. First, if the predicted effects of process variation render the circuit performance outside its specification, design modifications can be readily incorporated to rectify the situation. Second, knowing the acceptable limits of process variation for maintaining design performance within its specification, informed choices can be made regarding the implementation technology and the manufacturer selected to fabricate the design.
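    The advantage of statistical propagation over Monte Carlo can be pictured with a toy example: if the stage delays along a path are modeled as independent Gaussians taken from the cell library, the path mean and variance follow in a single analytic pass, whereas Monte Carlo needs thousands of full-path samples. The numbers below are assumed, and the thesis's canonical model additionally handles correlated terms, input slope, load, and switching cases.

```python
# Toy comparison (assumed numbers; stage delays treated as independent Gaussians):
# analytic propagation of mean/variance through a delay chain versus Monte Carlo.
import numpy as np

stages = [(100.0, 8.0), (120.0, 10.0), (90.0, 6.0)]   # (mean ps, sigma ps) per cell

# Statistical propagation: one pass over the cell characterization data.
mu = sum(m for m, _ in stages)
sigma = np.sqrt(sum(s**2 for _, s in stages))

# Monte Carlo reference: thousands of full-path samples.
rng = np.random.default_rng(1)
samples = sum(rng.normal(m, s, size=5000) for m, s in stages)

print(f"analytic   : mean={mu:.1f} ps, sigma={sigma:.1f} ps")
print(f"monte-carlo: mean={samples.mean():.1f} ps, sigma={samples.std(ddof=1):.1f} ps")
```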

    Design of a process monitor and of peripheral circuits enabling the characterisation of CMOS 45nm Ultra Low Power and Litho Friendly optimised standard cells

    The evolution of CMOS technology is characterised by the scaling of transistor dimensions and the reduction of power dissipation. In the most recent technology nodes the pace of scaling has decreased, since process complexity increases as dimensions shrink. One of the main issues caused by shrinking transistor dimensions is the variability of the fabrication process. The goal of this project is to reduce the effects of fabrication process variability on digital circuit performance in a 45 nm CMOS technology node by using unconventional design methods. A testchip was realised in this project to investigate and quantify the performance improvements obtained through the design of dedicated litho-friendly (LF) and ultra-low-power (ULP) standard-cell-like libraries. The LF standard cell libraries are optimised for lithography using highly regular layout styles. The ULP standard cell library is optimised to operate at extremely low supply voltages. The main aim of the testchip is to gain insight into the local and global variability of parameters relevant to digital design, such as operating frequency and power consumption. The testchip also includes some original circuits developed to monitor the quality of the fabrication process.

    DFM Techniques for the Detection and Mitigation of Hotspots in Nanometer Technology

    With the continuous scaling down of dimensions in advanced technology nodes, process variations are getting worse with each new node. Process variations have a large influence on the quality and yield of the designed and manufactured circuits, so there is a growing need for fast and efficient techniques to characterize and mitigate the effects of the different sources of process variation on a design's performance and yield. In this thesis we have studied the various sources of systematic process variation and their effects on the circuit, and the various methodologies to combat systematic process variation in the design space. We developed abstract yet accurate process variability models that capture systematic intra-die variations. The models convert variation in the process into variation in the electrical parameters of devices, and hence into variation in circuit performance (timing and leakage), without the need for circuit simulation. Since analysis and mitigation techniques are studied at different levels of the design flow, we proposed a flow for combating systematic process variation in nanometer CMOS technology. By calculating the effects of variability on the electrical performance of circuits we can gauge the importance of accurate analysis and model-driven corrections. We presented an automated framework that allows the integration of circuit analysis with process variability modelling to optimize the compute-intensive process simulation steps and the usage of variation mitigation techniques. We used the results obtained from this framework to develop a relation between layout regularity and the resilience of devices to process variation, and used these findings to develop a novel technique for fast detection of critical failures (hotspots) resulting from process variation. We showed that our approach is superior to other published techniques in both accuracy and predictability. Finally, we presented an automated method for fixing lithography hotspots; our method showed a success rate of 99% in fixing hotspots.
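    The model-driven conversion from process variation to electrical-parameter variation described above can be pictured, to first order, as a sensitivity expansion; the generic form below is only illustrative and does not reproduce the fitted models developed in the thesis:

```latex
% Generic first-order sensitivity model (illustrative, not the fitted models used):
\frac{\Delta I_{on}}{I_{on}} \;\approx\;
      \frac{\partial \ln I_{on}}{\partial \ln L}\,\frac{\Delta L}{L}
    + \frac{\partial \ln I_{on}}{\partial \ln W}\,\frac{\Delta W}{W}
    + \frac{\partial \ln I_{on}}{\partial V_{th}}\,\Delta V_{th}
```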

    Manufacturability Aware Design.

    The aim of this work is to provide solutions that optimize the tradeoffs among design, manufacturability, and cost of ownership posed by technology scaling and sub-wavelength lithography. These solutions may take the form of robust circuit designs, cost-effective resolution technologies, accurate modeling considering process variations, and design rule assessment. We first establish a framework for assessing the impact of process variation on circuit performance, product value, and return on investment for alternative processes. Key features include comprehensive modeling with separate handling of die-to-die and within-die variation, accurate models of correlations of variation, realistic and quantified projection to future process nodes, and performance sensitivity analysis with respect to improved control of individual device parameters and variation sources. We then describe a novel minimum-cost-of-correction methodology which determines the level of correction of each layout feature such that the prescribed parametric yield is attained with minimum RET (Resolution Enhancement Technology) cost. This timing-driven OPC (Optical Proximity Correction) insertion flow uses a mathematical-programming-based slack budgeting algorithm to determine the OPC level for all polysilicon gate geometries. Designs adopting this methodology show up to 20% MEBES (Manufacturing Electron Beam Exposure System) data volume reduction and 39% OPC runtime improvement. When the systematic correction residual errors become unavoidable, we analyze their impact on a state-of-the-art microprocessor's speedpath skew. A platform is created for diagnosing and improving OPC quality on gates with specific functionality, such as critical gates or matching transistors. Significant changes in full-chip timing analysis indicate the necessity of a post-OPC performance verification design flow. Finally, we quantify the performance, manufacturability, and mask cost impact of globally applying several common restrictive design rules. Novel approaches such as locally adapting FDRs (flexible design rules) based on image parameter range, and DRC Plus (preferred design rule enforcement with 2D pattern matching), are also described. Ph.D. Electrical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57676/2/jiey_1.pd
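    The minimum-cost-of-correction idea can be sketched as a mathematical program that chooses an OPC correction level per gate so that total RET cost is minimized while path timing stays within budget; the notation below is illustrative and not the exact slack-budgeting formulation used in the dissertation:

```latex
% Illustrative slack-budgeting formulation (not the exact program in the thesis):
\min_{\;\ell_g \in \{1,\dots,K\}} \; \sum_{g \in \mathcal{G}} c(\ell_g)
\quad \text{s.t.} \quad
\sum_{g \in \pi} \bigl( d_g + \Delta d_g(\ell_g) \bigr) \;\le\; T_{\mathrm{clk}}
\quad \forall\, \text{paths } \pi
```

    Here c(·) denotes the mask-data/OPC cost of a given correction level and Δd_g(·) the residual delay error it leaves on gate g, so cheaper, coarser correction is spent where timing slack allows it.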

    Characterization and mitigation of process variation in digital circuits and systems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 155-166). Process variation threatens to negate a whole generation of scaling in advanced process technologies due to performance and power spreads of greater than 30-50%. Mitigating this impact requires a thorough understanding of the variation sources, magnitudes, and spatial components at the device, circuit, and architectural levels. This thesis explores the impacts of variation at each of these levels and evaluates techniques to alleviate them in the context of digital circuits and systems. At the device level, we propose isolation and measurement of variation in the intrinsic threshold voltage of a MOSFET using sub-threshold leakage currents. Analysis of the measured data, from a test-chip implemented on a 0.18 µm CMOS process, indicates that variation in MOSFET threshold voltage is a truly random process dependent only on device dimensions. Further decomposition of the observed variation reveals neither systematic within-die variation components nor any spatial correlation. A second test-chip, capable of characterizing spatial variation in digital circuits, is developed and implemented in a 90 nm triple-well CMOS process. Measured variation results show that the within-die component of variation is small at high voltages but becomes an increasing fraction of the total variation as the power-supply voltage decreases. Once again, the data shows no evidence of within-die spatial correlation and only weak systematic components. Evaluation of adaptive body-biasing and voltage scaling as variation mitigation techniques shows that voltage scaling is more effective in performance modification, with reduced impact on idle power compared to body-biasing. Finally, the addition of power-supply voltages in a massively parallel multicore processor is explored to reduce the energy required to cope with process variation. An analytic optimization framework is developed and analyzed; using a custom simulation methodology, the total energy of a hypothetical 1K-core processor based on the RAW core is reduced by 6-16% with the addition of only a single voltage. Analysis of yield versus required energy demonstrates that a combination of disabling poor-performing cores and additional power-supply voltages results in an optimal trade-off between performance and energy. by Nigel Anthony Drego. Ph.D.
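    The leakage-based isolation of threshold-voltage variation mentioned above relies on the exponential dependence of sub-threshold current on the threshold voltage, so a per-device threshold shift can be inferred from the ratio of its leakage to a nominal value. The textbook relation below illustrates the idea and is not the thesis's calibrated extraction model:

```latex
% Standard sub-threshold relation (textbook form), at fixed V_GS and V_DS:
I_{sub} \;=\; I_0 \, e^{\,(V_{GS}-V_{th})/(n V_T)} \left(1 - e^{-V_{DS}/V_T}\right)
\;\;\Longrightarrow\;\;
\Delta V_{th} \;\approx\; -\,n V_T \,\ln\!\frac{I_{sub}}{I_{sub,\mathrm{nom}}}
```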