
    EDA Solutions for Double Patterning Lithography

    Extending optical lithography to the 32-nm node and beyond is impossible with existing single-exposure systems. Double patterning lithography (DPL) is therefore the most promising option for achieving the required lithographic resolution: the target layout is printed with two separate imaging processes. Among the various DPL techniques, litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are the most popular; the former applies two complete exposure-lithography steps, while the latter follows a single exposure with a chemical imaging process. To realize double patterning, patterns located within a sub-resolution distance of each other must be assigned to different imaging sub-processes, a task known as layout decomposition. To achieve optimal design yield, the layout decomposition problem must be solved with respect to the characteristics and limitations of the applied DPL method. For example, although patterns can be split between the two sub-masks in LELE to generate conflict-free masks, such splitting is unfavorable because of its sensitivity to lithography imperfections such as overlay error. In SADP, by contrast, pattern splitting is forbidden because it results in non-resolvable gap failures in the final image. Beyond functional yield, layout decomposition also affects the parametric yield of designs printed by double patterning. This thesis addresses EDA solutions for both the functional and parametric challenges of DPL in dense, large layouts. To this end, we propose a statistical method for determining interconnect width and spacing in LELE under random overlay error. In addition to maximizing yield and achieving a near-optimal trade-off between competing parametric requirements, the method provides valuable insight into the trends of parametric and functional yield in future technology nodes. We then focus on self-aligned double patterning and propose layout design and decomposition methods that produce SADP-compatible layouts and litho-friendly decomposed layouts. Specifically, a grid-based ILP formulation of SADP decomposition is proposed to avoid decomposition conflicts and improve the overall printability of layout patterns. To overcome the limited applicability of the ILP-based method to fully decomposable layouts, a partitioning-based method is proposed that is also faster than the grid-based ILP decomposition. Moreover, an A∗-based SADP-aware detailed routing method is proposed that performs detailed routing and layout decomposition simultaneously to avoid litho-limited layout configurations; the router also preserves the uniformity of pattern density between the two sub-masks of the SADP process. Finally, we extend our double patterning decomposition method to triple patterning and formulate self-aligned triple patterning (SATP) decomposition as an integer linear program. In addition to the conventional minimum-width and minimum-spacing constraints, the proposed decomposition minimizes mandrel-trim co-defined edges and maximizes the layout features printed by structural spacers to achieve minimum pattern distortion. This thesis is among the earliest research efforts to investigate litho-friendliness in SADP-aware layout design and decomposition. As the experimental results show, the proposed methods advance prior state-of-the-art algorithms in several respects.
Specifically, the proposed SADP decomposition methods improve the total length of sensitive trim edges, the total edge placement error (EPE), and the overall printability of the evaluated designs. In addition, our SADP-aware detailed routing method produces SADP-decomposable layouts whose trim patterns are highly robust to lithography imperfections. The experimental results for SATP decomposition show that the total length of overlay-sensitive layout patterns, the total EPE, and the overall printability are likewise improved considerably. Finally, the methods in this thesis reveal several insights that can inform the manufacturability of upcoming technology nodes.
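The core constraint behind any DPL decomposition is that features closer than the sub-resolution spacing must land on different masks, which in the simplest conflict-free case reduces to two-coloring a conflict graph. The sketch below illustrates only that core idea, not the thesis's grid-based ILP formulation; the feature coordinates and the spacing threshold are hypothetical.

```python
# Minimal sketch of DPL layout decomposition as conflict-graph two-coloring.
# Illustrative only; the thesis uses a grid-based ILP. All values assumed.
from collections import deque
from itertools import combinations
from math import hypot

MIN_SPACING = 64  # nm; features closer than this conflict (assumed value)

def decompose(features):
    """features: {name: (x, y)} centroids. Returns {name: mask 0/1} or None."""
    # Build the conflict graph: an edge joins any two features that are
    # too close to print on the same mask.
    adj = {f: set() for f in features}
    for a, b in combinations(features, 2):
        (xa, ya), (xb, yb) = features[a], features[b]
        if hypot(xa - xb, ya - yb) < MIN_SPACING:
            adj[a].add(b)
            adj[b].add(a)
    # BFS two-coloring; an odd cycle means no conflict-free decomposition
    # (an LELE stitch or a layout change would then be needed).
    mask = {}
    for start in features:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in mask:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None  # native conflict: not two-colorable
    return mask

print(decompose({"A": (0, 0), "B": (40, 0), "C": (80, 0)}))
# {'A': 0, 'B': 1, 'C': 0} -- alternating mask assignment
```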

    On Regularity and Integrated DFM Metrics

    Transistor geometries are well into the nanometer regime, in keeping with Moore's Law. With this scaling in geometry, problems that were insignificant at larger geometries have come to the fore. These problems, collectively termed variability, stem from second-order effects of the small geometries themselves and from engineering limitations in fabricating them. The engineering obstacles have a few known solutions, which have yet to be widely adopted because of deployment costs. Addressing and mitigating variability due to second-order effects falls largely under the purview of device engineers and, to a smaller extent, design practices. Passive layout measures that ease these manufacturing limitations by regularizing the various layout pitches have been explored in the past. However, the question of the best design practice for combating systematic variations remains open. In this work we explore considerations for regular layout using exclusive-OR gate, half-adder, and full-adder cells implemented with varying degrees of regularity. Trade-offs such as complete interconnect unidirectionality and the inevitable introduction of vias are analyzed qualitatively, and some factors affecting the analysis are presented. Finally, results from Calibre Critical Feature Analysis (CFA) of the cells are used to evaluate the qualitative analysis.

    DFM Techniques for the Detection and Mitigation of Hotspots in Nanometer Technology

    With the continuous scaling of dimensions in advanced technology nodes, process variations grow worse with each new node. They strongly influence the quality and yield of designed and manufactured circuits, so there is a growing need for fast, efficient techniques to characterize and mitigate the effects of the various sources of process variation on a design's performance and yield. In this thesis we study the sources of systematic process variation, their effects on the circuit, and methodologies for combating systematic variation across the design space. We develop abstract yet accurate process variability models for systematic intra-die variations. The models convert variation in the process into variation in the electrical parameters of devices, and hence into variation in circuit performance (timing and leakage), without requiring circuit simulation. Since analysis and mitigation techniques apply at different levels of the design flow, we propose a flow for combating systematic process variation in nanometer CMOS technology. By calculating the effects of variability on the electrical performance of circuits, we can gauge the importance of accurate analysis and model-driven corrections. We present an automated framework that integrates circuit analysis with process variability modeling to streamline the compute-intensive process simulation steps and optimize the use of variation mitigation techniques. Using results from this framework, we develop a relation between layout regularity and the resilience of devices to process variation, and we apply these findings in a novel technique for fast detection of critical failures (hotspots) caused by process variation. We show that our approach is superior to other published techniques in both accuracy and predictability. Finally, we present an automated method for fixing lithography hotspots, which achieves a 99% success rate.
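The idea of mapping process variation directly to electrical variation without circuit simulation can be illustrated with first-order sensitivities. The sketch below is a minimal, hypothetical stand-in for the abstract variability models described above; the coefficients are assumed placeholders, not fitted values from the thesis.

```python
# Minimal sketch: map a systematic gate-length perturbation to timing and
# leakage shifts through first-order sensitivities, with no SPICE run.
# All coefficients are hypothetical placeholders, not fitted values.
import math

DELAY_SENS = 0.8       # relative delay change per relative dL change (assumed)
LEAKAGE_LAMBDA = 12.0  # exponential leakage sensitivity to -dL/L (assumed)

def electrical_shift(dL_over_L):
    """Convert relative gate-length variation into circuit-level shifts."""
    delay_shift = DELAY_SENS * dL_over_L                   # roughly linear
    leakage_ratio = math.exp(-LEAKAGE_LAMBDA * dL_over_L)  # strongly nonlinear
    return delay_shift, leakage_ratio

# A -2% systematic gate-length shrink (e.g., in a litho hotspot region):
d, leak = electrical_shift(-0.02)
print(f"delay shift: {d:+.2%}, leakage multiplier: {leak:.2f}x")
```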

    Ultra-Stretchable Interconnects for High-Density Stretchable Electronics

    The exciting field of stretchable electronics (SE) promises numerous novel applications, particularly in-body and medical diagnostic devices. However, future advanced miniature SE devices will require high-density, extremely stretchable interconnects with micron-scale footprints, which calls for proven, standardized (complementary metal-oxide-semiconductor (CMOS)-type) process recipes using bulk integrated circuit (IC) microfabrication tools and fine-pitch photolithographic patterning. Here, we address this combined challenge of microfabrication with extreme stretchability for high-density SE devices by introducing CMOS-enabled, free-standing, miniaturized interconnect structures that fully exploit their 3D kinematic freedom through an interplay of buckling, torsion, and bending to maximize stretchability. Integration with standard CMOS-type batch processing is assured by using the Flex-to-Rigid (F2R) post-processing technology to make the back-end-of-line interconnect structures free-standing, thus enabling routine microfabrication of highly stretchable interconnects. The performance and reproducibility of these free-standing structures are promising: elastic stretch beyond 2000% and ultimate (plastic) stretch beyond 3000%, with 10 million cycles at 1000% stretch and <1% resistance change. This generic technology provides a new route to exciting highly stretchable miniature devices. Comment: 13 pages, 5 figures, journal publication.

    Regular cell design approach considering lithography-induced process variations

    The deployment delays for EUVL force IC design to continue using 193nm-wavelength lithography, with innovative and costly techniques, in order to faithfully print sub-wavelength features and combat lithography-induced process variations. The lithography gap in current and upcoming technologies causes severe distortions in the printed patterns due to optical diffraction, degrading manufacturing yield. A paradigm shift in layout design toward more regular, litho-friendly cell designs is therefore mandatory to improve line-pattern resolution. However, it remains unclear how much layout regularity should be introduced and how to measure the benefits and weaknesses of regular layouts. This dissertation focuses on finding the degree of layout regularity necessary to combat lithography variability and improve the layout quality of a design. The main contributions toward this objective are: (1) the definition of several layout design guidelines to mitigate lithography variability; (2) a parametric yield estimation model to evaluate the impact of lithography on layout design; (3) a global Layout Quality Metric (LQM), including a Regularity Metric (RM), to capture the degree of regularity of a layout implementation; and (4) the creation of layout architectures that exploit the benefits of regularity to improve line-pattern resolution, referred to as Adaptive Lithography Aware Regular Cell Designs (ALARCs). The first part of this thesis provides several regular layout design guidelines, derived from lithography simulations, that minimize several important lithography-related variation sources. Moreover, a design-level methodology referred to as gate biasing is proposed to overcome systematic layout-dependent variations, across-field variations, and the non-rectilinear gate (NRG) effect in regular fabrics by properly configuring the drawn transistor channel length. The second part proposes a lithography yield estimation model that predicts, with a reduced set of lithography simulations, the amount of lithography distortion expected in a printed layout due to lithography hotspots. An efficient hotspot framework identifies the different layout pattern configurations, simplifies them to ease pattern analysis, and classifies them according to the lithography degradation predicted by simulation. The yield model is calibrated with delay measurements of a reduced set of identical test circuits implemented in a 40nm CMOS technology, so actual silicon data is used to obtain a more realistic yield estimate. The third part presents the configurable Layout Quality Metric (LQM), which considers several layout aspects to provide a global evaluation of a layout design with a single score. The LQM can be tuned by assigning different weights to each evaluation metric or by modifying the parameters under analysis; here it is configured with two different sets of partial metrics, and it includes the Regularity Metric (RM) to capture the degree of regularity applied in a layout design. Lastly, this thesis presents several ALARC designs for a 40nm technology using different degrees of layout regularity and different area overheads.
The quality of the gridded regular templates is demonstrated by automatically creating a library of 266 cells, including combinational and sequential cells, and synthesizing several ITC'99 benchmark circuits. The regular cell libraries carry only a 9% area penalty compared with the 2D standard-cell designs used for comparison, thus remaining area-competitive. The LQM-based evaluation of the benchmark circuits shows that regular layouts can outperform other 2D standard-cell designs, depending on the layout implementation.
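A weighted combination of partial metrics, as the LQM description suggests, is straightforward to express concretely. The sketch below is a minimal, hypothetical illustration of that scoring structure; the metric names, values, and weights are assumptions, not the dissertation's actual partial metrics.

```python
# Minimal sketch of a configurable layout-quality score in the spirit of
# the LQM: a weighted combination of partial metrics, one of which captures
# regularity (RM). Metric names, values, and weights are hypothetical.
def layout_quality(metrics, weights):
    """metrics/weights: dicts keyed by partial-metric name, metrics in [0,1]."""
    total_w = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total_w

partial = {
    "regularity": 0.85,   # RM: fraction of shapes on a preferred grid/pitch
    "litho_yield": 0.92,  # predicted parametric yield from a hotspot model
    "area": 0.70,         # normalized area efficiency
}
weights = {"regularity": 2.0, "litho_yield": 3.0, "area": 1.0}
print(f"LQM = {layout_quality(partial, weights):.3f}")
```

Reweighting (or swapping in a different set of partial metrics) reconfigures the score without changing its structure, which is the configurability the abstract describes.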

    Design Techniques for Lithography-Friendly Nanometer CMOS Integrated Circuits

    The integrated circuits industry has been a major driver of the remarkable changes and improvements in modern technology and lifestyle. The continuous scaling of CMOS technology has been one of its major challenges and success stories. However, as CMOS advances deep into the sub-micron technology nodes, the whole industry (both manufacturing and design) faces new challenges. One major challenge is controlling the variation in device parameters. Lithography variations result from the industry's inability to produce light sources with a wavelength smaller than the ArF source (193nm). In this research, we develop a better understanding of photolithography variations and their effect on how a design gets patterned, and we investigate state-of-the-art mask correction and design manipulation techniques. Our study focuses on Optical Proximity Correction (OPC) and design retargeting techniques, assessing how both functional and parametric yield can be improved. Our goal is a fast and accurate Model-Based Re-Targeting (MBRT) technique that achieves better functional yield during manufacturing by producing more lithography-friendly targets; owing to its relatively high speed, it can also be integrated into a fab's PDK to feed the exact final printing on wafer back to designers during the early design phase. This thesis focuses on two main topics. The first is a fast technique that predicts the final mask shape with reasonable accuracy: our proposed Model-based Initial Bias (MIB) methodology, for which we develop the full flow for creating compact models that predict the perturbation needed to reach an OPC initial condition much closer to the final solution. This is broadly useful in the OPC domain, where it can save almost 50% of the OPC runtime; we also use MIB in our proposed MBRT flow to accurately compute lithography hotspot locations and severity. The second is the fast model-based retargeting methodology itself, capable of fixing lithography hotspots and improving functional yield. In this methodology we introduce, for the first time, the concept of distributed retargeting: not only is the design portion suffering from the hotspot moved to fix it, but the surrounding design fragments also contribute to the fix. The methodology is multiple-patterning-aware as well as electrical-connectivity-aware (via-aware). We used the Mentor Graphics Calibre Litho-API C-based programming interface to develop all the methodologies described in this thesis and tested them on 20nm and 10nm nodes.
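Why a good initial bias saves OPC runtime can be seen in the iterative edge-bias loop at the heart of OPC. The sketch below is a toy illustration of that loop, not the MIB methodology or Calibre itself; the "litho model" and all constants are hypothetical.

```python
# Minimal sketch of the iterative edge-bias loop in OPC, illustrating why a
# model-predicted initial bias (the MIB idea) cuts iterations. The litho
# model is a toy stand-in, not Calibre; all numbers are hypothetical.
def printed_width(mask_width):
    # Toy aerial-image proxy: printing loses a fixed bias plus a
    # proportional fraction of the drawn width (assumed model).
    return 0.92 * mask_width - 4.0  # nm

def opc(target, initial_bias=0.0, tol=0.1, max_iters=50):
    mask = target + initial_bias
    for i in range(max_iters):
        epe = printed_width(mask) - target   # edge placement error
        if abs(epe) < tol:
            return mask, i                   # converged
        mask -= 0.8 * epe                    # damped correction step
    return mask, max_iters

# Starting cold vs. starting from a model-predicted near-final bias:
_, iters_cold = opc(target=45.0)
_, iters_mib = opc(target=45.0, initial_bias=8.5)
print(f"iterations from zero bias: {iters_cold}, from MIB-like bias: {iters_mib}")
```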

    Probing single electrons across 300 mm spin qubit wafers

    Building a fault-tolerant quantum computer will require vast numbers of physical qubits. For qubit technologies based on solid-state electronic devices, integrating millions of qubits in a single processor will require device fabrication to reach a scale comparable to that of the modern CMOS industry. Equally importantly, the scale of cryogenic device testing must keep pace to enable efficient device screening and to improve statistical metrics like qubit yield and process variation. Spin qubits have shown impressive control fidelities but have historically been challenged by yield and process variation. In this work, we present a testing process using a cryogenic 300 mm wafer prober to collect high-volume data on the performance of industry-manufactured spin qubit devices at 1.6 K. This testing method provides fast feedback to enable optimization of the CMOS-compatible fabrication process, leading to high yield and low process variation. Using this system, we automate measurements of the operating point of spin qubits and probe the transitions of single electrons across full wafers. We analyze the random variation in single-electron operating voltages and find that this fabrication process leads to low levels of disorder at the 300 mm scale. Together these results demonstrate the advances that can be achieved through the application of CMOS industry techniques to the fabrication and measurement of spin qubits. Comment: 15 pages, 4 figures, 7 extended data figures.
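Automating the extraction of a single-electron operating point from a gate-voltage sweep can be reduced to step detection in a charge-sensor trace. The sketch below is a minimal, hypothetical illustration of that one step with synthetic data; it is not the paper's measurement pipeline, and all signal parameters are assumptions.

```python
# Minimal sketch: locate a single-electron charge transition as the steepest
# step in a gate-voltage sweep of a charge-sensor signal. Data is synthetic;
# real traces would come from the cryogenic wafer prober.
import numpy as np

def transition_voltage(voltages, sensor_signal):
    """Return the gate voltage at the largest step in the sensor signal."""
    dI = np.abs(np.diff(sensor_signal))
    return voltages[np.argmax(dI)]

v = np.linspace(-1.0, 0.0, 501)
signal = 1.0 / (1.0 + np.exp(-(v + 0.42) / 0.005))  # synthetic charge step
signal += 0.01 * np.random.default_rng(0).standard_normal(v.size)  # noise
print(f"operating point near {transition_voltage(v, signal):+.3f} V")
```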

    A hierarchical optimization engine for nanoelectronic systems using emerging device and interconnect technologies

    A fast and efficient hierarchical optimization engine was developed to benchmark and optimize various emerging device and interconnect technologies and system-level innovations at the early design stage. As the semiconductor industry approaches sub-20nm technology nodes, both devices and interconnects face severe physical challenges. Many novel device and interconnect concepts and system integration techniques have been proposed over the past decade to reinforce or even replace conventional Si CMOS technology and Cu interconnects. To benchmark and optimize these emerging technologies efficiently, a validated system-level design methodology is developed based on compact models from all hierarchies, from the bottom material level, through the device and interconnect levels, up to the top system-level models. Multiple design parameters across all hierarchies are co-optimized simultaneously to maximize overall chip throughput rather than just the intrinsic delay or energy dissipation of the device or interconnect itself. This optimization is performed under constraints such as power dissipation, maximum temperature, die area, power delivery noise, and yield. For device benchmarking, novel graphene PN-junction devices and InAs nanowire FETs are investigated for both high-performance and low-power applications. For interconnect benchmarking, a novel local interconnect structure and a hybrid Al-Cu interconnect architecture are proposed, and emerging multi-layer graphene interconnects are investigated and compared with conventional Cu interconnects. At the system level, the benefits of systems implemented with 3D integration and heterogeneous integration are analyzed. In addition, the impact of power delivery noise and of process variation in both devices and interconnects on overall chip throughput is quantified.
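The flavor of this cross-hierarchy co-optimization, maximizing chip throughput under a power constraint rather than optimizing a device in isolation, can be shown with a toy search. The sketch below is a minimal, hypothetical illustration; the surrogate models and constants are assumptions, not the engine's validated compact models.

```python
# Minimal sketch of cross-hierarchy co-optimization: sweep two knobs (supply
# voltage and core count) and keep the throughput-maximizing point that meets
# a power cap. Toy first-order models with hypothetical constants.
from itertools import product

POWER_CAP = 100.0  # W, assumed constraint

def throughput(vdd, cores):
    freq = 2.0 * (vdd - 0.4)            # GHz; toy voltage-frequency surrogate
    return cores * freq                  # work proportional to cores * freq

def power(vdd, cores):
    freq = 2.0 * (vdd - 0.4)
    return cores * (3.0 * vdd**2 * freq + 0.5)  # dynamic + leakage (toy)

best = max(
    ((v, c) for v, c in product((0.6, 0.7, 0.8, 0.9, 1.0), range(8, 65, 8))
     if power(v, c) <= POWER_CAP),
    key=lambda vc: throughput(*vc),
)
print(f"best point: Vdd={best[0]} V, cores={best[1]}, "
      f"throughput={throughput(*best):.1f}, power={power(*best):.1f} W")
```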

    Characterization and mitigation of process variation in digital circuits and systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Process variation threatens to negate a whole generation of scaling in advanced process technologies, with performance and power spreads greater than 30-50%. Mitigating this impact requires a thorough understanding of the variation sources, magnitudes, and spatial components at the device, circuit, and architectural levels. This thesis explores the impact of variation at each of these levels and evaluates techniques to alleviate it in the context of digital circuits and systems. At the device level, we propose isolating and measuring variation in the intrinsic threshold voltage of a MOSFET using sub-threshold leakage currents. Analysis of data measured from a test chip implemented in a 0.18µm CMOS process indicates that variation in MOSFET threshold voltage is a truly random process dependent only on device dimensions; further decomposition of the observed variation reveals no systematic within-die components and no spatial correlation. A second test chip, capable of characterizing spatial variation in digital circuits, is implemented in a 90nm triple-well CMOS process. The measured results show that the within-die component of variation is small at high voltages but becomes an increasing fraction of the total variation as the power-supply voltage decreases. Once again, the data shows no evidence of within-die spatial correlation and only weak systematic components. Evaluation of adaptive body-biasing and voltage scaling as variation mitigation techniques shows that voltage scaling modifies performance more effectively, with less impact on idle power, than body-biasing. Finally, the addition of extra power-supply voltages in a massively parallel multicore processor is explored as a way to reduce the energy required to cope with process variation. An analytic optimization framework is developed and analyzed; using a custom simulation methodology, the total energy of a hypothetical 1K-core processor based on the RAW core is reduced by 6-16% with the addition of only a single voltage. Analysis of yield versus required energy demonstrates that combining the disabling of poor-performing cores with additional power-supply voltages yields an optimal trade-off between performance and energy. By Nigel Anthony Drego.
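The decomposition of measured variation into die-to-die and within-die components, central to this style of characterization, amounts to splitting each measurement into a per-die mean and a residual. The sketch below illustrates that decomposition on synthetic threshold-voltage data; the sigmas and array sizes are hypothetical, not the thesis's measurements.

```python
# Minimal sketch of variance decomposition: split measured threshold voltages
# into a die-to-die component (per-die mean) and a within-die residual.
# Synthetic data; all sigmas and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_dies, n_devices = 20, 400
die_means = rng.normal(0.0, 8e-3, size=(n_dies, 1))                  # D2D part
vth = 0.45 + die_means + rng.normal(0.0, 15e-3, (n_dies, n_devices))  # + WID

d2d = vth.mean(axis=1)        # per-die means -> die-to-die component
wid = vth - d2d[:, None]      # residuals -> within-die component
print(f"die-to-die sigma: {d2d.std(ddof=1)*1e3:.1f} mV")
print(f"within-die sigma: {wid.std(ddof=1)*1e3:.1f} mV")
```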