
    Efficient VLSI yield prediction with consideration of partial correlations

    With the emergence of the deep submicron era, process variations have gained importance in issues related to chip design. The impact of process variations is measured using manufacturing/parametric yield. To get an accurate estimate of yield, the parameters considered need to be monitored at a large number of locations. Nowadays, intra-die variations are an integral part of the overall process fluctuations. This leads to the difficult case where yield prediction must account for both independent and partially correlated variations. The presence of partial correlations adds to the difficulty already posed by the sheer number of variables. This thesis proposes two techniques for reducing the number of variables, and hence the complexity of the yield computation problem: Principal Component Analysis (PCA) and Hierarchical Adaptive Quadrisection (HAQ). Systematic process variations are also included in our yield model. The key advantage of both methods is that they shrink the size of the yield prediction problem, and thus its runtime, without sacrificing accuracy. The efficiency of the two approaches is measured by comparison against Monte Carlo simulation results. Compared to previous work, the PCA-based method can reduce the error in yield estimation from 17.1% - 21.1% to 1.3% - 2.8% with a 4.6x speedup. The HAQ technique can reduce the error to 4.1% - 5.6% with a 6x - 9.4x speedup.
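    To make the PCA idea concrete, below is a minimal Monte Carlo yield sketch in Python. It is an illustration only: the spatial covariance matrix, the linear performance metric, and the 3-sigma pass/fail limit are hypothetical placeholders, not the thesis's actual yield model, and the HAQ technique is not shown.

```python
# Minimal sketch (not the thesis's model): Monte Carlo yield estimation
# over partially correlated process parameters, with PCA used to cut the
# number of random variables before sampling.
import numpy as np

rng = np.random.default_rng(0)
n_params = 200                      # monitored locations (hypothetical)

# Hypothetical spatial covariance: nearby locations partially correlated.
idx = np.arange(n_params)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 25.0)

# PCA: keep only the principal components explaining 99% of the variance.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
k = np.searchsorted(np.cumsum(eigvals[order]) / eigvals.sum(), 0.99) + 1
basis = eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])

# Hypothetical performance metric: linear in the parameter deviations,
# with a 3-sigma pass/fail specification limit.
weights = rng.normal(size=n_params)
spec_limit = 3.0 * np.sqrt(weights @ cov @ weights)

def mc_yield(n_samples, reduced):
    if reduced:   # sample only k independent PCA coefficients
        dev = rng.standard_normal((n_samples, k)) @ basis.T
    else:         # sample the full correlated parameter vector directly
        dev = rng.multivariate_normal(np.zeros(n_params), cov, n_samples)
    return np.mean(dev @ weights <= spec_limit)

print(f"reduced from {n_params} variables to k={k}")
print("full MC yield    :", mc_yield(20000, reduced=False))
print("PCA-reduced yield:", mc_yield(20000, reduced=True))
```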

    Pixel Detectors for Charged Particles

    Pixel detectors, as the current technology of choice for innermost vertex detection, have reached a stage at which large detectors have been built for the LHC experiments, and a new era of developments, both for hybrid and for monolithic or semi-monolithic pixel detectors, is in full swing. This is largely driven by the requirements of the upgrade programme for the superLHC and by other collider experiments which plan to use monolithic pixel detectors for the first time. A review of current pixel detector developments for particle tracking and vertexing is given, comprising hybrid pixel detectors for the superLHC, with its own challenges in radiation and rate, as well as monolithic, so-called active pixel detectors, including MAPS and DEPFET pixels for RHIC and superBelle.

    Development of a fully-depleted thin-body FinFET process

    The goal of this work is to develop the processes needed for the demonstration of a fully-depleted (FD) thin-body fin field effect transistor (FinFET). Recognized by the 2003 International Technology Roadmap for Semiconductors as an emerging non-classical CMOS technology, FinFETs exhibit high drive current, reduced short-channel effects, and extreme scalability to deep submicron regimes. The approach used in this study builds on previous FinFET research, along with new concepts and technologies. The critical aspects of this research are: (1) thin-body creation using spacer etch masks and oxidation/etchback schemes, (2) use of an oxynitride gate dielectric, (3) evaluation of silicon crystal orientation effects, and (4) creation of fully-depleted FinFET devices of submicron gate length on Silicon-on-Insulator (SOI) substrates. The developed process yielded functional FinFETs of both the thin-body and wide-body variety. Electrical tests were employed to characterize device behaviour, including subthreshold characteristics, standard operation, the effect of gate misalignment on device performance, and the impact of crystal orientation on device drive current. The process is shown to have potential for deep submicron regimes of fin width and gate length, and provides a good foundation for further research on FinFETs and similar technologies at RIT.

    Radiation-induced edge effects in deep submicron CMOS transistors

    The study of the TID response of transistors and isolation test structures in a 130 nm commercial CMOS technology has demonstrated its increased radiation tolerance with respect to older technology nodes. While the thin gate oxide of the transistors is extremely tolerant to dose, charge trapping at the edge of the transistor still leads to leakage currents and, for narrow-channel transistors, to a significant threshold voltage shift, an effect that we call the Radiation-Induced Narrow Channel Effect (RINCE).

    Moving RIT to Submicron Technology: Fabrication of 0.5μm P-Channel MOS Transistors

    In this investigation, efforts have been made to move the Microelectronic Engineering Program at Rochester Institute of Technology to the next technology node by developing and fabricating a 0.5μm PMOS process. Currently, RIT is fabricating 1.0μm CMOS devices. A successful 0.5μm PMOS process can be incorporated into a full-flow 0.5μm CMOS process. Both process and electrical simulations were performed to predict performance. Key process features include a blanket n-well, LOCOS isolation, 15nm gate oxide, i-line lithography, self-aligned source and drain, P+ doped polysilicon gates, and shallow source/drain junctions. A test chip was created and the fabrication process was completed. The process was unable to produce working devices; the failure mode is residual oxide in the contacts to the polysilicon gates.

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.

    Energy challenges for ICT

    The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will impact heavily on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and to enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation, advanced driver assistance systems, and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra-large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.

    Statistical Characterization and Decomposition of SRAM cell Variability and Aging

    Memories play an integral role in today's advanced ICs. Technology scaling has enabled high-density designs, at the price of increased variability and reliability concerns. It is imperative to have accurate methods to measure and extract the variability in the SRAM cell to produce accurate reliability projections for future technologies. This work presents a novel test measurement and extraction technique which is non-invasive to the actual operation of the SRAM memory array. The salient features of this work include: i) a single-ended SRAM test structure with no disturbance to SRAM operations; ii) a convenient test procedure that only requires quasi-static control of external voltages; iii) a non-iterative method that extracts the VTH variation of each transistor from eight independent switch-point measurements. With present-day technology scaling, in addition to process variability, the impact of aging mechanisms becomes dominant. Aging mechanisms such as Negative Bias Temperature Instability (NBTI), Channel Hot Carrier (CHC), and Time Dependent Dielectric Breakdown (TDDB) are critical at present-day nano-scale technology nodes. In this work, we focus on the impact of NBTI aging on the SRAM cell and use a Trapping/De-Trapping theory based log(t) model to explain the shift in threshold voltage VTH. The aging section focuses on the following: i) statistical aging in the PMOS device due to NBTI dominates the temporal shift of the SRAM cell; ii) besides static variations, the shift in VTH demands increased guard-banding margins at the design stage; iii) aging statistics remain constant during the shift, presenting a secondary effect in aging prediction; iv) we investigate whether the aging mechanism can be used as a compensation technique to reduce mismatch due to process variations. Finally, the entire test setup has been verified in SPICE and validated in silicon, and the results are presented. The method also facilitates the study of design metrics such as static, read, and write noise margins as well as the data retention voltage, and thus helps designers improve the cell stability of SRAM.
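    The trapping/de-trapping log(t) model named in the abstract can be illustrated with a short fit of dVth(t) = A + B*log10(t). The sketch below uses invented aging data and an invented guard-band projection; none of the numbers are the dissertation's measurements.

```python
# Minimal sketch of a trapping/de-trapping log(t) NBTI model:
# dVth(t) = A + B*log10(t). All data and constants below are
# hypothetical, for illustration only.
import numpy as np

t = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])            # stress time (s)
dvth_mv = np.array([8.1, 12.0, 15.8, 20.2, 23.9, 28.1])  # invented shifts (mV)

# Least-squares fit of the log(t) model.
B, A = np.polyfit(np.log10(t), dvth_mv, 1)
print(f"dVth(t) ~ {A:.1f} mV + {B:.1f} mV * log10(t)")

# Project the shift to a 10-year lifetime to size a guard-band margin.
t_life = 10 * 365 * 24 * 3600
dvth_eol = A + B * np.log10(t_life)
print(f"projected end-of-life shift: {dvth_eol:.1f} mV")
```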

    Downscaling of 0.35 μm to 0.25 μm CMOS Transistor by Simulation

    Silicon (Si) based integrated circuits (ICs) have become the backbone of today's semiconductor world, with MOS transistors as their fundamental building blocks. Integrated circuit complexity has moved from the early small-scale integration (SSI) to ultra-large-scale integration (ULSI), which can accommodate millions of transistors on a single chip. This evolution is primarily attributed to the concept of device miniaturization. The resulting scaled-down devices not only improve the packing density but also exhibit enhanced performance in terms of faster switching speed and lower power dissipation. The objective of this work is to perform downscaling of a 0.35 μm to a 0.25 μm CMOS transistor using the Silvaco 2-D ATHENA and ATLAS simulation tools. A "two-step design" approach is proposed in this work to study the feasibility of the miniaturization process by the scaling method. A scaling factor K of 1.4 (derived from direct division of 0.35 by 0.25) is adopted for selected parameters. The first design step involves a conversion of the physical data of the 0.35 μm CMOS technology to the simulated environment, where a process recipe acquired from the UC Berkeley Microfabrication Lab serves as the design basis. The electrical data for the simulated 0.35 μm CMOS structure were extracted with the device simulator. Using the simulated, optimized 0.35 μm structure, downscaling to the smaller 0.25 μm CMOS geometry was carried out and subsequent electrical characterization was performed to evaluate its performance. Parameters monitored to evaluate the performance of the designed 0.25 μm CMOS transistor include threshold voltage (Vth), saturation current (Idsat), off-state leakage current (Ioff), and subthreshold swing (St). From the simulation, the Vth obtained is 0.51 V and -0.4 V for NMOS and PMOS respectively, a difference of 15%-33% compared to other reported work. However, the Idsat values obtained, 296 μA/μm for NMOS and 181 μA/μm for PMOS, are lower than other reported work by 28%-50%. This is believed to be due to direct scaling of the 0.25 μm transistor from the 0.35 μm geometry without alterations to the existing structure. For Ioff and St, both results show much better values compared to other work. The Ioff obtained, <10 pA/μm, is about 80%-96% lower than the maximum allowable specification. As for St, the values obtained, <90 mV/dec, are within 5% of the specification. Overall, these results (except for Idsat) are within accepted values for the 0.25 μm technology. From this work, the capability to perform device miniaturization from 0.35 μm to 0.25 μm has been developed. This was achieved by acquiring the technical know-how on the important aspects of simulation required for successful simulation of the 0.35 μm technology. Ultimately, the outcome of this work, a simulated 0.25 μm CMOS transistor, can be used as a basis for scaling down to a much smaller device, namely towards the 90-nm geometry.
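    For reference, the scaling factor is simple arithmetic: K = 0.35/0.25 = 1.4. The sketch below applies it to a few representative parameters under classical constant-field scaling rules; the 0.35 μm starting values are illustrative placeholders, not the thesis's recipe numbers.

```python
# Back-of-the-envelope constant-field scaling with K = 0.35/0.25 = 1.4.
# The 0.35 um starting values are hypothetical, not the actual recipe.
K = 0.35 / 0.25                      # = 1.4

params_035um = {                     # hypothetical 0.35 um values
    "gate_length_um":     0.35,
    "oxide_thickness_nm": 7.0,
    "supply_voltage_V":   3.3,
    "junction_depth_um":  0.10,
}

# Constant-field scaling: dimensions and voltage all divide by K
# (substrate doping, not shown, would instead multiply by K).
for name, value in params_035um.items():
    print(f"{name}: {value:g} -> {value / K:.3g}")
```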

    Seven strategies for tolerating highly defective fabrication

    In this article we present an architecture that supports fine-grained sparing and resource matching. The base logic structure is a set of interconnected PLAs. The PLAs and their interconnections consist of large arrays of interchangeable nanowires, which serve as programmable product and sum terms and as programmable interconnect links. Each nanowire can have several defective programmable junctions. We can test nanowires for functionality and use only the subset that provides appropriate conductivity and electrical characteristics. We then perform a matching between nanowire junction programmability and application logic needs to use almost all the nanowires even though most of them have defective junctions. We employ seven high-level strategies to achieve this level of defect tolerance
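    To illustrate the defect-aware matching step described above, the sketch below assigns product terms to nanowires whose working junctions cover each term's needed inputs, using a standard augmenting-path bipartite matching. The defect maps, term sets, and compatibility rule are random placeholders, not the authors' actual algorithm.

```python
# Minimal sketch of defect-aware resource matching: place each required
# product term on a nanowire whose *working* junctions cover the inputs
# the term must connect. Uses Kuhn's augmenting-path bipartite matching.
import random

random.seed(1)
N_WIRES, N_INPUTS, N_TERMS = 12, 8, 8

# Each nanowire: the set of junction positions that tested as programmable
# (~30% of junctions are defective in this toy defect map).
wires = [{j for j in range(N_INPUTS) if random.random() > 0.3}
         for _ in range(N_WIRES)]
# Each product term: the set of inputs it needs working junctions for.
terms = [set(random.sample(range(N_INPUTS), 3)) for _ in range(N_TERMS)]

def compatible(term, wire):
    return term <= wire              # all needed junctions must work

match = {}                           # wire index -> assigned term index

def try_assign(t, seen):
    """Find a wire for term t, reassigning earlier terms if needed."""
    for w in range(N_WIRES):
        if w not in seen and compatible(terms[t], wires[w]):
            seen.add(w)
            if w not in match or try_assign(match[w], seen):
                match[w] = t
                return True
    return False

placed = sum(try_assign(t, set()) for t in range(N_TERMS))
print(f"placed {placed}/{N_TERMS} terms on {N_WIRES} defective nanowires")
```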