31 research outputs found

    Optical Proximity Correction (OPC) Under Immersion Lithography

    As advanced technology nodes continue scaling into the sub-16 nm regime, optical microlithography becomes increasingly vulnerable to process variations, and overall lithographic yield degrades accordingly. Since next-generation lithography (NGL) is still not mature enough, the industry relies heavily on resolution enhancement techniques (RETs), among which optical proximity correction (OPC) with 193 nm immersion lithography will remain dominant for the foreseeable future. However, OPC algorithms are becoming more aggressive and therefore produce increasingly complex mask solutions, which leads to long computation times and an explosion in mask data volume. In this chapter, recent state-of-the-art OPC algorithms are discussed. Thereafter, the performance of a recently published fast OPC methodology, designed to generate highly manufacturable mask solutions with acceptable pattern fidelity under process variations, is verified on public benchmarks.
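
    The abstract does not spell out the algorithm itself, so the following Python sketch only illustrates the general shape of a model-based OPC loop: a toy Gaussian blur stands in for the aerial-image model, and the mask is nudged wherever the simulated print deviates from the target. The blur width, print threshold, and step size are invented placeholders, not parameters of the methodology being benchmarked.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy aerial-image model: a Gaussian blur stands in for the optical kernel.
# BLUR_SIGMA, PRINT_THRESHOLD, and STEP are illustrative placeholders only.
BLUR_SIGMA = 2.0       # "optical" blur in grid units
PRINT_THRESHOLD = 0.5  # resist threshold on the blurred mask
STEP = 0.5             # correction applied per iteration

def simulate(mask):
    """Approximate the printed image as a Gaussian-blurred mask."""
    return gaussian_filter(mask.astype(float), BLUR_SIGMA)

def simple_opc(target, iterations=10):
    """Pixel-based correction loop: add mask where the target under-prints,
    remove mask where the background over-prints."""
    mask = target.astype(float).copy()
    for _ in range(iterations):
        printed = simulate(mask) > PRINT_THRESHOLD
        mask[np.logical_and(target == 1, ~printed)] += STEP
        mask[np.logical_and(target == 0, printed)] -= STEP
        mask = np.clip(mask, 0.0, 1.0)
    return mask

if __name__ == "__main__":
    target = np.zeros((64, 64))
    target[24:40, 10:54] = 1  # a single line feature
    corrected = simple_opc(target)
    epe = np.abs((simulate(corrected) > PRINT_THRESHOLD).astype(float) - target).sum()
    print("remaining edge-placement error (pixels):", epe)
```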

    Design, Fabrication, and Run-time Strategies for Hardware-Assisted Security

    Today, electronic computing devices are critically involved in our daily lives, basic infrastructure, and national defense systems. With the growing number of threats against them, hardware-based security features offer the best chance for building secure and trustworthy cyber systems. In this dissertation, we investigate ways of making hardware-based security a reality, with a primary focus on two areas: hardware Trojan detection and Physically Unclonable Functions (PUFs). Hardware Trojans are malicious modifications made to original IC designs or layouts that can jeopardize the integrity of hardware and software platforms. Since most modern systems critically depend on ICs, the detection of hardware Trojans has garnered significant interest in academia, industry, and government agencies. The majority of existing detection schemes focus on test time because of the limited hardware resources available at run time. In this dissertation, we explore innovative run-time solutions that use on-chip thermal sensor measurements and fundamental estimation/detection theory to expose changes in the IC power/thermal profile caused by Trojan activation. The proposed solutions have low overhead and generalize to many other sensing modalities and problem instances. Simulation results using state-of-the-art tools on publicly available Trojan benchmarks verify that our approaches can detect Trojans quickly and with few false positives.

    Physically Unclonable Functions (PUFs) are circuits that rely on IC fabrication variations to generate unique signatures for security applications such as IC authentication, anti-counterfeiting, cryptographic key generation, and tamper resistance. While the existence of variations has been well exploited in PUF design, knowledge of exactly how those variations arise has largely been ignored. Yet for several decades the Design-for-Manufacturability (DFM) community has investigated the fundamental sources of these variations. Furthermore, since manufacturing variations are often harmful to IC yield, existing DFM tools are geared towards suppressing them, which is counter-intuitive for PUFs. In this dissertation, we make several improvements over the current state of the art in PUFs. First, our approaches exploit existing DFM models to improve PUFs at the physical layout and mask generation levels. Second, our algorithms reverse the role of standard DFM tools and extend them towards improving PUF quality without harming non-PUF portions of the IC. Finally, since our approaches are applied after design and before fabrication, they are applicable to all types of PUFs and incur little overhead in terms of area, power, etc. The innovative and unconventional techniques presented in this dissertation should act as important building blocks for future work in cyber security.
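
    As a minimal illustration of the run-time idea described above (and not the dissertation's actual estimator), the sketch below applies a textbook detection-theory test to on-chip thermal-sensor readings: residuals against a golden, Trojan-free profile are normalized by the sensor noise and compared to a chi-square threshold. The sensor values, noise level, and false-alarm rate are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def trojan_test(measured, expected, noise_sigma, false_alarm=1e-3):
    """Hypothesis test on thermal-sensor residuals.

    H0: readings match the golden (Trojan-free) thermal profile up to
        zero-mean Gaussian sensor noise.
    H1: an activated Trojan adds extra power/heat somewhere on the die.
    Returns (decision, test_statistic, threshold).
    """
    residual = (np.asarray(measured) - np.asarray(expected)) / noise_sigma
    stat = float(np.sum(residual ** 2))            # chi-square distributed under H0
    threshold = chi2.ppf(1.0 - false_alarm, df=len(residual))
    return stat > threshold, stat, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    golden = np.array([61.2, 63.8, 59.5, 64.1])    # hypothetical sensor temps (deg C)
    noise = 0.3
    clean = golden + rng.normal(0, noise, golden.shape)
    infected = clean + np.array([0.0, 1.5, 0.0, 0.4])  # localized Trojan heating
    for name, obs in [("clean", clean), ("infected", infected)]:
        hit, stat, thr = trojan_test(obs, golden, noise)
        print(f"{name}: statistic={stat:.1f} threshold={thr:.1f} -> Trojan={hit}")
```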

    Design for Manufacturing in IC Fabrication: Mask Cost, Circuit Performance and Convergence

    Ph.D. (Doctor of Philosophy)

    Electrical Design for Manufacturability Solutions: Fast Systematic Variation Analysis and Design Enhancement Techniques

    The primary objective of this research is to develop computer-aided design (CAD) tools for Design for Manufacturability (DFM) that enable designers to conduct faster and more accurate systematic variation analysis, together with different design enhancement techniques. Four main CAD tools are developed in this thesis. The first CAD tool enables a quantitative study of the impact of systematic variations on the electrical and geometrical behavior of different circuits. This is accomplished by automatically performing an extensive analysis of different process variations (lithography and stress) and their dependency on the design context. Such a tool helps to explore and evaluate the impact of systematic variation on any type of design. Second, whereas industry solutions follow a "design and then fix" or "fix during design" philosophy, the next CAD tool adopts a "fix before design" philosophy: the standard cell library is characterized across different design contexts, resolution enhancement techniques, and process conditions, producing a fully DFM-aware standard cell library through a newly developed methodology that dramatically reduces the required number of silicon simulations. Several experiments on 65 nm and 45 nm designs demonstrate that more robust and manufacturable designs can be implemented using the DFM-aware standard cell library. Third, a novel electrical-aware hotspot detection solution is developed using a device-parameter-based matching technique, since state-of-the-art hotspot detection solutions are all geometry-based. This CAD tool proposes a new philosophy of detecting yield limiters, also known as hotspots, through the model parameters of the devices as represented in the SPICE netlist. This hotspot detection methodology is tested and delivers extraordinarily fast and accurate results. Finally, existing DFM solutions mainly address digital designs, yet process variations play an increasingly important role in the success of analog circuits. Knowledge of the parameter variances and their contribution patterns is crucial for a successful design process, and this information is valuable for solving many problems in design, design automation, testing, and fault tolerance. The fourth CAD solution proposed in this thesis introduces a variability-aware DFM flow that detects, analyzes, and automatically corrects hotspots in analog circuits.
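
    The electrical hotspot idea above can be pictured with a small, hedged sketch: instead of pattern matching on geometry, devices whose extracted SPICE model parameters drift too far from nominal are flagged as potential yield limiters. The parameter names (l, w, vth0), nominal values, and tolerances below are illustrative assumptions, not the thesis's calibrated models.

```python
# Hedged sketch of electrical (parameter-based) hotspot screening: devices whose
# extracted SPICE model parameters drift too far from nominal are flagged as
# potential yield limiters. Parameter names and tolerances are illustrative only.

NOMINAL = {"l": 45e-9, "w": 90e-9, "vth0": 0.35}   # nominal device parameters
TOLERANCE = {"l": 0.05, "w": 0.05, "vth0": 0.08}   # allowed relative deviation

def find_hotspots(netlist_devices):
    """netlist_devices: dict mapping instance name -> extracted parameter dict."""
    hotspots = []
    for inst, params in netlist_devices.items():
        for name, nominal in NOMINAL.items():
            deviation = abs(params.get(name, nominal) - nominal) / nominal
            if deviation > TOLERANCE[name]:
                hotspots.append((inst, name, deviation))
    return hotspots

if __name__ == "__main__":
    devices = {
        "M1": {"l": 45.2e-9, "w": 90.1e-9, "vth0": 0.351},
        "M7": {"l": 41.8e-9, "w": 89.7e-9, "vth0": 0.392},  # litho/stress-affected device
    }
    for inst, param, dev in find_hotspots(devices):
        print(f"hotspot candidate {inst}: {param} deviates by {dev:.1%}")
```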

    Managing Variability in VLSI Circuits.

    Over the last two decades, Design for Manufacturing (DFM) has emerged as an essential field within the semiconductor industry. The main objective of DFM is to reduce and, if possible, eliminate variability in integrated circuits (ICs). Numerous techniques for managing variation have emerged throughout IC design: manufacturers design instruments with minute tolerances, process engineers calibrate and characterize a given process throughout its lifetime, and IC designers strive to model and characterize variability within their devices, libraries, and circuits. This dissertation focuses on the last of these three techniques and presents material relevant to managing variability within IC design. Since characterization and modeling are essential to the analysis and reduction of variation in modern-day designs, this dissertation begins by studying various correlation models used within Statistical Static Timing Analysis (SSTA). The study shows that using complex correlation models does not necessarily result in significant error reduction within SSTA, and that simple models (which include only die-to-die and random variation) can therefore be used to achieve similar accuracy with reduced overhead and run-time. Next, the variation models themselves are explored, and a new critical dimension (CD) model is proposed that reduces standard deviation error in SSTA by ~3X. Finally, the focus moves from the timing-analysis level lower in the design hierarchy, to the libraries and devices that form the backbone of IC design. The final three chapters study mechanical stress enhancement and discuss how to fully exploit the layout dependencies of mechanically stressed silicon. The first of these chapters presents an optimization scheme that uses the layout dependencies of stress in conjunction with dual-threshold-voltage (Vth) assignment to decrease leakage power consumption by ~24%. The second proposes a new standard cell library design methodology, called “STEEL,” which provides average delay improvements of 11% over equivalent single-Vth implementations while consuming 2.5X less leakage than the dual-Vth alternative. Finally, the stress-enhancement studies (and this document) conclude with a new optimization scheme that combines stress enhancement with gate-length biasing to achieve 2.9X leakage power savings in IC designs without modifying Vth.

    Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75947/1/btcline_1.pd
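
    A minimal sketch of the "simple model" conclusion above, assuming the usual decomposition into a fully correlated die-to-die component and an independent random component per gate: die-to-die sigmas add linearly along a path, while random sigmas add in quadrature. The example path delays and sigmas are hypothetical.

```python
import math

def path_delay_stats(gates):
    """gates: list of (nominal_delay, sigma_d2d, sigma_random) per stage.

    Die-to-die variation is fully correlated across all gates on the die,
    so its sigmas add linearly; random variation is independent per gate,
    so its variances add. This is the kind of simple model the study argues
    is usually accurate enough for SSTA.
    """
    mean = sum(d for d, _, _ in gates)
    sigma_d2d = sum(s for _, s, _ in gates)                  # correlated part
    sigma_rand = math.sqrt(sum(r * r for _, _, r in gates))  # independent part
    sigma = math.sqrt(sigma_d2d ** 2 + sigma_rand ** 2)
    return mean, sigma

if __name__ == "__main__":
    # Hypothetical 4-stage path, delays in picoseconds.
    path = [(50.0, 2.0, 3.0), (42.0, 1.5, 2.5), (65.0, 2.5, 4.0), (38.0, 1.2, 2.0)]
    mu, sigma = path_delay_stats(path)
    print(f"path delay: mean={mu:.1f} ps, sigma={sigma:.1f} ps, 3-sigma={mu + 3*sigma:.1f} ps")
```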

    Predicting power scalability in a reconfigurable platform

    This thesis focuses on the evolution of digital hardware systems. A reconfigurable platform is proposed and analysed, based on thin-body, fully depleted silicon-on-insulator Schottky-barrier transistors with metal gates and silicide source/drain (TBFDSBSOI). These offer the potential for simplified processing that will allow them to reach ultimate nanoscale gate dimensions. Technology CAD was used to show that the threshold voltage in TBFDSBSOI devices will be controllable by gate potentials that scale down with the channel dimensions while remaining within appropriate gate-reliability limits. SPICE simulations determined that the magnitude of the threshold shift predicted by TCAD software would be sufficient to control the logic configuration of a simple, regular array of these TBFDSBSOI transistors and to constrain its overall subthreshold power growth. Using these devices, a reconfigurable platform is proposed based on a regular 6-input, 6-output NOR LUT block in which the logic and configuration functions of the array are mapped onto separate gates of the double-gate device. A new analytic model of the relationship between power (P), area (A) and performance (T) has been developed, based on a simple VLSI complexity metric of the form AT^σ = constant. Since σ defines the performance “return” gained from an increase in area, it also represents a bound on the architectural options available in power-scalable digital systems. This analytic model was used to determine that simple computing functions mapped to the reconfigurable platform will exhibit continuous power-area-performance scaling behaviour. A number of simple arithmetic circuits were mapped to the array and their delay and subthreshold leakage analysed over a representative range of supply and threshold voltages, thus determining a worst-case range for the device/circuit-level parameters of the model. Finally, an architectural simulation was built in VHDL-AMS. The frequency scaling described by σ, combined with the device/circuit-level parameters, predicts the overall power and performance scaling of parallel architectures mapped to the array.
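
    A small sketch of how the AT^σ = constant metric above can be read: for a fixed constant, multiplying the mapped area by some ratio lets delay shrink (frequency grow) by that ratio raised to 1/σ. The dynamic-power line additionally assumes switched capacitance tracks area with voltage and activity held fixed, which is an illustrative assumption rather than part of the thesis model.

```python
def scaled_performance(area_ratio, sigma):
    """Given A * T**sigma = constant, growing area by `area_ratio` lets delay
    shrink by area_ratio**(-1/sigma), i.e. frequency grows by area_ratio**(1/sigma)."""
    return area_ratio ** (1.0 / sigma)

def scaled_dynamic_power(area_ratio, sigma):
    """Assuming switched capacitance tracks area and voltage/activity are fixed
    (an illustrative assumption), dynamic power scales as area * frequency."""
    return area_ratio * scaled_performance(area_ratio, sigma)

if __name__ == "__main__":
    for sigma in (1.0, 2.0):            # sigma = performance "return" on area
        for a in (2.0, 4.0, 8.0):
            f = scaled_performance(a, sigma)
            p = scaled_dynamic_power(a, sigma)
            print(f"sigma={sigma}: {a:.0f}x area -> {f:.2f}x frequency, {p:.2f}x dynamic power")
```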

    Miniaturized Transistors

    What is the future of CMOS? Sustaining increased transistor densities along the path of Moore's Law has become increasingly challenging with limited power budgets, interconnect bandwidths, and fabrication capabilities. In the last decade alone, transistors have undergone significant design makeovers: from the planar transistors of ten years ago, technology has advanced to today's FinFETs, which hardly resemble their bulky ancestors. FinFETs could potentially take us to the 5-nm node, but what comes after that? From gate-all-around devices to single-electron transistors and two-dimensional semiconductors, a torrent of research is being carried out to design the next transistor generation, engineer the optimal materials, improve the fabrication technology, and properly model future devices. We invite investigators and scientists in the field to showcase their work in this Special Issue through research papers, short communications, and review articles that focus on trends in micro- and nanotechnology, from fundamental research to applications.

    A Fast Mask Manufacturability and Process Variation Aware OPC Algorithm with Exploiting a Novel Intensity Estimation Model
