3,604 research outputs found

    Variability-driven module selection with joint design time optimization and post-silicon tuning

    Full text link
    Abstract—Increasing delay and power variation are significant challenges to designers as technology scales into the deep sub-micron (DSM) regime. Traditional module selection techniques in high-level synthesis use worst-case delay/power information to perform the optimization and may therefore be too pessimistic, spending extra resources to guarantee design requirements. Parametric yield, defined as the probability of the synthesized hardware meeting the performance/power constraints, can be used to guide design space exploration. The parametric yield can be effectively improved by combining design-time variation-aware optimization with post-silicon tuning techniques such as adaptive body biasing (ABB). In this paper, we propose a module selection algorithm that combines design-time optimization with post-silicon tuning (using ABB) to maximize design yield. A variation-aware module selection algorithm based on efficient performance and power yield gradient computation is developed. The post-silicon optimization is formulated as an efficient sequential conic program to determine the optimal body bias distribution, which in turn affects design-time module selection. The experimental results show that significant yield improvement can be achieved compared to the traditional worst-case-driven module selection technique. To the best of our knowledge, this is the first variability-driven high-level synthesis technique that considers post-silicon tuning during design-time optimization.
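    A minimal sketch of the parametric-yield idea described above, assuming toy Gaussian delay models and an invented two-operation design space (the module names, means, sigmas, and budgets are illustrative, not from the paper): each candidate module selection is scored by the Monte Carlo probability of meeting a clock budget, and the cheapest selection that meets a yield target is kept.

```python
# Minimal sketch (not the paper's algorithm): estimating parametric timing yield
# for candidate module selections by Monte Carlo sampling of Gaussian delay
# variation. All module data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate implementation of an operation: (area_cost, mean_delay, delay_sigma)
candidates = {
    "adder":      [(1.0, 2.0, 0.30), (1.8, 1.6, 0.20)],   # slow/cheap vs fast/large
    "multiplier": [(4.0, 5.0, 0.60), (6.5, 4.2, 0.35)],
}

def timing_yield(selection, clock_budget, n_samples=100_000):
    """P(sum of sampled module delays <= clock_budget) for one selection."""
    total = np.zeros(n_samples)
    for op, idx in selection.items():
        _, mu, sigma = candidates[op][idx]
        total += rng.normal(mu, sigma, n_samples)
    return float(np.mean(total <= clock_budget))

# Exhaustive search over the tiny design space: keep the lowest-area selection
# whose estimated yield meets the target.
target_yield, clock_budget = 0.95, 7.5
best = None
for i in range(len(candidates["adder"])):
    for j in range(len(candidates["multiplier"])):
        sel = {"adder": i, "multiplier": j}
        area = candidates["adder"][i][0] + candidates["multiplier"][j][0]
        y = timing_yield(sel, clock_budget)
        if y >= target_yield and (best is None or area < best[1]):
            best = (sel, area, y)

print(best)
```

    The paper's yield-gradient-driven selection and ABB-aware sequential conic program replace this brute-force search; the sketch only illustrates how parametric yield, rather than worst-case delay, can drive module choice.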

    Adaptive Integrated Circuit Design for Variation Resilience and Security

    Get PDF
    The past few decades have witnessed the burgeoning development of integrated circuits driven by process technology scaling. Along with the tremendous benefits of scaling, challenges arise at various stages. At design time, the complexity of developing a circuit with millions to billions of smaller transistors grows further once variations are taken into account. The difficulty of analyzing these nondeterministic properties makes it hard for redundant-resource allocation schemes to work cost-efficiently. Beyond fabrication variations, analog circuits suffer severe performance degradation owing to physical attributes that are vulnerable to aging effects. As such, post-silicon calibration approaches have gained increasing attention as a way to compensate for performance mismatch. For user-end applications, additional system failures result from pirated and counterfeited devices introduced by an untrusted semiconductor supply chain. Again, analog circuits are weak against this threat because of the shortage of piracy-avoidance techniques. In this dissertation, we propose three adaptive integrated circuit designs to overcome these challenges respectively. The first investigates variability-aware gate implementation while controlling the overhead of adaptivity assignment; it improves variation resilience, primarily for digital circuits, while optimizing power consumption and timing yield. The second design is implemented as a self-validation system for the calibration of diverse analog circuits, integrated completely on chip so that no external assistance is required. In the last design, a classic analog component is further studied to establish a configurable locking mechanism for analog circuits; the use of Satisfiability Modulo Theories addresses the difficulty of searching for the unique unlocking pattern over non-Boolean variables.
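    As a rough illustration of how an SMT solver over real arithmetic can reason about a non-Boolean unlocking pattern, the hedged sketch below uses the z3 Python bindings with an invented quadratic gain model; it is not the dissertation's circuit or formulation, only a uniqueness-style query.

```python
# Minimal sketch, not the dissertation's formulation: using an SMT solver over
# real arithmetic (z3) to check that a real-valued "unlocking" bias admits no
# second solution outside the intended neighborhood. The gain model is invented.
from z3 import Real, Solver, And, Or, sat

key = Real("key")                           # non-Boolean (real-valued) unlocking input
gain = 10 - 4 * (key - 1.2) * (key - 1.2)   # toy gain curve peaking at key = 1.2

s = Solver()
s.add(And(key >= 0, key <= 3))              # feasible bias range
s.add(gain >= 9.99)                         # spec: circuit only "works" near the peak
s.add(Or(key <= 1.1, key >= 1.3))           # exclude the intended key neighborhood

# unsat here means no alternative key meets the spec: the lock is effectively unique
print("alternative key exists" if s.check() == sat else "unlocking key is unique")
```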

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort for understanding and processing the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML-based automated approaches previously introduced for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Standard cell library design for sub-threshold operation

    Get PDF

    Joint design-time and post-silicon optimization for digitally tuned analog circuits (ICCAD)

    Get PDF
    Abstract—Joint design-time and post-silicon optimization for analog circuits has been an open problem in the literature because of the complex nature of analog circuit modeling and optimization. In this paper we formulate the co-optimization problem for digitally tuned analog circuits to optimize the parametric yield, subject to power and area constraints. A general optimization framework combining the branch-and-bound algorithm and the gradient ascent method is proposed. We demonstrate our framework with two examples in high-speed serial links: the transmitter design and the phase-locked loop (PLL) design. Simulation results show that, compared with the design heuristic from analog designers' perspective, joint design-time and post-silicon optimization can improve the yield by up to 47% for the transmitter design and up to 56% for the PLL design under the same area and power constraints. To the best of the authors' knowledge, this is the first yield-driven analog circuit design technique that optimizes post-silicon tuning together with the design-time optimization.
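    A hedged sketch of the general idea, combining discrete search over design-time choices with gradient ascent on a continuous post-silicon tuning knob, using an invented performance/power model rather than the transmitter or PLL formulation from the paper:

```python
# Minimal sketch under invented models, not the paper's formulation: enumerate a
# discrete design knob (the "branching") and, for each choice, run a crude
# finite-difference gradient ascent on a continuous tuning variable, maximizing
# a Monte Carlo yield estimate subject to a toy power constraint.
import numpy as np

rng = np.random.default_rng(1)
process = rng.normal(0.0, 1.0, 50_000)        # sampled process variation

def yield_est(design_gain, tune):
    """Toy model: performance must exceed 1.0; tuning helps but costs power."""
    perf = design_gain * (1.0 + 0.1 * process) + 0.2 * tune
    power_ok = 0.5 * design_gain + 0.05 * tune * tune <= 1.0
    return float(np.mean(perf >= 1.0)) if power_ok else 0.0

best = (None, -1.0)
for design_gain in (1.0, 1.2, 1.5):           # discrete design-time choices
    tune, step = 0.0, 0.5
    for _ in range(40):                        # gradient ascent on the tuning knob
        grad = (yield_est(design_gain, tune + 1e-2)
                - yield_est(design_gain, tune - 1e-2)) / 2e-2
        tune += step * grad
    y = yield_est(design_gain, tune)
    if y > best[1]:
        best = ((design_gain, round(tune, 3)), y)

print(best)
```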

    Statistical Methods for Semiconductor Manufacturing

    Get PDF
    In this thesis, techniques for non-parametric modeling, machine learning, filtering and prediction, and run-to-run control for semiconductor manufacturing are described. In particular, algorithms have been developed for two major application areas: Virtual Metrology (VM) systems and Predictive Maintenance (PdM) systems. Both technologies have proliferated in recent years in semiconductor factories (fabs) in order to increase productivity and decrease costs. VM systems aim at predicting quantities on the wafer, the main and basic product of the semiconductor industry, that may or may not be physically measurable. These quantities are usually 'costly' to measure in economic or temporal terms; the prediction is instead based on process variables and/or logistic information on the production, which are always available and can be used for modeling without further cost. PdM systems, on the other hand, aim at predicting when a maintenance action has to be performed. This approach to maintenance management, based like VM on statistical methods and on the availability of process/logistic data, is in contrast with other classical approaches: Run-to-Failure (R2F), where no intervention is performed on the machine/process until a breakage or specification violation occurs in production, and Preventive Maintenance (PvM), where maintenance is scheduled in advance based on time intervals or production iterations. Neither approach is optimal, because they do not ensure that breakages and wafer waste will not happen and, in the case of PvM, they may lead to unnecessary maintenance without fully exploiting the lifetime of the machine or process. The main goal of this thesis is to prove, through several applications and feasibility studies, that the use of statistical modeling algorithms and control systems can improve the efficiency, yield and profits of a manufacturing environment like the semiconductor one, where large amounts of data are recorded and can be employed to build mathematical models. We present several original contributions, both in the form of applications and methods. The introduction of this thesis gives an overview of the semiconductor fabrication process: the most common practices in Advanced Process Control (APC) systems and the major issues for engineers and statisticians working in this area are presented. Furthermore, we illustrate the methods and mathematical models used in the applications. We then discuss in detail the following applications:
    - A VM system for estimating the thickness deposited on the wafer by the Chemical Vapor Deposition (CVD) process, exploiting Fault Detection and Classification (FDC) data. In this tool, a new clustering algorithm based on Information Theory (IT) elements is proposed; in addition, the Least Angle Regression (LARS) algorithm is applied for the first time to VM problems.
    - A new VM module for a multi-step (CVD, Etching and Lithography) line, where Multi-Task Learning techniques have been employed.
    - A new machine learning algorithm based on kernel methods for the estimation of scalar outputs from time-series inputs.
    - Run-to-run control algorithms that exploit both physical measurements and statistical ones (coming from a VM system); this tool is based on IT elements.
    - A PdM module based on filtering and prediction techniques (Kalman filter, Monte Carlo methods) for the prediction of maintenance interventions in the Epitaxy process.
    - A PdM system based on Elastic Nets for maintenance prediction in the Ion Implantation tool.
    Several of the aforementioned works have been developed in collaboration with major European semiconductor companies in the framework of the European project UE FP7 IMPROVE (Implementing Manufacturing science solutions to increase equiPment pROductiVity and fab pErformance); such collaborations are specified throughout the thesis, underlining the practical aspects of implementing the proposed technologies in a real industrial environment.
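    As a small illustration of the LARS-based virtual-metrology modeling mentioned in the list above, the sketch below fits a Least Angle Regression model to synthetic data with scikit-learn's Lars estimator; the sensor features, sparsity level, and thickness model are invented for the example and are not fab data.

```python
# Minimal sketch with synthetic data: a virtual-metrology style regression
# predicting a wafer measurement from process sensor features using Least Angle
# Regression (scikit-learn's Lars estimator).
import numpy as np
from sklearn.linear_model import Lars
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_wafers, n_features = 400, 30
X = rng.normal(size=(n_wafers, n_features))             # FDC-like sensor summaries
true_w = np.zeros(n_features)
true_w[:4] = [2.0, -1.5, 0.8, 0.5]                      # only a few sensors matter
thickness = X @ true_w + rng.normal(0, 0.2, n_wafers)   # "measured" thickness + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, test_size=0.25, random_state=0)
model = Lars(n_nonzero_coefs=4).fit(X_tr, y_tr)         # sparse model picks few sensors

print("selected coefficients:", np.round(model.coef_[:6], 2))
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```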

    Investigation into yield and reliability enhancement of TSV-based three-dimensional integration circuits

    No full text
    Three-dimensional integrated circuits (3D ICs) have been acknowledged as a promising technology to overcome the interconnect delay bottleneck brought by continuous CMOS scaling. Recent research shows that through-silicon vias (TSVs), which act as vertical links between layers, pose yield and reliability challenges for 3D design. This thesis presents three original contributions. The first contribution presents a grouping-based technique to improve the yield of 3D ICs under TSV manufacturing defects, where regular and redundant TSVs are partitioned into groups. In each group, signals can select good TSVs through rerouting multiplexers, avoiding defective TSVs. The grouping ratio (regular to redundant TSVs in one group) affects both yield and hardware overhead. Mathematical probabilistic models are presented for yield analysis under independent and clustered defect distributions. Simulation results using MATLAB show that, for a given number of TSVs and TSV failure rate, careful selection of the grouping ratio achieves 100% yield at minimal hardware cost (number of multiplexers and redundant TSVs) compared to a design that does not exploit TSV grouping ratios. The second contribution presents an efficient online fault tolerance technique based on redundant TSVs to detect TSV manufacturing defects and address thermal-induced reliability issues. The proposed technique accounts for both fault detection and recovery in the presence of three TSV defects: voids, delamination between TSV and landing pad, and TSV short-to-substrate. Simulations using HSPICE and ModelSim are carried out to validate fault detection and recovery. Results show that regular and redundant TSVs can be divided into groups to minimise area overhead without affecting the fault tolerance capability of the technique. Synthesis results using a 130-nm design library show that 100% repair capability can be achieved with low area overhead (4% in the best case). The last contribution proposes a technique that jointly considers temperature mitigation and fault tolerance without introducing additional redundant TSVs. This is achieved by reusing spare TSVs that are frequently deployed for improving yield and reliability in 3D ICs. The proposed technique consists of two steps: a TSV determination step, which achieves an optimal partition of regular and spare TSVs into groups, and a TSV placement step, which targets temperature mitigation while optimizing total wirelength and routing difference. Simulation results show that, using the proposed technique, 100% repair capability is achieved across all five benchmarks with an average temperature reduction of 75.2°C (34.1%) (best case 99.8°C (58.5%)), while increasing wirelength by only a small amount.
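    The group-level yield model described in the first contribution can be illustrated with a short calculation under the independent-defect assumption (clustered defects and the multiplexer overhead are omitted here): a group of m regular and r redundant TSVs survives as long as at most r of its m + r TSVs are defective. The group sizes and failure rate below are illustrative, not the thesis's benchmark values.

```python
# Minimal sketch of a grouping-style yield model under an independent-defect
# assumption: a group of m regular + r redundant TSVs survives if at most r of
# its m + r TSVs are defective; chip yield is the product over all groups.
from math import comb

def group_yield(m, r, p_fail):
    """P(at most r failures among m + r independent TSVs)."""
    n = m + r
    return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k) for k in range(r + 1))

def chip_yield(total_regular, m, r, p_fail):
    """Yield of a chip whose regular TSVs are partitioned into groups of size m."""
    n_groups = total_regular // m
    return group_yield(m, r, p_fail) ** n_groups

# Compare grouping ratios for 1024 signal TSVs at a 1% TSV failure rate.
for m, r in [(4, 1), (8, 2), (16, 4)]:
    print(f"{m}:{r} grouping -> chip yield {chip_yield(1024, m, r, 0.01):.4f}")
```

    The trade-off between pooled redundancy and multiplexer overhead is what makes the grouping ratio a design knob in the thesis.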

    Doctor of Philosophy

    Get PDF
    In order to ensure high production yield of semiconductor devices, it is desirable to characterize intermediate progress towards the final product by using metrology tools to acquire relevant measurements after each sequential processing step. The metrology data are commonly used in feedback and feed-forward loops of Run-to-Run (R2R) controllers to improve process capability and optimize recipes from lot to lot or batch to batch. In this dissertation, we focus on two related issues. First, we propose a novel non-threaded R2R controller that utilizes all available metrology measurements, even when the data were acquired during prior runs that differed in their contexts from the current fabrication thread. The developed controller is the first known implementation of a non-threaded R2R control strategy successfully deployed in a high-volume production semiconductor fab. Its introduction improved process capability by 8% compared with traditional threaded R2R control and significantly reduced out-of-control (OOC) events at one of the most critical steps in NAND memory manufacturing. The second contribution demonstrates the value of developing virtual metrology (VM) estimators using the insight gained from multiphysics models. Unlike traditional statistical regression techniques, which lead to linear models that depend on a linear combination of the available measurements, we develop VM models whose structure and functional interdependence between input and output variables are determined from the insight provided by the multiphysics describing the operation of the processing step for which the VM system is being developed. We demonstrate this approach for three different processes, and describe the superior performance of the developed VM systems after their first-of-a-kind deployment in a high-volume semiconductor manufacturing environment.
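    For context on the R2R feedback loop being referred to, here is a minimal sketch of a textbook single-threaded EWMA run-to-run controller on a toy linear process; it is not the non-threaded controller developed in the dissertation, and the gain, offset, and weight values are made up.

```python
# Minimal sketch of a textbook EWMA run-to-run controller: the recipe input for
# the next run is set so that an exponentially weighted estimate of the process
# offset is cancelled. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
target, gain = 100.0, 2.0        # desired output and assumed process gain
true_offset, lam = 6.0, 0.3      # unknown process offset, EWMA weight

offset_hat, u = 0.0, target / gain
for run in range(10):
    y = gain * u + true_offset + rng.normal(0, 0.5)             # metrology measurement
    offset_hat = lam * (y - gain * u) + (1 - lam) * offset_hat  # EWMA offset estimate
    u = (target - offset_hat) / gain                            # recipe for the next run
    print(f"run {run}: measured {y:6.2f}, next input {u:6.2f}")
```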

    Dependable Embedded Systems

    Get PDF
    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book deals with the different reliability challenges across levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.