12 research outputs found

    Exploitation dynamique des données de production pour améliorer les méthodes DFM dans l'industrie Microélectronique

    Get PDF
    Design for Manufacturing (DFM) is by now a classical approach for ensuring, at product design time, the feasibility, quality, and yield of production simultaneously. In the microelectronics industry, the Design Rule Manual (DRM) worked well down to the 250nm technology node by capturing systematic variations in rules and/or models based on root-cause analysis, but beyond that node its limits were reached owing to the inability to capture correlations between spatial variations. Moreover, the rapid evolution of products and technologies forces a dynamic updating of the DRM according to the improvements found in the fabs. In this context, the contributions of this thesis are (i) an interdisciplinary definition of FMEA and risk analysis to address the challenges of dynamic DFM, (ii) a spatial localisation model, MAM (mapping and alignment model), for test data, (iii) an ontology-based data referential, ROMMII (referential ontology Meta model for information integration), for mapping heterogeneous data from varied sources, and (iv) a model, SPM (spatial positioning model), which aims to integrate spatial factors into microelectronics DFM methods so as to perform accurate analysis and modeling of spatial variations based on the dynamic exploitation of large volumes of manufacturing data.

    The DFM (design for manufacturing) methods are used during technology alignment and adoption processes in the semiconductor industry (SI) for manufacturability and yield assessments. These methods worked well up to the 250nm technology node, transforming systematic variations into rules and/or models based on single-source data analyses, but beyond this node they have turned into ineffective R&D efforts. The reason is our inability to capture newly emerging spatial variations, which has led to an exponential increase in technology lead times and costs; hence, in this thesis we focus on identifying and removing the causes of DFM ineffectiveness. The fabless, foundry, and traditional integrated device manufacturer (IDM) business models are first analyzed for coherence against a recent shift in business objectives from time-to-market (T2M) and time-to-volume (T2V) towards ramp-up rate. The increasing technology lead times and costs are identified as a major obstacle to quick ramp-up rates; hence, an extended IDM (e-IDM) business model is proposed to support quick ramp-up rates, based on resolving the DFM ineffectiveness and smoothly integrating the improved methods. We have found (i) single-source analyses and (ii) the inability to exploit huge manufacturing data volumes to be the core limiting factors (failure modes) behind DFM ineffectiveness during technology alignment and adoption efforts within an IDM. The causes of single-source root cause analysis are identified as (i) varying metrology reference frames and (ii) test structure orientations that require wafer rotation prior to measurement, resulting in varying metrology coordinates (die/site level mismatches). A generic coordinate mapping and alignment model (MAM) is proposed to remove these die/site level mismatches; in addition, to accurately capture the emerging spatial variations, a spatial positioning model (SPM) is proposed to perform multi-source parametric correlation based on the shortest distance between the respective test structures used to measure the parameters. The causes of our inability to exploit huge manufacturing data volumes are found to be (i) unstructured model evolution, (ii) ontology issues, and (iii) missing links among production databases. The ROMMII (referential ontology Meta model for information integration) framework is then proposed to remove these issues and enable dynamic and efficient multi-source root cause analysis. An interdisciplinary failure mode effect analysis (i-FMEA) methodology is also proposed to find cyclic failure modes and causes across business functions which require generic solutions rather than operational fixes. The proposed e-IDM, MAM, SPM, and ROMMII frameworks together enable accurate analysis and modeling of emerging spatial variations based on dynamic exploitation of the huge manufacturing data volumes.
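
    As a rough illustration of the shortest-distance pairing idea behind the SPM, the sketch below (not from the thesis; all function and variable names are invented) pairs each measurement site of one parameter with the spatially nearest test structure of another parameter, assuming the coordinates have already been aligned by a MAM-style step, and then correlates the paired values.

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.stats import pearsonr

        def spm_style_correlation(coords_a, values_a, coords_b, values_b):
            """Correlate two parameters measured on different test structures by
            pairing each site of parameter A with the spatially nearest site of
            parameter B (coordinates assumed already aligned, e.g. by a MAM step)."""
            coords_a = np.asarray(coords_a, dtype=float)   # (N, 2) wafer x/y positions
            coords_b = np.asarray(coords_b, dtype=float)   # (M, 2) wafer x/y positions
            values_b = np.asarray(values_b, dtype=float)
            _, nearest = cKDTree(coords_b).query(coords_a) # shortest-distance neighbour of each A site
            r, p = pearsonr(values_a, values_b[nearest])
            return r, p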

    Statistical circuit simulations - from ‘atomistic’ compact models to statistical standard cell characterisation

    Get PDF
    This thesis describes the development and application of statistical circuit simulation methodologies to analyse digital circuits subject to intrinsic parameter fluctuations. The specific nature of intrinsic parameter fluctuations is discussed, and we explain the crucial importance to the semiconductor industry of developing design tools which accurately account for their effects. Current work in the area is reviewed, and three important factors are made clear: any statistical circuit simulation methodology must be based on physically correct, predictive models of device variability; the statistical compact models describing device operation must be characterised for accurate transient analysis of circuits; and analysis must be carried out on realistic circuit components. Improving on previous efforts in the field, we posit a statistical circuit simulation methodology which accounts for all three of these factors. The established 3-D Glasgow atomistic simulator is employed to predict electrical characteristics for devices aimed at digital circuit applications, with gate lengths from 35 nm to 13 nm. Using these electrical characteristics, extraction of BSIM4 compact models is carried out and their accuracy in performing transient analysis using SPICE is validated against well-characterised mixed-mode TCAD simulation results for 35 nm devices. Static d.c. simulations are performed to test the methodology, and a useful analytic model to predict hard logic fault limitations on CMOS supply voltage scaling is derived as part of this work. Using our toolset, the effect of statistical variability introduced by random discrete dopants on the dynamic behaviour of inverters is studied in detail. As devices scale, the dynamic noise margin variation of an inverter increases, while a higher output load or input slew rate improves the noise margins and their variation. Intrinsic delay variation based on the CV/I delay metric is also compared using the ION and IEFF definitions, with the best estimate obtained when considering ION and input transition time variations. The critical delay distribution of a path is also investigated and shown to be non-Gaussian. Finally, the impact of the cell input slew rate definition on the accuracy of inverter cell timing characterisation in NLDM format is investigated.
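
    For readers unfamiliar with the CV/I metric mentioned above, the short sketch below illustrates, with purely invented numbers, how statistical on-current variation propagates into intrinsic delay variation via delay = C_LOAD * VDD / ION; it is an illustration of the metric only, not a reproduction of the thesis toolset.

        import numpy as np

        rng = np.random.default_rng(0)

        # All numbers below are invented for illustration, not taken from the thesis.
        C_LOAD = 1.0e-15        # F, effective load capacitance
        VDD = 0.9               # V, supply voltage
        ION_MEAN = 6.0e-4       # A, nominal on-current
        ION_SIGMA = 6.0e-5      # A, spread induced by random discrete dopants

        ion = rng.normal(ION_MEAN, ION_SIGMA, 10_000)   # statistical ION samples
        delay = C_LOAD * VDD / ion                      # CV/I intrinsic delay per sample

        print(f"mean delay   : {delay.mean() * 1e12:.2f} ps")
        print(f"sigma / mean : {delay.std() / delay.mean():.3f}")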

    Predictive formulae for OPC with applications to lithography-friendly routing

    No full text

    Aerosol science and technology: History and reviews

    Get PDF
    Aerosol Science and Technology: History and Reviews captures an exciting slice of history in the evolution of aerosol science. It presents in-depth biographies of four leading international aerosol researchers and highlights pivotal research institutions in New York, Minnesota, and Austria. One collection of chapters reflects on the legacy of the Pasadena smog experiment, while another presents a fascinating overview of military applications and nuclear aerosols. Finally, prominent researchers offer detailed reviews of aerosol measurement, processes, experiments, and technology that changed the face of aerosol science. This volume is the third in a series and is supported by the American Association for Aerosol Research (AAAR) History Working Group, whose goal is to produce archival books from its symposiums on the history of aerosol science to ensure a lasting record. It is based on papers presented at the Third Aerosol History Symposium on September 8 and 9, 2006, in St. Paul, Minnesota, USA.

    Optimization of Operation Sequencing in CAPP Using Hybrid Genetic Algorithm and Simulated Annealing Approach

    Get PDF
    In any CAPP system, one of the most important process planning functions is the selection of operations and corresponding machines in order to generate the optimal operation sequence. In this paper, a hybrid GA-SA algorithm is used to solve this combinatorial NP (non-deterministic polynomial) optimization problem. A network representation is adopted to describe operation and sequencing flexibility in process planning, and the mathematical model for process planning is formulated with the objective of minimizing production time. Experimental results show the effectiveness of the hybrid algorithm which, in comparison with the standalone GA and SA algorithms, gives the optimal operation sequence with less computational time and fewer iterations.
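
    The abstract leaves the hybridisation details out; as a loose sketch of one common GA-SA arrangement (not necessarily the authors' formulation), the code below evolves operation permutations with order crossover and refines each offspring with a short simulated-annealing pass. The cost function shown is an invented stand-in for the production-time objective.

        import math
        import random

        def hybrid_ga_sa(cost, n_ops, pop_size=30, generations=100,
                         sa_steps=50, t0=1.0, alpha=0.95):
            """Evolve operation permutations with a GA; refine each offspring
            with a short simulated-annealing pass (swap moves, geometric cooling)."""

            def random_seq():
                seq = list(range(n_ops))
                random.shuffle(seq)
                return seq

            def order_crossover(p1, p2):
                a, b = sorted(random.sample(range(n_ops), 2))
                child = [None] * n_ops
                child[a:b] = p1[a:b]                       # keep a slice of parent 1
                rest = [g for g in p2 if g not in child]   # fill the rest in parent-2 order
                for i in range(n_ops):
                    if child[i] is None:
                        child[i] = rest.pop(0)
                return child

            def anneal(seq):
                best, t = seq[:], t0
                for _ in range(sa_steps):
                    i, j = random.sample(range(n_ops), 2)
                    cand = best[:]
                    cand[i], cand[j] = cand[j], cand[i]    # swap two operations
                    delta = cost(cand) - cost(best)
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        best = cand                        # accept downhill or lucky uphill move
                    t *= alpha
                return best

            pop = [random_seq() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=cost)
                parents = pop[:pop_size // 2]              # elitist selection
                children = [anneal(order_crossover(*random.sample(parents, 2)))
                            for _ in range(pop_size - len(parents))]
                pop = parents + children
            return min(pop, key=cost)

        # Invented stand-in cost: in a real CAPP model this would be the production
        # time implied by machine, tool, and setup changes along the sequence.
        toy_cost = lambda seq: sum(abs(a - b) for a, b in zip(seq, seq[1:]))
        best_sequence = hybrid_ga_sa(toy_cost, n_ops=12)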

    Autonomous Navigation of Automated Guided Vehicle Using Monocular Camera

    Get PDF
    This paper presents a hybrid control algorithm for an Automated Guided Vehicle (AGV) consisting of two independent control loops: Position Based Control (PBC) for global navigation within the manufacturing environment and Image Based Visual Servoing (IBVS) for the fine motions needed for accurate steering towards the loading/unloading point. The proposed hybrid control separates the transportation task into global navigation towards the goal point and fine motion from the goal point to the loading/unloading point. In this manner, the need for artificial landmarks or an accurate map of the environment is bypassed. Initial experimental results show the usefulness of the proposed approach.
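
    As a conceptual illustration of the switching structure described above (not the authors' implementation; all names, gains, and thresholds are invented), the sketch below runs a position-based loop while the vehicle is far from the goal point and hands over to an image-feature error loop near the docking point. A full IBVS law would additionally map the feature error through the pseudo-inverse of the image interaction matrix.

        import numpy as np

        class HybridAGVController:
            """Switch from position-based control (global navigation) to an
            image-feature error loop (fine docking) once the goal point is near."""

            def __init__(self, goal_xy, switch_radius=0.5, k_pos=0.8, k_img=0.5):
                self.goal = np.asarray(goal_xy, dtype=float)
                self.switch_radius = switch_radius   # m; hand-over distance (invented value)
                self.k_pos = k_pos                   # PBC gain (invented value)
                self.k_img = k_img                   # visual-servoing gain (invented value)

            def command(self, pose_xy, feature_error=None):
                """pose_xy: estimated AGV position in the world frame.
                feature_error: observed-minus-desired image feature vector from the
                monocular camera, or None if no target is detected yet."""
                to_goal = self.goal - np.asarray(pose_xy, dtype=float)
                if np.linalg.norm(to_goal) > self.switch_radius or feature_error is None:
                    return self.k_pos * to_goal      # PBC loop: drive towards the goal point
                # IBVS-style loop: a full law would use the interaction-matrix pseudo-inverse.
                return -self.k_img * np.asarray(feature_error, dtype=float)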