
    Yield modeling for deep sub-micron IC design


    Information-rich quality controls prediction model based on non-destructive analysis for porosity determination of AISI H13 produced by electron beam melting

    The number of materials processed via additive manufacturing (AM) technologies has rapidly increased over the past decade. Among these emerging technologies, the electron beam powder bed fusion (EB-PBF) process is becoming an enabling technology for manufacturing complex-shaped components made of thermal-cracking-sensitive materials, such as AISI H13 hot-work tool steel. In this process, a proper combination of process parameters must be employed to produce dense parts. Therefore, one of the first steps in EB-PBF part production is the process parameter optimization procedure. However, the conventional procedure, which includes image analysis of the cross-sections of several as-built samples, is time-consuming and costly. Hence, a new model is introduced in this work to find the best combination of EB-PBF process parameters concisely and cost-effectively. A correlation between the surface topography, the internal porosity, and the process parameters is established. The correlation between the internal porosity and the melting process parameters is described by a highly robust model (adjusted R² = 0.91), as is the correlation between the topography parameters and the melting process parameters (adjusted R² = 0.77-0.96). Finally, a robust and information-rich prediction model for evaluating the internal porosity is proposed (adjusted R² = 0.95), based on in situ surface topography characterization and process parameters. The information-rich prediction model yields a more robust and representative model, with an improvement of about 4% with respect to the process parameter-based model. The model is experimentally validated and shows adequate performance, with an RMSE of 2% on the predicted porosity. This result can support process and quality control designers in optimizing resource usage towards zero-defect manufacturing by reducing scraps and the waste from destructive quality controls and reworks.
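
    The abstract does not give the functional form of the prediction model; as a minimal sketch, assuming an ordinary least-squares fit of measured porosity against melting process parameters and areal topography descriptors (all variable names hypothetical), the adjusted R² and RMSE figures quoted above could be computed along these lines:

        # Minimal sketch of an information-rich porosity prediction model.
        # The linear model form and the feature list are assumptions, not the
        # authors' actual model; they only illustrate the fit/validation flow.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_squared_error

        # X: per-sample predictors, e.g. [beam current, scan speed, focus offset,
        #    Sa, Sz, ...] combining process parameters and topography descriptors.
        # y: internal porosity (%) from reference (destructive) image analysis.
        def fit_porosity_model(X, y):
            model = LinearRegression().fit(X, y)
            y_hat = model.predict(X)
            n, p = X.shape
            r2 = model.score(X, y)
            r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)   # adjusted R^2
            rmse = np.sqrt(mean_squared_error(y, y_hat))    # e.g. ~2% porosity
            return model, r2_adj, rmse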

    Dynamic exploitation of production data to improve DFM methods in the microelectronics industry

    Design for Manufacturing (DFM) is now a classic method for ensuring, at design time, the feasibility, quality, and yield of production simultaneously. In the microelectronics industry, the Design Rule Manual (DRM) worked well up to the 250nm technology node by capturing systematic variations in rules and/or models based on root-cause analysis, but beyond this node its limits were reached because of the inability to capture correlations between spatial variations. Moreover, the rapid evolution of products and technologies requires dynamic updating of the DRM according to the improvements found in the fabs. In this context, the contributions of this thesis are (i) an interdisciplinary definition of FMEA and risk analysis to address the challenges of dynamic DFM, (ii) a MAM (mapping and alignment model) for the spatial localization of test data, (iii) a data referential based on the ROMMII ontology (referential ontology meta-model for information integration) to map heterogeneous data coming from varied sources, and (iv) an SPM (spatial positioning model) that integrates spatial factors into microelectronics DFM methods, in order to perform accurate analysis and modeling of spatial variations based on the dynamic exploitation of large volumes of manufacturing data.

    The DFM (design for manufacturing) methods are used during technology alignment and adoption processes in the semiconductor industry (SI) for manufacturability and yield assessments. These methods worked well up to the 250nm technology node for the transformation of systematic variations into rules and/or models based on single-source data analyses, but beyond this node they have turned into ineffective R&D efforts. The reason for this is our inability to capture newly emerging spatial variations. This has led to an exponential increase in technology lead times and costs that must be addressed; hence, in this thesis we focus on identifying and removing the causes of DFM ineffectiveness. The fabless, foundry and traditional integrated device manufacturer (IDM) business models are first analyzed for coherence against a recent shift in business objectives from time-to-market (T2M) and time-to-volume (T2V) towards ramp-up rate. The increasing technology lead times and costs are identified as a major challenge in achieving quick ramp-up rates; hence, an extended IDM (e-IDM) business model is proposed to support quick ramp-up rates, based on removing the DFM ineffectiveness followed by its smooth integration. We found (i) single-source analyses and (ii) the inability to exploit huge manufacturing data volumes to be the core limiting factors (failure modes) behind DFM ineffectiveness during technology alignment and adoption efforts within an IDM. The causes of single-source root cause analysis are identified as (i) varying metrology reference frames and (ii) test structure orientations that require wafer rotation prior to the measurements, resulting in varying metrology coordinates (die/site level mismatches). A generic coordinates mapping and alignment model (MAM) is proposed to remove these die/site level mismatches; however, to accurately capture the emerging spatial variations, we propose a spatial positioning model (SPM) that performs multi-source parametric correlation based on the shortest distance between the respective test structures used to measure the parameters. The (i) unstructured model evolution, (ii) ontology issues and (iii) missing links among production databases are found to be the causes of our inability to exploit huge manufacturing data volumes. The ROMMII (referential ontology meta-model for information integration) framework is then proposed to remove these issues and enable dynamic and efficient multi-source root cause analyses. An interdisciplinary failure mode and effects analysis (i-FMEA) methodology is also proposed to find cyclic failure modes and causes across business functions, which require generic solutions rather than operational fixes. The proposed e-IDM, MAM, SPM, and ROMMII framework results in accurate analysis and modeling of emerging spatial variations based on dynamic exploitation of the huge manufacturing data volumes.
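
    The thesis itself is not reproduced in this abstract; as a rough illustration of the SPM idea of correlating parameters measured on the spatially closest test structures, a shortest-distance pairing between two measurement sources (coordinates assumed to be already aligned by MAM; all names hypothetical) might look as follows:

        # Sketch of SPM-style pairing: for each test structure in source A, find
        # the spatially closest test structure in source B so their measured
        # parameters can be correlated. The pairing rule is illustrative only.
        import numpy as np

        def pair_by_shortest_distance(coords_a, coords_b):
            """coords_a: (n, 2), coords_b: (m, 2) arrays of (x, y) wafer positions."""
            pairs = []
            for i, pa in enumerate(coords_a):
                d = np.linalg.norm(coords_b - pa, axis=1)   # Euclidean distances
                j = int(np.argmin(d))                       # nearest structure in B
                pairs.append((i, j, float(d[j])))
            return pairs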

    Modeling multiphase flow and substrate deformation in nanoimprint manufacturing systems

    Nanopatterns found in nature demonstrate that macroscopic properties of a surface are tied to its nano-scale structure. Tailoring the nanostructure allows those macroscopic surface properties to be engineered. However, a capability gap in manufacturing technology inhibits mass production of nanotechnologies based on simple, nanometer-scale surface patterns. This gap represents an opportunity for research and development of nanoimprint lithography (NIL) processes. NIL is a process for replicating patterns by imprinting a fluid layer with a solid, nano-patterned template, after which an ultraviolet cure solidifies the fluid, resulting in a nano-patterned surface. Although NIL has been demonstrated to replicate pattern features as small as 4 nm, there are significant challenges in using it to produce nanotechnology. Ink-jet deposition methods deliver the small fluid volumes necessary to produce the nanopattern, and drop volumes can be tuned to what the pattern requires. However, the drops trap pockets of gas as they merge and fill the template, and, because gas dissolution is relatively slow, these pockets reduce processing throughput. Capillary forces that arise from the gas-liquid interfaces drive non-uniform gap closure, and the resulting variations in the residual layer reduce process yield or degrade product performance. This thesis develops reduced-order models for the fluid flow and structural mechanics of the NIL imprint process. Understanding the key phenomena of gas trapping and residual layer non-uniformity drives model development to better understand how throughput and yield can be improved. Reynolds lubrication theory, the disperse type of multiphase flow, and a lumped-parameter model of dissolution unite to produce a two-phase flow model for NIL simulations of 10,000 drops per cm². Qualitative agreement between simulation and experiment provides a modicum of validation of this model for flow in NIL simulations. The two-phase model simulations predict that both dissolution and viscous resistance affect throughput. The coupling of a reduced-order model for 3D structural mechanics with the two-phase flow model enables simulations of drop merger on a free-span tensioned web. Challenges in improving the structural model lead to the formulation of a 2D model for which sources of instability are more easily discovered and understood. Inextensible cylindrical shell theory and lubrication theory combine into a model for the elastohydrodynamics of a rolling-imprint modality of NIL. Foil-bearing theory describes the lubrication layer that forms between a thin, tensioned web moving past another surface. Reproduction of the results of foil-bearing theory validates this coupled model and reveals a highly predictable region of uniformity that provides the low shear stress conditions ideal for UV cure. These results show theoretical limitations that are used to construct a processing window for predicting process feasibility.
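
    The thesis model itself is not reproduced in this abstract; as a reference point, the isothermal, incompressible Reynolds lubrication equation on which such reduced-order thin-film models are commonly built (written here without wall-sliding terms, an assumption appropriate to a squeeze-flow-dominated imprint step) is

        \frac{\partial}{\partial x}\left(\frac{h^{3}}{12\mu}\frac{\partial p}{\partial x}\right) + \frac{\partial}{\partial y}\left(\frac{h^{3}}{12\mu}\frac{\partial p}{\partial y}\right) = \frac{\partial h}{\partial t}

    where h(x, y, t) is the local gap between template and substrate, p the resist pressure, and \mu the resist viscosity; the right-hand side is the squeeze term set by the closing of the gap as the template approaches.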

    Polyglot software development

    The languages we choose to design solutions with influence the way we think about the problem, the words we use in discussing it with colleagues, and the processes we adopt in developing the software that should solve that problem. Therefore we should strive to use the best language possible for depicting each facet of the system. To do that we have to solve two challenges: i) understand the merits and issues brought by the languages we could adopt and their long-reaching effects on the organizations, and ii) combine them wisely, trying to reduce the overhead due to their assembling. In the first part of this dissertation we study the adoption of modeling and domain-specific languages. On the basis of an industrial survey we identify a list of benefits attainable through these languages, how frequently they can be reached, and which techniques improve the chances of obtaining a particular benefit. In the same way we also study the common problems which either prevent or hinder the adoption of these languages. We then analyze the processes through which these languages are employed, studying the relative frequency of usage of the different techniques and the factors influencing it. Finally we present two case studies, performed in a small and in a very large company, with the intent of presenting the peculiarities of adoption in different contexts. As a consequence of adopting specialized languages, many of them have to be employed to represent the complete solution. Therefore in the second part of the thesis we focus on the integration of these languages. Since this topic is quite new, we performed preliminary studies to first understand the phenomenon, studying the different ways in which languages interact and their effects on defectivity. We then present some prototypal solutions for i) the automatic spotting of cross-language relations, and ii) the design of language-integration tool support in language workbenches through the exploitation of a common meta-metamodel. This thesis aims to offer a contribution towards the productive adoption of multiple, specific languages in the same software development project, hence polyglot software development. From this approach we should be able to reduce the complexity due to misrepresentation of solutions, offer better facilities to think about problems, and, finally, be able to solve more difficult problems with our limited brain resources. Our results consist in a better understanding of MDD and DSL adoption in companies. From that we derive guidelines for practitioners, lessons learned for deploying in companies of different sizes, and implications for other actors involved in the process: company management and universities. Regarding cross-language relations, our contribution is an initial definition of the problem, supported by some empirical evidence to sustain its importance. The solutions we propose are not yet mature, but we believe that future work can stem from them.
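
    The prototypes are not described in detail in this abstract; as a rough sketch of what automatic spotting of cross-language relations can amount to, the snippet below (file types and heuristic chosen purely for illustration) reports identifiers that occur in both the Java sources and the SQL scripts of a project, as candidate cross-language links:

        # Illustrative, deliberately naive cross-language relation spotting:
        # identifiers shared between Java sources and SQL scripts are reported
        # as candidate links between the two languages.
        import re
        from pathlib import Path

        IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]{2,}")

        def identifiers(root, suffix):
            names = set()
            for f in Path(root).rglob(f"*{suffix}"):
                names.update(IDENT.findall(f.read_text(errors="ignore")))
            return names

        def cross_language_candidates(root):
            java = identifiers(root, ".java")
            sql = identifiers(root, ".sql")
            return sorted(java & sql)   # names appearing in both languages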

    Expanding the Optical Capabilities of Germanium in the Infrared Range Through Group IV and III-V-IV Alloy Systems

    The work described in this thesis explores the synthesis of new semiconductors in the Si-Ge-Sn system for application in Si-photonics. Direct gap Ge1-ySny (y=0.12-0.16) alloys with enhanced light emission and absorption are pursued. Monocrystalline layers are grown on Si platforms via epitaxy-driven reactions between Sn- and Ge-hydrides, using compositionally graded buffer layers that mitigate the lattice mismatch between the epilayer and the Si platform. Prototype p-i-n structures are fabricated and are found to exhibit direct gap electroluminescence and tunable absorption edges between 2200 and 2700 nm, indicating applications in LEDs and detectors. Additionally, a low-pressure technique is described that produces pseudomorphic Ge1-ySny alloys in the compositional range y=0.06-0.17. Synthesis of these materials is achieved at ultra-low temperatures, resulting in nearly defect-free films that far exceed the critical thicknesses predicted by thermodynamic considerations, and provides a chemically driven route toward materials with properties typically associated with molecular beam epitaxy. Silicon incorporation into Ge1-ySny yields a new class of Ge1-x-ySixSny (y>x) ternary alloys using reactions between Ge3H8, Si4H10, and SnD4. These materials contain small amounts of Si (x=0.05-0.08) and Sn contents of y=0.1-0.15. Photoluminescence studies indicate an intensity enhancement relative to materials with lower Sn contents (y=0.05-0.09). These materials may serve as thermally robust alternatives to Ge1-ySny for mid-infrared (IR) optoelectronic applications. An extension of the above work is the discovery of a new class of Ge-like Group III-V-IV hybrids with compositions Ga(As1-xPx)Ge3 (x=0.01-0.90) and (GaP)yGe5-2y, related to Ge1-x-ySixSny in structure and properties. These materials are prepared by chemical vapor deposition of reactive Ga-hydrides with P(GeH3)3 and As(GeH3)3 custom precursors as the sources of P, As, and Ge, incorporating isolated GaAs and GaP donor-acceptor pairs into diamond-like Ge-based structures. Photoluminescence studies reveal bandgaps in the near-IR and large bowing of the optical behavior relative to linear interpolation of the III-V and Ge end members. Similar materials in the Al-Sb-B-P system are also prepared and characterized. The common theme of the above topics is the design and fabrication of new optoelectronic materials that can be fully compatible with Si-based technologies for expanding the optoelectronic capabilities of Ge into the mid-IR and beyond through compositional tuning of the diamond lattice.
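
    The "large bowing" mentioned above refers to the standard quadratic deviation of an alloy bandgap from linear (Vegard-like) interpolation between its end members; for a pseudo-binary alloy A_xB_{1-x} it is commonly written as

        E_g(x) = x\,E_g^{A} + (1 - x)\,E_g^{B} - b\,x(1 - x)

    where b is the bowing parameter. The formula is included only to make the term concrete; the specific b values for these Ge-based hybrids are not given in the abstract.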

    Characterisation of bipolar parasitic transistors for CMOS process control
