
    Customized Sequential Designs for Random Simulation Experiments: Kriging Metamodelling and Bootstrapping

    This paper proposes a novel method to select an experimental design for interpolation in random simulation. (Though the paper focuses on Kriging, this method may also apply to other types of metamodels such as linear regression models.) Assuming that simulation requires much computer time, it is important to select a design with a small number of observations (or simulation runs). The proposed method is therefore sequential. Its novelty is that it accounts for the specific input/output behavior (or response function) of the particular simulation at hand; i.e., the method is customized or application-driven. A tool for this customization is bootstrapping, which enables the estimation of the variances of predictions for inputs not yet simulated. The new method is tested through the classic M/M/1 queueing simulation. For this simulation the novel design indeed gives better results than a Latin Hypercube Sampling (LHS) design with a prefixed sample of the same size.
    Keywords: simulation; statistical methods; bootstrap
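
    As a rough illustration of the idea (a minimal sketch, not the paper's exact algorithm), the code below simulates M/M/1 waiting times at a few traffic intensities, fits a Gaussian-process (Kriging) metamodel, and uses bootstrap replications of the simulation output to estimate the predictor's variance at candidate inputs; the candidate with the largest bootstrap variance would be simulated next. The design points, kernel, budget, and candidate grid are all arbitrary choices made here for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)

        def mm1_waits(rho, n=2000):
            """Simulate n customer waiting times in an M/M/1 queue (service rate 1)."""
            w, out = 0.0, []
            for _ in range(n):
                inter = rng.exponential(1.0 / rho)   # arrival rate = rho
                serv = rng.exponential(1.0)          # service rate = 1
                w = max(0.0, w + serv - inter)       # Lindley recursion
                out.append(w)
            return np.array(out)

        design = np.array([0.3, 0.5, 0.7, 0.9])      # traffic intensities simulated so far
        samples = [mm1_waits(r) for r in design]

        candidates = np.linspace(0.2, 0.92, 30)
        B = 50                                       # bootstrap replications
        boot_preds = np.empty((B, candidates.size))
        for b in range(B):
            # resample each run's waiting times to get a bootstrapped mean response
            y_b = [rng.choice(s, size=s.size, replace=True).mean() for s in samples]
            gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True)
            gp.fit(design.reshape(-1, 1), y_b)
            boot_preds[b] = gp.predict(candidates.reshape(-1, 1))

        next_x = candidates[boot_preds.var(axis=0).argmax()]
        print(f"next design point (max bootstrap variance): rho = {next_x:.3f}")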

    Multi-Information Source Fusion and Optimization to Realize ICME: Application to Dual Phase Materials

    Integrated Computational Materials Engineering (ICME) calls for the integration of computational tools into the materials and parts development cycle, while the Materials Genome Initiative (MGI) calls for the acceleration of the materials development cycle through the combination of experiments, simulation, and data. As they stand, neither ICME nor MGI prescribes how to achieve the necessary tool integration or how to efficiently exploit the computational tools, in combination with experiments, to accelerate the development of new materials and materials systems. This paper addresses the first issue by putting forward a framework for the fusion of information that exploits correlations among sources/models and between the sources and 'ground truth'. The second issue is addressed through a multi-information source optimization framework that identifies, given current knowledge, the next best information source to query and where in the input space to query it via a novel value-gradient policy. The querying decision takes into account the ability to learn correlations between information sources, the resource cost of querying an information source, and what a query is expected to provide in terms of improvement over the current state. The framework is demonstrated on the optimization of a dual-phase steel to maximize its strength-normalized strain hardening rate. The ground truth is represented by a microstructure-based finite element model, while three low-fidelity information sources (reduced order models based on different homogenization assumptions: isostrain, isostress, and isowork) are used to efficiently and optimally query the materials design space.
    Comment: 19 pages, 11 figures, 5 tables
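
    To make the cost-aware querying idea concrete, here is a deliberately minimal, hypothetical sketch: score each information source by expected payoff per unit query cost and pick the best. The source names echo the abstract, but the costs and expected-improvement numbers are invented, and the paper's actual value-gradient policy is considerably more elaborate than this ratio rule.

        # Hypothetical numbers; the value-gradient policy in the paper also
        # accounts for learning correlations between sources.
        sources = {
            # name: (query cost, expected improvement in the objective from one query)
            "isostrain": (1.0, 0.04),
            "isostress": (1.0, 0.03),
            "isowork": (1.5, 0.06),
            "ground_truth_FEM": (50.0, 0.20),
        }

        def best_query(sources):
            """Pick the source with the highest expected improvement per unit cost."""
            return max(sources, key=lambda s: sources[s][1] / sources[s][0])

        print("query next:", best_query(sources))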

    Design of Experiments: An Overview

    Design Of Experiments (DOE) is needed for experiments with real-life systems, and with either deterministic or random simulation models. This contribution discusses the different types of DOE for these three domains, but focuses on random simulation. DOE may have two goals: sensitivity analysis (including factor screening) and optimization. This contribution starts with classic DOE, including 2^(k-p) and Central Composite designs. Next, it discusses factor screening through Sequential Bifurcation. Then it discusses Kriging, including Latin Hypercube Sampling and sequential designs. It ends with optimization through Generalized Response Surface Methodology and Kriging combined with Mathematical Programming, including Taguchian robust optimization.
    Keywords: simulation; sensitivity analysis; optimization; factor screening; Kriging; RSM; Taguchi
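
    As a small worked example of the classic designs mentioned above, the sketch below constructs a 2^(3-1) fractional factorial in coded -1/+1 levels, using the standard generator C = AB (the choice of generator here is just the textbook one, not specific to this paper).

        import itertools
        import numpy as np

        # Full 2^2 design in factors A and B, then generate C = A*B.
        base = np.array(list(itertools.product([-1, 1], repeat=2)))  # columns A, B
        design = np.column_stack([base, base[:, 0] * base[:, 1]])    # add C = AB
        print(design)  # 4 runs (half of the full 2^3 = 8), one row per run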

    Application-driven Sequential Designs for Simulation Experiments: Kriging Metamodeling

    This paper proposes a novel method to select an experimental design for interpolation in simulation. Though the paper focuses on Kriging in deterministic simulation, the method also applies to other types of metamodels (besides Kriging), and to stochastic simulation. The paper focuses on simulations that require much computer time, so it is important to select a design with a small number of observations. The proposed method is therefore sequential. The novelty of the method is that it accounts for the specific input/output function of the particular simulation model at hand; i.e., the method is application-driven or customized. This customization is achieved through cross-validation and jackknifing. The new method is tested through two academic applications, which demonstrate that the method indeed gives better results than a design with a prefixed sample size.
    Keywords: experimental design; simulation; interpolation; sampling; sensitivity analysis; metamodels
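
    A minimal sketch of the cross-validation flavor of this idea (assuming a toy deterministic function in place of a real simulation, and not reproducing the paper's exact jackknife estimator): refit the Kriging model leaving out one design point at a time, and pick as the next design point the candidate where the leave-one-out predictions disagree most.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        f = lambda x: np.sin(3 * x)                  # toy deterministic "simulation"
        X = np.array([0.0, 0.4, 0.8, 1.2, 1.6])      # current design
        y = f(X)

        candidates = np.linspace(0.0, 1.6, 50)
        loo_preds = []
        for i in range(X.size):                      # leave-one-out refits
            keep = np.arange(X.size) != i
            gp = GaussianProcessRegressor(kernel=RBF(0.3))
            gp.fit(X[keep, None], y[keep])
            loo_preds.append(gp.predict(candidates[:, None]))

        # Jackknife-style spread of the cross-validated predictions per candidate
        spread = np.var(loo_preds, axis=0)
        print("next design point:", candidates[spread.argmax()])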

    Generating cycle time-throughput curves using effective process time based aggregate modeling

    In semiconductor manufacturing, cycle time-throughput (CT-TH) curves are often used for planning purposes. To generate CT-TH curves, detailed simulation models or analytical queueing approximations may be used. Detailed models require much development time and computational effort. On the other hand, analytical models, such as the popular closed-form G/G/m queueing expression, may not be sufficiently accurate, in particular for integrated processing equipment that has wafers of more than one lot in process. Recently, an aggregate simulation model representation of workstations with integrated processing equipment has been proposed. This aggregate model is a G/G/m type of system with a workload-dependent process time distribution, which is obtained from lot arrival and departure events. This paper presents a first proof of concept of the method in semiconductor practice. We develop the required extensions to generate CT-TH curves for workstations in a semiconductor manufacturing environment, where usually only a limited amount of arrival and departure data is available. We present a simulation case and an industry case to illustrate the proposed method.
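
    For context, the kind of closed-form G/G/m expression the abstract refers to is presumably along the lines of Sakasegawa's approximation (popularized in Factory Physics); the sketch below uses it to trace a CT-TH curve, with the parameter values chosen arbitrarily for illustration.

        import numpy as np

        def gg_m_cycle_time(ra, te, m, ca2=1.0, ce2=1.0):
            """Sakasegawa's closed-form G/G/m approximation of mean cycle time.
            ra: arrival rate, te: mean effective process time, m: parallel machines,
            ca2/ce2: squared coefficients of variation of arrivals and process times."""
            u = ra * te / m                               # utilization
            ctq = ((ca2 + ce2) / 2) * (u ** (np.sqrt(2 * (m + 1)) - 1) / (m * (1 - u))) * te
            return ctq + te                               # queue time + process time

        # CT-TH curve: sweep throughput toward the capacity m/te = 1.5 lots/hour
        te, m = 2.0, 3
        for ra in np.linspace(0.3, 1.45, 6):
            print(f"throughput {ra:.2f}/h -> cycle time {gg_m_cycle_time(ra, te, m):.2f} h")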

    Kriging Metamodeling in Simulation: A Review

    This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas, contrasting Kriging with classic linear regression metamodels. Furthermore, it extends Kriging to random simulation, and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs. It ends with topics for future research.
    Keywords: Kriging; Metamodel; Response Surface; Interpolation; Design
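
    The basic formula underlying such reviews is the ordinary Kriging predictor yhat(x0) = mu + r(x0)' R^(-1) (y - mu*1), where R is the correlation matrix of the observed outputs and r(x0) the correlations with the new point. A minimal sketch, assuming a Gaussian correlation function and a one-dimensional input (the theta value and test data are arbitrary):

        import numpy as np

        def kriging_predict(X, y, x0, theta=5.0):
            """Ordinary Kriging predictor with Gaussian correlation
            R_ij = exp(-theta * (x_i - x_j)^2); mu is the GLS mean estimate."""
            R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
            r0 = np.exp(-theta * (x0 - X) ** 2)
            Rinv = np.linalg.inv(R + 1e-10 * np.eye(X.size))  # jitter for stability
            one = np.ones(X.size)
            mu = (one @ Rinv @ y) / (one @ Rinv @ one)        # GLS mean estimate
            return mu + r0 @ Rinv @ (y - mu * one)

        X = np.array([0.0, 0.3, 0.6, 1.0])
        y = np.sin(2 * np.pi * X)
        print(kriging_predict(X, y, 0.45))  # interpolates: exact at design points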

    A Kriging Method for Modeling Cycle Time-Throughput Profiles in Manufacturing

    In semiconductor manufacturing, the steady-state behavior of a wafer fab system can be characterized by its cycle time-throughput profiles. These profiles quantify the relationship between the cycle time of a product and the system throughput and product mix. The objective of this work is to efficiently generate such cycle time-throughput profiles in manufacturing, which can further assist decision making in production planning. In this research, a metamodeling approach based on the Stochastic Kriging model with Qualitative factors (SKQ) has been adopted to quantify the target relationship of interest. Furthermore, a sequential experimental design procedure is developed to improve the efficiency of simulation experiments. For the initial design, a Sequential Conditional Maximin algorithm is utilized. Regarding the follow-up designs, batches of design points are determined using a Particle Swarm Optimization algorithm. The procedure is applied to a Jackson network, as well as a scaled-down wafer fab system. In both examples, the prediction performance of the SKQ model is promising. It is also shown that the SKQ model provides narrower confidence intervals than the Stochastic Kriging (SK) model by pooling the information of the qualitative variables.
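
    As a rough sketch of the follow-up design step only (not the SKQ model itself), the code below implements a minimal particle swarm optimizer and applies it to an invented one-dimensional acquisition surface standing in for the metamodel's uncertainty; all parameters are generic PSO defaults rather than the dissertation's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def pso_maximize(f, lo, hi, n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimizer (maximization) on [lo, hi]."""
            x = rng.uniform(lo, hi, n)                  # particle positions
            v = np.zeros(n)                             # velocities
            pbest, pval = x.copy(), f(x)                # personal bests
            g = pbest[pval.argmax()]                    # global best
            for _ in range(iters):
                v = w * v + c1 * rng.random(n) * (pbest - x) + c2 * rng.random(n) * (g - x)
                x = np.clip(x + v, lo, hi)
                val = f(x)
                better = val > pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmax()]
            return g

        # Toy acquisition surface standing in for the SKQ predictor's uncertainty
        acq = lambda x: np.exp(-(x - 0.7) ** 2 / 0.02) + 0.3 * np.sin(8 * x)
        print("next design point:", pso_maximize(acq, 0.0, 1.0))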

    Aggregate modeling in semiconductor manufacturing using effective process times

    In modern manufacturing, model-based performance analysis is becoming increasingly important due to growing competition and high capital investments. In this PhD project, the performance of a manufacturing system is considered in the sense of throughput (number of products produced per time unit), cycle time (time that a product spends in the manufacturing system), and the amount of work in process (number of products in the system). The focus of this project is on semiconductor manufacturing. Models facilitate performance improvement by providing a systematic connection between operational decisions and performance measures. Two common model types are analytical models and discrete-event simulation models. Analytical models are fast to evaluate, though incorporation of all relevant factory-floor aspects is difficult. Discrete-event simulation models allow for the inclusion of almost any factory-floor aspect, such that a high prediction accuracy can be achieved. However, this comes at the cost of long computation times. Furthermore, data on all the modeled aspects may not be available.

    The number of factory-floor aspects that have to be modeled explicitly can be reduced significantly through aggregation. In this dissertation, simple aggregate analytical or discrete-event simulation models are considered, with only a few parameters such as the mean and the coefficient of variation of an aggregated process time distribution. The aggregate process time lumps together all the relevant aspects of the considered system, and is referred to as the Effective Process Time (EPT) in this dissertation. The EPT may be calculated from the raw process time and the outage delays, such as machine breakdown and setup. However, data on all the outages is often not available. This motivated previous research at the TU/e to develop algorithms which can determine the EPT distribution directly from arrival and departure times, without quantifying the contributing factors. Typical for semiconductor machines is that they often perform a sequence of processes in the various machine chambers, such that wafers of multiple lots are in process at the same time. This is referred to as "lot cascading". To model this cascading behavior, in previous work at the TU/e an aggregate model was developed in which the EPT depends on the amount of Work In Process (WIP). This model serves as the starting point of this dissertation.

    This dissertation presents the efforts to further develop EPT-based aggregate modeling for application in semiconductor manufacturing. In particular, the dissertation contributes to: dealing with the typically limited amount of available data, modeling workstations with a variable product mix, predicting cycle time distributions, and aggregate modeling of networks of workstations. First, the existing aggregate model with WIP-dependent EPTs has been extended with a curve-fitting approach to deal with the limited number of arrivals and departures that can be collected in a realistic time period. The new method is illustrated for four operational semiconductor workstations in the Crolles2 semiconductor factory (in Crolles, France), for which the mean cycle time as a function of the throughput has been predicted. Second, a new EPT-based aggregate model has been developed that predicts the mean cycle time of a workstation as a function of the throughput and the product mix. In semiconductor manufacturing, many workstations produce a mix of different products, and each machine in the workstation may be qualified to process only a subset of these products. The EPT model is validated on a simulation case and on an industry case of an operational Crolles2 workstation. Third, the dissertation presents a new EPT-based aggregate model that can predict the cycle time distribution of a workstation instead of only the mean cycle time. To accurately predict a cycle time distribution, the order in which lots are processed is incorporated in the aggregate model by means of an overtaking distribution. An extensive simulation study and an industry case demonstrate that the aggregate model can accurately predict the cycle time distribution of integrated processing workstations in semiconductor manufacturing. Finally, aggregate modeling of networks of semiconductor workstations has been explored. Two modeling approaches are investigated: the entire network is modeled as a single aggregate server, and the network is modeled as an aggregate network that consists of an aggregate model for each workstation. The accuracy of the model predictions using the two approaches is investigated by means of a simulation case of a re-entrant flow line. The results of these aggregate models are promising.
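
    To illustrate the core EPT idea in its simplest single-server form (a sketch with invented event times; the dissertation's WIP-dependent, multi-server algorithms generalize this considerably): a lot's effective process time runs from the moment the machine could start working on it, i.e. max(its arrival, the previous departure), until its departure.

        # Hypothetical arrival/departure event times (hours) for one machine
        arrivals = [0.0, 1.0, 2.5, 6.0]
        departures = [2.0, 3.5, 5.0, 7.5]

        prev_dep = 0.0
        for a, d in zip(arrivals, departures):
            ept = d - max(a, prev_dep)  # EPT = departure - epoch work could start
            print(f"lot arrived {a:.1f}, departed {d:.1f} -> EPT {ept:.2f} h")
            prev_dep = d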