Metamodel variability analysis combining bootstrapping and validation techniques
Research on metamodel-based optimization has received increasing interest in recent years and has found successful applications in solving computationally expensive problems. The joint use of computer simulation experiments and metamodels introduces a source of uncertainty that we refer to as metamodel variability. To analyze and quantify this variability, we apply bootstrapping to residuals derived as prediction errors computed from cross-validation. The proposed method can be used with different types of metamodels, especially when limited knowledge of the parameters' distribution is available or when only a limited computational budget is allowed. Our preliminary experiments, based on the robust version of the EOQ model, show encouraging results.
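The procedure described above (cross-validation residuals fed into a bootstrap) can be sketched in a few lines. The noisy quadratic test function, the noise level, and the first-order polynomial metamodel below are illustrative assumptions, not the paper's actual experimental setup, which uses the robust EOQ model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive simulation: noisy quadratic response at 8 design points.
x = np.linspace(0.5, 3.0, 8)
y = 2.0 + 1.5 * x**2 + rng.normal(scale=0.3, size=x.size)

def fit_predict(x_train, y_train, x_new):
    """First-order polynomial metamodel fitted by ordinary least squares."""
    return np.polyval(np.polyfit(x_train, y_train, deg=1), x_new)

# Leave-one-out cross-validation residuals (prediction errors).
cv_resid = np.array([
    y[i] - fit_predict(np.delete(x, i), np.delete(y, i), x[i])
    for i in range(x.size)
])

# Bootstrap the CV residuals to quantify metamodel variability at a new point x0.
x0, B = 1.7, 500
preds = np.empty(B)
for b in range(B):
    y_star = fit_predict(x, y, x) + rng.choice(cv_resid, size=x.size, replace=True)
    preds[b] = fit_predict(x, y_star, x0)

print(preds.std())  # bootstrap estimate of the metamodel's prediction variability at x0
```

The spread of the bootstrapped predictions quantifies the metamodel variability at an input that was never simulated, at no extra simulation cost.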
Kriging Metamodeling in Simulation: A Review
This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas, contrasting Kriging with classic linear regression metamodels. Furthermore, it extends Kriging to random simulation and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs. It ends with topics for future research.
Keywords: Kriging; metamodel; response surface; interpolation; design
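The ordinary Kriging predictor reviewed here has a compact closed form, y_hat(x0) = mu_hat + r(x0)' R^(-1) (y - mu_hat 1). A minimal sketch, assuming a one-dimensional input and a Gaussian correlation function with a fixed parameter theta (in practice theta is estimated, e.g. by maximum likelihood):

```python
import numpy as np

def gauss_corr(a, b, theta=2.0):
    """Gaussian correlation matrix between 1-D input arrays a and b."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

def ordinary_kriging(x, y, x_new, theta=2.0):
    """Ordinary Kriging predictor: mu + r(x_new)' R^{-1} (y - mu 1)."""
    ones = np.ones(x.size)
    R = gauss_corr(x, x, theta) + 1e-10 * np.eye(x.size)  # tiny nugget for stability
    mu = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    r = gauss_corr(np.atleast_1d(x_new), x, theta)
    return mu + r @ np.linalg.solve(R, y - mu)

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(x)
print(ordinary_kriging(x, y, 0.5))  # exact interpolation at a design point
```

Unlike a fitted regression metamodel, this predictor reproduces the observed outputs exactly at the design points, which is the key contrast drawn in the review.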
White Noise Assumptions Revisited: Regression Models and Statistical Designs for Simulation Practice
Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these classic assumptions in simulation practice? (ii) How can these assumptions be tested? (iii) If assumptions are violated, can the simulation's I/O data be transformed such that the assumptions hold? (iv) If not, which alternative statistical methods can then be applied?
Keywords: metamodels; experimental designs; generalized least squares; multivariate analysis; normality; jackknife; bootstrap; heteroscedasticity; common random numbers; validation
Design of Experiments: An Overview
Design Of Experiments (DOE) is needed for experiments with real-life systems, and with either deterministic or random simulation models. This contribution discusses the different types of DOE for these three domains, but focuses on random simulation. DOE may have two goals: sensitivity analysis (including factor screening) and optimization. This contribution starts with classic DOE, including 2^(k-p) and Central Composite designs. Next, it discusses factor screening through Sequential Bifurcation. Then it discusses Kriging, including Latin Hypercube Sampling and sequential designs. It ends with optimization through Generalized Response Surface Methodology and Kriging combined with Mathematical Programming, including Taguchian robust optimization.
Keywords: simulation; sensitivity analysis; optimization; factor screening; Kriging; RSM; Taguchi
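As an illustration of the classic designs mentioned above, a 2^(3-1) fractional-factorial design can be generated from a full 2^2 design plus the generator C = AB. This is the standard textbook construction, not a detail specific to this contribution:

```python
import itertools
import numpy as np

# 2^(3-1) fractional-factorial design with generator C = AB (resolution III):
# 4 runs instead of 8, enough to estimate a first-order polynomial metamodel.
base = np.array(list(itertools.product([-1, 1], repeat=2)))  # full 2^2 design in A, B
design = np.column_stack([base, base[:, 0] * base[:, 1]])    # aliased column C = A*B
print(design)
```

The three columns are balanced and mutually orthogonal, so the three first-order effects can be estimated independently from only four simulation runs.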
Customized Sequential Designs for Random Simulation Experiments: Kriging Metamodelling and Bootstrapping
This paper proposes a novel method to select an experimental design for interpolation in random simulation. (Though the paper focuses on Kriging, this method may also apply to other types of metamodels, such as linear regression models.) Assuming that simulation requires much computer time, it is important to select a design with a small number of observations (or simulation runs). The proposed method is therefore sequential. Its novelty is that it accounts for the specific input/output behavior (or response function) of the particular simulation at hand; i.e., the method is customized or application-driven. A tool for this customization is bootstrapping, which enables the estimation of the variances of predictions for inputs not yet simulated. The new method is tested through the classic M/M/1 queueing simulation. For this simulation, the novel design indeed gives better results than Latin Hypercube Sampling (LHS) with a prefixed sample of the same size.
Keywords: simulation; statistical methods; bootstrap
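The customization idea can be sketched as follows. For brevity, this sketch replaces Kriging by a simple polynomial metamodel and uses a toy stand-in for the M/M/1 simulation; the logic is the same as in the paper: bootstrap the predictor at candidate inputs not yet simulated, and simulate next where the bootstrapped variance is largest:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(x):
    """Toy stand-in for an expensive random simulation (cf. the M/M/1 example)."""
    return x / (1.0 - x) + rng.normal(scale=0.05)

def fit_predict(x_train, y_train, x_new):
    """Second-degree polynomial metamodel standing in for Kriging."""
    return np.polyval(np.polyfit(x_train, y_train, deg=2), x_new)

# Small pilot design; candidates are inputs not yet simulated.
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y = np.array([simulate(v) for v in x])
candidates = np.linspace(0.15, 0.85, 15)

# Bootstrap (resampling fitted residuals) to estimate the predictor's
# variance at each candidate input.
resid = y - fit_predict(x, y, x)
B = 200
boot = np.empty((B, candidates.size))
for b in range(B):
    y_star = fit_predict(x, y, x) + rng.choice(resid, size=x.size, replace=True)
    boot[b] = fit_predict(x, y_star, candidates)

next_x = candidates[boot.var(axis=0).argmax()]  # simulate this input next
print(next_x)
```

Iterating this select-simulate-refit loop concentrates runs where the response is hardest to predict, which is what makes the design application-driven rather than space-filling like LHS.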
Simulation Experiments in Practice: Statistical Design and Regression Analysis
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independently distributed with a constant variance; moreover, the regression (meta)model of the simulation model's I/O behaviour is assumed to have residuals with zero means. This article addresses the following practical questions: (i) How realistic are these assumptions, in practice? (ii) How can these assumptions be tested? (iii) If assumptions are violated, can the simulation's I/O data be transformed such that the assumptions do hold? (iv) If not, which alternative statistical methods can then be applied?
Keywords: metamodel; experimental design; jackknife; bootstrap; common random numbers; validation
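Among the alternative statistical methods listed, the jackknife is easy to sketch: delete one replication at a time, recompute the estimator, and form pseudovalues that yield a bias-corrected estimate and a standard error. The exponential data and the log-of-mean estimator below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(scale=2.0, size=20)  # 20 replications of a non-normal output

# Nonlinear point estimator (illustrative): log of the mean response.
theta_hat = np.log(y.mean())

# Jackknife: delete one replication at a time and form pseudovalues.
n = y.size
theta_del = np.array([np.log(np.delete(y, i).mean()) for i in range(n)])
pseudo = n * theta_hat - (n - 1) * theta_del

jack_est = pseudo.mean()                   # bias-corrected point estimate
jack_se = pseudo.std(ddof=1) / np.sqrt(n)  # standard error for a t-type interval
print(jack_est, jack_se)
```

Because the pseudovalues are treated as approximately i.i.d., a t-distribution confidence interval can be built from them even when the raw output is non-normal.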
Simulation Experiments in Practice: Statistical Design and Regression Analysis
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic theory assumes a single simulation response that is normally and independently distributed with a constant variance; moreover, the regression (meta)model of the simulation model's I/O behaviour is assumed to have residuals with zero means. This article addresses the following questions: (i) How realistic are these assumptions, in practice? (ii) How can these assumptions be tested? (iii) If assumptions are violated, can the simulation's I/O data be transformed such that the assumptions do hold? (iv) If not, which alternative statistical methods can then be applied?
Keywords: metamodels; experimental designs; generalized least squares; multivariate analysis; normality; jackknife; bootstrap; heteroscedasticity; common random numbers; validation
Sensitivity Analysis of Simulation Models
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial metamodels, resolution-IV and resolution-V designs for metamodels augmented with two-factor interactions, and designs for second-degree polynomial metamodels including central composite designs. It also reviews factor screening for simulation models with very many factors, focusing on the so-called "sequential bifurcation" method. Furthermore, it reviews Kriging metamodels and their designs. It mentions that sensitivity analysis may also aim at the optimization of the simulated system, allowing multiple random simulation outputs.
Keywords: simulation; sensitivity analysis; gradients; screening; Kriging; optimization; Response Surface Methodology; Taguchi
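Sequential bifurcation can be illustrated in an idealized deterministic setting: assume the simulation output is additive in first-order effects that are all known to be non-negative, so a group's total effect is observed directly. In real applications, the group effect is instead estimated from (random) simulation runs at the corresponding factor-level combinations:

```python
def seq_bifurcation(effects, lo, hi, found):
    """Recursively locate important factors in [lo, hi), assuming the output is
    additive in first-order effects that are all known to be non-negative."""
    group_effect = sum(effects[lo:hi])  # in practice: estimated from simulation runs
    if group_effect == 0:               # whole group unimportant: discard it
        return
    if hi - lo == 1:                    # single important factor identified
        found.append(lo)
        return
    mid = (lo + hi) // 2                # otherwise bisect and recurse
    seq_bifurcation(effects, lo, mid, found)
    seq_bifurcation(effects, mid, hi, found)

effects = [0, 0, 3, 0, 0, 0, 5, 0, 0, 1, 0, 0]  # true (hidden) first-order effects
found = []
seq_bifurcation(effects, 0, len(effects), found)
print(found)  # → [2, 6, 9]
```

Because unimportant groups are discarded wholesale, the number of runs grows with the (small) number of important factors rather than with the (very large) total number of factors.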
Constrained optimization in simulation: a novel approach.
This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the simulation inputs must meet prespecified constraints, including the constraint that the inputs be integer. The proposed heuristic combines (i) experimental design to specify the simulation input combinations, (ii) Kriging (also called spatial correlation modeling) to analyze the global simulation input/output data that result from this experimental design, and (iii) integer nonlinear programming to estimate the optimal solution from the Kriging metamodels. The heuristic is applied to an (s, S) inventory system and a realistic call-center simulation model, and compared with the popular commercial heuristic OptQuest embedded in the ARENA versions 11 and 12. These two applications show that the novel heuristic outperforms OptQuest in terms of search speed (it moves faster towards high-quality solutions) and consistency of the solution quality.
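The heuristic's third step can be caricatured with exhaustive enumeration. The closed-form cost and service functions below are hypothetical stand-ins for the fitted Kriging metamodels of the (s, S) inventory example, and the double loop stands in for integer nonlinear programming:

```python
import itertools
import math

# Hypothetical closed-form stand-ins for fitted Kriging metamodels of an (s, S)
# inventory simulation: expected cost (objective) and service level (constraint).
def cost(s, S):
    return 0.5 * S + 30.0 / (S - s) + 0.2 * s

def service(s):
    return 1.0 - math.exp(-0.15 * s)

best = None
for s, S in itertools.product(range(1, 30), range(2, 40)):
    if S <= s:                 # input constraint: reorder level below order-up-to level
        continue
    if service(s) < 0.95:      # output constraint: prespecified service-level target
        continue
    if best is None or cost(s, S) < cost(*best):
        best = (s, S)

print(best)  # estimated optimal integer input combination
```

In the actual heuristic, the candidate minimizer found on the metamodels is then simulated, the Kriging metamodels are refitted with the new I/O data, and the search repeats.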
Design and Analysis of Monte Carlo Experiments
Keywords: Monte Carlo experiments; simulation models; mathematical analysis; sensitivity analysis; experimental design