
    Analysis of simulation with common random numbers


    Analyzing Simulation Experiments with Common Random Numbers

    To analyze simulation runs that use the same random numbers, the blocking concept of experimental design is not needed. Instead, this paper applies a linear regression model with a nondiagonal covariance matrix. This covariance matrix does not need to have a specific pattern such as constant covariances. A simple example yields surprising results. The paper proposes a new framework for the error analysis, consisting of three factors (common random numbers, replication, model validity), each with three levels.
    Keywords: blocking, variance reduction, estimated generalized least squares, general linear model, error analysis
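
    The regression analysis with a nondiagonal covariance matrix described in this abstract can be sketched with estimated generalized least squares (EGLS): estimate the covariance matrix of the responses from replicated runs and use it to weight the regression. The toy data, factor values, and variable names below are illustrative assumptions, not taken from the paper.

```python
# Minimal EGLS sketch for a simulation metamodel under common random numbers (CRN).
# All data are synthetic; CRN is mimicked by a shared noise component across design points.
import numpy as np

rng = np.random.default_rng(0)

n_points = 4   # simulated scenarios (design points)
n_reps = 10    # macro-replications; CRN correlates responses across points within a replication
X = np.column_stack([np.ones(n_points), np.array([1.0, 2.0, 3.0, 4.0])])  # regressors

# Toy responses: each replication yields one response per design point.
true_beta = np.array([2.0, 0.5])
shared = rng.normal(size=(n_reps, 1))                  # CRN-induced common noise component
Y = true_beta @ X.T + shared + 0.3 * rng.normal(size=(n_reps, n_points))

y_bar = Y.mean(axis=0)                                 # average response per design point
S = np.cov(Y, rowvar=False)                            # estimated (non-diagonal) covariance matrix
Sigma_hat = S / n_reps                                 # covariance of the averaged responses

# EGLS: beta_hat = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y_bar
Si = np.linalg.inv(Sigma_hat)
beta_hat = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y_bar)
cov_beta = np.linalg.inv(X.T @ Si @ X)                 # approximate covariance of beta_hat
print("EGLS estimates:", beta_hat)
print("Estimated covariance of the estimates:\n", cov_beta)
```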

    Experimental Designs for Sensitivity Analysis of Simulation Models

    This introductory tutorial gives a survey of the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation implied by the simulation model; the resulting regression model is also known as a metamodel, response surface, compact model, or emulator. Regression analysis gives better results when the simulation experiment is well designed, using classical statistical designs such as fractional factorials, including 2^(k-p) designs. These statistical techniques reduce the ad hoc character of simulation; that is, they can make simulation studies give more general results in less time.
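
    As a rough illustration of the approach surveyed here, the sketch below builds a small 2^(3-1) fractional factorial design and fits a first-order regression metamodel by ordinary least squares. The simulate function is a hypothetical stand-in; in practice each design row would correspond to one simulation run.

```python
# Illustrative sketch (not from the tutorial): 2^(3-1) fractional factorial design
# plus a first-order polynomial metamodel fitted by OLS.
import itertools
import numpy as np

# Full 2^2 factorial in factors A and B; confound the third factor C = A*B.
base = np.array(list(itertools.product([-1, 1], repeat=2)))   # columns A, B
design = np.column_stack([base, base[:, 0] * base[:, 1]])     # column C = A*B

def simulate(a, b, c, rng):
    # Hypothetical simulation response, used only to make the sketch runnable.
    return 10 + 2 * a - 1.5 * b + 0.5 * c + rng.normal(scale=0.2)

rng = np.random.default_rng(1)
y = np.array([simulate(*row, rng) for row in design])          # one "run" per design point

X = np.column_stack([np.ones(len(design)), design])            # intercept + main effects
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)               # first-order metamodel coefficients
print("Estimated effects (intercept, A, B, C):", beta_hat)
```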

    Risk analysis and sensitivity analysis


    Regression Metamodels for Simulation with Common Random Numbers: Comparison of Validation Tests and Confidence Intervals

    Linear regression analysis is important in many fields. In the analysis of simulation results, a regression (meta)model can be applied, even when common pseudorandom numbers are used. To test the validity of the specified regression model, Rao (1959) generalized the F statistic for lack of fit, whereas Kleijnen (1983) proposed a cross-validation procedure using a Student's t statistic combined with Bonferroni's inequality. This paper reports on an extensive Monte Carlo experiment designed to compare these two methods. Under the normality assumption, cross-validation is conservative, whereas Rao's test realizes its nominal type I error and has high power. Robustness is investigated through lognormal and uniform distributions. When simulation responses are distributed lognormally, cross-validation using Ordinary Least Squares is the only technique that has acceptable type I error. Uniform distributions give results similar to the normal case. Once the regression model is validated, confidence intervals for the individual regression parameters are computed. The Monte Carlo experiment compares several confidence interval procedures. Under normality, Rao's procedure is preferred since it has good coverage probability and acceptable half-length. Under lognormality, Ordinary Least Squares achieves nominal coverage probability. Uniform distributions again give results similar to the normal case.
    Keywords: common seeds, metamodeling, specification error, Hotelling's statistic, experimental design
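
    The cross-validation idea compared in this paper can be illustrated generically: leave out one design point, re-estimate the metamodel, studentize the prediction error, and apply a Bonferroni correction over all left-out points. The sketch below makes simplifying assumptions (independent responses, OLS) and is not a reproduction of Kleijnen's (1983) procedure or of Rao's test; all names and data are illustrative.

```python
# Generic leave-one-out cross-validation check for a regression metamodel,
# using Student-t statistics with a Bonferroni correction.
import numpy as np
from scipy import stats

def cross_validate(X, y, alpha=0.10):
    n, q = X.shape
    t_stats = []
    for i in range(n):
        keep = np.arange(n) != i
        Xi, yi = X[keep], y[keep]
        beta = np.linalg.lstsq(Xi, yi, rcond=None)[0]          # re-estimate without point i
        resid = yi - Xi @ beta
        s2 = resid @ resid / (n - 1 - q)                       # residual variance without point i
        x0 = X[i]
        pred_var = s2 * (1 + x0 @ np.linalg.inv(Xi.T @ Xi) @ x0)
        t_stats.append((y[i] - x0 @ beta) / np.sqrt(pred_var)) # studentized prediction error
    t_stats = np.abs(np.array(t_stats))
    t_crit = stats.t.ppf(1 - alpha / (2 * n), df=n - 1 - q)    # Bonferroni: alpha split over n tests
    return t_stats, t_crit, bool(np.all(t_stats <= t_crit))    # True = metamodel not rejected

# Example usage with toy data (assumed, for illustration only):
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(8), rng.normal(size=(8, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + 0.1 * rng.normal(size=8)
t_stats, t_crit, valid = cross_validate(X, y)
print("max |t|:", t_stats.max(), "critical value:", t_crit, "not rejected:", valid)
```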