
    Design of Experiments for Screening

    The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs and systematic fractional replicate designs. Generic issues of aliasing, bias and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performances are compared.
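    To make the first class above concrete, here is a minimal sketch in Python/NumPy (our illustration, not taken from the paper) of the classic saturated regular 2^(7-4) resolution III fraction, which screens seven factors in eight runs at the price of aliasing each main effect with two-factor interactions (e.g. D with AB):

```python
import numpy as np
from itertools import product

# Full 2^3 factorial in the base factors A, B, C (coded -1/+1).
base = np.array(list(product([-1, 1], repeat=3)))
A, B, C = base.T

# Generators D = AB, E = AC, F = BC, G = ABC give the saturated
# 2^(7-4) resolution III fraction: 7 factors in 8 runs.
design = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])

# The seven main-effect columns are mutually orthogonal: X'X = 8 I.
assert np.allclose(design.T @ design, 8 * np.eye(7))
print(design)
```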

    Regularities in the Augmentation of Fractional Factorial Designs

    Two-level factorial experiments are widely used in experimental design because they are simple to construct and interpret while also being efficient. However, full factorial designs for many factors can quickly become inefficient, time-consuming, or expensive, and therefore fractional factorial designs are sometimes preferable since they provide information on effects of interest and can be performed in fewer experimental runs. The disadvantage of these designs is that, with fewer experimental runs, information about effects of interest is sometimes lost. Although there are methods for selecting fractional designs so that the number of runs is minimized while the amount of information provided is maximized, sometimes the design must be augmented with a follow-up experiment to resolve ambiguities. Using a fractional factorial design augmented with an optimal follow-up design allows for many factors to be studied using only a small number of additional experimental runs, compared to the full factorial design, without a loss in the amount of information that can be gained about the effects of interest. This thesis investigates regularities in the number of follow-up runs needed to estimate all aliased effects in the model of interest for 4-, 5-, 6-, and 7-factor resolution III and IV fractional factorial experiments. From this research it was determined that for all of the resolution IV designs, four or fewer (typically three) augmented runs would estimate all of the aliased effects in the model of interest. In comparison, all of the resolution III designs required seven or eight follow-up runs to estimate all of the aliased effects of interest. D-optimal follow-up experiments were significantly better with respect to run-size economy than fold-over and semi-foldover designs for (i) resolution IV designs and (ii) designs with larger run sizes.
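    A brute-force version of the augmentation step is easy to sketch (our simplified illustration, not the thesis's procedure; all names are ours). Given an initial fraction and a postulated model containing aliased effects, it scores every possible set of follow-up runs by the D-criterion det(X'X):

```python
import numpy as np
from itertools import product, combinations

def model_matrix(runs, terms):
    """Intercept plus the requested factorial terms; each term is a tuple
    of factor indices, e.g. (0,) for A and (0, 1) for the AB interaction."""
    cols = [np.ones(len(runs))] + [runs[:, list(t)].prod(axis=1) for t in terms]
    return np.column_stack(cols)

def d_optimal_followup(initial, candidates, terms, n_add):
    """Exhaustively choose n_add follow-up runs maximizing det(X'X);
    feasible for small candidate sets, and enough to show the idea."""
    best_det, best_runs = -np.inf, None
    for subset in combinations(range(len(candidates)), n_add):
        X = model_matrix(np.vstack([initial, candidates[list(subset)]]), terms)
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best_det, best_runs = d, candidates[list(subset)]
    return best_runs

# Initial 2^(4-1) resolution IV fraction with D = ABC (so AB is aliased with CD).
base = np.array(list(product([-1, 1], repeat=3)))
initial = np.column_stack([base, base.prod(axis=1)])
candidates = np.array(list(product([-1, 1], repeat=4)))   # full 2^4 candidate set
terms = [(0,), (1,), (2,), (3,), (0, 1), (2, 3)]          # mains plus AB and CD
extra = d_optimal_followup(initial, candidates, terms, n_add=3)
print(extra)   # three added runs suffice here to separate AB from CD
```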

    Economic Trend Resistant 2^(n-(n-k)) Designs of Resolutions III and IV Based on Hadamard Matrices

    This article utilizes the normalized Sylvester-Hadamard matrices of size 2^k × 2^k and their associated saturated orthogonal arrays OA(2^k, 2^k - 1, 2, 2) to propose an algorithm based on factor projection (backward/forward) for the construction of three systematic run-after-run 2^(n-(n-k)) fractional factorial designs: (i) minimum-cost trend-free 2^(n-(n-k)) designs of resolution III (2^(k-1) ≤ n ≤ 2^k - 1 - k) by backward factor deletion; (ii) minimum-cost trend-free 2^(n-(n-k)) designs of resolution III (k + 1 ≤ n ≤ 2^(k-1) - 2 + k) by forward factor addition; and (iii) minimum-cost trend-free 2^(n-(n-k)) designs of resolution IV (2^(k-2) ≤ n ≤ 2^(k-1) - 2). Each 2^(n-(n-k)) design is economic, minimizing the number of factor-level changes between the 2^k successive runs, and allows for the estimation of all factor main effects unbiased by the linear time trend that might be present in the 2^k sequentially generated responses. The article gives for each 2^(n-(n-k)) design: (i) the defining contrast displaying the design's alias structure; (ii) the k independent generators for sequencing the design's 2^(n-(n-k)) runs by the generalized foldover scheme; and (iii) the minimum total cost of factor-level changes between the 2^(n-(n-k)) runs of the design. The proposed designs compete well with existing systematic 2^(n-(n-k)) designs (of either resolution) in minimizing the experimental cost and in securing the factors' resistance to a non-negligible time trend.
    Keywords: sequential fractional factorial experimentation; time-trend-free systematic run orders; generalized foldover scheme for sequencing experimental runs; total cost of factor-level changes between successive runs; normalized Sylvester-Hadamard matrices; orthogonal arrays and factor projection; design resolution and alias structure
    DOI: 10.7176/JEP/11-25-05. Publication date: September 30th, 2020
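    The ingredients are easy to sketch (our illustration, not the article's algorithm): the Kronecker construction of the normalized Sylvester-Hadamard matrix, the cost of a run order counted as level changes down each column, and linear-trend resistance checked as orthogonality to the centered time vector:

```python
import numpy as np

def sylvester_hadamard(k):
    """Normalized Sylvester-Hadamard matrix of order 2^k via Kronecker products."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    return H

H = sylvester_hadamard(3)      # 8 x 8
factors = H[:, 1:]             # drop the all-ones column: OA(8, 7, 2, 2)

# Cost of a column = number of factor-level changes over the 8 successive runs.
changes = (np.diff(factors, axis=0) != 0).sum(axis=0)

# A column orthogonal to centered time carries a main effect
# unbiased by a linear time trend.
t = np.arange(8) - 3.5
print(changes)         # economic columns have few level changes
print(factors.T @ t)   # zero entries mark linear-trend-free columns
```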

    Tailoring the Statistical Experimental Design Process for LVC Experiments

    The use of Live, Virtual and Constructive (LVC) simulation environments is increasingly being examined for potential analytical use, particularly in test and evaluation. LVC simulation environments provide a mechanism for conducting joint mission testing and system-of-systems testing when scale and resource limitations prevent the accumulation of the necessary density and diversity of assets required for these complex and comprehensive tests. The statistical experimental design process is re-examined for potential application to LVC experiments, and several additional considerations are identified to augment the experimental design process for use with LVC. This augmented statistical experimental design process is demonstrated by a case study involving a series of tests on an experimental data link for strike aircraft using LVC simulation for the test environment. The goal of these tests is to assess the usefulness of information presented to aircrew members via different datalink capabilities. The statistical experimental design process is used to structure the experiment, leading to the discovery of faulty assumptions and planning mistakes that could potentially wreck the results of the experiment. Lastly, an aggressive sequential experimentation strategy is presented for LVC experiments when test resources are limited. This strategy depends on a foldover algorithm that we developed for nearly orthogonal arrays to rescue LVC experiments when important factor effects are confounded.
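    The classical full foldover that underlies such rescue strategies is simple to sketch (our illustration of the standard technique; the paper's own algorithm extends the idea to nearly orthogonal arrays):

```python
import numpy as np
from itertools import product

def full_foldover(design):
    """Append the mirror image (all signs reversed) of a two-level design.
    For a resolution III fraction this de-aliases main effects from
    two-factor interactions (the combined design has resolution >= IV)."""
    return np.vstack([design, -design])

# Example: a regular 2^(3-1) fraction with C = AB, so each main effect
# is confounded with a two-factor interaction.
base = np.array(list(product([-1, 1], repeat=2)))
d3 = np.column_stack([base, base.prod(axis=1)])   # columns A, B, C = AB
combined = full_foldover(d3)

A, B, C = combined.T
print(A @ (B * C), B @ (A * C), C @ (A * B))      # 0 0 0: mains clear of 2fi's
```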

    Regression Models and Experimental Designs: A Tutorial for Simulation Analysts

    This tutorial explains the basics of linear regression models, especially low-order polynomials, and the corresponding statistical designs, namely designs of resolution III, IV, V, and Central Composite Designs (CCDs). This tutorial assumes 'white noise', which means that the residuals of the fitted linear regression model are normally, independently, and identically distributed with zero mean. The tutorial gathers statistical results that are scattered throughout the literature on mathematical statistics, and presents these results in a form that is understandable to simulation analysts.
    Keywords: metamodels; fractional factorial designs; Plackett-Burman designs; factor interactions; validation; cross-validation
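    As a minimal worked example (ours, with an invented toy response; the tutorial itself is expository), fitting a first-order polynomial metamodel with interaction by least squares under the white-noise assumption looks like this:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical toy simulation: a true response plus white noise.
def simulate(x):
    return (10 + 3*x[:, 0] - 2*x[:, 1] + 1.5*x[:, 0]*x[:, 1]
            + rng.normal(0, 1, len(x)))

# Full 2^2 factorial, replicated five times to estimate the error variance.
design = np.array(list(product([-1, 1], repeat=2)))
runs = np.repeat(design, 5, axis=0)
y = simulate(runs)

# Metamodel y = b0 + b1*x1 + b2*x2 + b12*x1*x2, fitted by least squares.
X = np.column_stack([np.ones(len(runs)), runs, runs[:, 0] * runs[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # close to the true (10, 3, -2, 1.5)
```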

    Experimental Design for Sensitivity Analysis, Optimization and Validation of Simulation Models

    This chapter gives a survey on the use of statistical designs for what-if analysis in simulation, including sensitivity analysis, optimization, and validation/verification. Sensitivity analysis is divided into two phases. The first phase is a pilot stage, which consists of screening or searching for the important factors among (say) hundreds of potentially important factors. A novel screening technique is presented, namely sequential bifurcation. The second phase uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel or a response surface. Regression analysis gives better results when the simulation experiment is well designed, using either classical statistical designs (such as fractional factorials) or optimal designs (as pioneered by Fedorov, Kiefer, and Wolfowitz). To optimize the simulated system, the analysts may apply Response Surface Methodology (RSM); RSM combines regression analysis, statistical designs, and steepest-ascent hill-climbing. To validate a simulation model, again regression analysis and statistical designs may be applied. Several numerical examples and case studies illustrate how statistical techniques can reduce the ad hoc character of simulation; that is, these statistical techniques can make simulation studies give more general results, in less time. Appendix 1 summarizes confidence intervals for expected values, proportions, and quantiles, in terminating and steady-state simulations. Appendix 2 gives details on four variance reduction techniques, namely common pseudorandom numbers, antithetic numbers, control variates or regression sampling, and importance sampling. Appendix 3 describes jackknifing, which may give robust confidence intervals.
    Keywords: least squares; distribution-free; non-parametric; stopping rule; run-length; Von Neumann; median; seed; likelihood ratio
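    To give one of the Appendix 2 techniques in executable form (our sketch; `response` is a hypothetical stand-in for a simulation run that is monotone in its driving uniform number), antithetic numbers pair U with 1 - U so that the errors of the two runs partially cancel:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

def response(u):
    # Hypothetical stand-in for one simulation run driven by a uniform number.
    return np.exp(u)

# Crude Monte Carlo: n independent runs.
u = rng.uniform(size=n)
crude = response(u)

# Antithetic numbers: n/2 pairs (U, 1 - U), the same total budget of n runs.
v = rng.uniform(size=n // 2)
anti = 0.5 * (response(v) + response(1.0 - v))

print(crude.mean(), crude.var(ddof=1) / n)        # estimator and its variance
print(anti.mean(), anti.var(ddof=1) / (n // 2))   # same mean, smaller variance
```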

    Identifying the important factors in simulation models with many factors

    Simulation models may have many parameters and input variables (together called factors), while only a few factors are really important (parsimony principle). For such models this paper presents an effective and efficient screening technique to identify and estimate those important factors. The technique extends the classical binary search technique to situations with more than a single important factor. The technique uses a low-order polynomial approximation to the input/output behavior of the simulation model. This approximation may account for interactions among factors. The technique is demonstrated by applying it to a complicated ecological simulation that models the increase of temperatures worldwide.
    Keywords: simulation models; econometrics
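    A minimal sketch of that extended binary search (ours, simplified to a deterministic model whose main effects are known to be nonnegative, so that effects cannot cancel within a group):

```python
import numpy as np
from functools import lru_cache

def seq_bifurcation(run, k, threshold):
    """Screen k factors with nonnegative main effects.
    run(x) evaluates the simulation at levels x in {-1, +1}^k."""
    @lru_cache(maxsize=None)
    def y(j):
        # Output with the first j factors high and the rest low;
        # caching reuses runs shared by different groups.
        x = -np.ones(k)
        x[:j] = 1
        return float(run(x))

    important, stack = [], [(0, k)]
    while stack:
        lo, hi = stack.pop()
        if (y(hi) - y(lo)) / 2 <= threshold:
            continue                      # aggregated group effect negligible
        if hi - lo == 1:
            important.append(lo)          # isolated one important factor
        else:
            mid = (lo + hi) // 2          # bisect the group, screen both halves
            stack += [(lo, mid), (mid, hi)]
    return sorted(important)

# Toy model with 20 factors, of which only factors 3 and 12 matter.
beta = np.zeros(20)
beta[[3, 12]] = [5.0, 2.0]
print(seq_bifurcation(lambda x: float(beta @ x), 20, threshold=0.5))   # [3, 12]
```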

    Recent Developments in Nonregular Fractional Factorial Designs

    Nonregular fractional factorial designs such as Plackett-Burman designs and other orthogonal arrays are widely used in various screening experiments for their run-size economy and flexibility. The traditional analysis focuses on main effects only. Hamada and Wu (1992) went beyond the traditional approach and proposed an analysis strategy to demonstrate that some interactions could be entertained and estimated beyond a few significant main effects. Their groundbreaking work stimulated much of the recent development in design criteria, construction and analysis of nonregular designs. This paper reviews important developments in optimality criteria and comparison, including projection properties, generalized resolution, various generalized minimum aberration criteria, optimality results, construction methods and analysis strategies for nonregular designs.
    Comment: Submitted to the Statistics Surveys (http://www.i-journals.org/ss/) by the Institute of Mathematical Statistics (http://www.imstat.org)
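    The run-size economy is easy to see constructively. A minimal sketch (our code for the standard construction) of the 12-run Plackett-Burman design, which screens up to 11 factors:

```python
import numpy as np

# First row of the 12-run Plackett-Burman design (Plackett & Burman, 1946).
g = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

# Cycle it for 11 rows, then close with a row of -1's.
D = np.vstack([np.roll(g, i) for i in range(11)] + [-np.ones(11, dtype=int)])

# Strength-2 orthogonal array: balanced, pairwise-orthogonal columns.
assert np.all(D.sum(axis=0) == 0)
assert np.allclose(D.T @ D, 12 * np.eye(11))
print(D.shape)   # (12, 11): up to 11 factors screened in 12 runs
```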

    Discriminating Between Optimal Follow-Up Designs

    Sequential experimentation is often employed in process optimization, wherein a series of small experiments is run successively in order to determine which experimental factor levels are likely to yield a desirable response. Although there currently exists a framework for identifying optimal follow-up designs after an initial experiment has been run, the accepted methods frequently point to multiple designs, leaving the practitioner to choose one arbitrarily. In this thesis, we apply preposterior analysis and Bayesian model-averaging to develop a methodology for further discriminating between optimal follow-up designs while controlling for both parameter and model uncertainty.
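    One way to picture such discrimination (a loose sketch under our own assumptions, not the thesis's criterion) is to score each candidate follow-up design by a posterior-weighted average of log det(X'X) over the rival models, so that a design serving all plausible models well wins the tie-break:

```python
import numpy as np
from itertools import combinations, product

def ma_d_score(initial, followup, models, weights):
    """Model-averaged log-D score for one candidate follow-up design:
    posterior-weighted average of log det(X'X) over the rival models.
    A simplified stand-in for a preposterior, model-averaged criterion."""
    runs = np.vstack([initial, followup])
    score = 0.0
    for terms, w in zip(models, weights):
        X = np.column_stack([np.ones(len(runs))]
                            + [runs[:, list(t)].prod(axis=1) for t in terms])
        sign, logdet = np.linalg.slogdet(X.T @ X)
        score += w * (logdet if sign > 0 else -np.inf)
    return score

# Initial 2^(4-1) fraction (D = ABC) and two rival models for the response.
base = np.array(list(product([-1, 1], repeat=3)))
initial = np.column_stack([base, base.prod(axis=1)])
cands = np.array(list(product([-1, 1], repeat=4)))
models = [[(0,), (1,), (2,), (3,), (0, 1)],          # mains + AB
          [(0,), (1,), (2,), (3,), (2, 3)]]          # mains + CD
weights = [0.6, 0.4]                                 # posterior model probabilities

best = max(combinations(range(len(cands)), 3),
           key=lambda s: ma_d_score(initial, cands[list(s)], models, weights))
print(best)   # indices of the follow-up runs that best serve both models
```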