
    Design of Experiments for Screening

    The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs and systematic fractional replicate designs. Generic issues of aliasing, bias and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performances are compared.
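    The sampling plans mentioned for estimating elementary effects underlie Morris-style screening. The following Python snippet is a minimal sketch of that idea for a black-box function on the unit cube; the function name, toy model and parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def morris_elementary_effects(f, k, r=10, levels=4, seed=0):
    """Estimate Morris elementary effects for f: [0,1]^k -> R.

    For each of r random one-at-a-time trajectories, each factor is
    perturbed once by a step delta and the scaled change in response is
    recorded. Returns mu_star (mean absolute effect) and sigma
    (standard deviation of effects) per factor.
    """
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    # Base points live on a grid and must leave room for the +delta step.
    grid = np.arange(0.0, 1.0 - delta + 1e-12, 1.0 / (levels - 1))
    effects = np.empty((r, k))
    for i in range(r):
        x = rng.choice(grid, size=k)      # random base point
        order = rng.permutation(k)        # random factor order
        y = f(x)
        for j in order:
            x_new = x.copy()
            x_new[j] = x[j] + delta
            y_new = f(x_new)
            effects[i, j] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(effects).mean(axis=0), effects.std(axis=0, ddof=1)

# Toy model: factors 0 and 2 active, factors 1 and 3 inert.
f = lambda x: 4 * x[0] + 2 * x[2] + 0.5 * x[0] * x[2]
mu_star, sigma = morris_elementary_effects(f, k=4, r=20)
print(mu_star.round(2))  # large mu_star values flag active factors
```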

    A comparison of design and model selection methods for supersaturated experiments

    Various design and model selection methods are available for supersaturated designs having more factors than runs, but little research is available on their comparison and evaluation. In this paper, simulated experiments are used to evaluate the use of E(s2)-optimal and Bayesian D-optimal designs, and to compare three analysis strategies representing regression, shrinkage and a novel model-averaging procedure. Suggestions are made for choosing the values of the tuning constants for each approach. Findings include that (i) the preferred analysis is via shrinkage; (ii) designs with similar numbers of runs and factors can be effective for a considerable number of active effects of only moderate size; and (iii) unbalanced designs can perform well. Some comments are made on the performance of the design and analysis methods when effect sparsity does not hold.
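    For reference, the E(s2) criterion used to rank supersaturated designs has a simple closed form: average the squared off-diagonal entries of X'X over all pairs of factor columns. A minimal sketch follows; the random 6-run, 10-factor example design is hypothetical, not one from the paper.

```python
import numpy as np

def e_s2(X):
    """E(s^2) criterion for a two-level (+/-1) supersaturated design.

    s_ij is the (i, j) entry of X'X for distinct columns i < j;
    E(s^2) averages s_ij^2 over all column pairs. Smaller is better:
    it measures the average near-orthogonality of the factor columns.
    """
    XtX = X.T @ X
    m = X.shape[1]
    iu = np.triu_indices(m, k=1)          # strictly upper triangle
    return (XtX[iu] ** 2).sum() / (m * (m - 1) / 2)

# Hypothetical 6-run, 10-factor design from random balanced +/-1
# columns, just to exercise the criterion.
rng = np.random.default_rng(1)
col = np.array([1, 1, 1, -1, -1, -1])
X = np.column_stack([rng.permutation(col) for _ in range(10)])
print(round(e_s2(X), 3))
```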

    Design and Analysis of Screening Experiments Assuming Effect Sparsity

    Many initial experiments for industrial and engineering applications employ screening designs to determine which of possibly many factors are significant. These screening designs are usually a highly fractionated factorial or a Plackett-Burman design that focuses on main effects and provides limited information for interactions. To help simplify the analysis of these experiments, it is customary to assume that only a few of the effects are actually important; this assumption is known as ‘effect sparsity’. This dissertation explores both design and analysis aspects of screening experiments assuming effect sparsity.

    In 1989, Russell Lenth proposed a method for analyzing unreplicated factorials that has become popular due to its simplicity and satisfactory power relative to alternative methods. We propose and illustrate the use of p-values, estimated by simulation, for Lenth t-statistics. This approach is recommended for its versatility. Whereas tabulated critical values are restricted to the case of uncorrelated estimates, we illustrate the use of p-values for both orthogonal and nonorthogonal designs. For cases where there is limited replication, we suggest computing t-statistics and p-values using an estimator that combines the pure error mean square with a modified Lenth’s pseudo standard error.

    Supersaturated designs (SSDs) are designs that examine more factors than available runs. SSDs were introduced to handle situations in which a large number of factors are of interest but runs are expensive or time-consuming. We begin by assessing the null model performance of SSDs when using all-subsets and forward selection regression. The propensity for model selection criteria to overfit is highlighted. We subsequently propose a strategy for analyzing SSDs that combines all-subsets regression and permutation tests. The methods are illustrated for several examples.

    In contrast to the usual sequential nature of response surface methods (RSM), recent literature has proposed both screening and response surface exploration using only one three-level design. This approach is named “one-step RSM”. We discuss and illustrate two shortcomings of the current one-step RSM designs and analysis. Subsequently, we propose a new class of three-level designs and an analysis strategy unique to these designs that will address these shortcomings and aid the user in being appropriately advised as to factor importance. We illustrate the designs and analysis with simulated and real data.
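    To make the Lenth-based approach concrete, here is a sketch of Lenth's pseudo standard error together with simulation-estimated p-values for the resulting t-statistics. The null-reference scheme below (iid normal contrasts re-studentized by their own PSE) is our reading of the simulated-p-value idea, not code from the dissertation.

```python
import numpy as np

def lenth_pse(contrasts):
    """Lenth's pseudo standard error for unreplicated factorial effects."""
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)
    # Trim contrasts that look active, then re-estimate the scale.
    return 1.5 * np.median(c[c < 2.5 * s0])

def lenth_p_values(contrasts, n_sim=5000, seed=0):
    """Simulation-estimated p-values for Lenth t-statistics.

    Reference distribution: m iid N(0,1) null contrasts, studentized
    by their own PSE (an assumed scheme, not the dissertation's code).
    """
    rng = np.random.default_rng(seed)
    m = len(contrasts)
    t_obs = np.abs(contrasts) / lenth_pse(contrasts)
    null_t = np.empty((n_sim, m))
    for b in range(n_sim):
        z = rng.standard_normal(m)
        null_t[b] = np.abs(z) / lenth_pse(z)
    ref = null_t.ravel()
    return np.array([(ref >= t).mean() for t in t_obs])

# Example: 15 effect estimates from an unreplicated design;
# the first three are genuinely large (simulated data).
rng = np.random.default_rng(2)
effects = np.r_[[8.0, -6.5, 5.0], rng.normal(0, 1, 12)]
print(lenth_p_values(effects, n_sim=2000).round(3))
```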

    A Comparison Study of Second-Order Screening Designs and Their Extension

    Recent literature has proposed employing a single experimental design capable of performing both factor screening and response surface estimation when conducting sequential experiments is unrealistic due to time, budget, or other constraints. Military systems, particularly aerodynamic systems, are complex. It is not unusual for these systems to exhibit nonlinear response behavior. Developmental testing may be tasked to characterize the nonlinear behavior of such systems while being restricted in how much testing can be accomplished. Second-order screening designs provide a means, within a single experiment, to effectively focus test resources onto those factors driving system performance. Sponsored by the Office of the Secretary of Defense (OSD) in support of the Science of Test initiative, this research characterizes and adds to the area of second-order screening designs, particularly as applied to defense testing. Existing design methods are empirically tested and examined for robustness. The leading design method, which is very run efficient, is extended to overcome limitations when screening for nonlinear effects. A case study and screening design guidance for defense testers are also provided.

    Recent Developments in Nonregular Fractional Factorial Designs

    Nonregular fractional factorial designs such as Plackett-Burman designs and other orthogonal arrays are widely used in various screening experiments for their run size economy and flexibility. The traditional analysis focuses on main effects only. Hamada and Wu (1992) went beyond the traditional approach and proposed an analysis strategy to demonstrate that some interactions could be entertained and estimated beyond a few significant main effects. Their groundbreaking work stimulated much of the recent development in design criterion creation, construction and analysis of nonregular designs. This paper reviews important developments in optimality criteria and comparison, including projection properties, generalized resolution, various generalized minimum aberration criteria, optimality results, construction methods and analysis strategies for nonregular designs.
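    The partial aliasing that makes such an analysis possible is easy to exhibit. The sketch below builds the 12-run Plackett-Burman design from its standard cyclic generator and checks that every two-factor interaction is correlated +/-1/3 with every main effect not involved in it, the "complex aliasing" Hamada and Wu exploited; in a regular fraction these correlations would all be 0 or +/-1.

```python
import numpy as np
from itertools import combinations

# 12-run Plackett-Burman design: 11 cyclic shifts of the standard
# generator plus a final row of -1s.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(gen, i) for i in range(11)]
X = np.vstack(rows + [-np.ones(11, dtype=int)])

# Correlation of each two-factor interaction column with each main
# effect column not in the interaction.
vals = set()
for a, b in combinations(range(11), 2):
    inter = X[:, a] * X[:, b]
    for c in range(11):
        if c not in (a, b):
            vals.add(round(abs(inter @ X[:, c]) / 12, 3))
print(sorted(vals))  # [0.333] -- every such correlation is 1/3
```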

    A method for augmenting supersaturated designs

    Initial screening experiments often leave some problems unresolved, and follow-up runs are needed to clarify the initial results. In this paper, a technique is developed to add experimental runs to an initial supersaturated design. The added runs are generated with respect to the Bayesian D_s-optimality criterion, and the procedure can incorporate the model information from the initial design. After analysis of the initial experiment with several methods, factors are classified into three groups, primary, secondary and potential, according to the number of times they have been identified. The focus is on the secondary factors, since they have been identified several times but not so often that experimenters are sure they are active; the proposed Bayesian D_s-optimal augmented design minimizes the error variances of the parameter estimators of the secondary factors. In addition, a blocking factor is included to describe the mean shift between the two stages. Simulation results show that the method performs very well in certain settings.
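    As a rough sketch of how a D_s-type augmentation criterion can be evaluated, the snippet below scores candidate runs by det(M)/det(M11), where M is X'X plus a ridge term standing in for the paper's Bayesian prior (keeping M nonsingular while the design is supersaturated) and M11 is the block for the non-secondary factors. The prior form, the greedy one-run-at-a-time search and the example design are all our assumptions, not the authors' procedure.

```python
import numpy as np
from itertools import product

def bayes_ds(X, secondary, tau2=1.0):
    """Bayesian D_s-type score for a +/-1 design matrix X.

    The ridge term I/tau2 is an assumed stand-in for the paper's
    prior; 'secondary' indexes the factor columns whose parameter
    estimates we want precise.
    """
    m = X.shape[1]
    others = [j for j in range(m) if j not in secondary]
    M = X.T @ X + np.eye(m) / tau2
    M11 = M[np.ix_(others, others)]
    return np.linalg.det(M) / np.linalg.det(M11)

def best_next_run(X, secondary):
    """Pick the +/-1 candidate run that most improves the score."""
    cands = np.array(list(product([-1, 1], repeat=X.shape[1])))
    scores = [bayes_ds(np.vstack([X, c]), secondary) for c in cands]
    return cands[int(np.argmax(scores))]

# Hypothetical 6-run, 8-factor initial supersaturated design.
rng = np.random.default_rng(3)
X0 = rng.choice([-1, 1], size=(6, 8))
print(best_next_run(X0, secondary=[2, 5]))
```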

    LASSO-Optimal Supersaturated Design and Analysis for Factor Screening in Simulation Experiments

    Complex systems such as large-scale computer simulation models typically involve a large number of factors. When investigating such a system, screening experiments are often used to sift through these factors and identify a subgroup that most significantly influences the response of interest.
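    As a generic illustration of LASSO-based factor screening (not this thesis's LASSO-optimal designs), the sketch below fits a cross-validated LASSO to a hypothetical 20-run, 30-factor supersaturated simulation experiment and reads off the factors with nonzero coefficients.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Hypothetical screening setup: 20 runs, 30 two-level factors,
# three of which are truly active.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(20, 30))
beta = np.zeros(30)
beta[[3, 11, 24]] = [4.0, -3.0, 2.5]
y = X @ beta + rng.normal(0, 1, 20)

# The LASSO shrinks most coefficients exactly to zero; the survivors
# are the screened-in factors. Cross-validation picks the penalty.
fit = LassoCV(cv=5).fit(X, y)
active = np.flatnonzero(fit.coef_)
print(active)  # ideally recovers factors 3, 11 and 24
```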

    Considerations for Screening Designs and Follow-Up Experimentation

    The success of screening experiments hinges on the effect sparsity assumption, which states that only a few of the factorial effects of interest actually have an impact on the system being investigated. The development of a screening methodology to harness this assumption requires careful consideration of the strengths and weaknesses of a proposed experimental design, in addition to the ability of an analysis procedure to properly detect the major influences on the response. However, for the most part, screening designs and their complementing analysis procedures have been proposed separately in the literature without clear consideration of their ability to perform as a single screening methodology. As a contribution to this growing area of research, this dissertation investigates the pairing of non-replicated and partially replicated two-level screening designs with model selection procedures that allow for the incorporation of a model-independent error estimate. Using simulation, we focus attention on the ability to screen out active effects from a first-order model with two-factor interactions and the possible benefits of using partial replication as part of an overall screening methodology.

    We begin with a focus on single-criterion optimum designs and propose a new criterion to create partially replicated screening designs. We then extend the newly proposed criterion into a multi-criterion framework where estimation of the assumed model, in addition to protection against model misspecification, is considered. This is an important extension of the work, since initial knowledge of the system under investigation is considered to be poor in the cases presented. A methodology to reduce a set of competing design choices is also investigated using visual inspection of plots meant to represent uncertainty in design criterion preferences.

    Because screening methods typically involve sequential experimentation, we present a final investigation into the screening process with simulation results that incorporate a single follow-up phase of experimentation. In this concluding work we extend the newly proposed criterion to create optimal partially replicated follow-up designs. Methodologies are compared which use different methods of incorporating knowledge gathered from the initial screening phase into the follow-up phase of experimentation.
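    The model-independent error estimate that partial replication buys is the pure-error mean square from replicated runs. A minimal sketch follows; the data and grouping are hypothetical.

```python
import numpy as np

def pure_error_ms(y, groups):
    """Model-independent pure-error mean square from replicated runs.

    'groups' labels runs with identical factor settings; only groups
    with at least two runs contribute. Returns (mean square, df).
    """
    ss, df = 0.0, 0
    for g in np.unique(groups):
        yg = y[groups == g]
        if len(yg) > 1:
            ss += ((yg - yg.mean()) ** 2).sum()
            df += len(yg) - 1
    return ss / df, df

# Example: runs 0/1 and 4/5 share factor settings (hypothetical data).
y = np.array([10.1, 9.7, 12.3, 8.0, 15.2, 14.8])
groups = np.array([0, 0, 1, 2, 3, 3])
ms, df = pure_error_ms(y, groups)
print(ms, df)  # 0.08 on 2 df
```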

    Logical analysis of sample pooling for qualitative analytical testing

    When the prevalence of positive samples in a whole population is low, the pooling of samples to detect them has been widely used for epidemic control. However, its usefulness for applying analytical screening procedures in food safety (microbiological or allergen control), fraud detection or environmental monitoring is also evident. The expected number of tests per individual sample that is necessary to identify all ‘positives’ is a measure of the efficiency of a sample pooling strategy. Reducing this figure is key to an effective use of available resources in environmental control and food safety. This reduction becomes critical when the availability of analytical tests is limited, as the SARS-CoV-2 pandemic showed. The outcome of the qualitative analytical test is binary. Therefore, the operation governing the outcome of the pooled samples is not an algebraic sum of the individual results but the logical operator (‘or’ in natural language). Consequently, the problem of using pooled samples to identify positive samples naturally leads to proposing a system of logical equations. This work therefore suggests a new strategy of sample pooling based on: i) a half-fraction of a Plackett-Burman design to make the pooled samples, and ii) logical, not numerical, resolution to identify the positive samples from the outcomes of the analysis of the pooled samples. For a prevalence of ‘positives’ equal to 0.05 and 10 original samples to be pooled, the algorithm presented here results in an expected value per individual equal to 0.37, meaning a 63% reduction in the expected number of tests per individual sample. With sensitivities and specificities of the analytical test ranging from 0.90 to 0.99, the expected number of tests per individual ranges from 0.332 to 0.416, always higher than other pooled testing algorithms. In addition, the accuracy of the algorithm proposed is better than or similar to that of other published algorithms, with an expected percentage of hits ranging from 99.16% to 99.90%. The procedure is applied to the detection of food samples contaminated with a pathogen (Listeria monocytogenes) and others contaminated with an allergen (pistachio) by means of the Polymerase Chain Reaction (PCR) test.

    This work was supported by the Consejería de Educación de la Junta de Castilla y León through project BU052P20, co-financed with European Regional Development Funds. The authors thank Dr. Laura Rubio for applying the double-blind protocol to dope the samples, and AGROLAB S.L.U., Burgos (Spain) for the careful preparation of the pooled samples.
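    To make the logical-resolution idea concrete, the following sketch applies the generic OR-decoding used in combinatorial group testing: any sample appearing in a negative pool is cleared, and whatever remains is a candidate positive for confirmatory testing. The random pooling matrix here is a placeholder, not the paper's half-fraction Plackett-Burman plan.

```python
import numpy as np

def decode_pools(P, pool_results):
    """Logical decoding for pooled qualitative testing.

    P is a 0/1 pooling matrix (pools x samples); pool_results holds
    the boolean outcome of each pooled test. A pool is positive iff it
    contains at least one positive sample (logical OR), so any sample
    in a negative pool is cleared; the rest are 'possible positives'.
    This is the generic COMP decoder, not the paper's exact rule.
    """
    cleared = (P[~pool_results] == 1).any(axis=0)  # in some negative pool
    return np.flatnonzero(~cleared)                # candidate positives

# Toy example: 5 random pools over 10 samples, sample 7 truly positive.
rng = np.random.default_rng(4)
P = (rng.random((5, 10)) < 0.5).astype(int)
truth = np.zeros(10, dtype=bool)
truth[7] = True
pool_results = (P @ truth) > 0  # logical OR via "any positive member"
print(decode_pools(P, pool_results))
```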