Minimum aberration designs for discrete choice experiments
A discrete choice experiment (DCE) is a survey method that gives insight into individual preferences for particular attributes. Traditionally, methods for constructing DCEs focus on identifying the individual effect of each attribute (a main effect). However, an interaction effect between two attributes (a two-factor interaction) better represents real-life trade-offs and gives a better understanding of subjects' competing preferences. In practice it is often unknown which two-factor interactions are significant. To address this uncertainty, we propose the use of minimum aberration blocked designs to construct DCEs. Such designs maximize the number of models with estimable two-factor interactions in a DCE with two-level attributes. We further extend the minimum aberration criteria to DCEs with mixed-level attributes and develop some general theoretical results.
Recent Developments in Nonregular Fractional Factorial Designs
Nonregular fractional factorial designs such as Plackett-Burman designs and
other orthogonal arrays are widely used in various screening experiments for
their run size economy and flexibility. The traditional analysis focuses on
main effects only. Hamada and Wu (1992) went beyond the traditional approach
and proposed an analysis strategy to demonstrate that some interactions could
be entertained and estimated beyond a few significant main effects. Their
groundbreaking work stimulated much of the recent developments in design
criterion creation, construction and analysis of nonregular designs. This paper
reviews important developments in optimality criteria and comparison, including
projection properties, generalized resolution, various generalized minimum
aberration criteria, optimality results, construction methods and analysis
strategies for nonregular designs.
Comment: Submitted to Statistics Surveys (http://www.i-journals.org/ss/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Design of Experiments for Screening
The aim of this paper is to review methods of designing screening
experiments, ranging from designs originally developed for physical experiments
to those especially tailored to experiments on numerical models. The strengths
and weaknesses of the various designs for screening variables in numerical
models are discussed. First, classes of factorial designs for experiments to
estimate main effects and interactions through a linear statistical model are
described, specifically regular and nonregular fractional factorial designs,
supersaturated designs and systematic fractional replicate designs. Generic
issues of aliasing, bias and cancellation of factorial effects are discussed.
Second, group screening experiments are considered including factorial group
screening and sequential bifurcation. Third, random sampling plans are
discussed including Latin hypercube sampling and sampling plans to estimate
elementary effects. Fourth, a variety of modelling methods commonly employed
with screening designs are briefly described. Finally, a novel study
demonstrates six screening methods on two frequently-used exemplars, and their
performances are compared.
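Among the random sampling plans reviewed above, Latin hypercube sampling is simple enough to sketch in a few lines. This is a generic illustration of the idea (one point per equal-width stratum in each dimension), not a design taken from the paper; the function name and interface are my own:

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Draw n points in [0,1)^d, one per equal-width stratum per dimension."""
    rng = np.random.default_rng(rng)
    # For each dimension, randomly assign the n strata to the n points,
    # then jitter each point uniformly within its stratum.
    strata = np.array([rng.permutation(n) for _ in range(d)]).T  # n x d
    return (strata + rng.uniform(size=(n, d))) / n

pts = latin_hypercube(8, 2, rng=0)
# Each column hits every interval [k/8, (k+1)/8) exactly once.
```

Projecting the sample onto any single input recovers a full stratification of that axis, which is what makes the plan attractive for screening numerical models.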
Tailoring the Statistical Experimental Design Process for LVC Experiments
The use of Live, Virtual and Constructive (LVC) simulation environments is increasingly being examined for potential analytical use, particularly in test and evaluation. LVC simulation environments provide a mechanism for conducting joint mission testing and system-of-systems testing when scale and resource limitations prevent the accumulation of the necessary density and diversity of assets required for these complex and comprehensive tests. The statistical experimental design process is re-examined for potential application to LVC experiments, and several additional considerations are identified to augment the experimental design process for use with LVC. This augmented statistical experimental design process is demonstrated by a case study involving a series of tests on an experimental data link for strike aircraft, using LVC simulation as the test environment. The goal of these tests is to assess the usefulness of information presented to aircrew members via different datalink capabilities. The statistical experimental design process is used to structure the experiment, leading to the discovery of faulty assumptions and planning mistakes that could potentially invalidate the results of the experiment. Lastly, an aggressive sequential experimentation strategy is presented for LVC experiments when test resources are limited. This strategy depends on a foldover algorithm that we developed for nearly orthogonal arrays to rescue LVC experiments when important factor effects are confounded.
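The foldover algorithm for nearly orthogonal arrays is specific to that work, but the classical two-level foldover it builds on is standard and easy to sketch: append the sign-reversed runs, which de-aliases main effects from two-factor interactions. A generic sketch, not the authors' algorithm:

```python
import numpy as np

def full_foldover(X):
    """Append the mirror image (all signs reversed) of a +/-1 design.

    In the combined design each main-effect column is orthogonal to
    every two-factor interaction (product) column, breaking the
    main-effect/interaction confounding of the original fraction.
    """
    X = np.asarray(X)
    return np.vstack([X, -X])

# 4-run resolution III fraction of a 2^3 design (I = ABC), where each
# main effect is aliased with a two-factor interaction.
X = np.array([[-1, -1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [ 1,  1,  1]])
F = full_foldover(X)
# In F, column k is now orthogonal to the product of the other two.
```

The same idea of adding carefully chosen extra runs to separate confounded effects is what a foldover "rescue" of a compromised experiment amounts to.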
Utilizing Design Structure for Improving Design Selection and Analysis
Recent work has shown that the structure for design plays a role in the simplicity or complexity of data analysis. To increase the knowledge of research in these areas, this dissertation aims to utilize design structure for improving design selection and analysis. In this regard, minimal dependent sets and block diagonal structure are both important concepts that are relevant to the orthogonality of the columns of a design. We are interested in finding ways to improve the data analysis especially for active effect detection by utilizing minimal dependent sets and block diagonal structure for design.
We introduce a new classification criterion for minimal dependent sets to enhance existing criteria for design selection. The block diagonal structure of certain nonregular designs will also be discussed as a means of improving model selection. In addition, the block diagonal structure and the concept of parallel flats will be utilized to construct three-quarter nonregular designs.
Motivated by the literature on the use of simulation studies to shed light on the success or failure of a proposed statistical method, this dissertation uses simulation studies to evaluate the efficacy of our proposed methods. The simulation results show that minimal dependent sets can be used as a design selection criterion, and that block diagonal structure can help produce an effective model selection procedure. In addition, we found a strategy for constructing three-quarter nonregular designs that depends on the orthogonality of the design columns. The results indicate that the structure of a design has an impact on data analysis and design selection. On this basis, it is recommended that analysts consider the structure of the design as a key factor in improving the analysis. Further research is needed to explore more concepts related to design structure that could help to improve data analysis.
Quarter-fraction factorial designs constructed via quaternary codes
Research on developing a general methodology for the construction of good
nonregular designs has been very active in the last decade. Recent work by
Xu and Wong [Statist. Sinica 17 (2007) 1191--1213] suggested a new class of
nonregular designs constructed from quaternary codes. This paper explores the
properties and uses of quaternary codes toward the construction of
quarter-fraction nonregular designs. Some theoretical results are obtained
regarding the aliasing structure of such designs. Optimal designs are
constructed under the maximum resolution, minimum aberration and maximum
projectivity criteria. These designs often have larger generalized resolution
and larger projectivity than regular designs of the same size. It is further
shown that some of these designs have generalized minimum aberration and
maximum projectivity among all possible designs.Comment: Published in at http://dx.doi.org/10.1214/08-AOS656 the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org
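The quaternary-code construction rests on the Gray map from Z4 to pairs of binary digits, which turns a code over Z4 into a two-level design. A minimal sketch of that map; the tiny one-row generator below is my own toy example, not a design from the paper:

```python
import numpy as np

# Gray map from Z4 to pairs of binary digits: 0->(0,0), 1->(0,1),
# 2->(1,1), 3->(1,0). Adjacent elements of Z4 differ in one bit.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(codewords):
    """Map an n x k array of Z4 codewords to a binary n x 2k design."""
    out = []
    for word in codewords:
        row = []
        for digit in word:
            row.extend(GRAY[int(digit) % 4])
        out.append(row)
    return np.array(out)

# All Z4 multiples of the generator row (1, 2, 3): a tiny quaternary code.
gen = np.array([1, 2, 3])
code = np.array([(u * gen) % 4 for u in range(4)])
design = gray_map(code)  # 4 runs, 6 two-level (0/1) columns
```

Designs built this way from well-chosen quaternary generators are the objects whose aliasing structure, generalized resolution, and projectivity the paper studies.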
Aberration in qualitative multilevel designs
Generalized Word Length Pattern (GWLP) is an important and widely-used tool
for comparing fractional factorial designs. We consider qualitative factors,
and we code their levels using the roots of the unity. We write the GWLP of a
fraction using the polynomial indicator function, whose
coefficients encode many properties of the fraction. We show that the
coefficient of a simple or interaction term can be written using the counts of
its levels. This apparently simple remark leads to major consequences, including
a convolution formula for the counts. We also show that the mean aberration of
a term over the permutations of its levels provides a connection with the
variance of the level counts. Moreover, using mean aberrations for symmetric
designs with a prime number of levels, we derive a new formula for computing the
GWLP of the fraction. It is computationally easy, does not use complex numbers and
also provides a clear way to interpret the GWLP. As case studies, we consider
non-isomorphic orthogonal arrays that have the same GWLP. The different
distributions of the mean aberrations suggest that they could be used as a
further tool to discriminate between fractions.
Comment: 16 pages, 1 figure.
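The roots-of-unity coding and the count-based view of coefficients can be illustrated in a few lines. This is a generic sketch of the coding itself, not the paper's convolution formula:

```python
import numpy as np

def root_code(levels, s):
    """Code the levels 0..s-1 of a qualitative factor as s-th roots of unity."""
    return np.exp(2j * np.pi * np.asarray(levels) / s)

# A balanced 3-level column: each level appears equally often, so the
# coded column sums to zero -- orthogonality to the constant term,
# expressed purely through level counts.
col = [0, 1, 2, 0, 1, 2]
coded = root_code(col, 3)
total = coded.sum()          # ~0 for a balanced column
counts = np.bincount(col)    # level counts (2, 2, 2)
# The same quantity computed from counts alone: sum_l counts[l] * omega^l.
omega = np.exp(2j * np.pi / 3)
total_from_counts = sum(c * omega**l for l, c in enumerate(counts))
```

The key observation of the abstract is exactly this reduction: a coefficient built from complex-coded columns is determined by how often each level combination occurs.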
Discriminating Between Optimal Follow-Up Designs
Sequential experimentation is often employed in process optimization, wherein a series of small experiments are run successively in order to determine which experimental factor levels are likely to yield a desirable response. Although there currently exists a framework for identifying optimal follow-up designs after an initial experiment has been run, the accepted methods frequently point to multiple designs, leaving the practitioner to choose one arbitrarily. In this thesis, we apply preposterior analysis and Bayesian model averaging to develop a methodology for further discriminating between optimal follow-up designs while controlling for both parameter and model uncertainty.
Design and Analysis of Screening Experiments Assuming Effect Sparsity
Many initial experiments for industrial and engineering applications employ screening designs to determine which of possibly many factors are significant. These screening designs are usually highly fractionated factorials or Plackett-Burman designs that focus on main effects and provide limited information about interactions. To help simplify the analysis of these experiments, it is customary to assume that only a few of the effects are actually important; this assumption is known as "effect sparsity". This dissertation explores both design and analysis aspects of screening experiments assuming effect sparsity.
In 1989, Russell Lenth proposed a method for analyzing unreplicated factorials that has become popular due to its simplicity and satisfactory power relative to alternative methods. We propose and illustrate the use of p-values, estimated by simulation, for Lenth t-statistics. This approach is recommended for its versatility. Whereas tabulated critical values are restricted to the case of uncorrelated estimates, we illustrate the use of p-values for both orthogonal and nonorthogonal designs. For cases where there is limited replication, we suggest computing t-statistics and p-values using an estimator that combines the pure error mean square with a modified Lenth's pseudo standard error.
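A minimal sketch of the idea of simulation-based p-values for Lenth t-statistics follows. This is a simplified version that pools individual null t-statistics from simulated all-inactive contrast sets, not the authors' exact procedure; the function names are my own:

```python
import numpy as np

def lenth_pse(contrasts):
    """Lenth's pseudo standard error from a set of effect estimates."""
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)                    # initial scale estimate
    return 1.5 * np.median(c[c < 2.5 * s0])    # re-estimate after trimming

def lenth_pvalues(contrasts, n_sim=5000, rng=None):
    """p-values for Lenth t-statistics, estimated by simulating
    all-inactive (pure-noise) contrast sets of the same size and
    pooling the resulting null t-statistics as the reference."""
    rng = np.random.default_rng(rng)
    c = np.asarray(contrasts, dtype=float)
    t_obs = np.abs(c) / lenth_pse(c)
    null = rng.standard_normal((n_sim, len(c)))
    null_pse = np.apply_along_axis(lenth_pse, 1, null)
    t_null = np.abs(null) / null_pse[:, None]
    # p-value: fraction of pooled null |t| values at least as large.
    return (t_null.ravel()[None, :] >= t_obs[:, None]).mean(axis=1)

# Example: one clearly active contrast among seven.
contrasts = [10.0, 0.1, -0.2, 0.15, -0.1, 0.05, 0.12]
pvals = lenth_pvalues(contrasts, rng=0)
```

Because the reference distribution is simulated rather than tabulated, the same recipe extends to correlated estimates from nonorthogonal designs by simulating contrasts with the appropriate correlation structure.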
Supersaturated designs (SSDs) are designs that examine more factors than runs available. SSDs were introduced to handle situations in which a large number of factors are of interest but runs are expensive or time-consuming. We begin by assessing the null model performance of SSDs when using all-subsets and forward selection regression. The propensity for model selection criteria to overfit is highlighted. We subsequently propose a strategy for analyzing SSDs that combines all-subsets regression and permutation tests. The methods are illustrated for several examples.
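The flavor of pairing selection with permutation tests can be sketched for the very first forward-selection step: permute the response to break any real factor-response link and ask how often chance does as well as the best observed factor. This is my own simplified illustration on a toy supersaturated-style array, not the dissertation's full all-subsets procedure:

```python
import numpy as np

def max_abs_corr(X, y):
    """Largest absolute correlation between y and any column of X --
    the criterion that picks the first forward-selection entry."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.max(np.abs(r))

def entry_pvalue(X, y, n_perm=500, rng=None):
    """Permutation p-value for the first forward-selection entry."""
    rng = np.random.default_rng(rng)
    obs = max_abs_corr(X, y)
    null = [max_abs_corr(X, rng.permutation(y)) for _ in range(n_perm)]
    return (np.sum(np.array(null) >= obs) + 1) / (n_perm + 1)

# 12-run Plackett-Burman array (cyclic construction) padded with four
# interaction columns so that factors outnumber runs.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
pb12 = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11, int)])
X = np.column_stack([pb12,
                     pb12[:, 0] * pb12[:, 1], pb12[:, 2] * pb12[:, 3],
                     pb12[:, 4] * pb12[:, 5], pb12[:, 6] * pb12[:, 7]])

rng = np.random.default_rng(1)
y = 3.0 * X[:, 0] + 0.5 * rng.standard_normal(12)  # factor 0 is active
p = entry_pvalue(X, y, rng=1)  # small p: entry unlikely under chance
```

The permutation reference automatically reflects the multiplicity of searching over many candidate columns, which is precisely the overfitting danger that naive model selection criteria ignore in supersaturated designs.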
In contrast to the usual sequential nature of response surface methods (RSM), recent literature has proposed both screening and response surface exploration using only one three-level design. This approach is named "one-step RSM". We discuss and illustrate two shortcomings of the current one-step RSM designs and analysis. Subsequently, we propose a new class of three-level designs and an analysis strategy unique to these designs that will address these shortcomings and aid the user in being appropriately advised as to factor importance. We illustrate the designs and analysis with simulated and real data.