
    Design of Experiments for Screening

    The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs and systematic fractional replicate designs. Generic issues of aliasing, bias and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performance is compared.
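
    As a hedged illustration of the random sampling plans reviewed above, the sketch below draws a Latin hypercube sample in plain NumPy; the function name and the choice of run and factor counts are ours for illustration, not the paper's.

    import numpy as np

    def latin_hypercube(n_runs, n_factors, seed=None):
        """Draw a Latin hypercube sample on [0, 1]^n_factors.

        Each factor's range is split into n_runs equal strata and one
        point is drawn from each stratum, so every one-dimensional
        projection of the design is evenly covered.
        """
        rng = np.random.default_rng(seed)
        # One random point inside each of the n_runs strata, per factor.
        samples = (rng.random((n_runs, n_factors))
                   + np.arange(n_runs)[:, None]) / n_runs
        # Shuffle the strata independently for each factor so the
        # rows are not correlated across factors.
        for j in range(n_factors):
            samples[:, j] = rng.permutation(samples[:, j])
        return samples

    # Example: a 10-run plan for screening 4 numerical-model inputs.
    X = latin_hypercube(10, 4, seed=42)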

    Design of experiments for non-manufacturing processes: benefits, challenges and some examples

    Design of Experiments (DoE) is a powerful technique for process optimization that has been widely deployed in almost all types of manufacturing processes and is used extensively in product and process design and development. Far fewer efforts have been made to apply such powerful quality improvement techniques to non-manufacturing processes. In these settings, factor levels often involve changing the way people work and so have to be handled carefully, and it is even more important to get everyone working as a team. This paper explores the benefits and challenges of applying DoE in non-manufacturing contexts. Viewpoints regarding the benefits and challenges of DoE in the non-manufacturing arena are gathered from a number of leading academics and practitioners in the field. The paper also seeks to dispel the misconception that DoE is applicable only to manufacturing industries; rather, it is equally applicable to non-manufacturing processes within manufacturing companies. The last part of the paper illustrates some case examples showing the power of the technique in non-manufacturing environments.

    Screening interacting factors in a wireless network testbed using locating arrays

    Wireless systems exhibit a wide range of configurable parameters (factors), each with a number of values (levels), that may influence performance. Exhaustively analyzing all factor interactions is typically not feasible in experimental systems due to the large design space. We propose a method for determining which factors play a significant role in wireless network performance with multiple performance metrics (response variables). Such screening can be used to reduce the set of factors in subsequent experimental testing, whether for modelling or optimization. Our method accounts for pairwise interactions between the factors when deciding significance, because interactions play an important role in real-world systems. We utilize locating arrays to design the experiment because they guarantee that each pairwise interaction impacts a distinct set of tests. We formulate the analysis as a problem in compressive sensing that we solve using a variation of orthogonal matching pursuit, together with statistical methods to determine which factors are significant. We evaluate the method using data collected from the w-iLab.t Zwijnaarde wireless network testbed and construct a new experiment based on the first analysis to validate the results. We find that the analysis exhibits robustness to noise and to missing data.
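
    The compressive-sensing step described above can be sketched as follows. This is a minimal illustration using the stock scikit-learn orthogonal matching pursuit on simulated data; it stands in for, and is not, the paper's OMP variant, its locating-array construction, or its significance tests, and all names and sizes below are hypothetical.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n_tests, n_factors = 60, 10

    # Simulated design matrix: main-effect columns plus all pairwise
    # interaction columns, standing in for a locating-array design.
    main = rng.standard_normal((n_tests, n_factors))
    pairs = [(i, j) for i in range(n_factors)
             for j in range(i + 1, n_factors)]
    inter = np.column_stack([main[:, i] * main[:, j] for i, j in pairs])
    X = np.hstack([main, inter])

    # Simulated response: factors 0 and 3 and their interaction matter.
    y = (2.0 * main[:, 0] - 1.5 * main[:, 3]
         + 1.0 * main[:, 0] * main[:, 3]
         + 0.1 * rng.standard_normal(n_tests))

    # Sparse recovery of candidate significant terms.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(X, y)
    terms = []
    for k in np.flatnonzero(omp.coef_):
        if k < n_factors:
            terms.append(f"F{k}")
        else:
            i, j = pairs[k - n_factors]
            terms.append(f"F{i}xF{j}")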

    CFD-based process optimization of a dissolved air flotation system for drinking water production

    Dissolved air flotation (DAF) has recently received increasing attention as a separation technique in both drinking water and wastewater treatment. However, the process, as well as the preceding flocculation step, is complex and not completely understood. Given the multiphase nature of the process, fluid dynamics studies are important for understanding and optimizing the DAF system in terms of operation and design. The present study provides a comprehensive computational analysis for design optimization of the treatment plant in Kluizen, Belgium. The modelling framework for the multiphase flow problem is briefly discussed. 3D numerical simulations of a scaled-down model of the DAF design were analysed. The simulated flow features give better confidence in the design, but floc escape through the outlet still prevails, which is adverse to system performance. To improve performance and ease of maintenance, design modifications using a perforated tube for water extraction have been proposed and are found to be satisfactory. The discussion is further reinforced by validating the numerical model against experimental findings for stratified flow conditions.

    Self-Validated Ensemble Modelling

    An important objective when performing designed experiments is to build models that predict future performance of a system under study; e.g. predict future yields of a bio-process used to manufacture therapeutic proteins. Because experimentation is costly, experimental designs are structured to be efficient in terms of the number of trials while providing substantial information about the behavior of the physical system. The strategy for building accurate predictive models in larger data sets is to partition the data into a training set, used to fit the model, and a validation set, used to assess prediction performance. Models are selected that have the lowest prediction error on the validation set. However, designed experiments are usually small in sample size and have a fixed structure which precludes partitioning of any kind; the entire set must be used for training. Contemporary methods use information criteria like the AICc or BIC with model algorithms such as Forward Selection or Lasso to select candidate models. These surrogate prediction measures often produce models with poor prediction performance relative to models selected using a validation procedure such as cross-validation. This approach also relies on a single fit from a model algorithm, which we show to be insufficient. We propose a novel approach that allows the original data set to function as both a training set and a validation set. We accomplish this auto-validation strategy by employing a unique fractionally re-weighted bootstrapping technique. The weighting scheme is structured to induce anti-correlation between the original set and the auto-validation copy. We randomly assign new fractional weights using the bootstrap algorithm and fit a predictive model. This procedure is iterated many times, producing a new model each time. The final model is the average of these models. We refer to this new methodology as Self-Validated Ensemble Modelling (SVEM). In this dissertation we investigate the performance of the SVEM algorithm across various scenarios: different model selection algorithms, different designs with varying sample sizes, model noise levels, and sparsity. This investigation shows that SVEM outperforms contemporary one-shot model selection approaches.
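
    A minimal sketch of the auto-validation idea follows, assuming exponential fractional weights and a Lasso base learner; the dissertation studies several selection algorithms, and its exact weighting scheme may differ from the anti-correlated uniform construction assumed here.

    import numpy as np
    from sklearn.linear_model import Lasso

    def svem_lasso(X, y, n_boot=200,
                   alphas=np.logspace(-3, 1, 30), seed=None):
        """Sketch of Self-Validated Ensemble Modelling with a Lasso.

        Each pass gives every row a fractional training weight and an
        anti-correlated validation weight, keeps the penalty with the
        lowest weighted validation error, and the final model averages
        the coefficients over all passes.
        """
        rng = np.random.default_rng(seed)
        n = len(y)
        coefs, intercepts = [], []
        for _ in range(n_boot):
            u = rng.uniform(size=n)
            w_train = -np.log(u)       # exponential fractional weights
            w_valid = -np.log1p(-u)    # anti-correlated counterpart
            best_model, best_err = None, np.inf
            for a in alphas:
                model = Lasso(alpha=a, max_iter=10_000).fit(
                    X, y, sample_weight=w_train)
                err = np.average((y - model.predict(X)) ** 2,
                                 weights=w_valid)
                if err < best_err:
                    best_model, best_err = model, err
            coefs.append(best_model.coef_)
            intercepts.append(best_model.intercept_)
        # Ensemble model: the average of the per-pass fits.
        return np.mean(coefs, axis=0), np.mean(intercepts)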

    Model selection via Bayesian information capacity designs for generalised linear models

    This is the first investigation of designs for screening experiments where the response variable is approximated by a generalised linear model. A Bayesian information capacity criterion is defined for the selection of designs that are robust to the form of the linear predictor. For binomial data and logistic regression, the effectiveness of these designs for screening is assessed through simulation studies using all-subsets regression and model selection via maximum penalised likelihood and a generalised information criterion. For Poisson data and log-linear regression, similar assessments are made using maximum likelihood and the Akaike information criterion for minimally-supported designs that are constructed analytically. The results show that effective screening, that is, high power with a moderate type I error rate and false discovery rate, can be achieved through suitable choices for the number of design support points and the experiment size. Logistic regression is shown to present a more challenging problem than log-linear regression. Some areas for future work are also indicated.
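
    For the binomial case, all-subsets screening might look like the sketch below; it uses plain maximum likelihood and the AIC from statsmodels as stand-ins for the maximum penalised likelihood and generalised information criterion studied in the paper, and the function name is ours.

    import numpy as np
    import statsmodels.api as sm
    from itertools import combinations

    def all_subsets_logit(X, y, max_size=3):
        """Fit logistic regressions on every factor subset of size up
        to max_size and return the subset with the lowest AIC."""
        best_subset, best_aic = None, np.inf
        for k in range(1, max_size + 1):
            for subset in combinations(range(X.shape[1]), k):
                model = sm.Logit(y, sm.add_constant(X[:, subset])).fit(disp=0)
                if model.aic < best_aic:
                    best_subset, best_aic = subset, model.aic
        return best_subset, best_aic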

    Design for Smooth Models over Complex Regions


    Lasso-Optimal Supersaturated Design and Analysis for Factor Screening in Simulation Experiments

    Complex systems such as large-scale computer simulation models typically involve a large number of factors. When investigating such a system, screening experiments are often used to sift through these factors to identify a subgroup that most significantly influences the response of interest.
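
    To make the screening task concrete, the sketch below runs a cross-validated Lasso on a supersaturated two-level design with more factors than runs; the random ±1 design matrix is our simplification for illustration, not the Lasso-optimal construction this work develops.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    n_runs, n_factors = 14, 24        # supersaturated: columns > rows

    # Two-level (+1/-1) design; only factors 2 and 7 are truly active.
    X = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
    y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.5 * rng.standard_normal(n_runs)

    # The Lasso shrinks most coefficients to exactly zero, so the
    # surviving columns are the screened-in factors.
    lasso = LassoCV(cv=5).fit(X, y)
    active = np.flatnonzero(lasso.coef_)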