
    Design and Analysis of Screening Experiments Assuming Effect Sparsity

    Many initial experiments for industrial and engineering applications employ screening designs to determine which of possibly many factors are significant. These screening designs are usually highly fractionated factorials or Plackett-Burman designs that focus on main effects and provide limited information about interactions. To simplify the analysis of these experiments, it is customary to assume that only a few of the effects are actually important; this assumption is known as ‘effect sparsity’. This dissertation explores both design and analysis aspects of screening experiments assuming effect sparsity. In 1989, Russell Lenth proposed a method for analyzing unreplicated factorials that has become popular due to its simplicity and satisfactory power relative to alternative methods. We propose and illustrate the use of p-values, estimated by simulation, for Lenth t-statistics. This approach is recommended for its versatility: whereas tabulated critical values are restricted to the case of uncorrelated estimates, we illustrate the use of simulated p-values for both orthogonal and nonorthogonal designs. For cases where there is limited replication, we suggest computing t-statistics and p-values using an estimator that combines the pure error mean square with a modified Lenth pseudo standard error. Supersaturated designs (SSDs) are designs that examine more factors than there are runs available. SSDs were introduced to handle situations in which a large number of factors are of interest but runs are expensive or time-consuming. We begin by assessing the null-model performance of SSDs when using all-subsets and forward selection regression, highlighting the propensity of model selection criteria to overfit. We then propose a strategy for analyzing SSDs that combines all-subsets regression and permutation tests. The methods are illustrated with several examples.
In contrast to the usual sequential nature of response surface methods (RSM), recent literature has proposed performing both screening and response surface exploration with a single three-level design, an approach named “one-step RSM”. We discuss and illustrate two shortcomings of current one-step RSM designs and analyses. We then propose a new class of three-level designs, together with an analysis strategy unique to these designs, that addresses these shortcomings and gives the user sound advice about factor importance. We illustrate the designs and analysis with simulated and real data.
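The simulated reference distribution for Lenth t-statistics can be sketched as follows. This is a minimal illustration, not the dissertation's code: it assumes the standard Lenth (1989) constants (1.5 and 2.5), an orthogonal design, and i.i.d. normal contrasts under the null; the function names are ours.

```python
import numpy as np

def lenth_pse(contrasts):
    """Lenth's pseudo standard error (PSE) for a vector of effect estimates."""
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)                  # initial robust scale estimate
    return 1.5 * np.median(c[c < 2.5 * s0])  # re-estimate after trimming large effects

def lenth_p_values(contrasts, n_sim=5000, seed=0):
    """Two-sided p-values for Lenth t-statistics, estimated by simulating
    the null reference distribution (all effects inert)."""
    rng = np.random.default_rng(seed)
    c = np.asarray(contrasts, dtype=float)
    t_obs = np.abs(c) / lenth_pse(c)
    # pool Lenth t-statistics from many simulated null experiments
    null_t = np.concatenate([
        np.abs(z) / lenth_pse(z)
        for z in rng.standard_normal((n_sim, len(c)))
    ])
    # p-value = fraction of null |t| at least as large as the observed |t|
    return np.array([(null_t >= t).mean() for t in t_obs])
```

For a nonorthogonal design, the null contrasts would instead be drawn with the correlation structure implied by the design, which is exactly what tabulated critical values cannot accommodate.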

    Training samples in objective Bayesian model selection

    Central to several objective approaches to Bayesian model selection is the use of training samples (subsets of the data), so as to allow the use of improper objective priors. The most common prescription for choosing training samples is to choose them to be as small as possible, subject to yielding proper posteriors; these are called minimal training samples. When data can vary widely in terms of either information content or impact on the improper priors, use of minimal training samples can be inadequate. Important examples include certain cases of discrete data, the presence of censored observations, and certain situations involving linear models and explanatory variables. Such situations require more sophisticated methods of choosing training samples. A variety of such methods are developed in this paper and successfully applied in challenging situations.

    Construction and analysis of experimental designs

    This thesis puts into focus the analysis of experimental designs and their construction. It concentrates on the construction of fractional factorial designs (FFDs) in various settings and applications. These different experimental designs and their applications, including how they are constructed for the situation under consideration, are of interest in this study. While there is a wide range of experimental designs and numerous different constructions, this thesis focuses on FFDs and their applications. An experimental design is a test or a series of tests in which purposeful changes are made to the input variables of a process or system so that we may observe and identify the reasons for changes that may be noted in the output response (Montgomery (2014)). Experimental designs are important because their design and analysis can influence the outcome and response of the intended action. In this research, analysing experimental designs and their construction is intended to reveal how important they are in research experiments. Chapter 1 introduces the concept of experimental designs and their principles, and offers a general explanation of factorial experiment designs and FFDs. Attention is then given to the general construction and analysis of FFDs, including one-half and one-quarter fractions, Hadamard matrices (H), balanced incomplete block designs (BIBDs), Plackett-Burman (PB) designs and regression modelling. Chapter 2 presents an overview of screening experiments and the literature review for the project. Chapter 3 introduces the first part of the project, which is the construction and analysis of edge designs from skew-symmetric supplementary difference sets (SDSs). Edge designs were introduced by Elster and Neumaier (1995) using conference matrices and were proved to be robust. One disadvantage is that the edge designs known in the literature can be constructed only when a conference matrix exists.
In this chapter, we introduce a new class of edge designs; these are constructed from skew-symmetric SDSs. These designs are particularly useful, since they can be applied in experiments with an even number of factors, and they may exist for orders where conference matrices do not exist. The same model robustness is achieved as with traditional edge designs. We give details of the methodology used and provide some illustrative examples of this new approach. We also show that the new designs have good D-efficiencies when applied to first-order models, and then complete the experiment with interactions in the second stage. We also show the application of models for the new constructions. Chapter 4 presents the second part of the project, which is the construction and analysis of two-level supersaturated designs (SSDs) from Toeplitz matrices. The aim of screening experiments is to identify the active factors among a large number of factors that may influence the response y. SSDs represent an important class of screening experiments, whereby many factors are investigated using only a few experimental runs; this process costs less than classical factorial designs. In this chapter, we introduce new SSDs that are constructed from Toeplitz matrices. This construction uses Toeplitz and permutation matrices of order n to obtain E(s2)-optimal two-level SSDs. We also study the properties of the constructed designs and use certain established criteria to evaluate them. We then give some detailed examples of this approach, and consider the performance of these designs under different data analysis methods. Chapter 5 introduces the third part of the project, which is examples and comparisons of the constructed designs using real data in mathematics. Mathematics has strong applications in different fields of human life.
The Trends in International Mathematics and Science Study (TIMSS) is one of the world's most effective global assessments of student achievement in both mathematics and science. The research in this thesis sought to determine the most effective factors affecting student achievement in mathematics. Four identified factors affect this problem. The first is student factors: age, health, number of students in a class, family circumstances, time of study, desire, behaviour, achievements, media (audio and visual), rewards, friends, parents' goals and gender. The second is classroom environment factors: whether the classroom is suitable, attractive and equipped with educational tools. The third is curriculum factors: whether the curriculum is easy or difficult. The fourth is teacher factors: whether the teacher is well-qualified or not, and the use of punishment. In this chapter, we detail the methodology and present some examples and comparisons of the constructed designs using real data in mathematics. The data come from surveys conducted in schools in Saudi Arabia; they were collected in middle-stage schools in the country and are available to Saudi Arabian citizens. Two main methods were used to collect the data: (1) the mathematics scores for students' final exams were collected from the schools; (2) 16-question questionnaires were disseminated to students. The target population was 2,585 students in 22 schools. Data were subjected to regression analyses and the edge design method, with the finding that the main causes of low achievement were rewards, behaviour, class environment, educational tools and health. Chapter 6 surveys the work of this thesis and recommends further avenues of research.
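The E(s2) criterion mentioned above can be computed directly from the design matrix. A minimal sketch (the function name is illustrative, not the thesis's code; for a genuine SSD the number of factors m exceeds the number of runs n, but the criterion is defined the same way for any two-level design):

```python
import numpy as np

def e_s2(design):
    """E(s^2) criterion for a two-level design.

    `design` is an n-by-m matrix with entries +1/-1 (n runs, m factors).
    E(s^2) averages the squared off-diagonal entries s_ij = x_i' x_j of
    X'X over all m*(m-1)/2 column pairs; smaller values are better."""
    X = np.asarray(design, dtype=float)
    S = X.T @ X                     # m x m matrix of column inner products
    m = S.shape[0]
    iu = np.triu_indices(m, k=1)    # strictly upper-triangular pairs (i < j)
    return (S[iu] ** 2).sum() / (m * (m - 1) / 2)
```

Fully orthogonal columns give E(s2) = 0, which is unattainable once m > n; E(s2)-optimal SSD constructions minimize the criterion subject to that constraint.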

    Modeling and Optimization of Stochastic Process Parameters in Complex Engineering Systems

    For quality engineering researchers and practitioners, a wide range of statistical tools and techniques is available for use in the manufacturing industry. The objective in applying these tools has always been to improve or optimize a product or process in terms of efficiency, production cost, or product quality. While tremendous progress has been made in the design of quality optimization models, there remains a significant gap between existing research and the needs of the industrial community. Contemporary manufacturing processes are inherently more complex: they may involve multiple stages of production or require the assessment of multiple quality characteristics. New and emerging fields, such as nanoelectronics and molecular biometrics, demand degrees of precision in estimation that are not attainable with current tools and measures. And since most researchers focus on a specific type of characteristic or a given set of conditions, there are many critical industrial processes for which models are not applicable. Thus, the objective of this research is to improve existing techniques by expanding not only their range of applicability but also their ability to model a given process more realistically. Several quality models are proposed that seek greater precision in the estimation of process parameters and the removal of assumptions that limit their breadth and scope. An extension is made to examine the effectiveness of these models both in non-standard conditions and in areas that have not been previously investigated. Following an in-depth literature review, various quality models are proposed, and numerical examples are used to validate these methodologies.

    Advanced Statistical Tools for Six Sigma and other Industrial Applications

    Six Sigma is a methodological approach and philosophy for quality improvement in operations management; its main objectives are identifying and removing the causes of defects and minimizing variability in manufacturing and business processes. To do so, Six Sigma combines managerial and statistical tools with the creation of a dedicated organizational structure. In this doctoral thesis and the three years of study and research behind it, our purpose has been to advance the potential applications of the methodology and its tools, with specific attention to the issues and challenges that typically prevent the realization of the financial and operational gains a company expects when applying the Six Sigma approach. Small and medium-sized enterprises (SMEs), for instance, very often run into such issues because of structural and infrastructural constraints. The overall application of the methodology in SMEs was the focus of the initial research effort and was studied with a case study approach. On this basis, most of our research then turned to the rigorous methodological advancement of specific statistical tools for Six Sigma and, in a broader sense, for other industrial applications. Specifically, the core contribution of this doctoral thesis lies in the development of both managerial and statistical tools for the Six Sigma toolbox.
Our work ranges from a decision making tool, which integrates a response latency measure with a well-known procedure for alternatives prioritization; to experimental design tools covering both planning and analysis strategies for screening experiments; to, finally, an initial effort to explore and develop a research agenda based on issues related to conjoint analysis and discrete choice experiments.

    Temporal Dynamics of Benthic Responses to Habitat Disturbance in Coastal Plain Headwaters of Southwestern Louisiana

    Weak biotic responses to habitat gradients within Northern Gulf of Mexico streams have been attributed to spatial and temporal variability. Landscape and in-stream habitat descriptions are presented for watersheds within Pleistocene terraces of the Coastal Plains geomorphic province of Louisiana, USA. Geologic influences on stream habitat were inferred by comparing multivariate ordinations on physicochemical measurements between terraces. Seasonal variability was assessed during a drought year (2011) and a typical water year (2013). Within coastal plains of Louisiana, stream condition was more similar within terraces than within river basins. Permutational MANOVA models indicated significantly different stream habitat between Uplands and Prairie, with intermediate habitat in Flatwoods. Seasonal differences were detected more frequently during normal flow condition, suggesting that baseflow impacts habitat heterogeneity between adjacent terraces. Macroinvertebrates were collected throughout a drought year at stream sites stratified among coastal plain terraces to quantify spatial and temporal variability and identify functional habitat gradients. Macroinvertebrate assemblages differed between Uplands and Prairie terraces, especially regarding insect taxa, which were associated with better water quality and structurally complex habitat. Drought and other disturbances selected against lotic taxa expected in the intermediate Flatwoods terrace. Widening the lateral scope of the study landscape helped identify habitat thresholds and define regional habitat preference of individual taxa. Aquatic habitat improvement in Prairie terrace bayous should include restoring baseflow, increasing structural complexity and protecting macroinvertebrate source populations in the Uplands. Aquatic insect larvae are important bio-indicators and flexible life histories of many taxa may reflect regional or seasonal variability in environmental conditions. 
Larval development and reproductive strategy inferred from seasonal size distributions are presented for specimens of Caenis sp. (Ephemeroptera: Caenidae) in the coastal plain terraces of Louisiana. The influences of regional drought, landscape features and water quality on growth rate, terminal size and voltinism are examined. Caenis sp. in subtropical Louisiana exhibited bivoltine emergences in November and July. Size at instar development class did not differ by terrace, but was influenced by local water quality (e.g., orthophosphate concentration, specific conductance and biochemical oxygen demand). Maintenance of baseflow during drought enhanced the abundance of Caenis larvae in streams with chronic disturbance from agriculture.

    Downstream Effects on Denitrification and Nitrous Oxide from an Advanced Wastewater Treatment Plant Upgrade

    All humans excrete waste. In developed countries, this waste is often treated at a wastewater treatment plant (WWTP). Eventually, the nutrient-rich, treated wastewater—effluent—enters a water body to be diluted or naturally processed. However, in the case of the Regina (Canada) WWTP, this dilution does not immediately occur, as the effluent is released into the small, effluent-dominated system of Wascana Creek. This study capitalized on a novel opportunity to determine the effects of a WWTP upgrade on in-stream water quality, nitrogen (N) cycling measured as denitrification rates, and nitrous oxide (N2O) concentrations and emissions. Using a before-after-control-impact (BACI) design, nutrient, sediment, and gas samples were obtained before and after the upgrade at both upstream (control) and downstream (impact) sites on both Wascana Creek and the larger, downstream Qu’Appelle River. Although nitrate (NO3–) concentrations did not significantly change post-upgrade, I found that the upgrade significantly reduced concentrations of ammonium (NH4+) and toxic un-ionized ammonia (NH3), which declined ~35-fold from pre-upgrade values, ultimately mitigating potential toxicity (Environment Canada 1999). The WWTP significantly impacted denitrification rates at downstream sites: rates downstream of the WWTP were >1200 times those at the upstream site. While impacts were smaller, denitrification rates at the larger Qu’Appelle River downstream site were still >20 times those at the upstream site. Denitrification rates were unaffected by the upgrade. Moreover, NO3– saturation, a negative indicator of ecosystem health, existed both before and after the upgrade at impacted sites. To the best of my knowledge, aquatic N2O concentrations immediately downstream of the WWTP are the highest known values for a natural system.
Concentrations reached as high as 114,000 percent saturation pre-upgrade and 110,000 percent saturation post-upgrade; no significant change was observed pre- vs. post-upgrade across the impacted sites. Not only did N2O concentrations from the WWTP effluent span an impact zone of ~5 km in Wascana Creek, but these extremely high concentrations also originated directly from the effluent. Predictors of both denitrification rates and N2O concentrations were identified: nitrate was a predictor of denitrification, while NO3– and denitrification rates were significant predictors of N2O concentrations outside the effluent impact zone. This study showed that enhanced effluent N removal can help mitigate the risk of NH3 toxicity; however, decreases in NH3 and NH4+ concentrations did not significantly affect downstream N2O emissions or denitrification rates.
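At the level of point estimates, the BACI logic above reduces to a difference of differences: the before-to-after change at the impact site minus the same change at the control site. A minimal sketch (illustrative only; the study's inferences come from models with period-by-site interaction terms, not from raw means):

```python
import numpy as np

def baci_effect(control_before, control_after, impact_before, impact_after):
    """Difference-of-differences estimate of the intervention effect in a
    before-after-control-impact (BACI) design: the change at the impact
    site minus the change at the control site."""
    cb, ca = np.mean(control_before), np.mean(control_after)
    ib, ia = np.mean(impact_before), np.mean(impact_after)
    return (ia - ib) - (ca - cb)
```

Subtracting the control-site change removes region-wide trends (e.g., weather, season) that would otherwise be confounded with the upgrade.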

    Methane flux changes during irrigation experiment in boreal upland forest soil

    Methane (CH4) is a greenhouse gas with a great impact on global climate. In the soil, it is produced in anoxic conditions and consumed in oxic conditions by microbes. Together with different methane transport mechanisms, methane production and consumption directly regulate the resulting soil methane flux. Boreal upland forests are generally considered to act as methane sinks due to high methane consumption. However, some studies have shown a boreal upland forest soil turning from a methane sink into a source after long-term abundant precipitation. This study aimed to examine the effects of soil moisture on the CH4 flux of a northern boreal upland forest soil under a simulated increase in rainfall, and how simultaneous soil temperature increase, organic litter addition, and organic litter and root exclusion affect the temporal changes in the flux.
The study was conducted in the Kenttärova forest in Kittilä, Finland, in summer 2018. A split-plot design was used, with soil moisture as the main-plot variable and soil warming (T), organic litter addition (A), and organic litter and root exclusion (E) as subplot variables. The design included two main plots, irrigation (I) and control (C), within which each subplot treatment was replicated three times. In addition to the T, A and E manipulations, plots without additional manipulations (O) were included to assess the effect of soil moisture increase alone, replicated four times within both main plots. Methane flux was measured at least once a week using the chamber method, and soil moisture and temperature were measured continuously. Treatment effects were analysed using both autoregressive and autoregressive heterogeneous two-way analyses of variance, the Tukey HSD method, variable correlations and generalized linear models. The soil did not turn into a methane source, but the results showed significant differences between the irrigation and control plots, indicating a strong decreasing effect of soil moisture on the soil CH4 sink at all treatment levels. All treatments had their lowest uptake rates in August, possibly as a result of the highest soil moisture levels. The IA treatment was the most effective in producing low uptake rates, possibly due to a reduction in gas diffusion. The E treatments gave contrasting results; IE showed increases in uptake rate with increasing soil moisture, but the causes remained unresolved and the results were highly uncertain. The T treatment had no effect on uptake, likely because soil temperature differences were not achieved, so the moisture-temperature interactions could not be analysed reliably. The results suggest that the changes may have been more related to changes in methane consumption than to production. Further research is needed, especially on the combined effect of litter addition, soil moisture and soil temperature increase on methane flux, with multiple temporal replications of the experiment.
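The chamber method referred to above derives a flux from the slope of headspace concentration against time, scaled by chamber geometry and the ideal gas law. A hedged sketch (the unit choices and function name are ours, not the thesis's):

```python
import numpy as np

def chamber_flux(times_s, conc_ppm, volume_m3, area_m2,
                 temp_k=293.15, pressure_pa=101325.0):
    """CH4 flux from a closed-chamber time series.

    Fits a line to concentration vs. time and converts the slope
    (ppm/s) to a mass flux (ug CH4 m^-2 h^-1) via the ideal gas law."""
    R = 8.314        # gas constant, J mol^-1 K^-1
    M_CH4 = 16.04    # molar mass of CH4, g mol^-1
    slope_ppm_s, _ = np.polyfit(times_s, conc_ppm, 1)
    # ppm/s -> mol m^-3 s^-1 using n/V = P / (R T)
    mol_per_m3_s = slope_ppm_s * 1e-6 * pressure_pa / (R * temp_k)
    # scale by chamber volume per footprint area, convert g -> ug and s -> h
    return mol_per_m3_s * (volume_m3 / area_m2) * M_CH4 * 1e6 * 3600
```

A negative value indicates net uptake, i.e. the soil acting as a methane sink, as observed for these boreal forest soils.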