
    Statistical analysis of real manufacturing process data

    This diploma thesis deals with statistical process control. Its goal is to analyze data from a real manufacturing process of a revolver injection molding machine. The analysis was carried out using statistical hypothesis testing, analysis of variance, a general linear model, and process capability analysis. The data analysis was performed in the statistical software Minitab 16.
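    The process capability part of such an analysis can be sketched outside Minitab as well; the following is a minimal Python example, assuming a normally distributed characteristic and hypothetical specification limits rather than the thesis's actual data:

```python
import numpy as np

def capability_indices(samples, lsl, usl):
    """Compute the basic process capability indices Cp and Cpk.

    samples  : measured values of the quality characteristic
    lsl, usl : lower and upper specification limits
    """
    mu = np.mean(samples)
    sigma = np.std(samples, ddof=1)                 # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                  # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # capability accounting for centering
    return cp, cpk

# Hypothetical example: a part dimension with specification 10.0 +/- 0.5 mm
rng = np.random.default_rng(1)
data = rng.normal(10.05, 0.12, size=200)
cp, cpk = capability_indices(data, lsl=9.5, usl=10.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```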

    On confidence intervals construction for measurement system capability indicators

    There are many criteria that have been proposed to determine the capability of a measurement system, all based on estimates of variance components. Some of them are the Precision to Tolerance Ratio, the Signal to Noise Ratio and the probabilities of misclassification. For most of these indicators there are no exact confidence intervals, since the exact distributions of the point estimators are not known. In such situations, two approaches are widely used to obtain approximate confidence intervals: the Modified Large Samples (MLS) methods initially proposed by Graybill and Wang, and the construction of Generalized Confidence Intervals (GCI) introduced by Weerahandi. In this work we focus on the construction of confidence intervals by the generalized approach in the context of gauge repeatability and reproducibility studies. Since GCIs are obtained by simulation procedures, we analyze the effect of the number of simulations on the variability of the confidence limits, as well as the effect of the size of the experiment designed to collect data on the precision of the estimates. Both studies allowed us to derive some practical implementation guidelines for the use of the GCI approach. We finally present a real case study in which this technique was applied to evaluate the capability of a destructive measurement system.
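    As a rough illustration of how a GCI is obtained by simulation, the sketch below builds generalized pivotal quantities for the variance components of a balanced one-way random effects study. The ANOVA summary values and the simplified design (parts only, no operator factor) are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

def gci_variance_components(ms_p, ms_e, df_p, df_e, n_rep,
                            n_sim=10000, alpha=0.05, seed=0):
    """Generalized confidence intervals for the variance components of a
    one-way random effects model (parts measured with n_rep replicates each).

    Generalized pivotal quantities are built by replacing each chi-square
    distributed mean square with a random draw, in the spirit of Weerahandi's
    generalized inference approach.
    """
    rng = np.random.default_rng(seed)
    w_p = rng.chisquare(df_p, n_sim)
    w_e = rng.chisquare(df_e, n_sim)

    g_e = ms_e * df_e / w_e                                    # GPQ, repeatability variance
    g_p = np.maximum(0.0, (ms_p * df_p / w_p - g_e) / n_rep)   # GPQ, part-to-part variance

    lo, hi = 100 * alpha / 2, 100 * (1 - alpha / 2)
    return {
        "sigma2_repeatability": np.percentile(g_e, [lo, hi]),
        "sigma2_part": np.percentile(g_p, [lo, hi]),
    }

# Hypothetical ANOVA summary: 10 parts, 3 replicates each
print(gci_variance_components(ms_p=4.2, ms_e=0.35, df_p=9, df_e=20, n_rep=3))
```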

    Variability in measurements of micro lengths with a white light interferometer

    The effect of the discretionary set-up parameters, scan length and initial scanner position, on length measurements performed with a white light interferometer microscope was investigated. In both analyses, two reference materials of nominal lengths 40 and 200 µm were considered. Random effects and mixed effects models were fitted to the data from two separate experiments. Point and interval estimates of the variance components were provided.
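    The random and mixed effects modelling mentioned above can be sketched in Python with statsmodels; the data, column names and factor levels below are invented for illustration and are not the experiment's actual design:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical repeated length measurements (in micrometres) of one reference
# material, taken at two scan lengths from three initial scanner positions.
df = pd.DataFrame({
    "length":   [40.02, 40.05, 39.98, 40.01, 40.07, 39.99,
                 40.03, 40.04, 40.00, 40.06, 39.97, 40.02],
    "scan_len": [5, 10] * 6,                        # set-up factor (fixed effect)
    "position": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,  # scanner position (random effect)
})

# Mixed effects model: scan length as a fixed effect, initial scanner position
# as a random intercept. The group and residual variances are the variance
# components for which point and interval estimates would be reported.
fit = smf.mixedlm("length ~ C(scan_len)", data=df, groups=df["position"]).fit()
print(fit.summary())
print("Between-position variance:", float(fit.cov_re.iloc[0, 0]))
print("Residual variance:", fit.scale)
```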

    Power Analysis Software for Educational Researchers

    Forthcoming in Journal of Experimental Education, Jan. 2012. Given the importance of statistical power analysis in quantitative research and the repeated emphasis on it by AERA/APA journals, we examined the reporting practice of power analysis in the quantitative studies published in 12 education/psychology journals between 2005 and 2009–10. It was surprising to uncover that less than 2% of the studies conducted prospective power analysis. Another 3.54% computed observed power, a practice not endorsed by the literature on power analysis. In this paper, we clarify these two types of power analysis and discuss the functionalities of eight programs/packages (G*Power 3.1.3, PASS 11, SAS/STAT 9.3, Stata 12, SPSS 19, SPSS/Sample Power 3.0.1, Optimal Design Software 2.01, and MLPowSim 1.0 BETA) to encourage proper and planned power analysis. Based on our review, we recommend two programs (SPSS/Sample Power and G*Power) for general-purpose univariate/multivariate analyses, and one (Optimal Design Software) for hierarchical/multilevel modeling and meta-analysis. Recommendations are also made for reporting power analysis results and exploring additional software. The paper concludes with an examination of the role of statistical power in research and viable alternatives to hypothesis testing.
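    None of the reviewed packages is shown here, but the kind of prospective power analysis the article advocates can be sketched in Python with statsmodels; the effect size, alpha level and two-group design below are illustrative assumptions only:

```python
from statsmodels.stats.power import TTestIndPower

# Prospective power analysis for a two-group comparison: solve for the sample
# size per group needed to detect a medium effect (Cohen's d = 0.5) with
# alpha = 0.05 and 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")

# The same object can be used the other way round, i.e. to find the power
# achieved by a planned sample size.
achieved = analysis.power(effect_size=0.5, nobs1=64, alpha=0.05, ratio=1.0)
print(f"Power with 64 participants per group: {achieved:.2f}")
```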

    Statistical practices of educational researchers: An analysis of their ANOVA, MANOVA, and ANCOVA analyses

    Articles published in several prominent educational journals were examined to investigate the use of data-analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and the methods employed for data analysis, we also catalogued whether: (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected based on power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. Our analyses imply that researchers rarely verify that validity assumptions are satisfied and, accordingly, typically use analyses that are not robust to assumption violations. In addition, researchers rarely report effect size statistics, nor do they routinely perform power analyses to determine sample size requirements. We offer many recommendations to rectify these shortcomings.
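    A minimal sketch of the practices the article finds lacking, namely checking validity assumptions and reporting an effect size alongside a one-way ANOVA, is shown below on synthetic data; the group means and sizes are assumptions for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical three-group data for a one-way between-subjects design.
rng = np.random.default_rng(42)
groups = [rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.3, 0.8)]

# Check validity assumptions: homogeneity of variance (Levene) and
# normality within each group (Shapiro-Wilk).
print("Levene:", stats.levene(*groups))
for i, g in enumerate(groups):
    print(f"Shapiro, group {i}:", stats.shapiro(g))

# One-way ANOVA plus an effect size (eta squared).
f_stat, p_val = stats.f_oneway(*groups)
grand = np.concatenate(groups)
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups)
ss_total = ((grand - grand.mean()) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"F = {f_stat:.2f}, p = {p_val:.4f}, eta^2 = {eta_sq:.3f}")
```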

    DEVELOPMENT OF A MODULAR AGRICULTURAL ROBOTIC SPRAYER

    Precision Agriculture (PA) increases farm productivity, reduces pollution, and minimizes input costs. However, wide adoption of existing PA technologies for complex field operations, such as spraying, has been slow due to high acquisition costs, low adaptability, and slow operating speed. In this study, we designed, built, optimized, and tested a Modular Agrochemical Precision Sprayer (MAPS), a robotic sprayer with an intelligent machine vision system (MVS). Our work focused on identifying and spraying the targeted plants with low cost, high speed, and high accuracy in a remote, dynamic, and rugged environment. We first researched and benchmarked combinations of one-stage convolutional neural network (CNN) architectures with embedded or mobile hardware systems. Our analysis revealed that a TensorRT-optimized SSD-MobilenetV1 on an NVIDIA Jetson Nano provided sufficient plant detection performance with low cost and power consumption. We also developed an algorithm to determine the maximum operating velocity of a chosen CNN and hardware configuration through modeling and simulation. Based on these results, we developed a CNN-based MVS for real-time plant detection and velocity estimation. We implemented the Robot Operating System (ROS) to integrate each module for easy expansion. We also developed a robust dynamic targeting algorithm to synchronize the spray operation with the robot motion, which increases productivity significantly. The research proved successful: we built a MAPS with three independent vision and spray modules. In the lab test, the sprayer recognized and hit all targets with only 2% incorrect sprays. In a field test with an unstructured crop layout, a broadcast-seeded soybean field, the MAPS also successfully sprayed all targets with only a 7% incorrect spray rate.
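    The abstract does not give the targeting algorithm itself; the sketch below only illustrates the basic synchronization idea (open the nozzle when a detected plant reaches it, given the estimated ground speed), with all names, values and the latency term assumed for illustration:

```python
from dataclasses import dataclass
import time

@dataclass
class Detection:
    distance_to_nozzle_m: float  # along-track distance from plant to nozzle at detection
    timestamp_s: float           # time of detection

def spray_trigger_time(det: Detection, ground_speed_mps: float,
                       actuation_latency_s: float = 0.05) -> float:
    """Return the absolute time at which the nozzle should open so the spray
    lands on the detected plant, compensating for valve actuation latency."""
    travel_time = det.distance_to_nozzle_m / ground_speed_mps
    return det.timestamp_s + travel_time - actuation_latency_s

# Hypothetical example: plant detected 0.60 m ahead of the nozzle while the
# robot moves at 1.2 m/s.
now = time.monotonic()
t_open = spray_trigger_time(Detection(0.60, now), ground_speed_mps=1.2)
print(f"Open nozzle in {t_open - now:.3f} s")
```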

    A panel model for predicting the diversity of internal temperatures from English dwellings

    Using panel methods, a model for predicting daily mean internal temperature demand across a heterogeneous domestic building stock is developed. The model offers an important link that connects building stock models to human behaviour. It represents the first time a panel model has been used to estimate the dynamics of internal temperature demand from the natural daily fluctuations of external temperature combined with important behavioural, socio-demographic and building efficiency variables. The model is able to predict internal temperatures across a heterogeneous building stock to within ~0.71°C at 95% confidence and to explain 45% of the variance of internal temperature between dwellings. The model confirms hypotheses from sociology and psychology that habitual behaviours are important drivers of home energy consumption. In addition, it offers the possibility of quantifying take-back (the direct rebound effect) owing to increased internal temperatures following the installation of energy efficiency measures. The presence of thermostats or thermostatic radiator valves (TRVs) is shown to reduce average internal temperatures, whereas the use of an automatic timer is statistically insignificant. The number of occupants, household income and occupant age are all important factors that explain a proportion of internal temperature demand. Households with children or retired occupants are shown to have higher average internal temperatures than households without. As expected, building typology, building age, roof insulation thickness, wall U-value and the proportion of double glazing all have positive and statistically significant effects on daily mean internal temperature. In summary, the model can be used as a tool to predict internal temperatures or for making statistical inferences. However, its primary contribution is the ability to calibrate existing building stock models to account for behavioural and socio-demographic effects, making it possible to back out more accurate predictions of domestic energy demand.
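    The paper's panel model is far richer, but the idea of a dwelling-level random intercept combined with external temperature and household covariates can be sketched on synthetic data; every variable name and coefficient below is invented for illustration and is not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel of daily mean internal temperatures for several dwellings.
rng = np.random.default_rng(0)
dwellings, days = 20, 60
df = pd.DataFrame({
    "dwelling":  np.repeat(np.arange(dwellings), days),
    "t_ext":     np.tile(rng.normal(8, 4, days), dwellings),       # external temperature
    "occupants": np.repeat(rng.integers(1, 5, dwellings), days),   # household size
    "insulated": np.repeat(rng.integers(0, 2, dwellings), days),   # efficiency measure
})
dwelling_effect = rng.normal(0, 1.0, dwellings)
df["t_int"] = (17.0 + 0.15 * df["t_ext"] + 0.3 * df["occupants"]
               + 0.5 * df["insulated"]
               + dwelling_effect[df["dwelling"]] + rng.normal(0, 0.5, len(df)))

# Random-intercept panel model: dwelling-level heterogeneity enters through the
# group effect; external temperature and dwelling covariates are fixed effects.
fit = smf.mixedlm("t_int ~ t_ext + occupants + insulated",
                  data=df, groups=df["dwelling"]).fit()
print(fit.summary())
```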

    Modeling and Optimization of Stochastic Process Parameters in Complex Engineering Systems

    For quality engineering researchers and practitioners, a wide range of statistical tools and techniques is available for use in the manufacturing industry. The objective in applying these tools has always been to improve or optimize a product or process in terms of efficiency, production cost, or product quality. While tremendous progress has been made in the design of quality optimization models, there remains a significant gap between existing research and the needs of the industrial community. Contemporary manufacturing processes are inherently more complex - they may involve multiple stages of production or require the assessment of multiple quality characteristics. New and emerging fields, such as nanoelectronics and molecular biometrics, demand degrees of precision and estimation that are not attainable with current tools and measures. And since most researchers focus on a specific type of characteristic or a given set of conditions, there are many critical industrial processes for which models are not applicable. Thus, the objective of this research is to improve existing techniques by expanding not only their range of applicability but also their ability to model a given process more realistically. Several quality models are proposed that seek greater precision in the estimation of process parameters and the removal of assumptions that limit their breadth and scope. An extension is made to examine the effectiveness of these models both under non-standard conditions and in areas that have not been previously investigated. Upon completion of an in-depth literature review, various quality models are proposed, and numerical examples are used to validate these methodologies.

    Variable-selection ANOVA Simultaneous Component Analysis (VASCA)

    Motivation: ANOVA Simultaneous Component Analysis (ASCA) is a popular method for the analysis of multivariate data yielded by designed experiments. Meaningful associations between factors/interactions of the experimental design and measured variables in the dataset are typically identified via significance testing, with permutation tests being the standard go-to choice. However, in settings with large numbers of variables, like omics (genomics, transcriptomics, proteomics and metabolomics) experiments, the ‘holistic’ testing approach of ASCA (all variables considered) often overlooks statistically significant effects encoded by only a few variables (biomarkers). Results: We hereby propose Variable-selection ASCA (VASCA), a method that generalizes ASCA through variable selection, augmenting its statistical power without inflating the Type-I error risk. The method is evaluated with simulations and with a real dataset from a multi-omic clinical experiment. We show that VASCA is more powerful than both ASCA and the widely adopted false discovery rate controlling procedure; the latter is used as a benchmark for variable selection based on multiple significance testing. We further illustrate the usefulness of VASCA for exploratory data analysis in comparison to the popular partial least squares discriminant analysis method and its sparse counterpart.
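    VASCA itself adds variable selection on top of ASCA and is not reproduced here; the sketch below only illustrates the ASCA-style permutation test for a single design factor that it builds on, using synthetic data and an assumed sum-of-squares statistic:

```python
import numpy as np

def asca_factor_pvalue(X, labels, n_perm=1000, seed=0):
    """Permutation p-value for a single design factor in an ASCA-style test.

    The factor effect matrix is built from group means; its total sum of
    squares is compared against the values obtained after permuting the
    factor labels.
    """
    rng = np.random.default_rng(seed)

    def effect_ss(y):
        Xc = X - X.mean(axis=0)                 # center over all samples
        ss = 0.0
        for g in np.unique(y):
            idx = y == g
            ss += idx.sum() * np.sum(Xc[idx].mean(axis=0) ** 2)
        return ss

    observed = effect_ss(labels)
    perms = np.array([effect_ss(rng.permutation(labels)) for _ in range(n_perm)])
    return (np.sum(perms >= observed) + 1) / (n_perm + 1)

# Hypothetical two-group experiment with 50 variables, only 3 of which differ.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 50))
y = np.repeat([0, 1], 20)
X[y == 1, :3] += 1.0
print("Permutation p-value:", asca_factor_pvalue(X, y))
```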