120 research outputs found

    Engineering statistics

    In this entry we seek to put into perspective some of the ways in which statistical methods contribute to modern engineering practice. Engineers design and oversee the production, operation, and maintenance of the products and systems that undergird modern technological society. Their work is built on the foundation of physical (and increasingly biological) science. However, it is of necessity often highly empirical, because there simply isn't scientific theory complete and simple enough to effectively describe all of the myriad circumstances that arise even in engineering design, let alone those encountered in production, operation, and maintenance. As a consequence, engineering is an inherently statistical enterprise. Engineers must routinely collect, summarize, and draw inferences based on data, and it is hard to think of a statistical method that has no potential use in modern engineering. The above said, it is possible to identify classes of statistical methods that have traditionally been associated with engineering applications and some that are increasingly important to the field. This encyclopedia entry will identify some of those and indicate their place in modern engineering practice, with no attempt to provide technical details of their implementation.

    Development programs for one-shot systems using multiple-state design reliability models

    Design reliability at the beginning of a product development program is typically low, and development costs can account for a large proportion of total product cost. We consider how to conduct development programs (series of tests and redesigns) for one-shot systems (which are destroyed at first use or during testing). In rough terms, our aim is to both achieve high final design reliability and spend as little of a fixed budget as possible on development. We employ multiple-state reliability models. Dynamic programming is used to identify a best test-and-redesign strategy and is shown to be presently computationally feasible for at least 5-state models. Our analysis is flexible enough to allow for the accelerated stress testing needed in the case of ultra-high reliability requirements, where testing otherwise provides little information on design reliability change.
    Keywords: development programs, one-shot systems, multiple-state design reliability, test, redesign, optimal programs, dynamic programming, accelerated testing
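    The test-and-redesign idea can be sketched as a small finite-horizon dynamic program. This is a highly simplified illustration, not the authors' model: the states, transition probabilities, reliabilities, and costs below are all hypothetical, and the budget enters only through the number of affordable tests.

    ```python
    # Simplified DP sketch for a test-and-redesign program (hypothetical model).
    # State s indexes design quality; at each stage we either stop (collect the
    # current design reliability) or pay test_cost to test-and-redesign, which
    # moves the design to a new state according to transition row P[s].
    def best_program(P, reliability, test_cost, budget):
        """V[b][s]: best expected final reliability from state s with budget b."""
        S = len(reliability)
        V = [list(reliability)]            # with no budget left, we must stop
        steps = budget // test_cost        # number of affordable test cycles
        for _ in range(steps):
            prev = V[-1]
            V.append([max(reliability[s],                       # stop now
                          sum(P[s][t] * prev[t] for t in range(S)))  # test
                      for s in range(S)])
        return V[-1]
    ```

    For example, with three states, `reliability = [0.5, 0.7, 0.9]`, an improving transition matrix, and a budget covering two unit-cost tests, the DP weighs stopping early against the expected gain from further testing.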

    An Interactive Program for the Analysis of Data from Two-Level Factorial Experiments via Probability Plotting

    An interactive computer program that expedites the analysis for unreplicated two-level factorial and fractional factorial experimental designs advocated by Daniel (1976) and Box, Hunter, and Hunter (1978) is presented. The program calculates estimated effects via the Yates algorithm, identifies statistically detectable effects via normal plots and half normal plots, fits candidate models via the reverse Yates algorithm, and enables evaluation of candidate models through residual plots. The program can handle the analysis of standard 2^(p-q) fractional factorial experiments where p - q <= 7 and can be modified to allow p - q > 7.
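    The Yates algorithm mentioned above can be sketched in a few lines: for responses in standard (Yates) order, repeatedly form pairwise sums followed by pairwise differences, then rescale. The numbers in the example are hypothetical.

    ```python
    # Minimal sketch of the Yates algorithm for an unreplicated 2^k factorial.
    def yates(y):
        """Estimated effects for responses y (length 2**k, standard order).
        Returns [grand mean, effect_A, effect_B, effect_AB, ...]."""
        n = len(y)
        k = n.bit_length() - 1
        assert 2 ** k == n, "length must be a power of two"
        col = list(y)
        for _ in range(k):
            sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
            diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
            col = sums + diffs             # sums on top, differences below
        # divide: 2**k for the grand mean, 2**(k-1) for each effect
        return [col[0] / n] + [c / (n / 2) for c in col[1:]]
    ```

    For a 2^2 design with responses `[7, 9, 11, 13]` in the order (1), a, b, ab, this returns a grand mean of 10 with main effects A = 2, B = 4, and interaction AB = 0.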

    Likelihood-based statistical estimation from quantized data

    Most standard statistical methods treat numerical data as if they were real (infinite-number-of-decimal-places) observations. The issue of quantization or digital resolution is recognized by engineers and metrologists, but is largely ignored by statisticians and can render standard statistical methods inappropriate and misleading. This article discusses some of the difficulties of interpretation and corresponding difficulties of inference arising in even very simple measurement contexts, once the presence of quantization is admitted. It then argues (using the simple case of confidence interval estimation based on a quantized random sample from a normal distribution as a vehicle) for the use of statistical methods based on rounded data likelihood functions as an effective way of dealing with the issue.
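    A rounded-data likelihood treats each recorded value as the interval it could have come from. For a normal sample quantized to resolution delta, each observation y_i contributes Phi((y_i + delta/2 - mu)/sigma) - Phi((y_i - delta/2 - mu)/sigma). A minimal sketch of maximum likelihood under this model (function names and simulated data are hypothetical, not from the article):

    ```python
    # Rounded-data maximum likelihood for a normal sample quantized to
    # resolution delta (illustrative sketch only).
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def rounded_nll(theta, y, delta):
        """Negative log-likelihood treating each y_i as the interval
        [y_i - delta/2, y_i + delta/2]."""
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)          # parameterize to keep sigma > 0
        upper = norm.cdf((y + delta / 2 - mu) / sigma)
        lower = norm.cdf((y - delta / 2 - mu) / sigma)
        return -np.sum(np.log(np.clip(upper - lower, 1e-300, None)))

    def rounded_mle(y, delta):
        y = np.asarray(y, dtype=float)
        start = np.array([y.mean(), np.log(max(y.std(), delta))])
        fit = minimize(rounded_nll, start, args=(y, delta), method="Nelder-Mead")
        mu, log_sigma = fit.x
        return mu, np.exp(log_sigma)
    ```

    This matters most when the resolution delta is comparable to (or larger than) sigma, where the naive sample standard deviation of the rounded values is biased.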

    A sufficient condition related to mistaken intuition about adjusted sums-of-squares in linear regression

    We consider a misconception common among students of statistics involving "adjusted" and "unadjusted" sums-of-squares. While the presence of this misconception has been noted before (e.g. Hamilton (1986)), we argue that it may be related to the language we use in describing the meaning of sums-of-squares. For linear regression with two independent variables, we then present a sufficient condition for SSR( X1 | X2 ) > SSR( X1 ) in terms of the signs of the sample correlations between pairs of predictor and response variables, and note how this sufficient condition may also be related to misconceptions held by some students of statistics.
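    The counterintuitive inequality is easy to exhibit numerically with a "suppressor" configuration. The tiny data set below is hypothetical (not from the paper): x1 is uncorrelated with y, so the unadjusted SSR(X1) is zero, yet y = x2 - x1 exactly, so the adjusted SSR(X1 | X2) is strictly positive.

    ```python
    # Numerical demonstration that SSR(X1 | X2) can exceed SSR(X1).
    import numpy as np

    def sse(X, y):
        """Residual sum of squares from OLS of y on the columns of X
        (an intercept column is appended)."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)

    # Hypothetical 'suppressor' data: corr(x1, y) = 0, but y = x2 - x1 exactly.
    x1 = np.array([1.0, -1.0, 2.0, -2.0, 0.0])
    x2 = np.array([2.0, 0.0, 1.0, -3.0, 0.0])
    y  = np.array([1.0, 1.0, -1.0, -1.0, 0.0])

    sst = float(((y - y.mean()) ** 2).sum())
    ssr_x1_alone = sst - sse(x1[:, None], y)           # unadjusted: 0
    ssr_x1_given_x2 = sse(x2[:, None], y) - sse(np.column_stack([x1, x2]), y)
    # SSR(X1 | X2) = 20/7 > SSR(X1) = 0 for these data
    ```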

    Modeling Spectral–Temporal Data From Point Source Events

    In recent years, a great deal of effort has been invested in developing sensors to detect, locate, and identify “energetic” electromagnetic events. When observed through one type of imaging spectrometer, these events produce a data record that contains complete spectral and temporal information over the event’s evolution. This article describes the development of a statistical model for the data produced by a particular spectral–temporal sensor. While the application is unique in some ways, this approach to model building may be useful in other related contexts. Several plots, estimated parameters, and some additional details for an equation are provided in the Appendix, which is available as supplementary material online.

    The expected sample variance of uncorrelated random variables with a common mean and applications in unbalanced random effects models

    There is a little-known but very simple generalization of the standard result that for uncorrelated variables with a common mean and variance, the expected sample variance is the marginal variance. The generalization justifies the use of the usual standard error of the sample mean in possibly heteroscedastic situations and motivates some simple estimators for unbalanced linear random effects models. The latter is illustrated for the simple one-way context.
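    The generalization is that for uncorrelated observations with a common mean but possibly unequal variances sigma_i^2, E[s^2] equals the average of the sigma_i^2, so s^2/n remains unbiased for Var(Ybar). A quick Monte Carlo check (the variances are hypothetical):

    ```python
    # Monte Carlo check: E[s^2] = average of individual variances for
    # independent observations with a common mean but unequal variances.
    import numpy as np

    def mean_sample_variance(sigmas, mu=5.0, reps=200_000, seed=1):
        rng = np.random.default_rng(seed)
        # each row: one sample of len(sigmas) heteroscedastic observations
        y = rng.normal(mu, sigmas, size=(reps, len(sigmas)))
        s2 = y.var(axis=1, ddof=1)         # sample variance of each row
        return s2.mean()                   # Monte Carlo estimate of E[s^2]

    sigmas = np.array([0.5, 1.0, 2.0, 4.0])
    approx = mean_sample_variance(sigmas)
    exact = np.mean(sigmas ** 2)           # average variance = 5.3125
    ```

    Because Var(Ybar) = (1/n^2) * sum(sigma_i^2) = E[s^2]/n here, the usual standard error s/sqrt(n) is still justified despite the heteroscedasticity.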

    Calibration, error analysis, and ongoing measurement process monitoring for mass spectrometry

    We consider problems of quantifying and monitoring accuracy and precision of measurement in mass spectrometry, particularly in contexts where there are unavoidable day-to-day/period-to-period changes in instrument sensitivity. First we consider the issue of estimating instrument sensitivity based on data from a typical calibration study. Simple method-of-moments methods, likelihood-based methods, and Bayes methods based on the one-way random effects model are illustrated. Then we consider subsequently assessing the precision of an estimate of a mole fraction of a gas of interest in an unknown. Finally, we turn to the problem of ongoing measurement process monitoring and illustrate appropriate set-up of Shewhart control charts in this application.
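    The Shewhart monitoring step rests on standard x-bar chart limits: center line plus or minus three process standard deviations of the subgroup mean. A minimal sketch (the numeric inputs are hypothetical, and this is generic chart arithmetic rather than the paper's specific set-up):

    ```python
    # Generic Shewhart x-bar chart limits for subgroup means of size n,
    # given a center line and a (known) process standard deviation sigma.
    import math

    def xbar_limits(center, sigma, n):
        """Return (LCL, UCL) = center -/+ 3*sigma/sqrt(n)."""
        half_width = 3.0 * sigma / math.sqrt(n)
        return center - half_width, center + half_width

    def out_of_control(xbars, center, sigma, n):
        """Indices of subgroup means falling outside the 3-sigma limits."""
        lcl, ucl = xbar_limits(center, sigma, n)
        return [i for i, x in enumerate(xbars) if x < lcl or x > ucl]
    ```

    With center 10, sigma 2, and subgroups of size 4, the limits are (7, 13), and any subgroup mean outside that band signals a possible shift in the measurement process.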