Approximation to Minimum K-Extended Bayes Risk in Sequences of Finite State Decision Problems and Games
This paper treats a sequence version of the finite state compound decision problem. k-extended standards for the risk of sequence compound procedures are described. Bounds are developed for the risks of a family of procedures employing artificial randomization. In addition, it is noted that the given formulation of the problem includes a game-theoretic situation, and three additional solutions are offered for this specialization.
Admissible Estimators for the Total of a Stratified Population That Employ Prior Information
We consider the problem of estimating the total of a stratified finite population. For two levels of prior knowledge about the stratification, we provide Bayes and pseudo-Bayes estimators that make use of this prior knowledge in sensible ways. We then note that admissibility results can be established for these estimators using the techniques of Meeden and Ghosh (1982, 1983) and indicate some possible natural extensions of the present work.
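As a point of reference for the estimators described above, here is a minimal sketch of the classical stratified expansion estimator of a population total, the non-Bayesian baseline that prior-informed estimators of this kind refine. The stratum sizes and sample values below are invented for illustration and are not taken from the paper.

```python
# Hypothetical data: stratum sizes N_h and within-stratum samples.
strata = {
    "A": {"N": 500, "sample": [12.0, 15.5, 11.2, 14.8]},
    "B": {"N": 300, "sample": [20.1, 22.4, 19.7]},
    "C": {"N": 200, "sample": [8.3, 9.1, 7.9, 8.8, 9.4]},
}

# Stratified expansion estimator: T-hat = sum over strata of N_h * ybar_h.
total_estimate = sum(
    s["N"] * (sum(s["sample"]) / len(s["sample"])) for s in strata.values()
)
print(f"Estimated population total: {total_estimate:.1f}")
```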
A Partial Inventory of Statistical Literature on Quality and Productivity through 1985
In 1984 the American Statistical Association established a Committee on Quality and Productivity. The Publications Subcommittee of this group adopted as one of its tasks the inventory of existing statistical literature on quality and productivity. This article is a partially annotated bibliography produced as a result of that effort.
Engineering statistics
In this entry we seek to put into perspective some of the ways in which statistical methods contribute to modern engineering practice. Engineers design and oversee the production, operation, and maintenance of the products and systems that undergird modern technological society. Their work is built on the foundation of physical (and increasingly biological) science. However, it is of necessity often highly empirical, because there simply isn't scientific theory complete and simple enough to effectively describe all of the myriad circumstances that arise even in engineering design, let alone those encountered in production, operation, and maintenance. As a consequence, engineering is an inherently statistical enterprise. Engineers must routinely collect, summarize, and draw inferences based on data, and it is hard to think of a statistical method that has no potential use in modern engineering. The above said, it is possible to identify classes of statistical methods that have traditionally been associated with engineering applications and some that are increasingly important to the field. This encyclopedia entry will identify some of those and indicate their place in modern engineering practice, with no attempt to provide technical details of their implementation.
Development Programs for One-Shot Systems Using Multiple-State Design Reliability Models
Design reliability at the beginning of a product development program is typically low, and development costs can account for a large proportion of total product cost. We consider how to conduct development programs (series of tests and redesigns) for one-shot systems (which are destroyed at first use or during testing). In rough terms, our aim is to both achieve high final design reliability and spend as little of a fixed budget as possible on development. We employ multiple-state reliability models. Dynamic programming is used to identify a best test-and-redesign strategy and is shown to be presently computationally feasible for at least 5-state models. Our analysis is flexible enough to allow for the accelerated stress testing needed in the case of ultra-high reliability requirements, where testing otherwise provides little information on design reliability change.
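The dynamic-programming idea can be made concrete with a toy model. The state space, costs, and transition structure below are invented for illustration and are far simpler than the paper's models: a design sits in one of five reliability states with success probabilities p[k], each test costs c, a redesign after a failed test upgrades the state with probability u, and V(n, k) is the best achievable value with n tests remaining.

```python
# A minimal dynamic-programming sketch in the spirit of test-and-redesign
# optimization. All numbers here are assumptions for illustration only.
from functools import lru_cache

p = [0.50, 0.75, 0.90, 0.97, 0.99]  # assumed 5-state reliabilities
c, u, R = 1.0, 0.6, 100.0           # assumed test cost, upgrade prob., reward
K = len(p)

@lru_cache(maxsize=None)
def V(n: int, k: int) -> float:
    """Best value (terminal reward R * reliability, minus testing costs)
    achievable from state k with n tests remaining."""
    stop = R * p[k]                 # end development now
    if n == 0:
        return stop
    up = min(k + 1, K - 1)
    # Test: pass -> stay in state k; fail -> redesign upgrades w.p. u.
    test = -c + p[k] * V(n - 1, k) + (1 - p[k]) * (
        u * V(n - 1, up) + (1 - u) * V(n - 1, k)
    )
    return max(stop, test)

print(V(20, 0))  # value of an optimal policy starting from the worst state
```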
Likelihood-Based Inference in Some Continuous Exponential Families With Unknown Threshold
We consider likelihood-based inference in some continuous exponential families with unknown threshold parameters. The introduction of threshold parameters necessitates modification of the standard asymptotic arguments, and some possibly unexpected limiting distributions result.
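By way of illustration (the paper's specific families and arguments are not reproduced here), the two-parameter shifted exponential is a familiar threshold family where the usual normal asymptotics fail: the MLE of the threshold is the sample minimum, and n(theta_hat - theta)/beta has a standard exponential limit rather than a normal one. A quick simulation confirms this.

```python
# Sketch of non-standard threshold asymptotics in the shifted exponential
# family, with density (1/beta) * exp(-(x - theta)/beta) for x >= theta.
# The parameter values below are assumptions for illustration.
import random

random.seed(1)
theta, beta, n, reps = 2.0, 3.0, 200, 5000
scaled = []
for _ in range(reps):
    x = [theta + random.expovariate(1 / beta) for _ in range(n)]
    theta_hat = min(x)              # MLE of the threshold
    scaled.append(n * (theta_hat - theta) / beta)

# The Exp(1) limit has mean 1; a normal CLT centered at 0 would not.
print(sum(scaled) / reps)
```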
Majority Voting by Independent Classifiers Can Increase Error Rates
The technique of “majority voting” of classifiers is used in machine learning with the aim of constructing a new combined classification rule that has better characteristics than any of a given set of rules. The “Condorcet Jury Theorem” is often cited, incorrectly, as support for the claim that this practice leads to an improved classifier (i.e., one with smaller error probabilities) when the given classifiers are sufficiently good and are uncorrelated. We specifically address the case of two-category classification, argue that a correct claim can be made only for independent (not just uncorrelated) classification errors (not the classifiers themselves), and offer an example demonstrating that the common claim is false. Supplementary materials for this article are available online.
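For the independent-errors case, the Condorcet-style calculation is short: with m voters whose errors are independent with common probability e < 1/2, the majority vote errs with probability P(Binomial(m, e) >= (m+1)/2), which shrinks as m grows. The sketch below checks this under that strong independence assumption; the paper's point is that uncorrelatedness alone does not deliver it.

```python
# Majority-vote error probability under *independent* classification errors.
from math import comb

def majority_error(m: int, e: float) -> float:
    """Error probability of a majority vote of m (odd) independent voters,
    each erring with probability e."""
    return sum(comb(m, k) * e**k * (1 - e)**(m - k)
               for k in range((m + 1) // 2, m + 1))

# With e = 0.3 the error shrinks as the panel grows, as the theorem promises.
for m in (1, 3, 5, 11, 21):
    print(m, round(majority_error(m, 0.3), 4))
```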
The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean µ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly heteroscedastic situations, and motivates elementary estimators in even unbalanced linear random effects models. The latter provides nontrivial examples and exercises concerning method-of-moments estimation, and also helps demystify the whole matter of variance component estimation. This is illustrated in general for the simple one-way context and for a specific unbalanced two-factor hierarchical data structure.
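A small simulation illustrates the generalization: for uncorrelated observations with common mean but unequal variances σ_i², E[S²] equals the average variance (1/n)Σσ_i², so S²/n remains unbiased for Var of the sample mean. The variances below are invented for the example.

```python
# Check that E[S^2] equals the average variance under heteroscedasticity.
import random

random.seed(2)
sigmas = [1.0, 2.0, 5.0]          # assumed unequal standard deviations
n, mu, reps = len(sigmas), 10.0, 100_000

s2_sum = 0.0
for _ in range(reps):
    x = [random.gauss(mu, s) for s in sigmas]
    xbar = sum(x) / n
    s2_sum += sum((xi - xbar) ** 2 for xi in x) / (n - 1)

avg_var = sum(s * s for s in sigmas) / n   # (1 + 4 + 25) / 3 = 10
print(s2_sum / reps, "should be near", avg_var)
```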
Likelihood-based statistical estimation from quantized data
Most standard statistical methods treat numerical data as if they were real (infinite-number-of-decimal-places) observations. The issue of quantization or digital resolution can render such methods inappropriate and misleading. This article discusses some of the difficulties of interpretation and corresponding difficulties of inference arising in even very simple measurement contexts, once the presence of quantization is admitted. It then argues (using the simple case of confidence interval estimation based on a quantized random sample from a normal distribution as a vehicle) for the use of statistical methods based on rounded data likelihood functions as an effective way of handling the matter.
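A minimal sketch of a rounded-data likelihood of the kind advocated (the article's exact development is not reproduced here): each recorded value y_i stands for the interval [y_i - d/2, y_i + d/2] at resolution d, and the normal log-likelihood sums log interval probabilities rather than log density values. The data and resolution below are invented for the example.

```python
# Rounded-data normal log-likelihood: interval probabilities, not densities.
from math import erf, sqrt, log

def Phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def rounded_loglik(mu: float, sigma: float, ys, d: float) -> float:
    """Log-likelihood of (mu, sigma) for values recorded at resolution d."""
    return sum(
        log(Phi((y + d / 2 - mu) / sigma) - Phi((y - d / 2 - mu) / sigma))
        for y in ys
    )

ys = [4.0, 4.0, 5.0, 5.0, 5.0, 6.0]   # measurements recorded to d = 1.0
print(rounded_loglik(5.0, 0.8, ys, 1.0))
```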