
    A Note on Teaching Binomial Confidence Intervals

    For constructing confidence intervals for a binomial proportion $p$, Simon (1996, Teaching Statistics) advocates teaching one of two large-sample alternatives to the usual $z$-intervals $\hat{p} \pm 1.96 \times S.E.(\hat{p})$, where $S.E.(\hat{p}) = \sqrt{\hat{p}(1-\hat{p})/n}$. His recommendation is based on comparing how closely the achieved coverage of each system of intervals matches its nominal level. This teaching note shows that a different alternative to $z$-intervals, called $q$-intervals, is strongly preferred to either method recommended by Simon. First, $q$-intervals are more easily motivated than even $z$-intervals because they require only a straightforward application of the Central Limit Theorem (without the need to estimate the variance of $\hat{p}$ and to justify that this perturbation does not affect the limiting normal distribution). Second, $q$-intervals do not involve the ad-hoc continuity corrections used by the proposals in Simon. Third, $q$-intervals have substantially better achieved coverage than either system recommended by Simon.
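    Reading the note's $q$-interval as the score-type interval obtained by inverting the CLT statistic with the true variance $p(1-p)/n$ (an identification that is our assumption, though it matches the motivation described above), here is a minimal Python sketch that computes both intervals and checks achieved coverage by simulation:

    ```python
    # Sketch: the usual z- (Wald) interval vs. the interval obtained by
    # inverting |p_hat - p| <= z * sqrt(p(1-p)/n) directly for p, i.e.
    # without plugging in an estimate of the variance. The "q-interval"
    # label is our reading of the note; 1.96 assumes a 95% nominal level.
    import numpy as np

    Z = 1.96  # standard-normal quantile for a nominal 95% level

    def z_interval(x, n):
        """Wald interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
        p_hat = x / n
        se = np.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - Z * se, p_hat + Z * se

    def q_interval(x, n):
        """Solve the quadratic (p_hat - p)^2 = z^2 p(1-p)/n for p."""
        p_hat = x / n
        a = 1 + Z**2 / n
        center = p_hat + Z**2 / (2 * n)
        half = Z * np.sqrt(p_hat * (1 - p_hat) / n + Z**2 / (4 * n**2))
        return (center - half) / a, (center + half) / a

    def coverage(interval, p, n, reps=100_000, rng=np.random.default_rng(0)):
        """Monte Carlo estimate of achieved coverage at a fixed true p."""
        x = rng.binomial(n, p, size=reps)
        lo, hi = interval(x, n)
        return np.mean((lo <= p) & (p <= hi))

    for p in (0.05, 0.2, 0.5):
        print(p, coverage(z_interval, p, n=30), coverage(q_interval, p, n=30))
    ```

    Under this reading, the coverage comparison at small $n$ and extreme $p$ is exactly the kind of check the note's recommendation rests on.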

    These Aren't Your Mothers' and Fathers' Experiments (Abstract)

    Informal experimentation is as old as humankind. Statisticians became seriously involved in the conduct of experiments during the early 1900s, when they devised methods for designing efficient field trials to improve agricultural yields. Over the course of the 1900s, statistical methodology was developed for many complicated sampling settings and a wide variety of design objectives.

    Screening Procedures to Identify Robust Product or Process Designs Using Fractional Factorial Experiments

    In many quality improvement experiments, there are one or more "control" factors that can be modified to determine a final product design or manufacturing process, and one or more "environmental" (or "noise") factors that vary under field or manufacturing conditions. In many applications, the product design or process design is considered seriously flawed if its performance is poor for any level of the environmental factor. For example, if a particular prosthetic heart valve design has poor fluid flow characteristics for certain flow rates, then a manufacturer will not want to put this design into production. Thus this paper considers cases in which it is appropriate to define a product's quality as its worst performance over the levels of the environmental factor. We consider the frequently occurring case of combined-array experiments and extend the subset selection methodology of Gupta (1956, 1965) to provide statistical screening procedures that identify product designs maximizing the worst-case performance over the environmental conditions. A case study is provided to illustrate the proposed procedures.
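    A minimal sketch of the screening step described here, with assumptions flagged: the data layout y[i, j, r], the function names, and the cutoff d are ours, and in the actual procedure d would come from tabulated critical values rather than being chosen by hand:

    ```python
    # Score each product design by its *worst* mean performance over the
    # environmental levels, then keep every design whose score is within
    # a cutoff d of the best score (a Gupta-style subset selection rule).
    import numpy as np

    def worst_case_scores(y):
        """y[i, j, r]: replicate r of design i at environmental level j.
        Returns each design's minimum, over levels, of its level means."""
        level_means = y.mean(axis=2)      # shape: (designs, levels)
        return level_means.min(axis=1)    # worst level mean per design

    def gupta_subset(scores, d):
        """Retain design i iff scores[i] >= max(scores) - d."""
        return np.flatnonzero(scores >= scores.max() - d)

    # Illustration with simulated (hypothetical) data:
    # 4 designs, 3 environmental levels, 5 replicates.
    rng = np.random.default_rng(1)
    mu = np.array([[5, 4, 6], [7, 6, 6], [7, 2, 8], [6, 6, 5]], float)
    y = mu[:, :, None] + rng.normal(0.0, 1.0, size=(4, 3, 5))
    scores = worst_case_scores(y)
    print(scores, gupta_subset(scores, d=1.0))
    ```

    The returned subset contains the designs whose worst-case performance is statistically indistinguishable from the best, which is the spirit of Gupta's rule.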

    Selection and Screening Procedures to Determine Optimal Product Designs. (REVISED, April 1997)

    To compare several promising product designs, manufacturers must measure their performance under multiple environmental conditions. In many applications, a product design is considered seriously flawed if its performance is poor under any level of the environmental factor. For example, if a particular automobile battery design does not function well under some temperature conditions, then a manufacturer may not want to put this design into production. Thus, in this paper we take the overall measure of a given product's quality to be its worst performance over the environmental levels. We develop statistical procedures to identify the (near) optimal product design among a given set of product designs, i.e., the design associated with the greatest overall measure of performance. We accomplish this with intuitive procedures based on the split-plot experimental design (and the randomized complete block design as a special case); split-plot designs have the essential structure of a product array and the practical convenience of local randomization. Two classes of statistical procedures are provided. In the first, the delta-best formulation of selection problems, we determine the number of replications of the basic split-plot design needed to guarantee, with a given confidence level, the selection of a product design whose minimum performance is within a specified amount, delta, of that of the optimal product design. In particular, if the difference between the quality of the best and second-best designs is delta or more, then the procedure guarantees that the best design will be selected with the specified probability. For applications where a split-plot experiment involving several product designs has been completed without the planning required by the delta-best formulation, we provide procedures to construct a "confidence subset" of the designs; the selected subset contains the optimal product design with a prespecified confidence level. The latter is called the subset selection formulation of selection problems. Examples are provided to illustrate the procedures.
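    The two guarantees can be stated compactly. The notation below is ours: mu_ij for the mean performance of design i at environmental level j, and P* for the prescribed confidence level; everything else restates the abstract.

    ```latex
    % Worst-case quality of design i over environmental levels,
    % with ordered values theta_[1] <= ... <= theta_[k]:
    \[
      \theta_i = \min_j \mu_{ij}, \qquad
      \theta_{[1]} \le \cdots \le \theta_{[k]} .
    \]
    % Delta-best formulation: choose the number of replications so that
    \[
      \Pr\bigl(\text{select the design attaining } \theta_{[k]}\bigr) \ge P^{*}
      \quad \text{whenever} \quad \theta_{[k]} - \theta_{[k-1]} \ge \delta .
    \]
    % Subset selection formulation: the selected subset S satisfies
    \[
      \Pr\bigl(\text{best design} \in S\bigr) \ge P^{*} ,
    \]
    % with no requirement on the gap between the best and second-best designs.
    ```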

    The Use of Subset Selection in Combined Array Experiments to Determine Optimal Product or Process Designs. (REVISED, June 1997)

    A number of authors in the quality control literature have advocated the use of combined arrays in screening experiments to identify robust product or process designs [Shoemaker, Tsui, and Wu (1991); Nair et al. (1992); Myers, Khuri, and Vining (1992), for example]. This paper considers a product manufacturing or process design setting in which there are several factors under the control of the manufacturer, called control settings, and other environmental (noise) factors that vary under field or manufacturing conditions. We show how Gupta's subset selection philosophy [Gupta (1956, 1965)] can be used in such a quality improvement setting to identify combinations of the levels of the control factors that correspond either to products that are robust to environmental variations during their use, or to processes that fabricate items whose quality is independent of variations in the raw materials used in their manufacture.

    09181 Abstracts Collection -- Sampling-based Optimization in the Presence of Uncertainty

    This Dagstuhl seminar brought together researchers from statistical ranking and selection; experimental design and response-surface modeling; stochastic programming; approximate dynamic programming; optimal learning; and the design and analysis of computer experiments, with the goals of attaining a much better mutual understanding of the commonalities and differences among the various approaches to sampling-based optimization and of taking first steps toward an overarching theory that encompasses many of the topics above.

    Pointwise consistency of the kriging predictor with known mean and covariance functions

    This paper deals with several issues related to the pointwise consistency of the kriging predictor when the mean and the covariance functions are known. These questions are of general importance in the context of computer experiments. The analysis is based on the properties of approximations in reproducing kernel Hilbert spaces. We fix an erroneous claim of Yakowitz and Szidarovszky (J. Multivariate Analysis, 1985) that the kriging predictor is pointwise consistent for all continuous sample paths under some assumptions. Comment: Submitted to mODa9 (the Model-Oriented Data Analysis and Optimum Design Conference), 14th-19th June 2010, Bertinoro, Italy.
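    For concreteness, a small sketch of the object under study: the simple kriging predictor when the mean (taken as zero) and the covariance function are known. The squared-exponential kernel, the jitter, and the stand-in sample path are illustrative choices of ours, not the paper's:

    ```python
    # Simple kriging with known zero mean and known covariance k:
    # the predictor at x_new is E[Y(x_new) | Y(x_obs)] = k(x_new, X) K^{-1} y.
    import numpy as np

    def k(a, b, length=0.3):
        """Squared-exponential covariance; an illustrative kernel choice."""
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    def kriging_predict(x_new, x_obs, y_obs, jitter=1e-8):
        """k(x_new, X) K^{-1} y; a tiny jitter keeps the solve stable."""
        K = k(x_obs, x_obs) + jitter * np.eye(len(x_obs))
        return k(x_new, x_obs) @ np.linalg.solve(K, y_obs)

    # As design points fill [0, 1], the predictor at a fixed point should
    # track the observed sample path for well-behaved kernels; whether and
    # when this holds pointwise is precisely the paper's question.
    rng = np.random.default_rng(0)
    x_obs = np.sort(rng.uniform(0.0, 1.0, 40))
    f = lambda x: np.sin(6.0 * x)     # hypothetical stand-in sample path
    print(kriging_predict(np.array([0.5]), x_obs, f(x_obs)), f(0.5))
    ```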

    Cosmic Calibration: Constraints from the Matter Power Spectrum and the Cosmic Microwave Background

    Several cosmological measurements have attained significant levels of maturity and accuracy over the last decade. Continuing this trend, future observations promise measurements of the statistics of the cosmic mass distribution at an accuracy level of one percent out to spatial scales with k ~ 10 h/Mpc and even smaller, entering highly nonlinear regimes of gravitational instability. In order to interpret these observations and extract useful cosmological information from them, such as the equation of state of dark energy, very costly, high-precision, multi-physics simulations must be performed. We have recently implemented a new statistical framework with the aim of obtaining accurate parameter constraints from combining observations with a limited number of simulations. The key idea is the replacement of the full simulator by a fast emulator with controlled error bounds. In this paper, we provide a detailed description of the methodology and extend the framework to include joint analysis of cosmic microwave background and large-scale structure measurements. Our framework is especially well suited for upcoming large-scale structure probes of dark energy such as baryon acoustic oscillations and, in particular, weak lensing, where percent-level accuracy on nonlinear scales is needed. Comment: 15 pages, 14 figures.
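    The emulator idea can be illustrated generically: fit a fast surrogate to a modest number of expensive simulator runs over the parameter space, then predict at new parameter settings. The sketch below uses a plain Gaussian-process surrogate with a fixed kernel; it is not the authors' calibrated framework, and expensive_simulator is a hypothetical stand-in:

    ```python
    # A bare-bones Gaussian-process emulator: train on a few "simulation"
    # runs, then predict cheaply at new parameter settings.
    import numpy as np

    def kernel(A, B, length=0.5):
        """Squared-exponential kernel on (n, d) parameter arrays."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    class GPEmulator:
        def fit(self, X, y, jitter=1e-8):
            self.X = X
            K = kernel(X, X) + jitter * np.eye(len(X))
            self.alpha = np.linalg.solve(K, y)   # cache K^{-1} y
            return self

        def predict(self, X_new):
            return kernel(X_new, self.X) @ self.alpha

    def expensive_simulator(theta):              # hypothetical stand-in
        return np.sin(theta[:, 0]) * np.cos(theta[:, 1])

    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, np.pi, size=(30, 2))    # 30 "simulator" runs
    emu = GPEmulator().fit(X, expensive_simulator(X))
    X_test = rng.uniform(0.0, np.pi, size=(5, 2))
    print(emu.predict(X_test))
    print(expensive_simulator(X_test))
    ```

    In the actual framework, controlling the emulator's error bounds over the parameter space is the key requirement; this sketch only shows the surrogate mechanics.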