
    Confidence intervals for reliability growth models with small sample sizes

    Fully Bayesian approaches to analysis can be overly ambitious where realistic limits exist on the ability of experts to provide prior distributions for all relevant parameters. This research was motivated by situations where expert judgement can support the development of prior distributions describing the number of faults potentially inherent within a design, but cannot support useful descriptions of the rate at which they would be detected during a reliability-growth test. This paper develops inference properties for a reliability-growth model. The approach assumes a prior distribution for the ultimate number of faults that would be exposed if testing were to continue ad infinitum, but estimates the parameters of the intensity function empirically. A fixed-point iteration procedure for obtaining the maximum likelihood estimate is investigated for bias and conditions of existence. The main purpose of this model is to support inference in situations where failure data are few. A procedure for constructing statistical confidence intervals is investigated and shown to be suitable for small sample sizes. An application of these techniques is illustrated with an example.
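
    The paper's exact likelihood is not reproduced in the abstract, so the sketch below is a minimal illustration only, assuming a Goel-Okumoto-style mean function m(t) = a(1 - e^(-bt)): it obtains the detection-rate MLE by direct fixed-point iteration on the score equation, the kind of procedure whose bias and existence conditions the paper investigates. The function name and parameterisation are assumptions.

```python
import numpy as np

def go_mle_fixed_point(times, T, b0=1e-3, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration for the detection-rate MLE (Goel-Okumoto style).

    times : fault-detection times t_1..t_n observed in (0, T]
    T     : total test duration
    Iterates b <- n / (sum(t_i) + n*T*exp(-b*T)/(1 - exp(-b*T))), the score
    equation for b, then recovers a = n / (1 - exp(-b*T)).
    A finite MLE need not exist for every data set (roughly, the mean
    detection time must fall below T/2), hence the iteration cap.
    """
    t = np.asarray(times, dtype=float)
    n, s = t.size, t.sum()
    b = b0
    for _ in range(max_iter):
        b_new = n / (s + n * T * np.exp(-b * T) / (1.0 - np.exp(-b * T)))
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = n / (1.0 - np.exp(-b * T))
    return a, b
```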

    Prediction intervals for reliability growth models with small sample sizes

    Engineers and practitioners contribute to society through their ability to apply basic scientific principles to real problems in an effective and efficient manner. They must collect data to test their products every day as part of the design and testing process, and also after the product or process has been rolled out, to monitor its effectiveness. Model building, data collection, data analysis and data interpretation form the core of sound engineering practice. After the data have been gathered, the engineer must be able to sift and interpret them correctly so that meaning can be drawn from a mass of undifferentiated numbers or facts. To do this he or she must be familiar with the fundamental concepts of correlation, uncertainty, variability and risk in the face of uncertainty. In today's global and highly competitive environment, continuous improvement in the processes and products of any field of engineering is essential for survival. Many organisations have shown that the first step towards continuous improvement is to integrate the widespread use of statistics and basic data analysis into the manufacturing development process, as well as into the day-to-day business decisions taken in regard to engineering processes. The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning and to give them realistic predictions of how these could be improved.

    Optimal discrete stopping times for reliability growth tests

    Often, the duration of a reliability growth development test is specified in advance, and the decision to terminate or continue testing is made at discrete time intervals. These features are normally not captured by reliability growth models. This paper adapts a standard reliability growth model to determine the optimal time at which to plan to terminate testing. The underlying stochastic process is developed from an Order Statistic argument, with Bayesian inference used to estimate the number of faults within the design and classical inference procedures used to assess the rate of fault detection. Inference procedures within this framework are explored, and it is shown that the Maximum Likelihood Estimators possess a small bias and converge to the Minimum Variance Unbiased Estimator after a few tests for designs with a moderate number of faults. It is shown that the Likelihood function can be bimodal when there is conflict between the observed rate of fault detection and the prior distribution describing the number of faults in the design. An illustrative example is provided.
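
    As a toy illustration of choosing a discrete stopping time, the sketch below minimises a simple expected-cost criterion over candidate review stages. The cost structure (a fixed cost per test stage plus a field cost per fault still undetected at release) and the Goel-Okumoto-style mean function are hypothetical stand-ins, not the decision model of the paper.

```python
import numpy as np

def optimal_stop_stage(a_hat, b_hat, dt, n_stages, c_test, c_field):
    """Pick the review stage minimising a simple expected-cost criterion.

    Hypothetical cost model: each test stage of length dt costs c_test,
    and each fault expected to remain undetected at release costs c_field.
    Under a mean function m(t) = a*(1 - exp(-b*t)), the expected number of
    residual faults after k stages is a*exp(-b*k*dt).
    """
    k = np.arange(1, n_stages + 1)
    expected_cost = c_test * k + c_field * a_hat * np.exp(-b_hat * k * dt)
    return int(k[np.argmin(expected_cost)]), expected_cost

# Example: 20 possible stages of 50 hours each, ~10 faults expected in total.
# stage, costs = optimal_stop_stage(10.0, 0.002, 50.0, 20, 1.0, 25.0)
```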

    Nonparametric bootstrapping of the reliability function for multiple copies of a repairable item modeled by a birth process

    Nonparametric bootstrap inference is developed for the reliability function estimated from censored, nonstationary failure time data for multiple copies of repairable items. We assume that each copy has a known, but not necessarily the same, observation period; and upon failure of one copy, design modifications are implemented for all copies operating at that time to prevent further failures arising from the same fault. This implies that, at any point in time, all operating copies will contain the same set of faults. Failures are modeled as a birth process because there is a reduction in the rate of occurrence at each failure. The data structure comprises a mix of deterministic and random censoring mechanisms, corresponding to the known observation period of the copy and the random censoring time of each fault. Hence, bootstrap confidence intervals and regions for the reliability function measure the length of time a fault can remain within the item before being realized as a failure in one of the copies. Explicit formulae derived for the re-sampling probabilities greatly reduce dependency on Monte Carlo simulation. Investigations show a small bias arising in re-sampling that can be quantified and corrected. The variability generated by the re-sampling approach approximates the variability in the underlying birth process, and so supports appropriate inference. An illustrative example describes an application to a problem and discusses the validity of modeling assumptions within industrial practice.
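
    The paper's explicit re-sampling probabilities are not given in the abstract; purely as a generic illustration of the bootstrap step, the sketch below computes a percentile interval for R(t0) = P(T > t0) from complete (uncensored) failure times, a far simpler setting than the censored birth-process data the paper handles.

```python
import numpy as np

def bootstrap_reliability_ci(times, t0, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for R(t0) = P(T > t0) from complete data.

    A generic sketch only: the paper derives explicit re-sampling
    probabilities for censored birth-process data, which this simple
    re-sampler does not attempt.
    """
    rng = np.random.default_rng(seed)
    t = np.asarray(times, dtype=float)
    est = float((t > t0).mean())
    boot = np.empty(n_boot)
    for i in range(n_boot):
        boot[i] = (rng.choice(t, size=t.size, replace=True) > t0).mean()
    lo, hi = np.quantile(boot, [alpha / 2.0, 1.0 - alpha / 2.0])
    return est, (float(lo), float(hi))
```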

    Estimating regional unemployment with mobile network data for Functional Urban Areas in Germany

    The ongoing growth of cities due to better job opportunities is leading to increased labour-related commuter flows in several countries. On the one hand, an increasing number of people commute and move to the cities; on the other hand, the labour market indicates higher unemployment rates in urban areas than in the surrounding areas. We investigate this phenomenon at the regional level with an alternative definition of unemployment rates in which commuting behaviour is integrated. We combine data from the Labour Force Survey (LFS) with dynamic mobile network data via small area models for the federal state of North Rhine-Westphalia in Germany. From a methodological perspective, we use a transformed Fay-Herriot model with bias correction for the estimation of unemployment rates, and propose a parametric bootstrap for the Mean Squared Error (MSE) estimation that includes the bias correction. The performance of the proposed methodology is evaluated in a case study based on official data and in model-based simulations. The results in the application show that unemployment rates (adjusted for commuters) in German cities are lower than traditional official unemployment rates indicate.
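
    For readers unfamiliar with the area-level model this builds on, the sketch below fits a basic (untransformed) Fay-Herriot model with an iterative moment-style variance update and returns the EBLUP estimates; the paper's transformed model, bias correction, and parametric-bootstrap MSE are not implemented here.

```python
import numpy as np

def fay_herriot_eblup(y, X, D, n_iter=50):
    """EBLUP under a basic area-level Fay-Herriot model.

    y : direct survey estimates per area
    X : area-level covariates (include a column of ones for an intercept)
    D : known sampling variances of the direct estimates
    Illustrative only: the paper uses a *transformed* Fay-Herriot model
    with bias correction and a parametric bootstrap for the MSE.
    """
    y, D = np.asarray(y, dtype=float), np.asarray(D, dtype=float)
    m, p = X.shape
    sigma_u = np.var(y)                      # crude starting value
    for _ in range(n_iter):
        w = 1.0 / (sigma_u + D)              # precision weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        resid = y - X @ beta
        # Moment-style update: push sum(w * resid^2) towards m - p
        sigma_u = max(sigma_u + (np.sum(w * resid**2) - (m - p)) / np.sum(w),
                      1e-8)
    gamma = sigma_u / (sigma_u + D)          # area-specific shrinkage factors
    return gamma * y + (1.0 - gamma) * (X @ beta)
```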

    Estimating the size of dog populations in Tanzania to inform rabies control

    Estimates of dog population sizes are a prerequisite for delivering effective canine rabies control. However, dog population sizes are generally unknown in most rabies-endemic areas. Several approaches have been used to estimate dog populations, but without rigorous evaluation. We compare post-vaccination transects, household surveys, and school-based surveys to determine which most precisely estimates dog population sizes. These methods were implemented across 28 districts in southeast Tanzania, in conjunction with mass dog vaccinations, covering a range of settings, livelihoods, and religious backgrounds. Transects were the most precise method, revealing highly variable patterns of dog ownership, with human/dog ratios ranging from 12.4:1 to 181.3:1 across districts. Both household and school-based surveys generated imprecise and sometimes inaccurate estimates, owing to small sample sizes relative to the heterogeneity in patterns of dog ownership. Transect data were subsequently used to develop a predictive model for estimating dog populations in districts lacking transect data. We predicted a dog population of 2,316,000 (95% CI 1,573,000–3,122,000) in Tanzania and an average human/dog ratio of 20.7:1. Our modelling approach has the potential to be applied to predicting dog population sizes in other areas where mass dog vaccinations are planned, given census and livelihood data. Furthermore, we recommend post-vaccination transects as a rapid and effective method to refine dog population estimates across large geographic areas and to guide dog vaccination programmes in settings with mostly free-roaming dog populations.

    Bayesian correction for covariate measurement error: a frequentist evaluation and comparison with regression calibration

    Bayesian approaches for handling covariate measurement error are well established, yet arguably remain relatively little used by researchers. For some, this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others, a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration (RC), arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages over RC, and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next we describe the closely related maximum likelihood and multiple imputation approaches, and explain why we believe the Bayesian approach to be generally preferable. We then empirically compare the frequentist properties of RC and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach in handling both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
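
    Since the abstract contrasts the Bayesian approach with regression calibration without spelling RC out, here is a minimal RC sketch for a single error-prone covariate. It assumes a classical additive error model with two replicate measurements per subject; the replicate design and function names are assumptions for illustration, not necessarily the paper's data structure.

```python
import numpy as np

def regression_calibration(w1, w2, y):
    """Minimal regression-calibration sketch for one error-prone covariate.

    w1, w2 : two replicate error-prone measurements of the true covariate X
    y      : outcome
    Step 1 estimates E[X | W] under the classical model W = X + U;
    step 2 regresses the outcome on the calibrated covariate.
    """
    w1, w2, y = (np.asarray(a, dtype=float) for a in (w1, w2, y))
    w_bar = (w1 + w2) / 2.0
    sigma_u2 = np.var(w1 - w2, ddof=1) / 2.0        # var(U) from replicates
    lam = max(1.0 - (sigma_u2 / 2.0) / np.var(w_bar, ddof=1), 0.0)
    x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())  # E[X | W-bar]
    # Ordinary least squares of y on the calibrated covariate
    A = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(A, y, rcond=None)[0]     # intercept, slope
```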

    The Evolution of the Galaxy Sizes in the NTT Deep Field: a Comparison with CDM Models

    The sizes of field galaxies with I < 25 have been measured in the NTT Deep Field. Intrinsic sizes have been obtained after deconvolution of the PSF with a multigaussian method. The reliability of the method has been tested using both simulated data and HST observations of the same field. The distribution of the half-light radii is peaked at r_{hl} ~ 0.3 arcsec, in good agreement with that derived from HST images at the same magnitude. An approximate morphological classification has been obtained using the asymmetry and concentration parameters. The intrinsic sizes of the galaxies are shown as a function of their redshifts and absolute magnitudes, using photometric redshifts derived from the multicolor catalog. While the brighter galaxies with morphological parameters typical of normal spirals show a flat distribution in the range r_d = 1-6 kpc, the fainter population at 0.4 < z < 0.8 dominates at small sizes. To explore the significance of this behaviour, an analytical rendition of the standard CDM model for disc size evolution has been computed. The model showing the best fit to the local luminosity function and the Tully-Fisher relation is able to reproduce at intermediate redshifts a size distribution in general agreement with the observations, although it tends to underestimate the number of galaxies fainter than M_B ~ -19 with disk sizes r_d ~ 1-2 kpc. Comment: 16 pages, 11 figures, ApJ in press, Dec 199
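
    The multigaussian PSF deconvolution itself is not described in the abstract; as a much cruder stand-in, the sketch below applies the common Gaussian approximation of removing the PSF width in quadrature to estimate an intrinsic half-light radius. Illustrative only, not the paper's method.

```python
import numpy as np

def intrinsic_half_light_radius(r_obs, r_psf):
    """Crude PSF correction by subtraction in quadrature.

    Treats both the galaxy profile and the PSF as Gaussian, so that
    r_int^2 ~ r_obs^2 - r_psf^2 (radii in arcsec, or any common unit);
    the paper instead deconvolves the PSF with a multigaussian method.
    """
    r_obs, r_psf = np.asarray(r_obs, dtype=float), np.asarray(r_psf, dtype=float)
    return np.sqrt(np.maximum(r_obs**2 - r_psf**2, 0.0))
```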
