    Statistical Reliability with Applications

    This chapter reviews fundamental ideas in reliability theory and inference. The first part of the chapter accounts for lifetime distributions that are used in engineering reliability analysis, including general properties of reliability distributions that pertain to lifetime for manufactured products. Certain distributions are formulated on the basis of simple physical properties, and others are more or less empirical. The first part of the chapter ends with a description of graphical and analytical methods to find appropriate lifetime distributions for a set of failure data. The second part of the chapter describes statistical methods for analyzing reliability data, including maximum likelihood estimation and likelihood ratio testing. Degradation data are more prevalent in experiments in which failure is rare and test time is limited. Special regression techniques for degradation data can be used to draw inference on the underlying lifetime distribution, even if failures are rarely observed. The last part of the chapter discusses reliability for systems. Along with the components that comprise the system, reliability analysis must take account of the system configuration and (stochastic) component dependencies. System reliability is illustrated with an analysis of logistics systems (e.g., moving goods in a system of product sources and retail outlets). Robust reliability design can be used to construct a supply chain that runs with maximum efficiency or minimum cost.
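    The maximum likelihood approach to lifetime data mentioned above can be sketched with a common choice of lifetime model, the Weibull distribution. This is a minimal illustration, not code from the chapter; the failure data are synthetic and all parameter values are assumptions.

    ```python
    # Sketch: fit a Weibull lifetime distribution to failure data by
    # maximum likelihood, then estimate the reliability (survival)
    # function. Synthetic data; parameters are illustrative only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Simulate 50 failure times from a Weibull with shape 2.0, scale 100.0
    failures = rng.weibull(2.0, size=50) * 100.0

    # MLE fit; the location is fixed at 0, as is usual for lifetime data
    shape, loc, scale = stats.weibull_min.fit(failures, floc=0)
    print(f"estimated shape = {shape:.2f}, scale = {scale:.1f}")

    # Estimated reliability at t = 50, i.e. P(lifetime > 50)
    R50 = stats.weibull_min.sf(50, shape, loc=0, scale=scale)
    print(f"estimated R(50) = {R50:.3f}")
    ```

    The same fitted likelihood can then feed a likelihood ratio test, for example comparing the fitted shape against an exponential model (shape fixed at 1).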

    Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary

    Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, threshold state and time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research. Comment: Published at http://dx.doi.org/10.1214/088342306000000330 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
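    The Wiener-process case described above can be simulated directly: a process with positive drift starts at 0, and the "event" occurs the first time it crosses a threshold. The sketch below is illustrative, not from the paper; all parameter values are assumptions, and it uses the known result that this hitting time follows an inverse Gaussian distribution with mean (threshold / drift).

    ```python
    # Sketch: first hitting time of a drifting Wiener process at a
    # fixed threshold, simulated on a discrete grid. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, a = 1.0, 1.0, 5.0        # drift, volatility, threshold (assumed)
    dt, n_paths, n_steps = 0.01, 2000, 2000

    # Simulate Wiener-process sample paths started at 0
    steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(steps, axis=1)

    # First time each path crosses the threshold a
    crossed = paths >= a
    hit_mask = crossed.any(axis=1)
    hit_times = (crossed.argmax(axis=1)[hit_mask] + 1) * dt

    # Theory: hitting time is inverse Gaussian with mean a/mu = 5.0
    print(f"fraction of paths that hit: {hit_mask.mean():.3f}")
    print(f"mean hitting time: {hit_times.mean():.2f} (theory {a/mu:.2f})")
    ```

    In threshold regression, parameters such as the drift `mu` or the starting level would be modeled as functions of covariates rather than fixed constants.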

    Spatiotemporal Patterns and Predictability of Cyberattacks

    Y.C.L. was supported by Air Force Office of Scientific Research (AFOSR) under grant no. FA9550-10-1-0083 and Army Research Office (ARO) under grant no. W911NF-14-1-0504. S.X. was supported by Army Research Office (ARO) under grant no. W911NF-13-1-0141. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Peer reviewed. Publisher PDF.

    Optimal Experimental Planning of Reliability Experiments Based on Coherent Systems

    In industrial engineering and manufacturing, assessing the reliability of a product or system is an important topic. Life-testing and reliability experiments are commonly used reliability assessment methods to gain sound knowledge about product or system lifetime distributions. Usually, a sample of items of interest is subjected to stresses and environmental conditions that characterize the normal operating conditions. During the life-test, successive times to failure are recorded and lifetime data are collected. Life-testing is useful in many industrial environments, including the automobile, materials, telecommunications, and electronics industries. There are different kinds of life-testing experiments that can be applied for different purposes. For instance, accelerated life tests (ALTs) and censored life tests are commonly used to acquire information in reliability and life-testing experiments in the presence of time and resource limitations. Statistical inference based on the data obtained from a life test and effectively planning a life-testing experiment subject to some constraints are two important problems statisticians are interested in. The experimental design problem for a life test has long been studied; however, experimental planning that considers putting the experimental units into systems for a life-test has not been studied. In this thesis, we study the optimal experimental planning problem in multiple stress levels life-testing experiments and progressively Type-II censored life-testing experiments when the test units can be put into coherent systems for the experiment. Based on the notion of system signature, a tool in structural reliability to represent the structure of a coherent system, under different experimental settings, models and assumptions, we derive the maximum likelihood estimators of the model parameters and the expected Fisher information matrix.
Then, we use the expected Fisher information matrix to obtain the asymptotic variance-covariance matrix of the maximum likelihood estimators when n-component coherent systems are used in the life-testing experiment. Based on different optimality criteria, such as D-optimality, A-optimality and V-optimality, we obtain the optimal experimental plans under different settings. Numerical and Monte Carlo simulation studies are used to demonstrate the advantages and disadvantages of using systems in life-testing experiments.
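The system signature mentioned above can be computed by enumerating the equally likely orders in which components fail and recording at which ordered failure the system breaks down. The sketch below uses a hypothetical 3-component coherent system (component 1 in series with a parallel pair), not a system from the thesis.

```python
# Sketch: compute the signature of a small coherent system by brute
# force over all failure orders. The system structure is illustrative.
from fractions import Fraction
from itertools import permutations

def system_works(state):
    """Hypothetical coherent system: component 1 in series with a
    parallel pair (components 2 and 3)."""
    x1, x2, x3 = state
    return x1 and (x2 or x3)

n = 3
fail_counts = [0] * n
# Each of the n! failure orders is equally likely for i.i.d. lifetimes
for order in permutations(range(n)):
    state = [True] * n
    for i, comp in enumerate(order):
        state[comp] = False            # the (i+1)-th component failure
        if not system_works(state):
            fail_counts[i] += 1        # system dies at this failure
            break

# Signature entry s_i = P(system fails at the i-th ordered failure)
signature = [Fraction(c, 6) for c in fail_counts]
print(signature)  # [Fraction(1, 3), Fraction(2, 3), Fraction(0, 1)]
```

The signature depends only on the system structure, not on the component lifetime distribution, which is what makes it useful for comparing experimental plans across different system configurations.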