    Big Data and Reliability Applications: The Complexity Dimension

    Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can generate big data usable in reliability analysis. Meeker and Hong (2014) focused on large-scale system operating and environment data (i.e., high-frequency multivariate time series data), and provided examples of how to link such data as covariates to traditional reliability responses such as time to failure, time to recurrence of events, and degradation measurements. This paper extends that discussion by focusing on how to use data with complicated structures in reliability analysis. Such data types include high-dimensional sensor data, functional curve data, and image streams. We first review recent developments in those directions, and then discuss how analytical methods can be developed to tackle the challenges that arise from the complexity of big data in reliability applications. The use of modern statistical methods such as variable selection, functional data analysis, scalar-on-image regression, spatio-temporal data models, and machine learning techniques is also discussed. Comment: 28 pages, 7 figures.

    Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary

    Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails, or the individual experiences a clinical endpoint, when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, the threshold state, and the time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research. Comment: Published at http://dx.doi.org/10.1214/088342306000000330 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
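
    A minimal sketch of the first-hitting-time idea described above, assuming a latent Wiener health process X(t) = x0 + mu*t + sigma*W(t) with negative drift and failure at the first crossing of zero. The covariate link for the drift and all parameter values are illustrative assumptions, not the paper's model; the simulated mean failure time is compared with the inverse Gaussian mean x0/|mu| that theory gives for this boundary-crossing time.

        import numpy as np

        rng = np.random.default_rng(1)

        def first_hitting_time(x0, mu, sigma, dt=0.01, t_max=500.0):
            # Simulate X(t) = x0 + mu*t + sigma*W(t); return the first time X <= 0.
            x, t = x0, 0.0
            while t < t_max:
                x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
                if x <= 0.0:
                    return t
            return np.nan  # censored: boundary not reached within t_max

        # Hypothetical covariate link for the drift (threshold-regression flavour):
        # a larger stress covariate z makes the latent health decline faster.
        beta0, beta1, z = -2.0, 0.8, 1.0
        x0, sigma = 5.0, 1.0
        mu = -np.exp(beta0 + beta1 * z)

        times = np.array([first_hitting_time(x0, mu, sigma) for _ in range(500)])
        print("empirical mean failure time:", np.nanmean(times))
        print("inverse Gaussian mean x0/|mu|:", x0 / abs(mu))

    In full threshold regression the starting level and other process parameters may also be linked to covariates; only the drift is linked here for brevity.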

    Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model

    In this paper, we consider a life test in which the failure times of the test units are not deterministically related to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step-stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step-stress accelerated model based on a bivariate Wiener process: one component is the latent (unobservable) degradation process that determines the failure times, and the other is a marker process whose values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and an optimization procedure for obtaining the optimal time to change the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.
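
    A minimal simulation sketch, under assumed parameters, of the bivariate Wiener setup described above: a latent degradation component whose drift jumps at the stress-change time tau, correlated with an observable marker whose value is recorded when the latent component first reaches the failure threshold. It illustrates the data-generating structure only, not the paper's inference or optimization procedure.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_unit(tau, drift_low, drift_high, marker_drift, rho,
                          sigma=(1.0, 1.0), threshold=10.0, dt=0.01, t_max=200.0):
            # Bivariate Wiener path: latent degradation D (drift switches from
            # drift_low to drift_high at the stress-change time tau) and an
            # observable marker M correlated with D through rho.
            cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                            [rho * sigma[0] * sigma[1], sigma[1]**2]]) * dt
            L = np.linalg.cholesky(cov)
            d = m = 0.0
            t = 0.0
            while t < t_max:
                drift_d = drift_low if t < tau else drift_high
                dd, dm = np.array([drift_d, marker_drift]) * dt + L @ rng.standard_normal(2)
                d += dd
                m += dm
                t += dt
                if d >= threshold:
                    return t, m  # failure time and the marker value recorded at failure
            return np.nan, m     # censored unit

        units = [simulate_unit(tau=5.0, drift_low=0.5, drift_high=2.0,
                               marker_drift=1.0, rho=0.6) for _ in range(200)]
        times, markers = map(np.array, zip(*units))
        ok = ~np.isnan(times)
        print("mean failure time:", times[ok].mean(),
              "mean marker at failure:", markers[ok].mean())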

    Statistical Degradation Models for Electronics

    With the increasing presence of electronics in modern systems and everyday products, system reliability is inextricably dependent on the reliability of the electronics. We develop reliability models for failure-time prediction under small failure-time samples and information on individual degradation history. The model development extends the work of Whitmore et al. (1998) to incorporate two new data structures common to reliability testing. Reliability models traditionally use lifetime information to evaluate the reliability of a device or system. To analyze small failure-time samples within dynamic environments where failure mechanisms are unknown, there is a need for models that make use of auxiliary reliability information. In this thesis we present models suitable for reliability data in which degradation variables are latent and can be tracked by related observable variables we call markers. We provide an engineering justification for our model and develop parametric and predictive inference equations for a data structure that includes terminal observations of the degradation variable and longitudinal marker measurements. We compare maximum likelihood estimation and prediction results with those obtained by Whitmore et al. (1998) and show improved inference under small sample sizes. We introduce modeling of variable failure thresholds within the framework of bivariate degradation models and discuss ways of incorporating covariates. In the second part of the thesis we investigate anomaly detection through a Bayesian support vector machine and discuss its place in degradation modeling. We compute posterior class probabilities for time-indexed covariate observations, which we use as measures of degradation. Lastly, we present a multistate model for a recurrent event process and failure times. We compute the expected time to failure using counting process theory and investigate the effect of the event process on the expected failure-time estimates.
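
    A minimal sketch, under assumed parameters, of the kind of data structure this thesis targets: each unit contributes longitudinal marker measurements plus a single terminal observation of the latent degradation variable. The Wiener-type latent path and the linear marker link are illustrative assumptions, not the thesis' fitted model.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_unit(n_obs=6, dt=1.0, deg_drift=0.8, marker_gain=1.2, noise=0.3):
            # Latent degradation follows a Wiener-type path; markers are noisy,
            # scaled readings of it taken at every inspection.
            increments = deg_drift * dt + np.sqrt(dt) * rng.standard_normal(n_obs)
            degradation = np.cumsum(increments)
            markers = marker_gain * degradation + noise * rng.standard_normal(n_obs)
            return {
                "inspection_times": dt * np.arange(1, n_obs + 1),
                "marker_path": markers,                    # observed longitudinally
                "terminal_degradation": degradation[-1],   # observed only at the end
            }

        data = [simulate_unit() for _ in range(5)]
        print(data[0])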

    Accelerated degradation modeling considering long-range dependence and unit-to-unit variability

    Accelerated degradation testing (ADT) is an effective way to evaluate the reliability and lifetime of highly reliable products. Existing studies have shown that the degradation processes of some products are non-Markovian, with long-range dependence due to interaction with the environment. In addition, the degradation processes of products from the same population generally vary from unit to unit due to various uncertainties. These two aspects make ADT modeling difficult. In this paper, we propose an improved ADT model that accounts for both long-range dependence and unit-to-unit variability. Specifically, fractional Brownian motion (FBM) is used to capture the long-range dependence in the degradation process, and the unit-to-unit variability among products is captured by a random variable in the degradation rate function. To ensure accurate parameter estimates, a statistical inference method based on the expectation-maximization (EM) algorithm is proposed, in which the overall likelihood function is maximized. The effectiveness of the proposed method is verified with a simulation case and a microwave case study. The results show that the proposed model is better suited to ADT modeling and analysis than existing ADT models.
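
    A minimal sketch, under assumed parameters, of the model ingredients named above: fractional Gaussian noise (the increments of FBM) generated from its covariance for a Hurst exponent H > 0.5, plus a unit-specific random drift rate for unit-to-unit variability. It only simulates degradation paths; the paper's EM-based inference is not reproduced.

        import numpy as np

        rng = np.random.default_rng(11)

        def fgn_cholesky(n, H, sigma=1.0):
            # Cholesky factor of the fractional Gaussian noise covariance for Hurst H.
            k = np.arange(n)
            gamma = 0.5 * sigma**2 * (np.abs(k + 1)**(2 * H) - 2 * np.abs(k)**(2 * H)
                                      + np.abs(k - 1)**(2 * H))
            cov = gamma[np.abs(k[:, None] - k[None, :])]
            return np.linalg.cholesky(cov)

        def adt_path(n_steps, rate_mean, rate_sd, L):
            # One unit's path: random drift rate (unit-to-unit variability)
            # plus fractional Gaussian noise (long-range dependence).
            rate = rng.normal(rate_mean, rate_sd)
            noise = L @ rng.standard_normal(n_steps)
            return np.cumsum(rate + noise)

        H, n_steps = 0.8, 100                 # H > 0.5 gives long-range dependence
        L = fgn_cholesky(n_steps, H, sigma=0.2)
        paths = np.array([adt_path(n_steps, rate_mean=0.5, rate_sd=0.1, L=L)
                          for _ in range(10)])
        print(paths[:, -1])                   # terminal degradation of each unit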

    Updated Operational Reliability from Degradation Indicators and Adaptive Maintenance Strategy

    This chapter is dedicated to the reliability and maintenance of assets characterized by a degradation process. The item state is related to a degradation mechanism that represents the unit-to-unit variability and time-varying dynamics of systems. Maintenance scheduling has to be updated to account for the degradation history of each item. The research method relies on updating the reliability of a specific asset. Given a degradation process and costs for preventive and corrective maintenance actions, an optimal inspection time is obtained. At this time, the degradation level is measured and a prediction of the degradation is made to obtain the next inspection time. A decision criterion determines whether the maintenance action should take place at the current time or be postponed. Consequently, there is an optimal number of inspections that allows the useful life of an asset to be extended before the preventive maintenance action is performed. A numerical case study involving a non-stationary Wiener-based degradation process illustrates the methodology. The results show that the expected cost per unit time under the adaptive maintenance strategy is lower than that obtained for other maintenance policies.
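
    A minimal sketch, under assumed parameters, of the adaptive inspection idea: after measuring the degradation level at an inspection, the next inspection is scheduled at the latest time for which the predicted probability of exceeding the failure level stays below a small risk, and maintenance is triggered immediately once a preventive level is reached. A simple exceedance probability of a time-transformed Wiener process is used here rather than the chapter's exact first-passage and cost computations.

        import numpy as np
        from scipy import stats

        # Non-stationary Wiener degradation: X(t) = mu*Lam(t) + sigma*W(Lam(t)),
        # with Lam(t) = t**b (an illustrative choice of time transformation).
        mu, sigma, b = 0.4, 0.3, 1.3
        failure_level, preventive_level = 10.0, 8.0

        def next_inspection(x_now, t_now, risk=0.05):
            # Latest future time at which the predicted probability of exceeding
            # the failure level, given the measured degradation, stays below risk.
            if x_now >= preventive_level:
                return None                   # maintain now
            for t in np.arange(t_now + 0.1, 100.0, 0.1):
                dlam = t**b - t_now**b
                p_exceed = 1.0 - stats.norm.cdf(failure_level - x_now,
                                                loc=mu * dlam,
                                                scale=sigma * np.sqrt(dlam))
                if p_exceed > risk:
                    return t - 0.1            # last acceptable inspection time
            return 100.0

        x_obs, t_obs = 5.2, 6.0               # hypothetical measurement at an inspection
        print("next inspection at t =", next_inspection(x_obs, t_obs))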

    LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Light-emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. Because a lamp serves different functions (e.g., illumination and color), it may have two or more performance characteristics. When these performance characteristics are dependent, accurately analyzing the system reliability becomes challenging. In this paper, we assume that the system has two performance characteristics, each governed by a random-effects Gamma process in which the random effects capture unit-to-unit differences. The dependency between the performance characteristics is described by a Frank copula function, and a reliability assessment model is built on this copula. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data demonstrates the usefulness and validity of the proposed model and method.
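
    A minimal sketch, under assumed parameters, of the dependence structure named above: two Gamma-process performance characteristics whose increments are coupled through a Frank copula, with a unit-specific scale playing the role of the random effect, and reliability at a given time estimated by Monte Carlo as the probability that both characteristics stay below their thresholds. Parameter estimation via MCMC, as in the paper, is not shown.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2024)

        def frank_pair(theta, size):
            # Sample (u, v) from a Frank copula by conditional inversion.
            u = rng.uniform(size=size)
            w = rng.uniform(size=size)
            v = -np.log1p(w * np.expm1(-theta) /
                          (np.exp(-theta * u) - w * np.expm1(-theta * u))) / theta
            return u, v

        def degradation_at_t(n_units, t, theta=5.0, shape_rates=(1.0, 0.8),
                             scale_mean=0.5, scale_sd=0.1, n_steps=20):
            # Two Gamma-process characteristics per unit; their increments are
            # coupled by the Frank copula, and a unit-specific scale acts as the
            # random effect shared by both characteristics.
            dt = t / n_steps
            x1, x2 = np.zeros(n_units), np.zeros(n_units)
            scale = np.clip(rng.normal(scale_mean, scale_sd, n_units), 1e-3, None)
            for _ in range(n_steps):
                u, v = frank_pair(theta, n_units)
                x1 += stats.gamma.ppf(u, a=shape_rates[0] * dt, scale=scale)
                x2 += stats.gamma.ppf(v, a=shape_rates[1] * dt, scale=scale)
            return x1, x2

        x1, x2 = degradation_at_t(n_units=5000, t=30.0)
        thresholds = (20.0, 15.0)
        print("estimated reliability at t = 30:",
              np.mean((x1 < thresholds[0]) & (x2 < thresholds[1])))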