
    Big Data and Reliability Applications: The Complexity Dimension

    Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can generate big data usable in reliability analysis. Meeker and Hong (2014) focused on large-scale system operating and environment data (i.e., high-frequency multivariate time series data), and provided examples of how to link such data as covariates to traditional reliability responses such as time to failure, time to recurrence of events, and degradation measurements. This paper intends to extend that discussion by focusing on how to use data with complicated structures in reliability analysis. Such data types include high-dimensional sensor data, functional curve data, and image streams. We first review recent developments in those directions, and then discuss how analytical methods can be developed to tackle the challenging aspects that arise from the complexity feature of big data in reliability applications. The use of modern statistical methods such as variable selection, functional data analysis, scalar-on-image regression, spatio-temporal data models, and machine learning techniques will also be discussed. Comment: 28 pages, 7 figures.
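
    The minimal sketch below (not from the paper) illustrates one of the ideas discussed above: linking an operating-condition covariate to a time-to-failure response through a Weibull accelerated-failure-time model fitted by maximum likelihood. The covariate, parameter values, and simulated data are hypothetical.

```python
# Sketch: linking an operating-condition covariate to time-to-failure
# via a Weibull accelerated-failure-time (AFT) model. Hypothetical data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n = 200
load = rng.uniform(0.5, 2.0, size=n)           # hypothetical usage covariate
beta_true, b0_true, shape_true = -0.8, 3.0, 2.0
scale = np.exp(b0_true + beta_true * load)     # log-linear link to the Weibull scale
t = scale * rng.weibull(shape_true, size=n)    # simulated failure times

def neg_loglik(par):
    """Negative Weibull log-likelihood with a log-linear scale regression."""
    b0, b1, log_k = par
    k = np.exp(log_k)
    lam = np.exp(b0 + b1 * load)
    z = t / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0_hat, b1_hat, k_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"shape ~ {k_hat:.2f}, intercept ~ {b0_hat:.2f}, load effect ~ {b1_hat:.2f}")
```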

    A non-Gaussian continuous state space model for asset degradation

    The degradation model plays an essential role in asset life prediction and condition-based maintenance. Various degradation models have been proposed. Among these models, the state space model has the ability to combine degradation data and failure event data. The state space model is also an effective approach for dealing with multiple observations and missing-data issues. Using the state space degradation model, the deterioration process of assets is represented by a system state process which can be revealed by a sequence of observations. Current research largely assumes that the underlying system development process is discrete in time or state. Although some models have been developed to consider continuous time and space, these state space models are based on the Wiener process with the Gaussian assumption. This paper proposes a Gamma-based state space degradation model in order to remove the Gaussian assumption. Both condition monitoring observations and failure events are considered in the model so as to improve the accuracy of asset life prediction. A simulation study is carried out to illustrate the application procedure of the proposed model.
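
    A minimal sketch of the kind of model described above, restricted to the condition-monitoring part (failure events are not included): a latent gamma-increment degradation state observed with Gaussian noise and tracked by a bootstrap particle filter. All parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch: gamma-increment state-space degradation model with noisy
# condition-monitoring observations, filtered with a bootstrap particle filter.
import numpy as np

rng = np.random.default_rng(1)

T, dt = 50, 1.0
shape_rate, scale = 0.4, 0.5          # gamma increments: Gamma(shape_rate*dt, scale)
obs_sd = 0.3                          # measurement noise on the monitored signal

# Simulate the latent degradation path and its noisy observations.
x = np.cumsum(rng.gamma(shape_rate * dt, scale, size=T))
y = x + rng.normal(0.0, obs_sd, size=T)

# Bootstrap particle filter for the latent degradation state.
n_p = 2000
particles = np.zeros(n_p)
x_hat = np.zeros(T)
for k in range(T):
    particles = particles + rng.gamma(shape_rate * dt, scale, size=n_p)  # propagate
    logw = -0.5 * ((y[k] - particles) / obs_sd) ** 2                     # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(n_p, size=n_p, p=w)]                # resample
    x_hat[k] = particles.mean()

print(f"last true state {x[-1]:.2f}, filtered estimate {x_hat[-1]:.2f}")
```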

    Accelerated degradation modeling considering long-range dependence and unit-to-unit variability

    Accelerated degradation testing (ADT) is an effective way to evaluate the reliability and lifetime of highly reliable products. Existing studies have shown that the degradation processes of some products are non-Markovian, with long-range dependence, due to interactions with the environment. In addition, the degradation processes of products from the same population generally vary from unit to unit due to various uncertainties. These two aspects pose great difficulties for ADT modeling. In this paper, we propose an improved ADT model that considers both long-range dependence and unit-to-unit variability. Specifically, fractional Brownian motion (FBM) is utilized to capture the long-range dependence in the degradation process, and the unit-to-unit variability among multiple products is captured by a random variable in the degradation rate function. To ensure the accuracy of the parameter estimates, a novel statistical inference method based on the expectation-maximization (EM) algorithm is proposed, in which the overall likelihood function is maximized. The effectiveness of the proposed method is verified by a simulation case and a microwave case. The results show that the proposed model is more suitable for ADT modeling and analysis than existing ADT models.
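
    A minimal sketch of the two ingredients named above: degradation paths driven by fractional Brownian motion (long-range dependence, Hurst exponent H > 0.5) with a unit-specific random drift rate (unit-to-unit variability). The fractional Gaussian noise is generated from a Cholesky factor of its covariance; all parameter values are illustrative, and the paper's EM-based inference is not reproduced.

```python
# Sketch: FBM-driven degradation paths with a random unit-specific drift rate.
import numpy as np

rng = np.random.default_rng(2)

def fgn_cholesky(n, H):
    """Cholesky factor of the covariance of unit-spacing fractional Gaussian noise."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov)

n_units, n_steps, H = 5, 100, 0.75
L = fgn_cholesky(n_steps, H)
t = np.arange(1, n_steps + 1)

paths = []
for _ in range(n_units):
    rate = rng.normal(1.0, 0.2)                        # unit-to-unit drift variability
    bh = np.cumsum(L @ rng.standard_normal(n_steps))   # fractional Brownian motion
    paths.append(rate * t + 0.5 * bh)                  # drift + long-range-dependent noise

print(np.round([p[-1] for p in paths], 2))             # final degradation level per unit
```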

    LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Light-emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. Because a lamp serves different functions (i.e., illumination and color), it may have two or more performance characteristics. When these multiple performance characteristics are dependent, accurately analyzing the system reliability becomes a challenging problem. In this paper, we assume that the system has two performance characteristics, each governed by a random effects Gamma process, where the random effects capture unit-to-unit differences. The dependency between the performance characteristics is described by a Frank copula function, and the reliability assessment model is built via this copula. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.
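
    A minimal sketch of the simulation side of such a model: two performance characteristics, each following a gamma process whose scale carries a unit-specific random effect, with the per-step increments coupled through a Frank copula sampled by conditional inversion. The characteristic names and all parameter values are hypothetical, and the MCMC estimation is not shown.

```python
# Sketch: two dependent gamma-process degradation paths with random effects,
# coupled step-by-step through a Frank copula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def frank_pair(n, theta):
    """Sample n pairs (u, v) from a Frank copula by conditional inversion."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    v = -np.log1p(t * np.expm1(-theta) / (np.exp(-theta * u) * (1 - t) + t)) / theta
    return u, v

n_steps, theta = 60, 8.0                     # theta > 0: positive dependence
shape1, shape2 = 0.5, 0.3                    # per-step gamma increment shapes
scale1 = rng.gamma(10, 0.05)                 # random effects on the two scales
scale2 = rng.gamma(10, 0.02)

u, v = frank_pair(n_steps, theta)
inc1 = stats.gamma.ppf(u, a=shape1, scale=scale1)   # dependent increments, PC 1
inc2 = stats.gamma.ppf(v, a=shape2, scale=scale2)   # dependent increments, PC 2
lumen_loss, color_shift = np.cumsum(inc1), np.cumsum(inc2)

print(f"final lumen degradation {lumen_loss[-1]:.2f}, color shift {color_shift[-1]:.3f}")
```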

    Statistical Degradation Models for Electronics

    With the increasing presence of electronics in modern systems and in everyday products, the reliability of those systems is inextricably dependent on that of their electronics. We develop reliability models for failure-time prediction under small failure-time samples and information on individual degradation history. The development of the model extends the work of Whitmore et al. (1998) to incorporate two new data structures common to reliability testing. Reliability models traditionally use lifetime information to evaluate the reliability of a device or system. To analyze small failure-time samples within dynamic environments where failure mechanisms are unknown, there is a need for models that make use of auxiliary reliability information. In this thesis we present models suitable for reliability data where degradation variables are latent and can be tracked by related observable variables we call markers. We provide an engineering justification for our model and develop parametric and predictive inference equations for a data structure that includes terminal observations of the degradation variable and longitudinal marker measurements. We compare maximum likelihood estimation and prediction results with those obtained by Whitmore et al. (1998) and show improvement in inference under small sample sizes. We introduce modeling of variable failure thresholds within the framework of bivariate degradation models and discuss ways of incorporating covariates. In the second part of the thesis we investigate anomaly detection through a Bayesian support vector machine and discuss its place in degradation modeling. We compute posterior class probabilities for time-indexed covariate observations, which we use as measures of degradation. Lastly, we present a multistate model used to model a recurrent event process and failure times. We compute the expected time to failure using counting process theory and investigate the effect of the event process on the expected failure-time estimates.
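
    A minimal sketch of the marker idea under a simplifying assumption: the latent degradation and the observable marker are modeled as a correlated bivariate Wiener process, the degradation itself is observed only at the terminal inspection, and the marker is used for a naive plug-in prediction of that terminal level. All values are illustrative; the thesis's inference equations are not reproduced.

```python
# Sketch: latent degradation tracked by a correlated observable marker
# (simplified bivariate-Wiener construction).
import numpy as np

rng = np.random.default_rng(4)

T, dt = 40, 1.0
mu_d, mu_m = 0.25, 0.10            # drifts of degradation and marker
sd_d, sd_m, rho = 0.30, 0.20, 0.8  # volatilities and their correlation

cov = np.array([[sd_d**2, rho * sd_d * sd_m],
                [rho * sd_d * sd_m, sd_m**2]]) * dt
steps = rng.multivariate_normal([mu_d * dt, mu_m * dt], cov, size=T)

degradation = np.cumsum(steps[:, 0])   # latent; only its terminal value is "observed"
marker = np.cumsum(steps[:, 1])        # longitudinal marker, fully observed

# Naive plug-in prediction of the terminal degradation level from the marker,
# using the drift ratio as the regression coefficient.
pred = (mu_d / mu_m) * marker[-1]
print(f"terminal degradation {degradation[-1]:.2f}, marker-based prediction {pred:.2f}")
```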

    Multiple-Change-Point Modeling and Exact Bayesian Inference of Degradation Signal for Prognostic Improvement

    Prognostics play an increasingly important role in modern engineering systems for smart maintenance decision-making. In parametric regression-based approaches, the parametric models are often too rigid to model degradation signals in many applications. In this paper, we propose a Bayesian multiple-change-point (CP) modeling framework to better capture the degradation path and improve the prognostics. At the offline modeling stage, a novel stochastic process is proposed to model the joint prior of the number and positions of the CPs. All hyperparameters are estimated through an empirical two-stage process. At the online monitoring and remaining useful life (RUL) prediction stage, a recursive updating algorithm is developed to sequentially calculate the exact posterior distribution and the RUL prediction. To control the computational cost, a fixed-support-size strategy for the online model updating and a partial Monte Carlo strategy for the RUL prediction are proposed. The effectiveness and advantages of the proposed method are demonstrated through thorough simulation and real case studies.
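
    A heavily simplified sketch of the change-point idea: a piecewise-linear degradation signal with a single change point, a uniform prior over interior candidate positions, and a profile likelihood (least-squares slopes on each segment) in place of the paper's full multiple-CP prior and exact recursive updating. Slopes, noise level, and the prior support are illustrative choices.

```python
# Sketch: exact discrete posterior over a single change point in a
# piecewise-linear degradation signal (simplified single-CP version).
import numpy as np

rng = np.random.default_rng(5)

n, tau_true = 80, 50
slope1, slope2, sigma = 0.05, 0.25, 0.3
t = np.arange(n)
signal = np.where(t < tau_true,
                  slope1 * t,
                  slope1 * tau_true + slope2 * (t - tau_true))
y = signal + rng.normal(0.0, sigma, size=n)

def loglik_given_tau(tau):
    """Profile log-likelihood: least-squares slope on each segment, known sigma."""
    segs = ((t[:tau], y[:tau]),                            # before the candidate CP
            (t[tau:] - (tau - 1), y[tau:] - y[tau - 1]))   # after, anchored at y[tau-1]
    ll = 0.0
    for seg_t, seg_y in segs:
        slope = (seg_t @ seg_y) / (seg_t @ seg_t)
        ll -= 0.5 * np.sum((seg_y - slope * seg_t) ** 2) / sigma ** 2
    return ll

taus = np.arange(5, n - 5)            # uniform prior over interior candidate CPs
logp = np.array([loglik_given_tau(k) for k in taus])
post = np.exp(logp - logp.max())
post /= post.sum()
print("posterior mode of the change point:", taus[np.argmax(post)])
```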

    Updated Operational Reliability from Degradation Indicators and Adaptive Maintenance Strategy

    This chapter is dedicated to the reliability and maintenance of assets that are characterized by a degradation process. The item state is related to a degradation mechanism that captures the unit-to-unit variability and time-varying dynamics of systems. The maintenance schedule has to be updated considering the degradation history of each item. The research method relies on updating the reliability of a specific asset. Given a degradation process and costs for preventive/corrective maintenance actions, an optimal inspection time is obtained. At this time, the degradation level is measured and a prediction of the degradation is made to obtain the next inspection time. A decision criterion is established to decide whether the maintenance action should take place at the current time or be postponed. Consequently, there is an optimal number of inspections that allows the useful life of an asset to be extended before the preventive maintenance action is performed. A numerical case study involving a non-stationary Wiener-based degradation process is proposed as an illustration of the methodology. The results show that the expected cost per unit of time under the adaptive maintenance strategy is lower than that obtained for other maintenance policies.
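
    A minimal sketch of the adaptive-inspection step for a non-stationary Wiener degradation process, assuming a power-law time transform X(t) = mu*t^q + sigma*B(t^q): given the level measured at the current inspection, the next inspection is scheduled at the latest time for which the predicted probability of exceeding the failure threshold stays below an accepted risk. All parameter values, the threshold, and the risk level are illustrative; the cost-based optimization of the chapter is not reproduced.

```python
# Sketch: scheduling the next inspection under a non-stationary Wiener model.
import numpy as np
from scipy import stats

mu, sigma, q = 0.4, 0.6, 1.3      # drift, diffusion, time-transform exponent
threshold, risk = 30.0, 0.05      # failure level and accepted exceedance probability

def exceedance_prob(x_now, t_now, t_next):
    """P[X(t_next) > threshold | X(t_now) = x_now] under the Wiener model."""
    d_lambda = t_next ** q - t_now ** q
    mean = x_now + mu * d_lambda
    sd = sigma * np.sqrt(d_lambda)
    return stats.norm.sf(threshold, loc=mean, scale=sd)

def next_inspection(x_now, t_now, horizon=200.0, step=0.1):
    """Latest time at which the exceedance probability is still below the risk."""
    t = t_now + step
    while t < horizon and exceedance_prob(x_now, t_now, t) < risk:
        t += step
    return t - step

# Example: degradation measured at 12.5 units at time t = 20.
print(f"next inspection at t = {next_inspection(12.5, 20.0):.1f}")
```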

    Models for Data Analysis in Accelerated Reliability Growth

    This work develops new methodologies for analyzing accelerated testing data in the context of a reliability growth program for a complex multi-component system. Each component has multiple failure modes, and the growth program consists of multiple test-fix stages with corrective actions applied at the end of each stage. The first group of methods considers time-to-failure data and test covariates for predicting the final reliability of the system. The time to failure of each failure mode is assumed to follow a Weibull distribution with rate parameter proportional to an acceleration factor. Acceleration factors are specific to each failure mode and to the test covariates. We develop a Bayesian methodology to analyze the data by assigning a prior distribution to each model parameter, developing a sequential Metropolis-Hastings procedure to sample the posterior distribution of the model parameters, and deriving closed-form expressions to aggregate component reliability information to assess the reliability of the system. The second group of methods considers degradation data for predicting the final reliability of a system. First, we provide a non-parametric methodology for a single degradation process. The methodology utilizes functional data analysis to predict the mean time-to-degradation function and Gaussian processes to capture unit-specific deviations from the mean function. Second, we develop a parametric model for a component with multiple dependent monotone degradation processes. The model considers random effects on the degradation parameters and a parametric life-stress relationship. The assumptions are that degradation increments follow an inverse Gaussian process and that a copula function captures the dependency between them. We develop a Bayesian and a maximum likelihood procedure for estimating the model parameters using a two-stage process: (1) estimate the parameters of the degradation processes as if they were independent and (2) estimate the parameters of the copula function using the estimated cumulative distribution function of the observed degradation increments as observed data. Simulation studies show the efficacy of the proposed methodologies for analyzing multi-stage reliability growth data.
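
    A minimal sketch of the two-stage estimation idea for the dependent-degradation model: (1) fit the marginal inverse Gaussian increment distributions as if the processes were independent, then (2) convert the observed increments to pseudo-uniform data with the fitted CDFs and estimate the dependence parameter, here for a Gaussian copula chosen only for simplicity (the abstract does not name the copula family). The simulated data and all parameter values are illustrative.

```python
# Sketch: two-stage estimation for dependent inverse-Gaussian degradation increments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulate dependent increments via a Gaussian copula with correlation 0.7.
n, rho_true = 300, 0.7
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
u = stats.norm.cdf(z)
inc1 = stats.invgauss.ppf(u[:, 0], 0.8, scale=1.0)   # process 1 increments
inc2 = stats.invgauss.ppf(u[:, 1], 1.5, scale=0.5)   # process 2 increments

# Stage 1: marginal inverse-Gaussian fits, ignoring the dependence.
mu1, loc1, sc1 = stats.invgauss.fit(inc1, floc=0)
mu2, loc2, sc2 = stats.invgauss.fit(inc2, floc=0)

# Stage 2: pseudo-observations -> normal scores -> copula correlation estimate.
v1 = stats.invgauss.cdf(inc1, mu1, loc1, sc1)
v2 = stats.invgauss.cdf(inc2, mu2, loc2, sc2)
rho_hat = np.corrcoef(stats.norm.ppf(v1), stats.norm.ppf(v2))[0, 1]
print(f"estimated copula correlation: {rho_hat:.2f} (true {rho_true})")
```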