
    Estimation of Field Reliability Based on Aggregate Lifetime Data

    Because of the exponential distribution assumption, many reliability databases record data in an aggregate way. Instead of individual failure times, each aggregate data point is the sum of a series of failure times, representing the cumulative operating time of one component position from system commencement to the last component replacement. This data format differs from traditional lifetime data, and the statistical inference is challenging. We first model the individual component lifetime by a gamma distribution. Confidence intervals for the gamma shape parameter can be constructed using a scaled χ² approximation to a modified ratio of the geometric mean to the arithmetic mean, while confidence intervals for the gamma rate and mean parameters, as well as quantiles, are obtained using the generalized pivotal quantity method. We then fit the data using the inverse Gaussian (IG) distribution, a useful lifetime model for failures caused by degradation. Procedures for point estimation and interval estimation of the parameters are developed. We also propose an interval estimation method for the quantiles of an IG distribution based on the generalized pivotal quantity method. An illustrative example demonstrates the proposed inference methods. Supplementary materials for this article are available online.
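
    A minimal point-estimation sketch of the gamma model for aggregate data described above: because the sum of n_i i.i.d. Gamma(k, rate) lifetimes is Gamma(n_i·k, rate), the aggregate records can be fitted directly by maximum likelihood. The data values, the optimizer, and the log-scale parameterization are illustrative assumptions; the paper's interval procedures (scaled χ² approximation and generalized pivotal quantities) are not reproduced here.

```python
# Illustrative MLE sketch for aggregate gamma lifetime data (values are made up).
import numpy as np
from scipy import stats, optimize

t = np.array([8.2, 15.1, 23.7, 9.9, 31.4])   # cumulative operating times (illustrative)
n = np.array([3, 5, 7, 4, 9])                 # number of failures behind each record

def neg_log_lik(theta):
    k, rate = np.exp(theta)                   # log scale keeps k, rate positive
    # Sum of n_i i.i.d. Gamma(k, rate) lifetimes is Gamma(n_i * k, rate).
    return -np.sum(stats.gamma.logpdf(t, a=n * k, scale=1.0 / rate))

res = optimize.minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
k_hat, rate_hat = np.exp(res.x)
print(f"shape k = {k_hat:.3f}, rate = {rate_hat:.3f}, mean lifetime = {k_hat / rate_hat:.3f}")
```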

    A Multivariate Stochastic Degradation Model for Dependent Performance Characteristics

    Simultaneous degradation of multiple dependent performance characteristics (PCs) is a common phenomenon for industrial products. The associated degradation modeling is of practical importance yet challenging. The dependence among the PCs can usually be attributed to two sources: the overall system health status and the common operating environment. Based on this observation, this study proposes a parsimonious multivariate Wiener process model whose number of parameters increases linearly with the dimension. We introduce a common stochastic time scale shared by all the PCs to model the dependence induced by the dynamic operating environment. Conditional on the time scale, the degradation of each PC is modeled as the sum of two independent Wiener processes, one representing the common effects shared by all the PCs and the other representing degradation caused by randomness unique to that PC. An EM algorithm is developed for model parameter estimation, and extensive simulations are implemented to validate the proposed model and algorithms. For efficient reliability evaluation under a multivariate degradation model, including the proposed one, a bridge-sampling-based algorithm is further developed. The applicability and advantages of the proposed methods are demonstrated using a multivariate degradation dataset of a coating material. Supplementary materials for this article are available online.
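
    A minimal simulation sketch of the model structure just described, assuming a gamma-process form for the shared stochastic time scale (one possible choice; the paper's specification may differ). All parameter values are illustrative.

```python
# Simulation of PCs driven by a shared stochastic time scale (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
n_pc, n_steps, dt = 3, 200, 0.05
mu = np.array([1.0, 0.8, 1.2])        # drift of each PC on the common time scale
sig_common, sig_ind = 0.3, 0.2        # volatilities of shared and PC-specific parts

# Shared stochastic time scale: cumulative gamma increments with mean dt each (assumed form).
d_tau = rng.gamma(shape=dt / 0.01, scale=0.01, size=n_steps)

# Conditional on the time-scale increments: common Wiener part + PC-specific Wiener part.
common = rng.normal(0.0, sig_common * np.sqrt(d_tau), size=n_steps)
indiv = rng.normal(0.0, sig_ind * np.sqrt(d_tau), size=(n_pc, n_steps))
paths = np.cumsum(mu[:, None] * d_tau + common[None, :] + indiv, axis=1)

print(paths[:, -1])                   # terminal degradation level of each PC
```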

    Augmenting the Unreturned for Field Data With Information on Returned Failures Only

    Field data are an important source of reliability information for many commercial products. Because field data are often collected by the maintenance department, information on failed and returned units is well maintained. Nevertheless, information on unreturned units is generally unavailable, and this unavailability leads to truncation in the lifetime data. This study proposes a data-augmentation algorithm for this type of truncated field return data in which only returned failures are available. The algorithm is based on the idea of revealing the hidden, unobserved lifetimes, and theoretical justifications of the procedure for augmenting them are given. The algorithm is iterative in nature, and asymptotic properties of the estimators from the iterations are investigated. Both point estimates and the information matrix of the parameters can be obtained directly from the algorithm. In addition, a by-product of the algorithm is a nonparametric estimator of the installation time distribution. An example from an asset-rich company is given to demonstrate the proposed methods. Supplementary materials for this article are available online.
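
    The paper's data-augmentation algorithm is not reproduced here; the sketch below only illustrates the truncation feature of the data, fitting a Weibull model by the conditional likelihood of a failure being returned before the data-freeze date. The Weibull choice and all numbers are assumptions for illustration.

```python
# Conditional (truncated) likelihood sketch for returned-failures-only data (illustrative).
import numpy as np
from scipy import stats, optimize

t = np.array([0.7, 1.4, 2.2, 0.9, 3.1])   # observed (returned) failure times, years
c = np.array([2.0, 3.5, 4.0, 1.5, 4.5])   # time from installation to the data freeze

def neg_cond_log_lik(theta):
    shape, scale = np.exp(theta)
    log_f = stats.weibull_min.logpdf(t, c=shape, scale=scale)
    log_F = stats.weibull_min.logcdf(c, c=shape, scale=scale)
    return -np.sum(log_f - log_F)          # density conditional on the failure being observed

res = optimize.minimize(neg_cond_log_lik, x0=np.log([1.0, 2.0]), method="Nelder-Mead")
print("Weibull shape, scale:", np.exp(res.x))
```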


    Managing component degradation in series systems for balancing degradation through reallocation and maintenance

    In a physical system, components are usually installed in fixed positions known as operating slots. Due to factors such as user behavior and imbalanced workloads, a component's degradation can be affected by its installation position in the system. As a result, the degradation levels of components can differ significantly even when the components come from a homogeneous population. Dynamic reallocation of the components among the installation positions is a feasible way to balance the degradation and hence extend the time from system installation to its replacement. In this study, we quantify the benefit of incorporating reallocation into the condition-based maintenance framework for series systems. The degradation of the components in the system is modeled as a multivariate Wiener process, where the correlation between the components' degradation processes is considered. Under a periodic inspection framework, the optimal control limits for reallocation and preventive replacement are investigated. We first propose a reallocation policy for two-component systems, where the degradation process with reallocation and replacement is formulated as a semi-regenerative process; the long-run average operational cost is then computed from the stationary distribution of its embedded Markov chain. We then generalize the model to general series systems and use Monte Carlo simulations to approximate the maintenance cost. The optimal thresholds for reallocation and replacement are obtained from a stochastic response surface method using a stochastic kriging model. We further generalize the model to the scenario of an unknown degradation rate associated with each slot. The proposed model is applied to the tire system of a car and the battery system of hybrid-electric vehicles, where we show that the reallocation policy can significantly reduce the system's long-run average operational cost.
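
    A Monte Carlo sketch of the reallocation-plus-replacement mechanics for a two-component series system under periodic inspection, with the long-run average cost approximated by renewal-reward averaging over simulated cycles. The thresholds, cost figures, and degradation parameters are illustrative assumptions, not values from the paper.

```python
# Two-component series system: slot-dependent drift, swap on imbalance, renew on high
# degradation.  All thresholds, costs, and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
mu_slot = np.array([1.0, 0.5])                   # slot-specific degradation drifts
cov = 0.04 * np.array([[1.0, 0.5], [0.5, 1.0]])  # per-unit-time covariance of increments
tau, L_fail, L_prev, d_swap = 0.5, 10.0, 8.0, 2.0
c_insp, c_swap, c_prev, c_fail = 1.0, 2.0, 50.0, 150.0

def one_renewal_cycle():
    """Run one cycle from new components to system renewal; return (cost, length)."""
    x = np.zeros(2)                              # degradation of the component in each slot
    cost, t = 0.0, 0.0
    while True:
        x += rng.multivariate_normal(mu_slot * tau, cov * tau)
        t += tau
        cost += c_insp
        if x.max() >= L_fail:                    # failure found at inspection: corrective renewal
            return cost + c_fail, t
        if x.max() >= L_prev:                    # preventive replacement of the system
            return cost + c_prev, t
        if abs(x[0] - x[1]) >= d_swap:           # rebalance by swapping the two components
            x = x[::-1].copy()
            cost += c_swap

cycles = [one_renewal_cycle() for _ in range(2000)]
costs, lengths = map(np.array, zip(*cycles))
print("long-run average cost rate ≈", costs.sum() / lengths.sum())
```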

    Statistical Modeling of Multivariate Destructive Degradation Tests With Blocking

    In degradation tests, the test units are usually divided into several groups, with each group tested simultaneously in a test rig. Each rig constitutes a rig-layer block from the perspective of design of experiments. Within each rig, the test units measured at the same time further form a gauge-layer block. Due to uncontrollable factors among the test rigs and the common errors incurred at each measurement, the degradation measurements of the test units may differ across blocks, whereas the degradation should be more homogeneous within a block. Motivated by an application to emerging contaminants (ECs), this study proposes a multivariate statistical model to account for the two-layer block effects in destructive degradation tests. A multivariate Wiener process is first used to model the correlation among the different dimensions of degradation. The rig-layer block effect is modeled by a one-dimensional frailty motivated by the degradation physics, while the gauge-layer block effect at each measurement epoch is captured by a common additive measurement error. We develop an expectation-maximization algorithm to obtain point estimates of the model parameters and construct confidence intervals for the parameters. A procedure is proposed to test the significance of the block effects in the degradation data. Through a case study on an EC degradation dataset, we show that the two-layer block effects are present in the test. With the proposed model, decision makers can readily assess the risk of each contaminant and determine the minimal water treatment time needed to remove the contaminants. Supplementary materials for this article are available online.
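
    A short simulation sketch of the two-layer block structure: a bivariate Wiener degradation path, a rig-level frailty acting on the drift, and a common additive error shared by all units measured at the same epoch within a rig. The lognormal frailty and all parameter values are illustrative assumptions.

```python
# Destructive degradation data with rig-layer and gauge-layer block effects (illustrative).
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.8, 1.2])                        # drifts of the two degradation dimensions
cov = 0.05 * np.array([[1.0, 0.6], [0.6, 1.0]])  # per-unit-time covariance
sigma_gauge = 0.1                                 # sd of the common gauge-layer error
n_rigs, epochs, units_per_epoch = 3, [1.0, 2.0, 3.0], 4

records = []
for rig in range(n_rigs):
    frailty = rng.lognormal(mean=0.0, sigma=0.3)          # rig-layer block effect (assumed lognormal)
    for t in epochs:
        gauge_err = rng.normal(0.0, sigma_gauge, size=2)  # shared by this gauge block
        for unit in range(units_per_epoch):               # destructive test: one reading per unit
            path = rng.multivariate_normal(frailty * mu * t, cov * t)
            records.append((rig, t, unit, *(path + gauge_err)))

print(records[0])   # (rig, epoch, unit, y1, y2)
```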

    Minimum Distance Estimation for the Generalized Pareto Distribution

    The generalized Pareto distribution (GPD) is widely used for modeling extreme values over a threshold. Most existing methods for parameter estimation either perform unsatisfactorily when the shape parameter k is larger than 0.5, or suffer from heavy computation as the sample size increases. In view of the fact that k > 0.5 is occasionally seen in numerous applications, including the two illustrative examples used in this study, we remedy the deficiencies of existing methods by proposing two new estimators for the GPD parameters. The new estimators are inspired by minimum distance estimation and M-estimation in linear regression. Through comprehensive simulation, the estimators are shown to perform well for all values of k under small and moderate sample sizes. They are comparable to the existing methods for k ≤ 0.5 and clearly outperform them when k > 0.5.
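
    To illustrate the general technique named in the abstract, here is a classical Cramér–von Mises style minimum distance fit of the GPD, which minimizes the squared discrepancy between the fitted and empirical CDFs. This is a generic sketch, not the paper's two new estimators; the simulated data and scipy's shape convention (c > 0 for heavy tails) are assumptions.

```python
# Generic minimum distance (Cramér–von Mises) fit of the GPD (illustrative data).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
x = np.sort(stats.genpareto.rvs(c=0.7, scale=2.0, size=200, random_state=rng))
plotting_pos = (np.arange(1, x.size + 1) - 0.5) / x.size   # empirical CDF at the order statistics

def cvm_distance(theta):
    c, scale = theta[0], np.exp(theta[1])     # keep the scale parameter positive
    return np.sum((stats.genpareto.cdf(x, c=c, scale=scale) - plotting_pos) ** 2)

res = optimize.minimize(cvm_distance, x0=[0.1, 0.0], method="Nelder-Mead")
print("shape, scale:", res.x[0], np.exp(res.x[1]))
```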

    A Covariate-Regulated Sparse Subspace Learning Model and Its Application to Process Monitoring and Fault Isolation

    Multivariate functional data are increasingly common in various applications. The cross-correlation of different process variables is typically complex: a variable may be weakly correlated, or not correlated at all, with most of the other variables, and the cross-correlation is time-varying and may be regulated by some exogenous covariates. To address these two challenges, we propose a covariate-regulated sparse subspace learning (CSSL) model. We consider the scenario in which the process variables lie in multiple subspaces, and only process variables from the same subspace are cross-correlated with each other. To take into account the effect of the exogenous covariates on the subspace structure, we partition the domain of the covariates into a number of regions. In each region, the subspace structure is treated as constant and can be learned independently. An efficient decision-tree-based algorithm is then proposed to obtain the solution. The proposed method can further be applied to process monitoring and fault isolation for multivariate processes. The efficacy of the method is demonstrated by comprehensive simulations and a case study on a dataset from the supervisory control and data acquisition (SCADA) system of a wind turbine.
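
    A rough stand-in for the covariate-regulated idea, not the authors' CSSL estimator: split the data by an exogenous covariate (a single threshold here, in place of the decision-tree partition) and learn a sparse dependence structure within each region with a graphical lasso. The synthetic data, the threshold, and the use of sklearn's GraphicalLasso are illustrative assumptions.

```python
# Region-wise sparse dependence structure regulated by a covariate (illustrative stand-in).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(5)
n, p = 400, 6
z = rng.uniform(0.0, 1.0, size=n)             # exogenous covariate regulating the structure
X = rng.normal(size=(n, p))
# In the low-z region, couple variables 0 and 1; in the high-z region, couple 2 and 3.
low = z < 0.5
X[low, 1] += 0.9 * X[low, 0]
X[~low, 3] += 0.9 * X[~low, 2]

for name, region in [("z < 0.5", low), ("z >= 0.5", ~low)]:
    model = GraphicalLasso(alpha=0.2).fit(X[region])
    support = (np.abs(model.precision_) > 1e-3).astype(int)
    print(name, "\n", support)                # nonzero pattern of the estimated precision matrix
```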

    Spatio-Temporal Analysis and Prediction of Mass Telecommunication Base Station Failure Events

    Large-scale telecommunication systems are lifeline infrastructure in modern society. A telecommunication system typically consists of a huge number of base stations at diverse geographical locations across a country, which greatly complicates maintenance operations. To allocate maintenance resources properly, it is important to have a good understanding of the failure patterns of these base stations. Statistical inference for the recurrent failures of these base stations is challenging because of the large number of base stations and the spatial correlation of their failure processes. Based on eight months of failure data from telecommunication base stations in Harbin, China, we propose a customized nonhomogeneous Poisson process (NHPP) model for recurrent failure data from telecommunication systems. The model consists of two layers: the temporal layer applies an NHPP with a station-specific frailty to the failures of each base station, and the spatial layer uses a multivariate lognormal distribution to characterize the correlation among the frailties. A Monte Carlo EM (MCEM) algorithm is applied to estimate the parameters of the proposed model. We demonstrate the proposed model using the Harbin telecommunication system example with 7,725 base stations and 4,615 failure records. Supplementary materials for this article are available online.
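
    A simulation sketch of the two-layer structure described above: station frailties drawn from a multivariate lognormal with a distance-based correlation, and a power-law NHPP per station scaled by its frailty. The power-law baseline, the exponential covariance, and all values are illustrative assumptions, not the fitted Harbin model.

```python
# Spatially correlated frailties driving station-level NHPP failure counts (illustrative).
import numpy as np

rng = np.random.default_rng(11)
n_stations, T = 5, 8.0                      # stations and observation window (months)
a, b = 0.6, 1.3                             # power-law intensity: lambda(t) = z * a * b * t**(b-1)

loc = rng.uniform(0.0, 10.0, size=(n_stations, 2))              # station coordinates
dist = np.linalg.norm(loc[:, None, :] - loc[None, :, :], axis=2)
cov = 0.25 * np.exp(-dist / 3.0)                                # spatial covariance of log-frailties
z = np.exp(rng.multivariate_normal(np.zeros(n_stations), cov))  # station-specific frailties

for i in range(n_stations):
    n_fail = rng.poisson(z[i] * a * T**b)   # cumulative intensity over [0, T]
    times = np.sort(T * rng.uniform(size=n_fail) ** (1.0 / b))  # event times given the count
    print(f"station {i}: frailty {z[i]:.2f}, {n_fail} failures at {np.round(times, 2)}")
```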

    Statistical Modeling of the Effectiveness of Preventive Maintenance for Repairable Systems

    Preventive maintenance (PM) is commonly adopted in practice to improve a system's health condition and reduce the risk of unexpected failures. When a PM action is poorly performed, however, it is likely to have adverse effects on system reliability. We observe this phenomenon when evaluating the effectiveness of a PM program for a fleet of service vehicles based on their four-year operating data; the phenomenon is also commonly reported in the maintenance of vehicles and aircraft. Motivated by this observation, we propose a statistical model for repairable systems that takes potential PM adverse effects into account. In the formulation, the baseline failure process without PM effects is modeled by a nonhomogeneous Poisson process. When a PM action is performed, its effect on the failure process is modeled as a multiplicative random effect on the system's rate of occurrence of failures. Statistical inference under the proposed model is discussed, and we further develop goodness-of-fit test procedures to validate the adequacy of the model. The above-mentioned service vehicle operating data are used to demonstrate the proposed methods.
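
    A simulation sketch of the modeling idea: a power-law baseline rate of occurrence of failures, with each PM multiplying the current ROCOF by a lognormal random effect that may be beneficial (< 1) or adverse (> 1). The baseline form, the lognormal effect, and all parameter values are illustrative assumptions, not the fitted model.

```python
# NHPP failures with a multiplicative random PM effect on the ROCOF (illustrative values).
import numpy as np

rng = np.random.default_rng(21)
a, b = 0.4, 1.5                          # baseline ROCOF: a * b * t**(b - 1)
pm_times = [12.0, 24.0, 36.0, 48.0]      # PM schedule (months) over a four-year horizon
effect_sd = 0.4                          # spread of the log multiplicative PM effect

multiplier, failures, prev = 1.0, [], 0.0
for t_next in pm_times:
    # Expected failures on (prev, t_next]: multiplier * a * (t_next**b - prev**b).
    n = rng.poisson(multiplier * a * (t_next**b - prev**b))
    u = rng.uniform(prev**b, t_next**b, size=n)              # invert the cumulative intensity
    failures.extend(np.sort(u ** (1.0 / b)))
    multiplier *= rng.lognormal(mean=0.0, sigma=effect_sd)   # PM effect, possibly adverse (> 1)
    prev = t_next

print(f"{len(failures)} failures:", np.round(failures, 1))
```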