
    Model selection with application to gamma process and inverse Gaussian process

    The gamma process and the inverse Gaussian process are widely used in condition-based maintenance, and both are suitable for modelling monotonically increasing degradation processes. One challenge for practitioners is determining which of the two processes is more appropriate for a given data set. A common practice is to select the one with the larger maximized likelihood. However, due to variation in the data, the maximized likelihood of the “wrong” model can exceed that of the “right” model. This paper proposes an efficient and broadly applicable test statistic for model selection, constructed from the Fisher information. An extensive numerical study indicates the conditions under which the gamma process can be well approximated by the inverse Gaussian process, and vice versa.
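    The naive selection rule the abstract cautions against is easy to state in code. The sketch below, using purely illustrative simulated data, fits both candidate increment distributions and picks the one with the larger maximized log-likelihood; it shows the common practice being critiqued, not the paper's Fisher-information-based test statistic.

```python
# A minimal sketch of maximized-likelihood model selection between the
# gamma and inverse Gaussian increment distributions. The data are
# simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
increments = rng.gamma(shape=2.0, scale=0.5, size=200)  # hypothetical degradation increments

# Fit both candidate models; location is pinned at 0 since increments are positive.
ga_params = stats.gamma.fit(increments, floc=0)
ig_params = stats.invgauss.fit(increments, floc=0)

ll_gamma = stats.gamma.logpdf(increments, *ga_params).sum()
ll_ig = stats.invgauss.logpdf(increments, *ig_params).sum()

print(f"gamma log-lik: {ll_gamma:.2f}, IG log-lik: {ll_ig:.2f}")
print("selected:", "gamma" if ll_gamma > ll_ig else "inverse Gaussian")
```

    With data simulated from a gamma process, sampling variation can still make the IG log-likelihood come out larger on occasion, which is precisely the failure mode the proposed test statistic addresses.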

    A Perturbed Inverse Gaussian Process Model with Time Varying Variance-To-Mean Ratio

    The inverse Gaussian (IG) process has become a common model for the reliability analysis of monotonic degradation processes. The traditional IG process model assumes that degradation increments follow an IG distribution and that the variance-to-mean ratio (VMR) is constant over time. However, in some practical applications, e.g., the GaAs laser degradation data that originally motivated the IG process, the VMR is actually time varying. Motivated by this, we propose an IG process model with measurement errors that depend on the actual degradation level. Depending on the form or parameter values of the dependence function, the VMR of the degradation paths can display different time-varying patterns. A maximum likelihood estimation procedure is developed step by step, combining numerical integration with heuristic optimization. Finally, the GaAs laser example is revisited to illustrate the effectiveness of the proposed model; the results indicate that introducing statistically dependent measurement errors provides better fitting and lifetime evaluation performance.
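    As a rough illustration of the motivating phenomenon (and not the paper's exact formulation), the sketch below simulates a standard IG process, whose VMR is constant at 1/eta, and adds a measurement error whose scale grows with the true degradation level; the observed VMR then increases over time. All parameter values are assumptions chosen for illustration.

```python
# A minimal sketch: constant-VMR latent IG process plus level-dependent
# measurement error, which makes the observed VMR time varying.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_paths, n_steps = 2000, 50
m = 1.0         # mean degradation increment per step
eta = 4.0       # 1/eta is the constant VMR of the latent process
sigma0 = 0.05   # error scale, proportional to the degradation level

# IG(mean=m, shape=L) increments via scipy's parameterization invgauss(m/L, scale=L),
# with L = eta * m**2 so each increment has variance m/eta.
L = eta * m**2
inc = stats.invgauss.rvs(m / L, scale=L, size=(n_paths, n_steps), random_state=rng)
true = inc.cumsum(axis=1)                     # latent monotone degradation paths
obs = true + rng.normal(0.0, sigma0 * true)   # level-dependent measurement error

for k in (9, 49):
    vmr_true = true[:, k].var() / true[:, k].mean()
    vmr_obs = obs[:, k].var() / obs[:, k].mean()
    print(f"step {k + 1}: latent VMR = {vmr_true:.3f}, observed VMR = {vmr_obs:.3f}")
```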

    Models for Data Analysis in Accelerated Reliability Growth

    This work develops new methodologies for analyzing accelerated testing data in the context of a reliability growth program for a complex multi-component system. Each component has multiple failure modes, and the growth program consists of multiple test-fix stages with corrective actions applied at the end of each stage. The first group of methods considers time-to-failure data and test covariates for predicting the final reliability of the system. The time-to-failure of each failure mode is assumed to follow a Weibull distribution with a rate parameter proportional to an acceleration factor; acceleration factors are specific to each failure mode and the test covariates. We develop a Bayesian methodology that assigns a prior distribution to each model parameter, samples the posterior distribution via a sequential Metropolis-Hastings procedure, and derives closed-form expressions that aggregate component reliability information into an assessment of system reliability. The second group of methods considers degradation data for predicting the final reliability of a system. First, we provide a non-parametric methodology for a single degradation process, which uses functional data analysis to predict the mean time-to-degradation function and Gaussian processes to capture unit-specific deviations from the mean function. Second, we develop a parametric model for a component with multiple dependent monotone degradation processes; the model includes random effects on the degradation parameters and a parametric life-stress relationship, and it assumes that degradation increments follow an inverse Gaussian process and that a copula function captures the dependence between them. We develop Bayesian and maximum likelihood procedures for estimating the model parameters in two stages: (1) estimate the parameters of the degradation processes as if they were independent, and (2) estimate the parameters of the copula function using the estimated cumulative distribution functions of the observed degradation increments as data. Simulation studies show the efficacy of the proposed methodologies for analyzing multi-stage reliability growth data.
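    One ingredient of the first methodology is easy to sketch in isolation: a random-walk Metropolis-Hastings sampler for the rate parameter of a Weibull time-to-failure model. The shape, prior, proposal scale, and data below are illustrative assumptions, not the paper's full multi-stage, multi-component model.

```python
# A minimal sketch: random-walk Metropolis-Hastings for a Weibull rate
# parameter with known shape k and a Gamma(2, 2) prior on the rate.
import numpy as np

rng = np.random.default_rng(2)
k = 1.5                                  # known Weibull shape (assumed)
times = rng.weibull(k, size=30) / 0.8    # hypothetical failure times (true rate 0.8)

def log_post(lam):
    """Log posterior of the rate, up to an additive constant."""
    if lam <= 0:
        return -np.inf
    loglik = (len(times) * (np.log(k) + k * np.log(lam))
              + (k - 1) * np.log(times).sum()
              - lam**k * (times**k).sum())
    logprior = np.log(lam) - 2.0 * lam   # Gamma(2, 2) prior
    return loglik + logprior

lam, draws = 1.0, []
for _ in range(5000):
    prop = lam + 0.1 * rng.standard_normal()   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                             # accept the move
    draws.append(lam)

print("posterior mean of the rate:", np.mean(draws[1000:]))  # after burn-in
```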

    A non-Gaussian continuous state space model for asset degradation

    Degradation models play an essential role in asset life prediction and condition-based maintenance, and various such models have been proposed. Among them, the state space model can combine degradation data with failure event data, and it is an effective approach for handling multiple observations and missing data. In a state space degradation model, the deterioration of an asset is represented by a system state process that is revealed through a sequence of observations. Current research largely assumes that the underlying process is discrete in time or in state. Although some models have been developed for continuous time and state, these state space models are based on the Wiener process and its Gaussian assumption. This paper proposes a gamma-based state space degradation model that removes the Gaussian assumption. Both condition monitoring observations and failure events are incorporated in the model so as to improve the accuracy of asset life prediction. A simulation study illustrates the application procedure of the proposed model.
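    The kind of data such a model targets can be simulated in a few lines: a latent monotone state driven by gamma increments (no Gaussian assumption), noisy condition monitoring readings, and a failure event when the state crosses a threshold. All parameter values below are illustrative assumptions, and the sketch only generates data; it does not implement the paper's inference procedure.

```python
# A minimal sketch: latent gamma-process degradation observed through
# noisy condition monitoring, with threshold-defined failure.
import numpy as np

rng = np.random.default_rng(3)
n_steps, shape, scale = 100, 0.8, 0.05   # gamma increment parameters (assumed)
obs_sd, threshold = 0.1, 3.0             # observation noise and failure threshold

state = np.cumsum(rng.gamma(shape, scale, size=n_steps))  # latent degradation path
obs = state + rng.normal(0.0, obs_sd, size=n_steps)       # condition monitoring data

crossed = np.nonzero(state >= threshold)[0]
failure_time = crossed[0] + 1 if crossed.size else None   # first crossing step
print("simulated failure time (steps):", failure_time)
```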

    Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary

    Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, the threshold state, and the time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research.
    Comment: Published at http://dx.doi.org/10.1214/088342306000000330 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
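    The canonical example behind this framework is the Wiener process: a path with positive drift nu and diffusion sigma first hits a threshold a at a time that follows an inverse Gaussian distribution with mean a/nu and shape a^2/sigma^2 (a standard result, not specific to this paper). The sketch below checks that empirically with illustrative parameter values.

```python
# A minimal sketch: first hitting times of a drifted Wiener process,
# compared against the theoretical inverse Gaussian mean a / nu.
import numpy as np

rng = np.random.default_rng(4)
nu, sigma, a, dt = 0.5, 1.0, 10.0, 0.01   # drift, diffusion, threshold, time step

def hitting_time(max_t=200.0):
    """Simulate one path until it first reaches the threshold a."""
    x, t = 0.0, 0.0
    while t < max_t:
        x += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= a:
            return t
    return np.nan                          # censored: boundary not reached

samples = np.array([hitting_time() for _ in range(200)])
print("empirical mean hitting time:", np.nanmean(samples))
print("theoretical IG mean a / nu :", a / nu)
```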

    A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging

    Recently, impressive denoising results have been achieved by Bayesian approaches that assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately, such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior on the patch models in order to stabilize the estimation procedure. The proposed restoration scheme has two main advantages: first, it is adapted to diagonal degradation matrices, and in particular to missing data problems (e.g., inpainting of missing pixels or zooming); second, it can deal with signal-dependent noise models, which are particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. To illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
    Comment: Some figures are reduced to comply with arXiv's size constraints. Full-size images are available as HAL technical report hal-01107519v5. IEEE Transactions on Computational Imaging, 201
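    The Gaussian-patch estimate that such schemes build on has a closed form: with y = A x + n, n ~ N(0, s^2 I), and prior x ~ N(mu, Sigma), the posterior mean is mu + Sigma A^T (A Sigma A^T + s^2 I)^{-1} (y - A mu). The sketch below applies it with a diagonal masking matrix A (the missing-pixels setting the abstract mentions); the hyperprior stabilization itself is not shown, and all inputs are synthetic assumptions.

```python
# A minimal sketch: Gaussian-prior posterior mean restoration under a
# diagonal degradation (masking) matrix.
import numpy as np

rng = np.random.default_rng(5)
d, s = 16, 0.05                            # patch dimension, noise std (assumed)

# Hypothetical Gaussian patch model with a random positive definite covariance.
mu = np.zeros(d)
B = rng.standard_normal((d, d))
Sigma = B @ B.T / d + 0.1 * np.eye(d)

x = rng.multivariate_normal(mu, Sigma)     # ground-truth patch
mask = rng.uniform(size=d) < 0.6           # 60% of pixels observed
A = np.diag(mask.astype(float))            # diagonal degradation matrix
y = A @ x + s * rng.standard_normal(d)

# Posterior mean: mu + Sigma A^T (A Sigma A^T + s^2 I)^{-1} (y - A mu)
K = A @ Sigma @ A.T + s**2 * np.eye(d)
x_hat = mu + Sigma @ A.T @ np.linalg.solve(K, y - A @ mu)
print("restoration MSE:", np.mean((x_hat - x) ** 2))
```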

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework in terms of structured sparse estimation is described, showing that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are equal to, often significantly better than, or worse by only a very small margin than the best published ones, at a lower computational cost.
    Comment: 30 pages
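    The "piecewise linear" phrasing can be made concrete: under a Gaussian mixture prior, each component gives a linear (Wiener) estimate, and selecting the component by maximum evidence makes the overall map from observation to estimate linear within each selection region. The sketch below illustrates that principle on a synthetic denoising example; it is not the paper's MAP-EM algorithm, and all model parameters are assumptions.

```python
# A minimal sketch: per-patch Gaussian model selection followed by the
# linear (Wiener) estimate of the selected component.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
d, s, K = 8, 0.2, 3                        # patch dim, noise std, number of components

mus = [rng.standard_normal(d) for _ in range(K)]
Sigmas = []
for _ in range(K):
    B = rng.standard_normal((d, d))
    Sigmas.append(B @ B.T / d + 0.05 * np.eye(d))

x = rng.multivariate_normal(mus[1], Sigmas[1])   # patch drawn from component 1
y = x + s * rng.standard_normal(d)               # noisy observation

# Model selection: evidence of y under each component, y ~ N(mu_k, Sigma_k + s^2 I).
ev = [stats.multivariate_normal.logpdf(y, mus[k], Sigmas[k] + s**2 * np.eye(d))
      for k in range(K)]
k_star = int(np.argmax(ev))

# Wiener estimate under the selected component.
Sk = Sigmas[k_star]
x_hat = mus[k_star] + Sk @ np.linalg.solve(Sk + s**2 * np.eye(d), y - mus[k_star])
print(f"selected component: {k_star}, MSE: {np.mean((x_hat - x) ** 2):.4f}")
```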

    Learning Deep CNN Denoiser Prior for Image Restoration

    Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. The two kinds of methods have their respective merits and drawbacks: model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming when sophisticated priors are needed for good performance, whereas discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, a denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration yields a considerable advantage when the denoiser is obtained via discriminative learning; however, integration with a fast discriminative denoiser prior has received little study. To this end, this paper trains a set of fast and effective CNN (convolutional neural network) denoisers and integrates them into a model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance for various low-level vision applications.
    Comment: Accepted to CVPR 2017. Code: https://github.com/cszn/ircn
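    The plug-and-play pattern the abstract describes can be sketched with half-quadratic splitting, one common variable splitting technique: alternate a data-fidelity step, which has a closed form in the Fourier domain for a known blur kernel, with a denoising step into which any denoiser can be plugged. Below, a simple Gaussian filter stands in for the learned CNN denoisers, and all parameters are illustrative assumptions rather than the paper's settings.

```python
# A minimal sketch: half-quadratic splitting deblurring with a pluggable
# denoiser (a Gaussian filter as a stand-in for a CNN denoiser).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
n = 64
x_true = gaussian_filter(rng.standard_normal((n, n)), 3.0)   # smooth test image

# Known blur kernel, applied via FFT with periodic boundaries.
k = np.zeros((n, n)); k[0, 0] = 1.0
k = gaussian_filter(k, 1.5, mode="wrap")
Kf = np.fft.fft2(k)
y = np.real(np.fft.ifft2(Kf * np.fft.fft2(x_true))) + 0.01 * rng.standard_normal((n, n))

def denoise(z, strength=1.0):
    return gaussian_filter(z, strength)    # placeholder for a learned denoiser

x = y.copy()
rho = 0.05                                 # splitting penalty weight
for _ in range(10):
    z = denoise(x)                         # prior step: plug in the denoiser
    # Data step: argmin ||y - k*x||^2 + rho ||x - z||^2, closed form in Fourier.
    Xf = (np.conj(Kf) * np.fft.fft2(y) + rho * np.fft.fft2(z)) / (np.abs(Kf)**2 + rho)
    x = np.real(np.fft.ifft2(Xf))

print("deblurring MSE:", np.mean((x - x_true) ** 2))
```

    Swapping `denoise` for a stronger denoiser changes only the prior step, which is the modularity argument the paper builds on.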