
    Models for Data Analysis in Accelerated Reliability Growth

    This work develops new methodologies for analyzing accelerated testing data in the context of a reliability growth program for a complex multi-component system. Each component has multiple failure modes, and the growth program consists of multiple test-fix stages with corrective actions applied at the end of each stage. The first group of methods considers time-to-failure data and test covariates for predicting the final reliability of the system. The time-to-failure of each failure mode is assumed to follow a Weibull distribution with a rate parameter proportional to an acceleration factor; acceleration factors are specific to each failure mode and to the test covariates. We develop a Bayesian methodology to analyze the data by assigning a prior distribution to each model parameter, developing a sequential Metropolis-Hastings procedure to sample the posterior distribution of the model parameters, and deriving closed-form expressions that aggregate component reliability information to assess the reliability of the system. The second group of methods considers degradation data for predicting the final reliability of a system. First, we provide a non-parametric methodology for a single degradation process; it uses functional data analysis to predict the mean time-to-degradation function and Gaussian processes to capture unit-specific deviations from the mean function. Second, we develop a parametric model for a component with multiple dependent monotone degradation processes. The model considers random effects on the degradation parameters and a parametric life-stress relationship, assuming that degradation increments follow an inverse Gaussian process and that a copula function captures the dependency between them. We develop a Bayesian and a maximum likelihood procedure for estimating the model parameters using a two-stage process: (1) estimate the parameters of the degradation processes as if they were independent, and (2) estimate the parameters of the copula function using the estimated cumulative distribution function of the observed degradation increments as observed data. Simulation studies show the efficacy of the proposed methodologies for analyzing multi-stage reliability growth data.
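    As a rough illustration of the first group of methods, the sketch below fits a toy single-failure-mode version of the model: Weibull times-to-failure whose rate is scaled by a covariate-dependent acceleration factor, sampled with a random-walk Metropolis-Hastings step. The shape parameter, the exponential acceleration form, the priors, and the synthetic data are all hypothetical stand-ins, not the paper's actual specification.

```python
# Minimal sketch (not the paper's exact model): Metropolis-Hastings for a
# Weibull failure mode with rate lam * exp(beta * x), where exp(beta * x)
# plays the role of the acceleration factor. Shape k is held fixed for brevity.
import numpy as np

rng = np.random.default_rng(0)
k = 1.5                                  # assumed known Weibull shape
x = rng.choice([0.0, 1.0], size=50)      # hypothetical stress covariate
t = rng.weibull(k, size=50) / (0.8 * np.exp(0.5 * x))  # synthetic failure times

def log_post(log_lam, beta):
    lam = np.exp(log_lam)
    rate = lam * np.exp(beta * x)        # accelerated failure-mode rate
    # Weibull log-likelihood, rate parameterization:
    # f(t) = k * rate * (rate*t)^(k-1) * exp(-(rate*t)^k)
    ll = np.sum(np.log(k) + k * np.log(rate) + (k - 1) * np.log(t) - (rate * t) ** k)
    # vague priors: lam ~ Gamma(1, 1) (with log-scale Jacobian), beta ~ N(0, 10^2)
    lp = log_lam - lam - beta ** 2 / 200.0
    return ll + lp

theta = np.array([0.0, 0.0])             # (log lam, beta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.1, size=2)    # random-walk proposal
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop                                 # accept
    samples.append(theta.copy())
post = np.array(samples[1000:])          # drop burn-in
print("posterior means (lam, beta):", np.exp(post[:, 0]).mean(), post[:, 1].mean())
```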

    A load sharing system reliability model with managed component degradation

    Motivated by an industrial problem affecting a water utility, we develop a model for a load sharing system in which an operator dispatches work load to components in a manner that manages their degradation. We assume degradation is the dominant failure type and that the system will not be subject to sudden failure due to a shock. By deriving the time to degradation failure of the system, estimates of the system's probability of failure are generated, and optimal designs can be obtained to minimize the long-run average cost of a future system. The model can be used to support asset maintenance and design decisions. Our model is developed under a common set of core assumptions: the operator allocates work so as to balance the degradation condition of all components and thereby achieve system performance, and the system is replaced when the cumulative work load reaches some random threshold. We adopt cumulative work load as the measure of total usage because it represents the primary cause of component degradation. We model the cumulative work load of the system as a monotone increasing and stationary stochastic process, and the cumulative work load to degradation failure of a component is assumed to be inverse Gaussian distributed. An example, informed by an industry problem, is presented to illustrate the application of the model under different operating scenarios.
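    A minimal sketch of the distributional assumption the abstract states: the cumulative work load to degradation failure of a component is inverse Gaussian. The mean/shape values, the number of components, and the equal-split, independent-survival calculation are hypothetical illustrations, not the paper's system model.

```python
# Sketch: failure probabilities under an inverse Gaussian work-load-to-failure
# assumption. scipy's invgauss(mu, scale) corresponds to IG(mean=mu*scale, shape=scale).
from scipy.stats import invgauss

m, lam = 120.0, 400.0          # hypothetical IG(mean, shape), in work-load units
dist = invgauss(mu=m / lam, scale=lam)

w = 100.0                      # cumulative work load dispatched to one component
print("P(component failed by load w):", dist.cdf(w))
print("P(component survives to load w):", dist.sf(w))

# If the operator balances load perfectly across n components and failures are
# treated as independent (a simplifying assumption for this sketch only):
n, total = 3, 280.0
print("P(all n components survive):", dist.sf(total / n) ** n)
```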

    A non-Gaussian continuous state space model for asset degradation

    The degradation model plays an essential role in asset life prediction and condition-based maintenance. Various degradation models have been proposed; among them, the state space model can combine degradation data and failure event data, and it is also an effective approach for handling multiple observations and missing data. In a state space degradation model, the deterioration process of an asset is represented by a system state process that is revealed by a sequence of observations. Current research largely assumes that the underlying process is discrete in time or in state. Although some models have been developed to consider continuous time and space, these state space models are based on the Wiener process and its Gaussian assumption. This paper proposes a Gamma-based state space degradation model in order to remove the Gaussian assumption. Both condition monitoring observations and failure events are considered in the model so as to improve the accuracy of asset life prediction. A simulation study is carried out to illustrate the application procedure of the proposed model.
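    To make the idea concrete, here is a minimal sketch of a Gamma-increment state space model of the kind the abstract describes: the latent degradation accumulates Gamma increments (so it is monotone and non-Gaussian), observations are noisy readings of the state, and a bootstrap particle filter tracks it. The parameter values, Gaussian observation noise, and failure threshold are hypothetical choices for this sketch, not the paper's specification.

```python
# Sketch: Gamma-process latent degradation + bootstrap particle filtering.
import numpy as np

rng = np.random.default_rng(1)
alpha, scale, obs_sd, dt = 2.0, 0.5, 0.2, 1.0   # hypothetical parameters

# simulate a true degradation path and condition-monitoring observations
T = 30
x = np.cumsum(rng.gamma(alpha * dt, scale, size=T))   # monotone latent state
y = x + rng.normal(0.0, obs_sd, size=T)               # noisy observations

# bootstrap particle filter
N = 2000
particles = np.zeros(N)
est = []
for step in range(T):
    particles = particles + rng.gamma(alpha * dt, scale, size=N)  # propagate
    logw = -0.5 * ((y[step] - particles) / obs_sd) ** 2           # Gaussian obs model
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]             # resample
    est.append(particles.mean())

fail_level = 20.0   # hypothetical failure threshold on the degradation state
print("filtered state at t=T:", est[-1])
print("P(failed by T):", np.mean(particles >= fail_level))
```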

    Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary

    Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, the threshold state, and the time scale may all depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research.
    Comment: Published at http://dx.doi.org/10.1214/088342306000000330 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
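    A minimal sketch of the canonical case: a latent Wiener process starting at level x0 > 0 drifts toward a failure boundary at zero, so the first hitting time is inverse Gaussian with mean x0/nu and shape (x0/sigma)^2. The covariate links and coefficients below are hypothetical examples of a threshold-regression structure, not the paper's.

```python
# Sketch: first-hitting-time survival for a Wiener process with drift.
import numpy as np
from scipy.stats import invgauss

def fht_survival(t, x0, nu, sigma=1.0):
    """P(T > t) for the first hitting time of 0 from x0 with drift -nu (nu > 0)."""
    mean, shape = x0 / nu, (x0 / sigma) ** 2          # IG(mean, shape)
    # scipy's invgauss(mu, scale) corresponds to IG(mean=mu*scale, shape=scale)
    return invgauss(mu=mean / shape, scale=shape).sf(t)

# hypothetical threshold-regression links: log link keeps the start level
# positive, identity link on the drift
z = np.array([1.0, 0.0])                  # intercept + one covariate
x0 = np.exp(np.array([1.2, -0.3]) @ z)    # regression on the initial level
nu = np.array([0.4, 0.1]) @ z             # regression on the drift

print("5-year survival:", fht_survival(5.0, x0, nu))
```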

    A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging

    Recently, impressive denoising results have been achieved by Bayesian approaches that assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately, such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior to model image patches in order to stabilize the estimation procedure. The proposed restoration scheme has two main advantages: first, it is adapted to diagonal degradation matrices, and in particular to missing data problems (e.g., inpainting of missing pixels or zooming); second, it can deal with signal-dependent noise models, which are particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. To illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
    Comment: Some figures are reduced to comply with arXiv's size constraints. Full-size images are available as HAL technical report hal-01107519v5. IEEE Transactions on Computational Imaging, 201
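    A minimal sketch of the per-patch Gaussian MAP step this line of work builds on: with prior x ~ N(mu, Sigma) and observation y = Ax + n, the MAP estimate is the Wiener-type formula below, where A is the diagonal 0/1 mask of observed pixels (the "diagonal degradation matrix" case the abstract mentions). The patch statistics here are random placeholders; in the paper they come from a hyperprior-stabilized estimation.

```python
# Sketch: Gaussian-prior MAP restoration of one patch (joint inpainting + denoising).
import numpy as np

rng = np.random.default_rng(2)
d, sigma = 64, 0.1                       # 8x8 patch, noise level
mu = rng.normal(size=d)                  # placeholder patch mean
B = rng.normal(size=(d, d)); Sigma = B @ B.T / d + 0.01 * np.eye(d)  # placeholder covariance

mask = rng.uniform(size=d) < 0.6         # ~60% of pixels observed
A = np.diag(mask.astype(float))          # diagonal degradation matrix
x_true = rng.multivariate_normal(mu, Sigma)
y = A @ x_true + sigma * rng.normal(size=d) * mask

# Wiener / MAP estimate: x_hat = mu + Sigma A^T (A Sigma A^T + sigma^2 I)^{-1} (y - A mu)
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + sigma ** 2 * np.eye(d))
x_hat = mu + K @ (y - A @ mu)
print("patch RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```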

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, showing that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are equal to, often significantly better than, or at worst marginally worse than the best published ones, at a lower computational cost.
    Comment: 30 pages
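    A minimal sketch of the piecewise linear estimate: given a K-component GMM prior on patches and y = Ax + n, each component yields a linear (Wiener) estimate, and the estimator picks the component with the highest evidence p(y | component). The GMM here is random; in the paper it is estimated by MAP-EM over the whole image.

```python
# Sketch: GMM-based piecewise linear estimator for a linear inverse problem.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
d, K, sigma = 16, 3, 0.05
mus = [rng.normal(size=d) for _ in range(K)]          # placeholder GMM means
Sigmas = []
for _ in range(K):                                     # placeholder GMM covariances
    B = rng.normal(size=(d, d)); Sigmas.append(B @ B.T / d + 0.01 * np.eye(d))

A = np.eye(d)[: d // 2]                 # observe half the coordinates
x = rng.multivariate_normal(mus[1], Sigmas[1])
y = A @ x + sigma * rng.normal(size=d // 2)

best, best_ev = None, -np.inf
for mu, S in zip(mus, Sigmas):
    cov_y = A @ S @ A.T + sigma ** 2 * np.eye(len(y))  # evidence covariance
    ev = multivariate_normal(A @ mu, cov_y).logpdf(y)  # log p(y | component)
    if ev > best_ev:
        best_ev = ev
        best = mu + S @ A.T @ np.linalg.solve(cov_y, y - A @ mu)  # Wiener estimate

print("selected-component RMSE:", np.sqrt(np.mean((best - x) ** 2)))
```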

    Learning Deep CNN Denoiser Prior for Image Restoration

    Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. The two kinds of methods have their respective merits and drawbacks: model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming, requiring sophisticated priors for good performance; meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, a denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration brings considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with a fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization methods to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance for various low-level vision applications.
    Comment: Accepted to CVPR 2017. Code: https://github.com/cszn/ircn
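    A minimal sketch of the variable-splitting ("plug-and-play") loop the abstract describes: for y = Hx + n, alternate a data-fidelity step with a call to an off-the-shelf denoiser standing in for the learned CNN prior. Here `denoise` is a simple Gaussian-blur placeholder and the penalty weight, step sizes, and toy deblurring problem are hypothetical, not the paper's trained denoisers or settings.

```python
# Sketch: half-quadratic splitting with a plugged-in denoiser prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(z, strength):
    return gaussian_filter(z, sigma=strength)   # placeholder for a CNN denoiser

def pnp_restore(y, H, Ht, rho=0.5, iters=30):
    """x-step approximately solves (Ht H + rho I) x = Ht y + rho z by gradient
    descent (for generality); z-step applies the denoiser as a proximal map."""
    x, z = y.copy(), y.copy()
    for _ in range(iters):
        for _ in range(10):                     # inner gradient steps on x
            grad = Ht(H(x) - y) + rho * (x - z)
            x = x - 0.1 * grad
        z = denoise(x, strength=1.0 / np.sqrt(rho))
    return x

# toy deblurring example; a Gaussian blur is (approximately) self-adjoint
rng = np.random.default_rng(4)
blur = lambda img: gaussian_filter(img, sigma=2.0)
x_true = rng.uniform(size=(64, 64))
y = blur(x_true) + 0.01 * rng.normal(size=(64, 64))
x_hat = pnp_restore(y, H=blur, Ht=blur)
print("restored image shape:", x_hat.shape)
```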

    A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems

    Non-Local Total Variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend NLTV-based regularization to multicomponent images by taking advantage of the Structure Tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the non-local variations jointly across the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyper-parameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out for multispectral and hyperspectral images. The results demonstrate the interest of introducing a non-local structure tensor regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods.
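    A minimal sketch of the structure-tensor penalty at the heart of this regularization, simplified to local (rather than non-local) gradients: at each pixel, stack the gradients of all components into a matrix, take its singular values, and apply an ℓp norm; summing over pixels gives the joint penalty. With p = 1 this is the nuclear norm of the per-pixel Jacobian. The test image is a random placeholder.

```python
# Sketch: structure-tensor l_{1,p} penalty for a multicomponent image
# (local-gradient simplification of the paper's non-local version).
import numpy as np

def st_lp_penalty(img, p=1.0):
    """img: (H, W, C) multicomponent image. Returns sum over pixels of the
    l_p norm of the singular values of the per-pixel component Jacobian."""
    gy, gx = np.gradient(img, axis=(0, 1))          # per-component gradients
    J = np.stack([gx, gy], axis=-1)                 # (H, W, C, 2) Jacobians
    sv = np.linalg.svd(J, compute_uv=False)         # singular values per pixel
    return np.sum(np.sum(sv ** p, axis=-1) ** (1.0 / p))

rng = np.random.default_rng(5)
img = rng.uniform(size=(32, 32, 4))                 # hypothetical 4-band image
print("ST-l1 (nuclear norm) penalty:", st_lp_penalty(img, p=1.0))
print("ST-l2 (Frobenius) penalty:", st_lp_penalty(img, p=2.0))
```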