New survival distributions that quantify the gain from eliminating flawed components
A general method for deriving new survival distributions from old ones is presented, yielding a class of useful mixture distributions. Fitting such distributions to failure-time data allows estimation of the improvement in reliability that could be gained by eliminating 'frail' components. One model parameter is the proportional increase in expected survival time that could be achieved. Some two- and three-parameter distributions in this class are described; they are extensions of the Weibull, exponential, gamma, and lognormal distributions. The methodology is illustrated by fitting several well-travelled datasets.
Keywords: Weibull distribution, gamma distribution, mixture distribution,
hazard function, partial integration, frailty model
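As a rough illustration of the idea (the parametrization here is hypothetical, not the paper's), a two-component Weibull mixture can represent a 'frail' sub-population, and the gain from eliminating it can be expressed as a ratio of expected survival times:

```python
import math

# Hypothetical two-component Weibull mixture: a fraction p of components
# is "frail" (scale eta_frail), the rest is "strong" (scale eta_strong);
# both share the shape parameter.
def weibull_survival(t, shape, scale):
    return math.exp(-((t / scale) ** shape))

def mixture_survival(t, p, shape, eta_frail, eta_strong):
    return (p * weibull_survival(t, shape, eta_frail)
            + (1 - p) * weibull_survival(t, shape, eta_strong))

def weibull_mean(shape, scale):
    # E[T] = scale * Gamma(1 + 1/shape)
    return scale * math.gamma(1 + 1 / shape)

def gain(p, shape, eta_frail, eta_strong):
    # Proportional gain in expected survival time from removing the
    # frail sub-population: strong-component mean over mixture mean.
    mix_mean = (p * weibull_mean(shape, eta_frail)
                + (1 - p) * weibull_mean(shape, eta_strong))
    return weibull_mean(shape, eta_strong) / mix_mean
```

With p = 0 the mixture reduces to the strong component and the gain is exactly 1; any frail fraction pushes the ratio above 1.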
Flexible Birnbaum-Saunders distribution
In this paper, we propose a bimodal extension of the Birnbaum–Saunders model by including an extra parameter. This new model is termed the flexible Birnbaum–Saunders (FBS) and includes the ordinary Birnbaum–Saunders (BS) and the skew Birnbaum–Saunders (SBS) models as special cases. Its properties are studied. Parameter estimation is considered via an iterative maximum likelihood approach. Two real applications, of interest in environmental sciences, are included, which reveal that our proposal can perform better than other competing models. Funded by the Ministerio de Economía y Competitividad (MINECO), España.
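For reference, the ordinary BS baseline that the FBS model extends can be sketched as follows (a minimal illustration of the baseline CDF only; the extra FBS parameter is not shown, as its exact form is specific to the paper):

```python
import math

def std_normal_cdf(x):
    # Phi(x) via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_cdf(t, alpha, beta):
    # Ordinary Birnbaum-Saunders CDF with shape alpha and scale beta:
    # F(t) = Phi((1/alpha) * (sqrt(t/beta) - sqrt(beta/t))), t > 0
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return std_normal_cdf(z)
```

At t = beta the argument of Phi is zero, so the CDF equals 0.5: beta is the median of the BS distribution.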
Symmetric and Asymmetric Distributions
In recent years, the advances and capabilities of computer software have substantially increased the number of scientific publications that introduce new probabilistic modelling frameworks, including continuous and discrete approaches and univariate and multivariate models. Many of these theoretical and applied statistical works are related to distributions that break the symmetry of the normal distribution and other similar symmetric models, mainly using Azzalini's scheme. This strategy uses a symmetric distribution as a baseline case; an extra parameter is then added to the parent model to control the skewness of the new family of probability distributions. The most widespread and popular model is the one based on the normal distribution, which produces the skew-normal distribution. In this Special Issue on symmetric and asymmetric distributions, works related to this topic are presented, as well as theoretical and applied proposals that have connections with and implications for this topic. Immediate applications of this line of work include scenarios in economics, environmental sciences, biometrics, engineering, health, and other fields. This Special Issue comprises nine works that follow this methodology, derived using a simple process while retaining the rigor that the subject deserves. Readers of this Issue will surely find future lines of work that will enable them to achieve fruitful research results.
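Azzalini's scheme, as described above, can be sketched for the normal baseline: the skew-normal density 2·φ(x)·Φ(αx) reduces to the symmetric normal when the skewness parameter α is zero (a minimal illustration with generic parameter names):

```python
import math

def phi(x):
    # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def skew_normal_pdf(x, alpha):
    # Azzalini's scheme: 2 * (baseline pdf) * (baseline cdf at alpha*x);
    # alpha = 0 recovers the symmetric normal baseline, alpha > 0 skews right.
    return 2 * phi(x) * Phi(alpha * x)
```

The same construction works with any symmetric baseline density and cdf, which is what makes the scheme a general symmetry-breaking device.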
Statistical Modeling: Regression, Survival Analysis, and Time Series Analysis
Statistical Modeling provides an introduction to regression, survival analysis, and time series analysis for students who have completed calculus-based courses in probability and mathematical statistics. The book uses the R language to fit statistical models, conduct Monte Carlo simulation experiments, and generate graphics. Over 300 exercises at the end of the chapters make this an appropriate text for a class in statistical modeling.
Part I: Regression
Chapter 1: Simple Linear Regression
Chapter 2: Inference in Simple Linear Regression
Chapter 3: Topics in Regression
Part II: Survival Analysis
Chapter 4: Probability Models in Survival Analysis
Chapter 5: Statistical Methods in Survival Analysis
Chapter 6: Topics in Survival Analysis
Part III: Time Series Analysis
Chapter 7: Basic Methods in Time Series Analysis
Chapter 8: Modeling in Time Series Analysis
Chapter 9: Topics in Time Series Analysis
Practical Methods for Optimizing Equipment Maintenance Strategies Using an Analytic Hierarchy Process and Prognostic Algorithms
Many large organizations report limited success using Condition Based Maintenance (CbM). This work explains some of the causes for limited success, and recommends practical methods that enable the benefits of CbM. The backbone of CbM is a Prognostics and Health Management (PHM) system. Use of PHM alone does not ensure success; it needs to be integrated into enterprise level processes and culture, and aligned with customer expectations. To integrate PHM, this work recommends a novel life cycle framework, expanding the concept of maintenance into several levels beginning with an overarching maintenance strategy and subordinate policies, tactics, and PHM analytical methods. During the design and in-service phases of the equipment’s life, an organization must prove that a maintenance policy satisfies specific safety and technical requirements, business practices, and is supported by the logistic and resourcing plan to satisfy end-user needs and expectations. These factors often compete with each other because they are designed and considered separately, and serve disparate customers. This work recommends using the Analytic Hierarchy Process (AHP) as a practical method for consolidating input from stakeholders and quantifying the most preferred maintenance policy. AHP forces simultaneous consideration of all factors, resolving conflicts in the trade-space of the decision process. When used within the recommended life cycle framework, it is a vehicle for justifying the decision to transition from generalized high-level concepts down to specific lower-level actions. This work demonstrates AHP using degradation data, prognostic algorithms, cost data, and stakeholder input to select the most preferred maintenance policy for a paint coating system. It concludes the following for this particular system: A proactive maintenance policy is most preferred, and a predictive (CbM) policy is more preferred than predeterminative (time-directed) and corrective policies. 
A General Path Model (GPM) prognostic with Bayesian updating provides the most accurate prediction of the Remaining Useful Life (RUL). Long periods between inspections and the use of categorical variables in inspection reports severely limit the accuracy of RUL prediction. In summary, this work recommends using the proposed life cycle model, AHP, PHM, a GPM model, and embedded sensors to improve the success of a CbM policy.
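The AHP step described above, deriving priority weights from a pairwise comparison matrix via its principal eigenvector, can be sketched as follows (the comparison values below are hypothetical, not the study's elicited judgments):

```python
# Power-iteration sketch for AHP priority weights from a pairwise
# comparison matrix (entry a_ij = relative preference of option i over j;
# a_ji = 1 / a_ij on a 1-9 scale).
def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        # multiply by the matrix, then renormalize to sum to 1
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [v / total for v in w_new]
    return w

# Hypothetical 3-policy comparison:
# predictive (CbM) vs predeterminative (time-directed) vs corrective.
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(pairwise)  # largest weight = most preferred policy
```

The resulting weight vector ranks the alternatives; a consistency ratio check (not shown) would normally accompany this step.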
Inference procedures for the piecewise exponential model when the data are arbitrarily censored
Lifetime data are often subject to complicated censoring mechanisms. In particular, point inspection schedules result in observations for which the exact failure times are known only to fall in an interval. Furthermore, overlapping intervals occur when more than one inspection schedule is employed. While well-known parametric and nonparametric inference procedures exist, the piecewise exponential (PEX) model provides a flexible alternative. The PEX model is characterized by a piecewise-constant hazard function with specified jump points. The jump points may be determined as a function of the data, giving the model a nonparametric interpretation, or according to physical considerations related to the process but independent of the data. Assumptions concerning the shape of the hazard function can be incorporated into the model. The EM algorithm provides a useful method of estimation, particularly as the number of hazard jump points increases. Its convergence is guaranteed even when the MLE lies on the boundary of the parameter space. A version of the EM algorithm is used to construct approximate confidence intervals based on inverting the likelihood ratio test statistic. Asymptotic properties of the PEX estimator are given for certain censoring mechanisms. A Monte Carlo study was done to investigate the effect of a constrained hazard function and of the choice of jump points on the resulting estimate of the survival function. The performance of the likelihood-ratio-based confidence intervals is also evaluated.
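A minimal sketch of the PEX survival function: the piecewise-constant hazard is integrated into a cumulative hazard, and S(t) = exp(-H(t)) (the jump points and rates below are illustrative):

```python
import math

def pex_survival(t, jump_points, rates):
    # Piecewise-constant hazard: rates[k] applies on [jump_points[k-1],
    # jump_points[k]), with rates one element longer than jump_points
    # (the last rate covers the final open-ended interval).
    cum = 0.0   # cumulative hazard H(t)
    prev = 0.0
    for tk, lam in zip(jump_points, rates):
        if t <= tk:
            return math.exp(-(cum + lam * (t - prev)))
        cum += lam * (tk - prev)
        prev = tk
    return math.exp(-(cum + rates[-1] * (t - prev)))
```

With no jump points this reduces to the ordinary exponential survival function, which is the sense in which the PEX model nests the exponential as a special case.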
Impact of pressure fluctuations on pipe failures in water distribution networks
Water utilities operate ageing infrastructure that is degraded by environmental factors and operational stresses. Pipe failures have become routine, resulting in major interruptions and extensive costs to society.
Pipe failure is the result of complex interactions between a variety of factors contributing to the pipe's structural degradation and ultimate failure. Previous studies have extensively reviewed environmental and pipe-related factors. However, the long-term impact of quasi-steady and unsteady pressure variations on metallic pipe failures has not been fully investigated.
The overall aim of this project is to improve the understanding of dynamic pressure variations in water supply networks and evaluate their impact on pipe failures, with the aim of enhancing the operational efficiency of water supply infrastructure by managing systems' hydraulic conditions.
In this study, a large-scale survey sampling programme is designed and executed in order to gather network-representative, high-frequency pressure samples. A metric is formulated to quantify the stresses imposed on pipes by pressure variations. Causal analysis is undertaken, and the relationships between pipe failure and predictor variables are investigated by developing logistic regression models. The study develops a methodology for investigating the cost-effectiveness of intervention measures and the economic justification of calm networks.
The findings from the study illustrate positive associations between the system's hydraulic variations and the predicted probability of pipe failure. It is shown that deterioration models can be enhanced by including pressure-variation characteristics as contributing factors to pipe degradation. Investment in achieving calm networks is demonstrated to be economically justifiable.
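A logistic regression of the kind described, relating failure probability to predictors that include a pressure-variation metric, can be sketched as follows (the coefficients and predictor names are purely hypothetical, not estimates from the study):

```python
import math

def failure_probability(age_years, pressure_metric,
                        b0=-6.0, b_age=0.05, b_press=0.8):
    # Logistic model: P(failure) = 1 / (1 + exp(-(b0 + b_age*age + b_press*metric)))
    # All coefficients here are illustrative placeholders.
    eta = b0 + b_age * age_years + b_press * pressure_metric
    return 1.0 / (1.0 + math.exp(-eta))
```

Under this sketch, a positive coefficient on the pressure-variation metric encodes exactly the reported association: greater hydraulic variation raises the predicted failure probability, all else equal.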
Novel regularization models for dynamic and discrete response data
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Regularized regression models have gained popularity in recent years. The addition of a penalty term to the likelihood function allows parameter estimation where traditional methods fail, such as in the p » n case. The use of an l1 penalty in particular leads to simultaneous parameter estimation and variable selection, which is rather convenient in practice. Moreover, computationally efficient algorithms make these methods attractive in many applications. This thesis is inspired by this literature and investigates the development of novel penalty functions and regression methods within this context. In particular, Chapter 2 deals with linear models for time-dependent response and explanatory variables. This goes beyond the independence framework common to many of the developed regularized regression models. We propose to account for the time dependency in the data by explicitly adding autoregressive terms to the response variable together with an autoregressive process for the residuals. In addition, the use of an l1-penalized likelihood approach for parameter estimation leads to automatic order and variable selection and makes this method feasible for high-dimensional data. Theoretical properties of the estimators are provided and an extensive simulation study is performed. Finally, we show the application of the model on air pollution and stock market data and discuss its implementation in the R package DREGAR, which is freely available on CRAN. In Chapter 3, we develop a new penalty function. Despite all the advantages of the l1 penalty, it is not differentiable at zero, and neither are the alternatives proposed in the literature. The only exception is the ridge penalty, which does not lead to variable selection.
Motivated by this gap, and noting the advantages that a differentiable penalty can give, such as increased computational efficiency in some cases and the derivation of more accurate model selection criteria, we develop a new penalty function based on the error function. We study the theoretical properties of this function and of the estimators obtained in a regularized regression context. Finally, we perform a simulation study and use the new penalty to analyse diabetes and prostate cancer datasets. The new method is implemented in the R package DLASSO, which is freely available on CRAN. Finally, Chapter 4 deals with regression models for discrete response data, which are frequently collected in many application areas. In particular, we consider a discrete Weibull regression model that has recently been introduced in the literature. In this chapter, we propose the first Bayesian implementation of this model. We consider a general parametrization, where both parameters of the discrete Weibull distribution can be conditioned on the predictors, and show theoretically how, under a uniform noninformative prior, the posterior distribution is proper with finite moments. In addition, we consider closely the case of Laplace priors for parameter shrinkage and variable selection. A simulation study and the analysis of four real datasets of medical records show the applicability of this approach to the analysis of count data. The method is implemented in the R package BDWreg, which is freely available on CRAN.
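For context, the type-I discrete Weibull distribution underlying the regression model discussed above has pmf P(X = x) = q^(x^β) − q^((x+1)^β) for x = 0, 1, 2, …, with 0 < q < 1 and β > 0; a minimal sketch:

```python
def discrete_weibull_pmf(x, q, beta):
    # Type-I discrete Weibull: the probability mass telescopes, so the
    # survival function is simply P(X >= x) = q^(x^beta).
    return q ** (x ** beta) - q ** ((x + 1) ** beta)
```

With beta = 1 this reduces to the geometric distribution with success probability 1 - q, which is the discrete analogue of the exponential special case of the continuous Weibull.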
Current Topics on Risk Analysis: ICRA6 and RISK2015 Conference
Peer reviewed. Postprint (published version).