
    Nonparametric approach to reliability and its applications

    Reliability concepts are used by reliability engineers in industry to perform systematic reliability studies for the identification and possible elimination of failure causes, the quantification of failure occurrences, and the reduction of failure consequences. Apart from applications to mechanical and electronic systems and software, reliability concepts are heavily used in biomedicine to model and understand biological processes such as aging. The standard approach to estimating reliability measures is to assume that the underlying lifetime distribution is known, even if only approximately. When the assumed parametric model is valid, the accuracy of the corresponding inferences based on the estimated function is usually sufficient. However, when the model is in doubt, a parametric approach can lead to inaccurate inferences; this issue has been studied extensively in the literature. In such circumstances, estimating these reliability measures using nonparametric techniques has the advantage of flexibility, as such techniques generally impose fewer restrictions on the underlying distribution of the lifetime variable. This thesis considers three popular reliability measures, namely the Reversed Hazard Rate (RHR), Expected Inactivity Time (EIT) and Mean Residual Life (MRL) functions, and introduces new estimation methods based on a nonparametric technique called fixed-design local polynomial regression. The theoretical properties of these estimators, such as their asymptotic bias, variance and distribution, were investigated, and extensive simulations were carried out to assess their performance. The thesis also introduces novel hypothesis testing procedures for comparing reliability measures based on samples from two populations using nonparametric techniques. Finally, these methods were applied to various problems in biomedicine and reliability engineering to demonstrate their practical utility.
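    For reference, the three measures named above have standard textbook definitions in terms of the lifetime density f, distribution function F and survival function S = 1 - F; the LaTeX below records those general definitions and is not notation taken from the thesis itself.

        % Standard definitions for a lifetime T with density f, cdf F, survival S = 1 - F
        \begin{align*}
          \text{RHR:} \quad & r(t) = \frac{f(t)}{F(t)} \\
          \text{EIT:} \quad & m(t) = \mathbb{E}[\,t - T \mid T \le t\,] = \frac{\int_0^t F(u)\,du}{F(t)} \\
          \text{MRL:} \quad & e(t) = \mathbb{E}[\,T - t \mid T > t\,] = \frac{\int_t^\infty S(u)\,du}{S(t)}
        \end{align*}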

    Change-point Problem and Regression: An Annotated Bibliography

    The problems of identifying changes at unknown times and of estimating the location of changes in stochastic processes are referred to as the change-point problem or, in the Eastern literature, as "disorder". The change-point problem, first introduced in the quality control context, has since developed into a fundamental problem in the areas of statistical control theory, stationarity of a stochastic process, estimation of the current position of a time series, testing and estimation of change in the patterns of a regression model, and most recently the comparison and matching of DNA sequences in microarray data analysis. Numerous methodological approaches have been implemented in examining change-point models. Maximum-likelihood estimation, Bayesian estimation, isotonic regression, piecewise regression, quasi-likelihood and non-parametric regression are among the methods which have been applied to resolving challenges in change-point problems. Grid-searching approaches have also been used to examine the change-point problem. Statistical analysis of change-point problems depends on the method of data collection. If data collection is ongoing until some random time, then the appropriate statistical procedure is called sequential; if, however, a large finite set of data is collected with the purpose of determining whether at least one change-point occurred, then this may be referred to as non-sequential. Not surprisingly, both approaches have a rich literature, with much of the earlier work focusing on sequential methods inspired by applications in quality control for industrial processes. In the regression literature, the change-point model is also referred to as two- or multiple-phase regression, switching regression, segmented regression, two-stage least squares (Shaban, 1980), or broken-line regression. The change-point problem has been the subject of intensive research over the past half-century. The subject has evolved considerably and found applications in many different areas, and it seems rather impossible to summarize all of the research carried out over the past 50 years. We have therefore confined ourselves to those articles on change-point problems which pertain to regression. The important branch of sequential procedures in change-point problems has been left out entirely; we refer the reader to the seminal review papers by Lai (1995, 2001). The so-called structural change models, which occupy a considerable portion of the research in this area, particularly among econometricians, have not been fully considered; we refer the reader to Perron (2005) for an updated review. Articles on change-points in time series are considered only if the methodologies presented pertain to regression analysis.
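    As a hedged illustration of the grid-searching approach mentioned above (a minimal sketch, not drawn from any of the surveyed papers), the code below locates a single change point in a two-phase linear regression by trying every candidate split and keeping the one with the smallest total residual sum of squares.

        import numpy as np

        def grid_search_changepoint(x, y, min_seg=3):
            """Estimate a single change point in two-phase linear regression.

            Tries every candidate split index, fits a separate least-squares
            line to each segment, and returns the split with the smallest
            total residual sum of squares (RSS).
            """
            best_idx, best_rss = None, np.inf
            for k in range(min_seg, len(x) - min_seg):
                rss = 0.0
                for seg_x, seg_y in ((x[:k], y[:k]), (x[k:], y[k:])):
                    coeffs = np.polyfit(seg_x, seg_y, 1)   # fit one segment
                    resid = seg_y - np.polyval(coeffs, seg_x)
                    rss += np.sum(resid ** 2)
                if rss < best_rss:
                    best_idx, best_rss = k, rss
            return best_idx, best_rss

        # Synthetic example: the slope changes at x = 5
        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 100)
        y = np.where(x < 5, x, 5 + 3 * (x - 5)) + rng.normal(0, 0.5, x.size)
        k_hat, _ = grid_search_changepoint(x, y)
        print("estimated change point near x =", x[k_hat])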

    Vol. 10, No. 2 (Full Issue)


    A FRAMEWORK FOR SOFTWARE RELIABILITY MANAGEMENT BASED ON THE SOFTWARE DEVELOPMENT PROFILE MODEL

    Recent empirical studies of software have shown a strong correlation between the change history of files and their fault-proneness. Statistical data analysis techniques, such as regression analysis, have been applied to validate this finding. While these regression-based models show a correlation between selected software attributes and defect-proneness, in most cases they are inadequate in terms of demonstrating causality. For this reason, we introduce the Software Development Profile Model (SDPM) as a causal model for identifying defect-prone software artifacts based on their change history and software development activities. The SDPM is based on the assumption that human error during software development is the sole cause of defects leading to software failures. The SDPM assumes that when a software construct is touched, it has a chance of becoming defective. Software development activities such as inspection, testing, and rework further affect the remaining number of software defects. Under this assumption, the SDPM estimates the defect content of software artifacts based on software change history and software development activities. The SDPM is an improvement over existing defect estimation models because it not only uses evidence from the current project to estimate defect content, but also allows software managers to manage software projects quantitatively by making risk-informed decisions early in the software development life cycle. We apply the SDPM to several real-life software development projects, showing how it is used, analyzing its accuracy in predicting defect-prone files, and comparing the results with a Poisson regression model.
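    The core assumption, that each touch of a software construct may inject a defect and that later activities remove some share of the survivors, lends itself to a small simulation. The sketch below is a hypothetical illustration of that idea only; the injection probability and detection rates are invented placeholder parameters, not values from the SDPM.

        import random

        def simulate_defect_content(n_touches, p_inject, activities, seed=0):
            """Toy simulation of the touch-inject / activity-remove idea.

            Each touch injects a defect with probability p_inject; each later
            activity (inspection, testing, rework) independently removes each
            surviving defect with that activity's detection probability.
            """
            rng = random.Random(seed)
            defects = sum(rng.random() < p_inject for _ in range(n_touches))
            for name, p_detect in activities:
                removed = sum(rng.random() < p_detect for _ in range(defects))
                defects -= removed
                print(f"after {name}: {defects} defects remain")
            return defects

        # Hypothetical numbers: 40 changes to a file, three downstream activities
        remaining = simulate_defect_content(
            n_touches=40,
            p_inject=0.15,
            activities=[("inspection", 0.5), ("testing", 0.4), ("rework", 0.3)],
        )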

    A Probabilistic Corrosion Model for Copper-Coated Used Nuclear Fuel Containers

    Lifetime predictions of used nuclear fuel containers destined for permanent storage in Deep Geological Repositories (DGRs) are challenged by the uncertainty surrounding the environment and the performance of both containers and engineered barriers over repository timescales. Much of the work to characterise the response of engineered barriers to postulated evolving environmental conditions and degradation mechanisms is limited to very short-term laboratory tests or, at best, in-situ large-scale experiments spanning less than a few decades. While much is learned from these test programmes, the fact remains that the long-term performance of many tens of thousands of Used Fuel Containers (UFCs) across a timescale of 100,000 years or more cannot be estimated with a significant degree of confidence by extrapolating single point results of short-term experiments. This is particularly true when there is a desire to understand the progression of container failures and the timing of contaminants subsequently released into the geosphere. UFC lifetime predictions therefore require a probabilistic approach to address uncertainty. Accordingly, this thesis addresses three objectives. The first is to develop a probabilistic model to estimate the time to penetrate through the copper coating of a UFC, assuming sulphide-induced corrosion is the primary degradation mechanism of concern; within this model, a framework is also developed to account for the design of the Engineered Barrier System (EBS) and the proposed repository layout. The second is to enhance the probabilistic corrosion model by integrating the potential effects of latent copper coating defects and the single temperature transient predicted for the repository. The third is to develop a stochastic process model for pitting corrosion, integrate it into the sulphide-induced corrosion model, and estimate the time to penetrate through the copper coating based on both degradation mechanisms. To satisfy the first two objectives, this work presents a unique Monte Carlo probabilistic framework. With respect to the third objective, modelling pitting corrosion in copper under postulated repository environments poses a significant challenge, since there are no relevant data and the likelihood of this mechanism remains a much debated topic. To overcome this challenge and facilitate demonstration of the approach to modelling pit growth, surrogate data are utilised. In addition to detailing various options for modelling pit growth, this work presents a novel, more transparent, self-contained approach to the estimation of the underlying process intensity when pit growth is modelled via a non-homogeneous Markov process. Finally, the combined effect of pitting and sulphide-induced corrosion on UFC copper-coating lifetimes is demonstrated. The modelling results are for the purpose of illustrating a potential methodology only.
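    To make the Monte Carlo framing concrete, the sketch below samples a corrosion rate for each simulated container and divides the coating thickness by it to obtain a penetration time. It is an illustrative sketch only: the lognormal rate distribution, its parameters and the 3 mm coating thickness are invented placeholders, not values or assumptions from the thesis.

        import numpy as np

        def penetration_times(n_containers, coating_um, rate_mu, rate_sigma, seed=0):
            """Monte Carlo sample of copper-coating penetration times.

            Draws a lognormal sulphide-induced corrosion rate (um/year) for
            each simulated container and returns coating_um / rate as the
            time (years) to penetrate the coating.
            """
            rng = np.random.default_rng(seed)
            rates = rng.lognormal(mean=rate_mu, sigma=rate_sigma, size=n_containers)
            return coating_um / rates

        # Illustrative run: 3 mm coating, placeholder rate distribution
        times = penetration_times(
            n_containers=100_000, coating_um=3000.0, rate_mu=np.log(0.01), rate_sigma=0.5
        )
        print(f"median lifetime: {np.median(times):,.0f} years")
        print(f"1st percentile:  {np.percentile(times, 1):,.0f} years")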

    Dynamic analysis of survival models and related processes.

    This thesis presents new methods for the analysis of survival data based on a dynamic Bayesian approach. The models allow the parameters to change with time. The analysis is tractable and emphasises the predictive aspects of the models. The survival problems covered include linear and non-linear regression, the analysis of random samples, time-dependent covariates, life tables and competing risks. The analysis is also extended to a number of point processes. Numerical applications are provided, and the microcomputer software to perform them is described.
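    In dynamic Bayesian models of this kind, "parameters that change with time" conventionally means a state-space evolution of the model parameters; the generic form below records that standard convention and is not notation taken from the thesis itself.

        % Observation model with parameters that drift as a random walk
        \begin{align*}
          y_t &\sim p(y_t \mid \theta_t) \\
          \theta_t &= \theta_{t-1} + w_t, \qquad w_t \sim N(0, W_t)
        \end{align*}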

    FRAMEWORK FOR RELIABILITY, MAINTAINABILITY AND AVAILABILITY ANALYSIS OF GAS PROCESSING SYSTEM DURING OPERATION PHASE

    In facing many operational challenges, such as increased expectations in bottom-line performance and escalating overhead costs, petrochemical plants nowadays need to continually strive for higher reliability and availability by means of effective improvement tools. Reliability, maintainability and availability (RAM) analysis has been recognised as one of the strategic tools for improving plant reliability at the operation phase. Nevertheless, the application of RAM among industrial practitioners is still limited, generally due to the impracticality and complexity of existing approaches. Hence, it is important to enhance these approaches so that they can be practically applied by companies to assist them in achieving their operational goals. The objective of this research is to develop frameworks for applying reliability, maintainability and availability analysis to a gas processing system at the operation phase, in order to improve the system's operational and maintenance performance. In addition, the study focuses on ways to apply existing statistical approaches and to incorporate inputs from field experts for the prediction of reliability-related measures. Furthermore, it explores and highlights major issues involved in implementing RAM analysis in the oil and gas industry and offers viable solutions. In this study, a systematic analysis of each RAM component is proposed, and their roles as strategic improvement and decision-making tools are discussed and demonstrated using case studies of two plant systems. In reliability and maintainability (R&M) analysis, two main steps, exploratory and inferential, are proposed. Tools such as Pareto charts, trend plots and hazard functions, together with the Kaplan-Meier (KM) estimator and the proportional hazards model (PHM), are used in the exploratory phase to identify elements critical to the system's R&M performance. In the inferential analysis, a systematic methodology is presented to assess R&M-related measures.
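    Of the exploratory tools listed above, the Kaplan-Meier estimator is easy to state concretely. The following is a minimal sketch of the standard product-limit estimator for right-censored failure times; it assumes nothing about the thesis's own implementation, and the failure-time data are invented.

        import numpy as np

        def kaplan_meier(times, observed):
            """Standard Kaplan-Meier product-limit estimator.

            times:    failure or censoring times
            observed: 1 if the failure was observed, 0 if right-censored
            Returns the distinct event times and the survival estimate at each.
            """
            times = np.asarray(times, dtype=float)
            observed = np.asarray(observed, dtype=int)
            event_times = np.unique(times[observed == 1])
            surv, s = [], 1.0
            for t in event_times:
                at_risk = np.sum(times >= t)            # units still under observation
                deaths = np.sum((times == t) & (observed == 1))
                s *= 1.0 - deaths / at_risk             # product-limit update
                surv.append(s)
            return event_times, np.array(surv)

        # Toy data: five observed failures and two censored units (hours)
        t, s_hat = kaplan_meier(
            times=[120, 300, 300, 450, 500, 610, 610],
            observed=[1, 1, 0, 1, 1, 1, 0],
        )
        for ti, si in zip(t, s_hat):
            print(f"S({ti:.0f}) = {si:.3f}")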

    Uncertainty in Engineering

    This open access book provides an introduction to uncertainty quantification in engineering. Starting with preliminaries on Bayesian statistics and Monte Carlo methods, followed by material on imprecise probabilities, it then focuses on reliability theory and simulation methods for complex systems. The final two chapters discuss various aspects of aerospace engineering, considering stochastic model updating from an imprecise Bayesian perspective, and uncertainty quantification for aerospace flight modelling. Written by experts in the subject, and based on lectures given at the Second Training School of the European Research and Training Network UTOPIAE (Uncertainty Treatment and Optimization in Aerospace Engineering), which took place at Durham University (United Kingdom) from 2 to 6 July 2018, the book offers an essential resource for students as well as scientists and practitioners.