
    Order-statistics-based inferences for censored lifetime data and financial risk analysis

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis focuses on applying order-statistics-based inferences to lifetime analysis and financial risk measurement. The first problem arises from fitting the Weibull distribution to progressively censored and accelerated life-test data. A new order-statistics-based inference is proposed for both parameter and confidence interval estimation. The second problem can be summarised as adapting the inference used in the first problem to fitting the generalised Pareto distribution, especially when the sample size is small. With some modifications, the proposed inference is compared with classical methods and several relatively new methods that have emerged from the recent literature. The third problem studies a distribution-free approach for forecasting financial volatility, which is essentially the standard deviation of financial returns. Classical models of this approach use the interval between two symmetric extreme quantiles of the return distribution as a proxy for volatility. Two new models are proposed, which use intervals of expected shortfalls and of expectiles instead of intervals of quantiles. The different models are compared on empirical stock index data. Finally, attention is drawn to heteroskedastic quantile regression. The proposed joint modelling approach, which makes use of the parametric link between quantile regression and the asymmetric Laplace distribution, provides estimates of the regression quantile and of the log-linear heteroskedastic scale simultaneously. Furthermore, the use of the expectation of the check function as a measure of quantile deviation is discussed.
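
    As a rough illustration of the volatility-proxy idea described above (a minimal sketch under assumed inputs, not the thesis's actual models), the following Python computes three distribution-free proxies from a window of returns: the interval between two symmetric extreme quantiles, the interval between the corresponding expected shortfalls, and the interval between the corresponding expectiles. The parameter values and the synthetic return series are illustrative assumptions only.

# Minimal sketch (not the thesis's exact models): three distribution-free
# volatility proxies built from a window of returns.
import numpy as np

def quantile_interval(returns, alpha=0.05):
    """Width of the interval between the alpha and 1-alpha empirical quantiles."""
    lo, hi = np.quantile(returns, [alpha, 1 - alpha])
    return hi - lo

def expected_shortfall_interval(returns, alpha=0.05):
    """Width of the interval between the lower- and upper-tail expected shortfalls."""
    lo, hi = np.quantile(returns, [alpha, 1 - alpha])
    es_lo = returns[returns <= lo].mean()   # mean of the lower tail
    es_hi = returns[returns >= hi].mean()   # mean of the upper tail
    return es_hi - es_lo

def expectile(returns, tau):
    """Empirical tau-expectile via the asymmetric-least-squares fixed point."""
    m = returns.mean()
    for _ in range(200):                     # simple fixed-point iteration
        w = np.where(returns > m, tau, 1 - tau)
        m_new = np.sum(w * returns) / np.sum(w)
        if abs(m_new - m) < 1e-10:
            break
        m = m_new
    return m

def expectile_interval(returns, tau=0.05):
    return expectile(returns, 1 - tau) - expectile(returns, tau)

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=1000) * 0.01   # synthetic daily returns
print(quantile_interval(r), expected_shortfall_interval(r), expectile_interval(r))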

    Vol. 13, No. 1 (Full Issue)


    Inference About The Generalized Exponential Quantiles Based On Progressively Type-Ii Censored Data

    In this study, we are interested in investigating the performance of likelihood inference procedures for the quantiles of the Generalized Exponential distribution based on progressively censored data. The maximum likelihood estimator and three types of classical confidence intervals have been considered, namely asymptotic, percentile, and bootstrap-t confidence intervals. We considered Bayesian inference too. The Bayes estimator based on the squared error loss function and two types of Bayesian intervals were considered, namely the equal-tailed interval and the highest posterior density interval. We conducted simulation studies to investigate and compare the point estimators in terms of their biases and mean squared errors. We compared the various types of intervals using their coverage probabilities and expected lengths. The simulations and comparisons were made under various types of censoring schemes and sample sizes. We presented two examples of data analysis, one based on a simulated data set and the other on real lifetime data. Finally, we compared the classical and Bayesian inference procedures. We concluded that the classical estimators show better bias and MSE results than the Bayesian estimators. Also, the Bayesian intervals which attain the nominal error rate have the best average widths. We presented our conclusions and discussed ideas for possible future research.
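
    For readers unfamiliar with the setting, the sketch below (illustrative only, not the paper's code) shows the Generalized Exponential quantile function and a direct simulation of a progressively Type-II censored sample, in which R[i] surviving units are withdrawn from the test at the i-th observed failure; the censoring scheme, parameter values and sample size are assumptions made for the example.

# Minimal sketch: Generalized Exponential quantiles and a direct simulation
# of progressive Type-II censoring with withdrawals at each observed failure.
import numpy as np

def ge_quantile(p, alpha, lam):
    """p-th quantile of the Generalized Exponential: F(x) = (1 - exp(-lam*x))**alpha."""
    return -np.log(1.0 - p ** (1.0 / alpha)) / lam

def progressive_type2_sample(alpha, lam, n, R, rng):
    """Observed failure times under scheme R (m = len(R) failures, sum(R) + m = n)."""
    u = rng.random(n)
    lifetimes = list(ge_quantile(u, alpha, lam))   # inverse-transform sampling
    observed = []
    for r in R:
        t = min(lifetimes)                         # next observed failure
        observed.append(t)
        lifetimes.remove(t)
        for _ in range(r):                         # withdraw r surviving units at random
            lifetimes.pop(rng.integers(len(lifetimes)))
    return np.array(observed)

rng = np.random.default_rng(1)
scheme = [2, 0, 0, 1, 2]                           # m = 5 failures out of n = 10 units
x = progressive_type2_sample(alpha=1.5, lam=0.8, n=10, R=scheme, rng=rng)
print(x, ge_quantile(0.5, 1.5, 0.8))               # censored sample and the true median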

    On estimating the reliability in a multicomponent system based on progressively-censored data from Chen distribution

    This research deals with classical, Bayesian, and generalized estimation of the stress-strength reliability parameter, R_{s,k} = Pr(at least s of (X_1, X_2, ..., X_k) exceed Y) = Pr(X_{k-s+1:k} > Y), of an s-out-of-k:G multicomponent system, based on progressively Type-II right-censored samples with random removals when stress and strength are two independent Chen random variables. Under squared-error and LINEX loss functions, Bayes estimates are developed using Lindley's approximation and the Markov chain Monte Carlo method. Generalized estimates are developed using the generalized variable method, while classical estimates - the maximum likelihood estimators, their asymptotic distributions, asymptotic confidence intervals, and bootstrap-based confidence intervals - are also developed. A simulation study and a real-world data analysis are provided to illustrate the proposed procedures. The size of the test, adjusted and unadjusted power of the test, coverage probability and expected lengths of the confidence intervals, and biases of the estimators are also computed, compared and contrasted.
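
    The reliability parameter above can be approximated with a small Monte Carlo sketch (an assumption-laden illustration, not the authors' estimation procedure), using one common parameterisation of the Chen distribution, F(x) = 1 - exp(lam*(1 - exp(x**beta))); the parameter values below are arbitrary.

# Minimal sketch: Monte Carlo estimate of R_{s,k} = Pr(at least s of the k
# strengths X exceed the stress Y) with Chen-distributed stress and strength.
import numpy as np

def chen_sample(beta, lam, size, rng):
    """Inverse-transform sampling from the Chen distribution."""
    u = rng.random(size)
    return (np.log(1.0 - np.log(1.0 - u) / lam)) ** (1.0 / beta)

def r_s_k_monte_carlo(s, k, strength=(1.2, 0.8), stress=(1.2, 1.5),
                      n_sim=100_000, rng=None):
    rng = rng or np.random.default_rng(0)
    X = chen_sample(*strength, size=(n_sim, k), rng=rng)   # k strengths per system
    Y = chen_sample(*stress, size=(n_sim, 1), rng=rng)     # common stress
    return np.mean((X > Y).sum(axis=1) >= s)               # fraction of surviving systems

print(r_s_k_monte_carlo(s=2, k=3))   # e.g. a 2-out-of-3:G system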

    START: Straggler Prediction and Mitigation for Cloud Computing Environments using Encoder LSTM Networks

    A common performance problem in large-scale cloud systems is dealing with straggler tasks: slow-running instances that increase the overall response time. Such tasks impact the system's QoS and its SLA. There is a need for automatic straggler detection and mitigation mechanisms that execute jobs without violating the SLA. Prior work typically builds reactive models that focus first on detection and then on mitigation of straggler tasks, which leads to delays. Other works use prediction-based proactive mechanisms but ignore volatile task characteristics. We propose a Straggler Prediction and Mitigation Technique (START) that is able to predict which tasks might become stragglers and dynamically adapt scheduling to achieve lower response times. START analyzes all tasks and hosts based on compute and network resource consumption, using an Encoder LSTM network to predict and mitigate expected straggler tasks. This reduces the SLA violation rate and execution time without compromising QoS. Specifically, we use the CloudSim toolkit to simulate START and compare it with IGRU-SD, SGC, Dolly, GRASS, NearestFit and Wrangler in terms of QoS parameters. Experiments show that START reduces execution time, resource contention, energy consumption and SLA violations by 13%, 11%, 16% and 19% respectively, compared to the state of the art.
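
    As a hedged illustration of the encoder-LSTM component only (not the authors' implementation; the feature set, layer sizes and the absence of a training loop are assumptions), a minimal PyTorch sketch mapping a per-task time series of resource-usage readings to a straggler probability might look as follows.

# Minimal sketch: an encoder LSTM over per-task resource-usage histories
# (e.g. CPU, memory, network, disk readings per monitoring interval) whose
# final hidden state is mapped to a straggler probability.
import torch
import torch.nn as nn

class StragglerEncoder(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(input_size=n_features, hidden_size=hidden,
                               batch_first=True)          # encodes the usage history
        self.head = nn.Linear(hidden, 1)                  # straggler score from final state

    def forward(self, x):                                 # x: (batch, time, n_features)
        _, (h_n, _) = self.encoder(x)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

model = StragglerEncoder()
usage = torch.randn(8, 20, 4)            # 8 tasks, 20 monitoring intervals, 4 features
straggler_prob = model(usage)            # probability each task becomes a straggler
print(straggler_prob.shape)              # torch.Size([8])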

    Optimal Experimental Planning of Reliability Experiments Based on Coherent Systems

    In industrial engineering and manufacturing, assessing the reliability of a product or system is an important topic. Life-testing and reliability experiments are commonly used reliability assessment methods for gaining sound knowledge about product or system lifetime distributions. Usually, a sample of items of interest is subjected to stresses and environmental conditions that characterize the normal operating conditions. During the life-test, successive times to failure are recorded and lifetime data are collected. Life-testing is useful in many industrial environments, including the automobile, materials, telecommunications, and electronics industries. There are different kinds of life-testing experiments that can be applied for different purposes. For instance, accelerated life tests (ALTs) and censored life tests are commonly used to acquire information in reliability and life-testing experiments in the presence of time and resource limitations. Statistical inference based on the data obtained from a life test, and effectively planning a life-testing experiment subject to some constraints, are two important problems statisticians are interested in. The experimental design problem for a life test has long been studied; however, experimental planning that considers putting the experimental units into systems for a life test has not been studied. In this thesis, we study the optimal experimental planning problem in multiple-stress-level life-testing experiments and progressively Type-II censored life-testing experiments when the test units can be put into coherent systems for the experiment. Based on the notion of the system signature, a tool in structural reliability for representing the structure of a coherent system, and under different experimental settings, models and assumptions, we derive the maximum likelihood estimators of the model parameters and the expected Fisher information matrix. Then, we use the expected Fisher information matrix to obtain the asymptotic variance-covariance matrix of the maximum likelihood estimators when n-component coherent systems are used in the life-testing experiment. Based on different optimality criteria, such as D-optimality, A-optimality and V-optimality, we obtain the optimal experimental plans under different settings. Numerical and Monte Carlo simulation studies are used to demonstrate the advantages and disadvantages of using systems in life-testing experiments.
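
    A generic sketch of how such criteria rank candidate plans (illustrative only; the information matrices below are placeholders, not matrices derived from system signatures): given the expected Fisher information matrix of each plan, the asymptotic variance-covariance matrix of the MLEs is its inverse, D-optimality prefers the plan with the largest determinant of the information, and A-optimality prefers the smallest trace of its inverse.

# Minimal sketch: ranking candidate life-test plans by D- and A-optimality
# from their (placeholder) expected Fisher information matrices.
import numpy as np

candidate_plans = {
    "plan_1": np.array([[40.0, 5.0], [5.0, 12.0]]),
    "plan_2": np.array([[30.0, 2.0], [2.0, 20.0]]),
}

for name, info in candidate_plans.items():
    cov = np.linalg.inv(info)                      # asymptotic var-cov of the MLEs
    d_crit = np.linalg.det(info)                   # larger is better (D-optimality)
    a_crit = np.trace(cov)                         # smaller is better (A-optimality)
    print(name, round(d_crit, 2), round(a_crit, 4))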

    On the reliability of Type II censored reliability analyses.

    This thesis considers the analysis of reliability data subject to censoring, and, in particular, the extent to which an interim analysis - here, using information based on Type II censoring - provides a guide to the final analysis. Under Type II censored sampling, a random sample of n units is put on test simultaneously, and the test is terminated as soon as r (1 ≤ r ≤ n, although we are usually interested in r < n) failures are observed. In the case where all test units are observed to fail (r = n), the sample is complete. From a statistical perspective, the analysis of the complete sample is to be preferred, but, in practice, censoring is often necessary; such a sampling plan can save money and time, since it could take a very long time for all units to fail in some instances. From a practical perspective, an experimenter may be interested to know the smallest number of failures at which the experiment can be reasonably or safely terminated with the interim analysis still providing a close and reliable guide to the analysis of the final, complete data. In this thesis, we aim to gain more insight into the roles of the censoring number r and the sample size n under this sampling plan. Our approach requires a method to measure the precision of a Type II censored estimate, calculated at censoring level r, in estimating the complete-sample estimate, and hence the study of the relationship between interim and final estimates. For simplicity, we assume that the lifetimes follow the exponential distribution, and then adapt the methods to the two-parameter Weibull and Burr Type XII distributions, both of which are widely used in reliability modelling. We start by presenting some mathematical and computational methodology for estimating model parameters and percentile functions by the method of maximum likelihood. Expressions for the asymptotic variances and covariances of the estimators are given. In practice, some indication of the likely accuracy of these estimates is often desired; the theory of asymptotic normality of the maximum likelihood estimator is convenient, but we also consider the use of relative likelihood contour plots to obtain approximate confidence regions for the parameters in relatively small samples. Finally, we provide formulae for the correlations between the interim and final maximum likelihood estimators of the model parameters and of a particular percentile function, and discuss some practical implications of our work, based on results obtained from published data and simulation experiments.
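
    Taking the exponential starting case described above, a minimal simulation sketch (with illustrative choices of n, r and the mean; not the thesis's code) shows the Type II censored maximum likelihood estimator of the mean and how strongly the interim estimate at censoring level r tracks the final, complete-sample estimate.

# Minimal sketch: Type II censored MLE of the exponential mean, and the
# simulated correlation between the interim (level r) and final (r = n) MLEs.
import numpy as np

def type2_mle(sample, r):
    """MLE of the exponential mean from the r smallest of n observed lifetimes."""
    x = np.sort(sample)
    n = len(x)
    total_time_on_test = x[:r].sum() + (n - r) * x[r - 1]
    return total_time_on_test / r

rng = np.random.default_rng(2)
n, r, theta = 30, 10, 100.0
interim, final = [], []
for _ in range(5000):
    lifetimes = rng.exponential(theta, size=n)
    interim.append(type2_mle(lifetimes, r))        # estimate at the interim analysis
    final.append(type2_mle(lifetimes, n))          # estimate from the complete sample
print(np.corrcoef(interim, final)[0, 1])           # correlation between interim and final MLEs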

    Reliability applied to maintenance

    The thesis covers studies conducted during 1976-79 under a Science Research Council contract to examine the uses of reliability information in decision-making in maintenance in the process industries. After a discussion of the ideal data system, four practical studies of process plants are described, involving both Pareto and distribution analysis. In two of these studies the maintenance policy was changed and the effect on failure modes and frequency observed. Hyper-exponentially distributed failure intervals were found to be common and were explained, after observation of maintenance work practices and development of theory, as being due to poor workmanship and parts. The fallacy that constant failure rate necessarily implies the optimality of maintenance only at failure is discussed. Two models for the optimisation of inspection intervals are developed; both assume items give detectable warning of impending failure. The first is based upon a constant risk of failure between successive inspections and a Weibull base failure distribution. Results show that an inspection/on-condition maintenance regime can be cost-effective even when the failure rate is falling, and may be better than periodic renewals in an increasing failure rate situation. The second model is first-order Markov. Transition rate matrices are developed and solved to compare continuous monitoring with inspection/on-condition maintenance on a cost basis. The models incorporate planning delay in starting maintenance after impending failure is detected. The relationships between plant output and maintenance policy, as affected by the presence of redundancy and/or storage between stages, are examined, mainly through the literature but with some original theoretical proposals. It is concluded that reliability techniques have many applications in the improvement of plant maintenance policy. Techniques abound, but few firms are willing to take the step of faith to set up, even temporarily, the data-collection facilities required to apply them. There are over 350 references, many of which are reviewed in the text, divided into chapter-related sections. Appendices include a review of reliability engineering theory, based on the author's draft for BS 5760(2), a discussion of the applicability of the 'bath-tub curve' to maintained systems, and the theory connecting hyper-exponentially distributed failures with poor maintenance practices.
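
    The constant-risk idea behind the first inspection model can be sketched generically (a reconstruction under assumed Weibull parameters, not the thesis's exact model): inspection times are chosen so that the conditional probability of failure between successive inspections, given survival so far, stays at a fixed level p.

# Minimal sketch: constant-risk inspection times under a Weibull failure
# distribution with shape beta and scale eta, so that
# Pr(fail in (t_{j-1}, t_j] | survive to t_{j-1}) = p for every interval.
import numpy as np

def constant_risk_inspections(beta, eta, p, n_inspections):
    """Inspection times t_j obtained by inverting the Weibull survival at S(t_j) = (1-p)**j."""
    j = np.arange(1, n_inspections + 1)
    return eta * (-j * np.log(1.0 - p)) ** (1.0 / beta)

# Falling failure rate (beta < 1): inspection intervals spread out over time.
print(constant_risk_inspections(beta=0.8, eta=1000.0, p=0.05, n_inspections=5))
# Rising failure rate (beta > 1): inspection intervals become progressively shorter.
print(constant_risk_inspections(beta=2.0, eta=1000.0, p=0.05, n_inspections=5))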