
    Reliability of a k-out-of-n: G System Subjected to Marshall-Olkin Type Shocks Concerning Magnitude

    In this paper, the reliability of a k-out-of-n: G system under the effect of shocks of the Marshall-Olkin type is studied, taking the magnitudes of the shocks into account. The system contains n components and functions only when at least k of these components function. The system is subjected to (n + 1) shocks coming from (n + 1) different sources. The shock coming from the i-th source may destroy the i-th component, i = 1, . . . , n, while the shock coming from the (n + 1)-th source may destroy all components simultaneously. A shock is fatal, i.e. destroys a component (or components), whenever its magnitude exceeds an upper threshold. The system reliability is obtained by treating the arrival time and the magnitude of a shock as a bivariate random variable. The bivariate random variables representing the arrival times and the magnitudes of the shocks are assumed to be independent with non-identical bivariate distributions. Since the resulting reliability formula is not easy to evaluate directly, an algorithm for computing it is introduced. The reliability of a k-out-of-n: G system subjected to independent and identical shocks is obtained as a special case, as are the reliabilities of the series and parallel systems. As an application, the bivariate exponential Gumbel distribution is considered, and numerical illustrations are presented to highlight the results obtained.
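    As a rough illustration of the setup (not the paper's analytical formula), the reliability of such a system can be estimated by simulation. The sketch below assumes, purely for concreteness, exponential arrival times and a user-supplied magnitude sampler independent of the arrivals; the paper itself allows general, non-identical bivariate laws for the (arrival time, magnitude) pairs.

```python
import random

def simulate_reliability(n, k, t, rates, magnitude_sampler, threshold,
                         trials=100_000):
    """Monte Carlo estimate of the reliability at time t of a k-out-of-n:G
    system under Marshall-Olkin type shocks with random magnitudes.

    rates[i], i = 0..n-1: rate of the shock aimed at component i only;
    rates[n]: rate of the common shock that can destroy all components.
    A shock is fatal only if its magnitude exceeds `threshold`
    (exponential arrivals are an assumption made here for illustration).
    """
    survive = 0
    for _ in range(trials):
        working = [True] * n
        for src in range(n + 1):
            arrival = random.expovariate(rates[src])
            # a shock destroys its target(s) only if it arrives before t
            # AND its magnitude exceeds the upper threshold
            if arrival <= t and magnitude_sampler() > threshold:
                if src < n:
                    working[src] = False
                else:
                    working = [False] * n  # common shock kills every component
        survive += sum(working) >= k
    return survive / trials
```

Setting k = 1 recovers the parallel system and k = n the series system, matching the special cases mentioned in the abstract.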

    Reliability models for HPC applications and a Cloud economic model

    With the enormous number of computing resources in HPC and Cloud systems, failures become a major concern. Failure behaviors such as reliability, failure rate, and mean time to failure therefore need to be understood to manage such large systems efficiently. This dissertation makes three major contributions to HPC and Cloud studies. First, a reliability model with correlated failures in a k-node system for HPC applications is studied. The model is extended to improve accuracy by accounting for failure correlation: the Marshall-Olkin multivariate Weibull distribution is refined using the excess-life (conditional Weibull) distribution to better estimate system reliability, and a univariate method is proposed for estimating the Marshall-Olkin multivariate Weibull parameters of a system composed of a large number of nodes. The failure rate and mean time to failure are then derived. The model is validated using log data from the Blue Gene/L system at LLNL. Results show that when failures of nodes in the system are correlated, the system becomes less reliable. Secondly, a reliability model of Cloud computing is proposed. The reliability model, mean time to failure, and failure rate are estimated for a system of k nodes and s virtual machines under four scenarios: 1) hardware components fail independently, and software components fail independently; 2) software components fail independently, and hardware components are correlated in failure; 3) correlated software failure and independent hardware failure; and 4) dependent software and hardware failure. Results show that if the failures of the nodes and/or software in the system possess a degree of dependency, the system becomes less reliable. Also, an increase in the number of computing components decreases the reliability of the system. Finally, an economic model for a Cloud service provider is proposed. This economic model aims at maximizing profit based on the right pricing and rightsizing in the Cloud data center. Total cost is a key element in the model, and it is analyzed by considering the Total Cost of Ownership (TCO) of the Cloud.
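    The effect of failure correlation on redundancy can be illustrated with the simplest Marshall-Olkin instance, a minimal sketch assuming exponential (rather than the dissertation's Weibull) marginals and a two-node parallel system; the function names and parameters are chosen here for illustration only.

```python
import random

def mo_pair(l1, l2, l12):
    """One draw from the Marshall-Olkin bivariate exponential: node i fails
    at the first of its own shock (rate li) or the common shock (rate l12)."""
    z12 = random.expovariate(l12)
    return (min(random.expovariate(l1), z12),
            min(random.expovariate(l2), z12))

def parallel_mttf(l1, l2, l12, trials=50_000):
    """MTTF of a 2-node redundant (parallel) system that fails only when
    both nodes have failed, estimated by Monte Carlo."""
    return sum(max(*mo_pair(l1, l2, l12)) for _ in range(trials)) / trials
```

With l1 = l2 = 1 and common-shock rate l12 = 0.5, the estimate is close to the closed form 1/(l1+l12) + 1/(l2+l12) - 1/(l1+l2+l12) ≈ 0.93, below the value 1.0 obtained for independent nodes with the same marginal rate 1.5, illustrating the abstract's conclusion that correlated failures make the system less reliable.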

    Computational problems with binomial failure rate model and incomplete common cause failure reliability data

    In estimating the reliability of a system of components, it is ordinarily assumed that the component lifetimes are independently distributed. This assumption usually alleviates the difficulty of analyzing complex systems, but it is seldom true that the failure of one component in an interactive system has no effect on the lifetimes of the other components. Often, two or more components will fail simultaneously due to a common cause event. Such an incident is called a common cause failure (CCF), and is now recognized as an important contributor to system failure in various applications of reliability. We examine current methods for reliability estimation of system and component lifetimes using estimators derived from the binomial failure rate model. The computational problems posed by incomplete data require a new approach, such as iterative solutions via the EM algorithm.
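    One concrete way the EM algorithm applies here can be sketched under simplifying assumptions (this is not the paper's exact estimator). In the binomial failure rate model, common cause shocks arrive at rate mu and each of m components fails independently with probability p per shock; shocks that fail zero components leave no trace in the data, so their count is treated as missing.

```python
def em_bfr(counts, m, T, iters=200):
    """EM estimates for the binomial failure rate model when shocks that
    fail zero components are unobserved.

    counts[j]: number of observed shocks that failed exactly j components,
    j = 1..m.  m: number of components.  T: total observation time.
    Returns (mu, p): shock arrival rate and per-component failure
    probability per shock.
    """
    n_obs = sum(counts.values())                    # observed shocks
    failures = sum(j * c for j, c in counts.items())  # total component failures
    mu, p = n_obs / T, 0.5                          # initial guesses
    for _ in range(iters):
        # E-step: expected number of unseen zero-failure shocks
        n0 = mu * T * (1 - p) ** m
        # M-step: re-estimate from the "completed" shock count
        mu = (n_obs + n0) / T
        p = failures / (m * (n_obs + n0))
    return mu, p
```

At convergence the estimates satisfy the zero-truncated likelihood equation p / (1 - (1-p)^m) = failures / (m * n_obs), which is the usual MLE condition for this incomplete-data setting.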

    Inequalities in multivariate analysis and reliability theory

    Issued as Progress report and Final report, Project no. G-37-63.

    A Semi-Analytical Parametric Model for Dependent Defaults

    A semi-analytical parametric approach to modeling default dependency is presented. It is a multi-factor model based on instantaneous default correlation that also takes into account higher-order default correlations. It is capable of accommodating a term structure of default correlations and has a dynamic formulation in the form of a continuous-time Markov chain. With two factors and a constant hazard rate, it provides perfect fits to four tranches of CDX.NA.IG and iTraxx Europe CDOs of 5, 7 and 10 year maturities. With time-dependent hazard rates, it provides perfect fits to all five tranches for all three maturities.

    Keywords: Default Risk; Default Correlation; CDO; Markov Chain; Semi-analytical; Parametric
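    The continuous-time Markov chain formulation can be illustrated with a toy default-count chain; the contagion-style intensity below is an assumption chosen for illustration and is not the paper's calibrated multi-factor model.

```python
import random

def default_times(n, lam, c, horizon):
    """Simulate default times up to `horizon` for n names.  With k names
    already in default, the next default arrives at rate
    (n - k) * (lam + c * k), so a positive c makes every default raise the
    hazard of the surviving names (a simple dependent-default CTMC)."""
    times, t, k = [], 0.0, 0
    while k < n:
        t += random.expovariate((n - k) * (lam + c * k))
        if t > horizon:
            break
        times.append(t)
        k += 1
    return times
```

With c = 0 the names default independently, so the expected number of defaults by the horizon reduces to n * (1 - exp(-lam * horizon)); any c > 0 shifts the default-count distribution upward, which is the kind of dependency a tranche-pricing model must capture.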

    A new approach to measure systemic risk: A bivariate copula model for dependent censored data

    We propose a novel approach based on the Marshall-Olkin (MO) copula to estimate the impact of systematic and idiosyncratic components on cross-border systemic risk. To use the data on non-failed banks in the suggested method, we treat the time to bank failure as a censored variable. We therefore propose a pseudo-maximum-likelihood estimation procedure for the MO copula under Type I censoring, and derive the log-likelihood function, the copula parameter estimator and bootstrap confidence intervals. Empirical data on the banking systems of three European countries (Germany, Italy and the UK) show that the proposed censored model can accurately estimate the systematic component of cross-border systemic risk.
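    For intuition, the MO copula itself can be sampled through its shock representation. This is a sketch: the parametrisation and the exponential shocks below are the standard construction, not code from the paper, and the censoring machinery is omitted.

```python
import math
import random

def sample_mo_copula(alpha, beta, size):
    """Sample from the bivariate Marshall-Olkin survival copula
    C(u, v) = min(u**(1 - alpha) * v, u * v**(1 - beta)), alpha, beta in (0, 1),
    via the shock representation: choose shock rates so that
    alpha = l12 / (l1 + l12) and beta = l12 / (l2 + l12)."""
    l12 = 1.0
    l1 = l12 * (1 - alpha) / alpha
    l2 = l12 * (1 - beta) / beta
    out = []
    for _ in range(size):
        z1, z2, z12 = (random.expovariate(l1), random.expovariate(l2),
                       random.expovariate(l12))
        x, y = min(z1, z12), min(z2, z12)   # correlated failure times
        # survival-function transform puts each margin on Uniform(0, 1)
        u = math.exp(-(l1 + l12) * x)
        v = math.exp(-(l2 + l12) * y)
        out.append((u, v))
    return out
```

The common shock z12 produces the singular component of the MO copula (a positive probability of simultaneous failure), which is exactly the feature that makes it attractive for modelling joint bank failures.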

    Selected topics in financial engineering: first-exit times and dependence structures of Marshall-Olkin Kind

    146 p. In this thesis we investigate stopping times in different areas of financial mathematics. On the one hand, we implement an accurate Monte Carlo technique, the Brownian bridge technique, that estimates the stopping-time probabilities of a jump-diffusion stochastic process with random jump sizes and two constant barriers between which the diffusion moves. On the other hand, we analyse the probability distribution of the sum of mutually dependent default times under the Marshall-Olkin law. The Marshall-Olkin distribution is central to reliability and to life-testing applications. We derive closed-form expressions for the sum of default times in the general bivariate case and for small dimensions under the exchangeable family of the Marshall-Olkin distribution. When the dimension of the sum of default times tends to infinity, we show that this average converges to the exponential functional of the Lévy subordinator. Finally, we investigate different numerical techniques to simulate the Lévy-frailty copulas constructed from an α-stable Lévy subordinator. Being able to simulate these copulas accurately and quickly allows us to compute the exponential functional of the α-stable Lévy subordinator numerically and efficiently.
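    The sum of dependent default times can be illustrated in the simplest one-factor exchangeable Marshall-Olkin case (an assumption chosen here for illustration; the thesis treats the general exchangeable family and its Lévy-frailty limit).

```python
import random

def mean_sum_default_times(d, lam, lam0, trials=20_000):
    """Monte Carlo mean of the sum of d dependent default times in the
    one-factor exchangeable Marshall-Olkin model: name i defaults at
    min(E_i, E_0), where E_i ~ Exp(lam) is idiosyncratic and
    E_0 ~ Exp(lam0) is a global shock hitting every name at once."""
    total = 0.0
    for _ in range(trials):
        e0 = random.expovariate(lam0)
        total += sum(min(random.expovariate(lam), e0) for _ in range(d))
    return total / trials
```

In this special case each default time is Exp(lam + lam0), so the mean of the sum is d / (lam + lam0) exactly; the simulation is only needed for quantities, like the full distribution of the sum, that the closed forms in the thesis address.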