2,519 research outputs found

    Evaluation of Two Terminal Reliability of Fault-tolerant Multistage Interconnection Networks

    This paper introduces a new method, based on multi-decomposition, for predicting the two-terminal reliability of fault-tolerant multistage interconnection networks (MINs). The method is supported by an efficient algorithm that runs in polynomial time, and is illustrated on a network consisting of eight nodes and twelve links. The proposed method is simple, general, and efficient, and is therefore applicable to all types of fault-tolerant MINs. The results show that the method yields more accurate probabilities when applied to fault-tolerant MINs. The reliability of two important MINs is evaluated using the proposed method.
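    For reference, two-terminal reliability on a network this small can also be computed exactly by enumerating all link states. The sketch below does exactly that; it is a brute-force baseline, not the paper's multi-decomposition method, and the topology and link reliabilities are hypothetical placeholders.

```python
from itertools import product

def two_terminal_reliability(n_nodes, links, rel, s, t):
    """Exact two-terminal reliability by enumerating all 2^|links| states.

    links: list of (u, v) pairs; rel[i]: probability that link i works.
    Feasible only for small networks (12 links -> 4096 states).
    """
    total = 0.0
    for state in product([0, 1], repeat=len(links)):
        # Probability of this particular up/down configuration.
        p = 1.0
        for works, r in zip(state, rel):
            p *= r if works else 1.0 - r
        # Depth-first search over working links to test s-t connectivity.
        adj = {v: [] for v in range(n_nodes)}
        for works, (u, v) in zip(state, links):
            if works:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if t in seen:
            total += p
    return total

# Hypothetical 8-node, 12-link network (not the paper's example).
links = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 4), (2, 5),
         (3, 6), (4, 6), (5, 6), (4, 7), (5, 7), (6, 7)]
print(two_terminal_reliability(8, links, [0.9] * 12, s=0, t=7))
```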

    Optimal system reliability design of consecutive-k-out-of-n systems

    This research studies four special types of systems: k-out-of-n:F systems, k-out-of-n:G systems, consecutive-k-out-of-n:F systems, and consecutive-k-out-of-n:G systems. A k-out-of-n:F system fails if and only if at least k of its n components fail. A k-out-of-n:G system is good if and only if at least k of its n components are good. A consecutive-k-out-of-n:F system is a sequence of n ordered components such that the system works if and only if fewer than k consecutive components fail. A consecutive-k-out-of-n:G system consists of an ordered sequence of n components such that the system works if and only if at least k consecutive components work. The consecutive-k-out-of-n systems are further divided into linear systems and circular systems, corresponding to the cases where the components are ordered along a line and a circle, respectively. First, the reliability evaluation of the k-out-of-n systems and the reliability evaluation and optimal design of the consecutive-k-out-of-n systems are reviewed, and the properties of these systems are further investigated. Next, this research concentrates on the optimal design of the consecutive-k-out-of-n systems. An arrangement of components is optimal if it maximizes the system's reliability. An optimal arrangement is invariant if it depends only upon the ordering of component reliabilities, not their actual values. Theorems are developed to identify invariant optimal designs of some consecutive systems. Other theorems prove that there are no invariant optimal configurations for some consecutive systems. For those systems where invariant optimal designs do not exist, a heuristic method is provided to find at least suboptimal solutions. Two case studies are presented to show the applications of the theoretical results developed in this study.
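    As an illustration of reliability evaluation for one of these system types, the sketch below computes the reliability of a linear consecutive-k-out-of-n:F system by dynamic programming over the length of the trailing run of failed components; the component reliabilities are hypothetical.

```python
def consec_kn_F_reliability(k, p):
    """Reliability of a linear consecutive-k-out-of-n:F system.

    p[i] is the reliability of component i; components are independent.
    dp[j] = Pr(no k consecutive failures so far, exactly j trailing failures).
    """
    dp = [0.0] * k
    dp[0] = 1.0
    for pi in p:
        new = [0.0] * k
        new[0] = pi * sum(dp)                # component works: run resets
        for j in range(k - 1):
            new[j + 1] = (1.0 - pi) * dp[j]  # component fails: run grows
        dp = new                             # a run reaching k is dropped
    return sum(dp)

# Hypothetical example: n = 10 components, k = 3, unequal reliabilities.
print(consec_kn_F_reliability(3, [0.9, 0.85, 0.9, 0.95, 0.8,
                                  0.9, 0.9, 0.85, 0.95, 0.9]))
```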

    Polarization and depolarization of monomial ideals with application to multi-state system reliability

    Polarization is a powerful technique in algebra which provides combinatorial tools to study algebraic invariants of monomial ideals. We study the reverse of this process, depolarization, which leads to a family of ideals that share many common features with the original ideal. Given a squarefree monomial ideal, we describe a combinatorial method to obtain all its depolarizations, and we highlight their shared properties, such as the graded Betti numbers. We show that even though they have many similar properties, their differences in dimension make them distinguishable in applications in system reliability theory. In particular, we apply polarization and depolarization tools to study the reliability of multi-state coherent systems via binary systems and vice versa. We use depolarization as a tool to reduce the dimension and the number of variables in coherent systems.
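    As a concrete textbook example of the polarization operation (not taken from the paper): each power of a variable is replaced by a product of distinct new variables, producing a squarefree ideal in a larger polynomial ring, and depolarization reverses the identification.

```latex
% Polarization of a monomial ideal: a standard example.
% Each power x^a becomes a product of a distinct variables x_1 x_2 ... x_a.
\[
  I = (x^2,\; xy,\; y^3) \subseteq k[x,y]
  \quad\leadsto\quad
  I^{\mathrm{pol}} = (x_1 x_2,\; x_1 y_1,\; y_1 y_2 y_3)
  \subseteq k[x_1, x_2, y_1, y_2, y_3].
\]
% Depolarization identifies variables of a squarefree ideal (here
% x_1, x_2 -> x and y_1, y_2, y_3 -> y); different identifications
% produce the family of depolarizations.
```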

    Data validation and reliability calculations in digital protection systems


    Algebraic algorithms for the reliability analysis of multi-state k-out-of-n systems

    We develop algorithms, based on commutative algebra, for the reliability analysis of multi-state k-out-of-n systems.
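    As background on what these systems compute (this is not the paper's algebraic method): in a common multi-state k-out-of-n:G model, the system is in state at least j when at least k_j components are in state at least j. The brute-force sketch below evaluates this by enumerating component state vectors; all numbers are hypothetical.

```python
from itertools import product

def multistate_kn_reliability(dist, k, j):
    """Pr(system state >= j) for a multi-state k-out-of-n:G system.

    dist[i][s] = Pr(component i is in state s), states 0..M.
    System is in state >= j iff at least k[j] components are in state >= j.
    Brute-force enumeration; feasible only for small n and M.
    """
    n = len(dist)
    states = range(len(dist[0]))
    total = 0.0
    for vec in product(states, repeat=n):
        p = 1.0
        for i, s in enumerate(vec):
            p *= dist[i][s]
        if sum(1 for s in vec if s >= j) >= k[j]:
            total += p
    return total

# Hypothetical: 4 components, states {0,1,2}, identical distributions.
dist = [[0.1, 0.3, 0.6]] * 4
k = {1: 3, 2: 2}   # need >=3 components in state >=1; >=2 in state >=2
print(multistate_kn_reliability(dist, k, j=2))
```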

    New variance reduction methods in Monte Carlo rare event simulation

    For systems that provide some kind of service while they are operational and stop providing it when they fail, it is of interest to determine parameters such as the probability of finding the system failed at an arbitrary moment, the mean time between failures, or any other measure that reflects the capacity of the system to provide service. The determination of these measures, known as dependability measures, is affected by a variety of factors, including the size of the system and the rarity of failures. This thesis studies methods designed to determine these measures for large and highly reliable systems, i.e. systems formed by a large number of components, in which system failures are rare events.

    Either directly or indirectly, part of the expressions for determining the measures of interest correspond to the probability that the system is in some state of failure. In one way or another, these expressions evaluate the ratio, weighted by the probability distribution of the system's configurations, between the number of configurations in which the system fails and all possible configurations. If the system is large, the exact calculation of these probabilities, and consequently of the measures of interest, may be unfeasible. An alternative is to estimate these probabilities by simulation. One mechanism for making such estimations is Monte Carlo simulation, whose simplest version is crude or standard simulation. The problem is that if failures are rare, the number of iterations required to estimate these probabilities by standard simulation with acceptable accuracy may be extremely large. In this thesis some existing methods to improve on standard simulation in the context of rare events are analyzed, variance analyses are made, and the methods are tested empirically over a variety of models. In all cases the improvement is achieved by reducing the variance of the estimator with respect to the standard estimator's variance. Thanks to this variance reduction, the probability of occurrence of rare events can be estimated with acceptable accuracy in a reasonable number of iterations.

    As the central part of this work, two new methods are proposed, one related to Splitting and the other related to Conditional Monte Carlo. Splitting is a method of proven efficiency in combined performance and dependability (performability) analysis, but it is scarcely applied to the simulation of highly reliable systems over static models (models with no temporal evolution). In its basic formulation, Splitting tracks the trajectories of a stochastic process through its state space and splits (multiplies) them at each threshold crossing, for a given set of thresholds distributed between the initial and the final state. One of the proposals of this thesis is an adaptation of Splitting to a static network reliability model: a stochastic process is built over a fictitious time in which the network links keep changing state, and Splitting is applied to this process. The method proves to be highly accurate and robust.

    Conditional Monte Carlo is a classical variance reduction technique whose use is not widespread in the field of rare events. In its basic formulation, Conditional Monte Carlo evaluates the probabilities of the events of interest by conditioning the indicator variables on events that are not rare and are easy to detect. The problem is that part of this evaluation includes the exact calculation of some probabilities of the model. The other method proposed in this thesis is an adaptation of Conditional Monte Carlo to the analysis of highly reliable Markovian systems: the probabilities whose exact values are needed are themselves estimated by a recursive application of Conditional Monte Carlo. Some features of this method are discussed and its efficiency is verified experimentally.
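    To make the rare-event problem concrete (an illustration only, not one of the thesis's methods): the sketch below estimates a small failure probability by crude Monte Carlo and reports the relative error of the estimator, which blows up as the event gets rarer unless the sample size grows like 1/p. The parallel-system event is a hypothetical stand-in with a known exact probability.

```python
import math
import random

def crude_mc(p_fail_component, n_components, n_samples, seed=1):
    """Crude Monte Carlo estimate of Pr(all components fail).

    A deliberately simple 'system fails' event (a parallel system) whose
    exact probability p = p_fail_component ** n_components is known, so
    the estimator's quality can be judged. Returns (estimate, rel. error).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        if all(rng.random() < p_fail_component for _ in range(n_components)):
            hits += 1
    p_hat = hits / n_samples
    # Relative error of the binomial estimator, sqrt((1-p)/(n*p)),
    # with the plug-in estimate p_hat.
    if p_hat > 0:
        rel_err = math.sqrt((1 - p_hat) / (n_samples * p_hat))
    else:
        rel_err = float("inf")
    return p_hat, rel_err

# p = 0.01 ** 3 = 1e-6: with 1e5 samples, crude MC typically sees no
# failures at all, hence the need for variance reduction.
print(crude_mc(0.01, 3, 100_000))
```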

    Phased mission modelling using fault tree analysis

    Many types of system operate for missions which are made up of several phases. For the complete mission to be a success, the system must operate successfully during each of the phases. Examples of such systems include an aircraft flight and many military operations for both aircraft and ships. An aircraft mission could be considered as the following phases: taxiing to the runway, takeoff, climbing to the correct altitude, cruising, descending, landing, and taxiing back to the terminal. Component failures can occur at any point during the mission, but their condition may only be critical for one particular phase. As such, it may be that the transition from one phase to another is the critical event leading to mission failure: the component failures resulting in the system failure may have occurred during some previous phase. This paper describes a means of analysing the reliability of non-repairable systems which undergo phased missions. Fault Tree Analysis is used as the method to assess the system performance. The results of the analysis are the system failure modes in each phase (minimal cut sets), the failure probability in each phase, and the total mission unreliability. To increase the efficiency of the analysis, the fault trees constructed to represent the system failure logic are analysed using a modularisation method. Binary Decision Diagrams (BDDs) are then employed to quantify the likelihood of failure in each phase.
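    A minimal sketch of the phased-mission calculation described above, under simplifying assumptions (not the paper's modularised BDD approach): components are non-repairable, each phase's failure logic is given directly as minimal cut sets over "component has failed by this phase", and mission unreliability is obtained by enumerating which phase, if any, each component fails in. All failure data are hypothetical.

```python
from itertools import product

def phased_mission_unreliability(q, cut_sets):
    """Per-phase mission failure probabilities, non-repairable components.

    q[c][j]     = Pr(component c fails during phase j); surviving every
                  phase has probability 1 - sum(q[c]).
    cut_sets[j] = minimal cut sets of phase j, as sets of component ids;
                  phase j fails if every component of some cut set has
                  failed by the end of phase j (including earlier phases).
    """
    n_comp, n_phases = len(q), len(cut_sets)
    phase_fail = [0.0] * n_phases
    # vec[c] = phase in which component c fails, or n_phases if it survives.
    for vec in product(range(n_phases + 1), repeat=n_comp):
        p = 1.0
        for c, f in enumerate(vec):
            p *= q[c][f] if f < n_phases else 1.0 - sum(q[c])
        for j in range(n_phases):                  # first failing phase
            failed = {c for c, f in enumerate(vec) if f <= j}
            if any(cs <= failed for cs in cut_sets[j]):
                phase_fail[j] += p
                break
    return phase_fail

# Hypothetical mission: 3 components, 2 phases.
q = [[0.02, 0.05], [0.01, 0.03], [0.04, 0.02]]
cuts = [[{0}], [{0, 1}, {2}]]   # phase 1: comp 0; phase 2: {0,1} or {2}
print(phased_mission_unreliability(q, cuts))
# Total mission unreliability is the sum of the per-phase probabilities.
```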

    GRASP/VND Optimization Algorithms for Hard Combinatorial Problems

    Two hard combinatorial problems are addressed in this thesis. The first one is known as the "Max Cut-Clique", a combinatorial problem introduced by P. Martins in 2012. Given a simple graph, the goal is to find a clique C such that the number of links shared between C and its complement C^c is maximum. In a first contribution, a GRASP/VND methodology is proposed to tackle the problem. In a second one, the NP-completeness of the problem is mathematically proved. Finally, a further generalization with weighted links is formally presented with a mathematical programming formulation, and the previous GRASP is adapted to the new problem. The second problem under study is a celebrated optimization problem coming from network reliability analysis. We assume a graph G with perfect nodes and imperfect links, which fail independently with identical probability ρ ∈ [0,1]. The reliability R_G(ρ) is the probability that the resulting subgraph has some spanning tree. Given a number of nodes and links, p and q, the goal is to find the (p,q)-graph that has the maximum reliability R_G(ρ), uniformly in the compact set ρ ∈ [0,1]. In a first contribution, we exploit properties shared by all uniformly most-reliable graphs, such as maximum connectivity and maximum Kirchhoff number, in order to build a novel GRASP/VND methodology. Our proposal finds the globally optimum solution in small cases, and it returns novel candidates for uniformly most-reliable graphs, such as the Möbius-Kantor and Heawood graphs. We also offer a literature review, and a mathematical proof that the complete bipartite graph K4,4 is uniformly most-reliable. Finally, an abstract mathematical model of Stochastic Binary Systems (SBS) is also studied. It is a further generalization of network reliability models, where failures are modelled by a general logical function. A geometrical approximation of a logical function is offered, as well as a novel method to find reliability bounds for general SBS. This bounding method combines an algebraic duality, the Markov inequality, and the Hahn-Banach separation theorem for convex and compact sets.
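    A minimal sketch of the Max Cut-Clique objective together with a greedy randomized (GRASP-style) construction phase, written for illustration only; it is not the thesis's implementation, the stopping rule is an ad-hoc heuristic, and the graph is a hypothetical example.

```python
import random

def cut_value(adj, clique):
    """Max Cut-Clique objective: edges with exactly one endpoint in C."""
    return sum(1 for u in clique for v in adj[u] if v not in clique)

def grasp_construct(adj, alpha=0.3, rng=None):
    """Grow a clique, each step picking a random vertex from the best
    alpha-fraction (by resulting cut value) of feasible candidates."""
    rng = rng or random.Random(0)
    clique = set()
    while True:
        cands = [v for v in adj if v not in clique
                 and all(u in adj[v] for u in clique)]
        if not cands:
            return clique
        scored = sorted(cands, key=lambda v: cut_value(adj, clique | {v}),
                        reverse=True)
        rcl = scored[:max(1, int(alpha * len(scored)))]  # restricted list
        v = rng.choice(rcl)
        if clique and cut_value(adj, clique | {v}) <= cut_value(adj, clique):
            return clique                                # no gain: stop
        clique.add(v)

# Hypothetical graph as an adjacency-set dict; multistart over seeds.
adj = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 5}, 3: {0}, 4: {1}, 5: {2}}
best = max((grasp_construct(adj, rng=random.Random(s)) for s in range(10)),
           key=lambda c: cut_value(adj, c))
print(best, cut_value(adj, best))
```

    In a full GRASP/VND, this construction would be followed by a Variable Neighborhood Descent local search over moves such as swapping or removing clique vertices; the sketch covers only the randomized construction.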