10 research outputs found

    Reliability and predictive maintenance models for systems subject to interactive failures

    ABSTRACT: Failure interaction is a subject gaining growing attention in modern industrial research. Systems are becoming increasingly complex, and their life cycles are subject to various internal and external influences. Physical assets in particular are affected by time, environment and usage. Knowing these sources of influence is not enough: it is important to understand the relationships between them in order to plan asset maintenance effectively. Maintenance can be quite expensive, and poor planning can lead to dangerous systems that could cause catastrophic events. Reliability engineering offers a wide range of mathematical models to predict failures. That being said, the most widely applied models in industry are often based on simplistic assumptions and tend to overlook certain dependencies within a system. Failure interaction in the context of stochastic dependencies is widely addressed in the literature. However, understanding and implementing the proposed approaches remains a challenge for maintenance specialists, who need realistic models for efficient maintenance planning. This thesis focuses on the reliability and predictive maintenance of physical assets subject to interactive failure modes. First of all, it emphasizes the importance of paying particular attention to failure interaction. In a literature review, the concepts and methods for modeling and optimizing reliability and preventive maintenance are presented. The diverse dependencies in a system are discussed. A case study, namely concrete culverts, is proposed. Subsequently, the research provides a framework for modeling reliability that integrates the interaction of failures. To this end, the most relevant models in the literature are compared from conceptual, methodological and applicative points of view. Within the defined framework, a model based on extreme shocks and Markov processes is built to represent the sequential nature of interactive failures. This approach is extended to take into account the natural degradation of a system. A predictive maintenance strategy is consequently developed. All these models are applied to a set of concrete culverts observed over several years. The dependencies between the occurrence of displacements and the occurrence of cracks in a structure are explained through these approaches. Finally, these concepts and results are discussed in order to determine realistic perspectives for in-depth studies of the impact of failure interaction on reliability and for strategic maintenance planning.
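    The sequential character of interactive failures described above can be illustrated with a minimal Markov chain sketch. The states, transition probabilities, and the displacement-then-cracking ordering below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

# States: 0 = healthy, 1 = displaced, 2 = displaced + cracked, 3 = failed.
# The failure interaction is encoded in the transitions: cracking only
# becomes likely once a displacement has occurred, giving failures a
# sequential character.
P = np.array([
    [0.90, 0.08, 0.02, 0.00],  # healthy
    [0.00, 0.70, 0.25, 0.05],  # displaced: cracking is now far more likely
    [0.00, 0.00, 0.60, 0.40],  # displaced + cracked
    [0.00, 0.00, 0.00, 1.00],  # failed (absorbing)
])

def state_distribution(p0, P, n):
    """Distribution over the four states after n inspection periods."""
    return p0 @ np.linalg.matrix_power(P, n)

p0 = np.array([1.0, 0.0, 0.0, 0.0])  # structure starts healthy
print(state_distribution(p0, P, 10))
```

    Raising the cracking probability in the "displaced" row relative to the "healthy" row is what encodes the interaction; with independent failure modes both rows would share the same cracking probability.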

    New variance reduction methods in Monte Carlo rare event simulation

    For systems that provide some kind of service while they are operational and stop providing it when they fail, it is of interest to determine parameters such as the probability of finding the system failed at any moment, the mean time between failures, or any measure that reflects the capacity of the system to provide service. The determination of these measures, known as dependability measures, is affected by a variety of factors, including the size of the system and the rarity of failures. This thesis studies methods designed to determine these measures for large and highly reliable systems, i.e. systems formed by a large number of components, in which system failures are rare events. Either directly or indirectly, part of the expressions for determining the measures of interest correspond to the probability that the system is in some state of failure. In one way or another, these expressions evaluate the ratio, weighted by the probability distribution of the system's configurations, between the number of configurations in which the system fails and all possible configurations. If the system is large, the exact calculation of these probabilities, and consequently of the measures of interest, may be unfeasible. An alternative is to estimate these probabilities by simulation. One mechanism for making such estimates is Monte Carlo simulation, whose simplest version is crude or standard simulation. The problem is that if failures are rare, the number of iterations required to estimate these probabilities by standard simulation with acceptable accuracy may be extremely large. In this thesis some existing methods to improve standard simulation in the context of rare events are analyzed, variance analyses are made, and the methods are tested empirically over a variety of models. In all cases the improvement is achieved by reducing the variance of the estimator with respect to the standard estimator's variance. Thanks to this variance reduction, the probability of occurrence of rare events can be estimated with acceptable accuracy in a reasonable number of iterations. As a central part of this work, two new methods are proposed, one related to Splitting and the other to Conditional Monte Carlo. Splitting is a method of proven efficiency in performance and performability analysis, but it is scarcely applied to the simulation of highly reliable systems over static models (models with no temporal evolution). In its basic formulation, Splitting tracks the trajectories of a stochastic process through its state space and splits, or multiplies, them at each threshold crossing, for a given set of thresholds distributed between the initial and final states. One of the proposals of this thesis is an adaptation of Splitting to a static network reliability model. In the proposed method, a stochastic process is built over a fictitious time in which the network links keep changing their state, and Splitting is applied to this process. The method proves to be highly accurate and robust. Conditional Monte Carlo is a classical variance reduction technique whose use is not widespread in the field of rare events. In its basic formulation, Conditional Monte Carlo evaluates the probabilities of the events of interest by conditioning the indicator variables on non-rare, easy-to-detect events. The problem is that part of this evaluation includes the exact calculation of some probabilities of the model. One of the methods proposed in this thesis is an adaptation of Conditional Monte Carlo to the analysis of highly reliable Markovian systems. The proposal consists in estimating the probabilities whose exact value is needed by means of a recursive application of Conditional Monte Carlo. Some features of this method are discussed, and its efficiency is verified experimentally.
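    The variance reduction idea behind Conditional Monte Carlo can be sketched on a toy rare-event problem. The distributions and threshold below are arbitrary choices for illustration, not the network or Markovian models studied in the thesis: conditioning on X and averaging the known tail probability of Y replaces 0/1 indicators with smooth values, so far fewer iterations are wasted:

```python
import math
import random

random.seed(1)
t = 15.0        # threshold making {X + Y > t} a rare event for Exp(1) variables
n = 100_000

# Crude Monte Carlo: average 0/1 indicators of the rare event. Since
# P(X + Y > t) = (1 + t) * exp(-t) is about 4.9e-6 here, most runs of
# this size see no hit at all.
crude = sum(random.expovariate(1.0) + random.expovariate(1.0) > t
            for _ in range(n)) / n

# Conditional Monte Carlo: condition on X and use the known tail of Y,
# P(Y > t - x) = exp(-(t - x)) for x < t (and 1 otherwise). Averaging
# these smooth probabilities removes the variance contributed by Y.
cond = sum(math.exp(-max(t - random.expovariate(1.0), 0.0))
           for _ in range(n)) / n

print(crude, cond)
```

    With the same number of iterations, the crude estimate is typically zero or wildly off, while the conditional estimate lands near the true value; the thesis applies this conditioning idea recursively where the needed probabilities are themselves unknown.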

    Automated system design optimisation

    The focus of this thesis is to develop a generic approach for solving reliability design optimisation problems which could be applicable to a diverse range of real engineering systems. The basic problem in optimal reliability design of a system is to explore the means of improving the system reliability within the bounds of available resources. Improving the reliability reduces the likelihood of system failure. The consequences of system failure can vary from minor inconvenience and cost to significant economic loss and personal injury. However, any improvements made to the system are subject to the availability of resources, which are very often limited. The objective of the design optimisation problem analysed in this thesis is to minimise system unavailability (or unreliability if an unrepairable system is analysed) through the manipulation and assessment of all possible design alterations available, subject to constraints on resources and/or system performance requirements. This thesis describes a genetic algorithm-based technique developed to solve the optimisation problem. Since an explicit mathematical form cannot be formulated to evaluate the objective function, the system unavailability (unreliability) is assessed using the fault tree method. Central to the optimisation algorithm are newly developed fault tree modification patterns (FTMPs). They are employed here to construct one fault tree representing all possible designs investigated, from the initial system design specified along with the design choices. This is then altered to represent the individual designs in question during the optimisation process. Failure probabilities for specified design cases are quantified by employing Binary Decision Diagrams (BDDs). A computer programme has been developed to automate the application of the optimisation approach to standard engineering safety systems. 
    Its practicality is demonstrated through the consideration of two systems of increasing complexity; first a High Integrity Protection System (HIPS) followed by a Fire Water Deluge System (FWDS). The technique is then further developed and applied to solve problems of multi-phased mission systems. Two systems are considered; first an unmanned aerial vehicle (UAV) and secondly a military vessel. The final part of this thesis focuses on continuing the development process by adapting the method to solve design optimisation problems for multiple multi-phased mission systems. Its application is demonstrated by considering an advanced UAV system involving multiple multi-phased flight missions. The applications discussed prove that the technique progressively developed in this thesis enables design optimisation problems to be solved for systems with different levels of complexity. A key contribution of this thesis is the development of a novel generic optimisation technique, embedding newly developed FTMPs, which is capable of optimising the reliability design for potentially any engineering system. Another key and novel contribution of this work is the capability to analyse and provide optimal design solutions for multiple multi-phase mission systems. Keywords: optimisation, system design, multi-phased mission system, reliability, genetic algorithm, fault tree, binary decision diagram
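    The optimisation loop described above can be sketched in miniature: quantify the top-event probability of a small fault tree for each candidate design and keep the best design within budget. The gate structure, unavailability figures, costs and budget below are all hypothetical, and exhaustive enumeration stands in for the genetic algorithm (and the FTMP/BDD machinery) used in the thesis:

```python
from itertools import product

# Toy fault tree: the top event occurs if component C fails, or if both
# A and B fail: TOP = (A AND B) OR C, with independent components.
def unavailability(qA, qB, qC):
    q_and = qA * qB
    return q_and + qC - q_and * qC   # inclusion-exclusion for the OR gate

# Two design options per component: (unavailability, cost).
options = {
    "A": [(0.10, 1.0), (0.02, 3.0)],
    "B": [(0.10, 1.0), (0.03, 2.5)],
    "C": [(0.05, 2.0), (0.01, 5.0)],
}
budget = 8.0

# Enumerate the 8 candidate designs; a GA would search this same space
# when the number of design combinations is too large to enumerate.
best = min(
    (unavailability(a[0], b[0], c[0]), a[1] + b[1] + c[1], (a, b, c))
    for a, b, c in product(options["A"], options["B"], options["C"])
    if a[1] + b[1] + c[1] <= budget
)
print(best[0], best[1])   # minimal unavailability within budget, and its cost
```

    In this toy instance the budget rules out upgrading everything, and the search spends it where it matters: C sits directly under the OR gate, so improving C reduces the top-event probability far more than improving A or B inside the AND gate.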

    The challenges affecting the reliability and maintainability of rolling stock operating in the Thabazimbi channel

    Abstract: Rail transportation remains one of the cheapest and most effective modes of transportation in the Southern African Development Community. The proficiency and capability of the railway system, however, require capital investment in infrastructure, transport systems and, more importantly, rail infrastructure to support socio-economic growth and regional interconnection across African nations. To accelerate socio-economic growth in the Southern African Development Community, there is a need for the African countries to work together. This is done with the intention of improving progress by minimising the cost of doing business through local integration and management...M.Phil. (Engineering Management)

    Review of Health Prognostics and Condition Monitoring of Electronic Components

    To meet the specifications of low-cost, highly reliable electronic devices, fault diagnosis techniques play an essential role. It is vital to find flaws in design, components, material, or manufacturing at an early stage. This review paper attempts to summarize past development and recent advances in the areas of green manufacturing, maintenance, remaining useful life (RUL) prediction, and the like. The current state of the art in reliability research for electronic components, mainly covering failure mechanisms, condition monitoring, and residual lifetime evaluation, is explored. A critical analysis of reliability studies to identify their relative merits and the usefulness of their outcomes vis-à-vis green manufacturing is presented. The wide array of statistical, empirical, and intelligent tools and techniques used in the literature is then identified and mapped. Finally, the findings are summarized, and the central research gap is highlighted.
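    As a sketch of the RUL-prediction theme surveyed above, a common baseline is to fit a degradation trend and extrapolate it to a failure threshold. The readings, threshold, and linear-trend assumption below are invented for illustration:

```python
import numpy as np

# Hypothetical degradation readings (e.g. a normalised drift signal) taken
# at regular inspection cycles; failure is declared when it reaches 1.0.
cycles = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
signal = np.array([0.05, 0.14, 0.22, 0.33, 0.41, 0.50])
threshold = 1.0

# Fit a linear degradation trend and extrapolate to the failure threshold.
slope, intercept = np.polyfit(cycles, signal, 1)
t_fail = (threshold - intercept) / slope
rul = t_fail - cycles[-1]  # estimated remaining useful life, in cycles
print(round(rul, 1))
```

    Real prognostics replace the linear fit with the statistical, empirical, and intelligent models the review maps, but the extrapolate-to-threshold structure is the same.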

    Methods for the efficient measurement of phased mission system reliability and component importance

    An increasing number of systems operate over a number of consecutive time periods, in which their reliability structure and the consequences of failure differ, in order to perform some overall operation. Each distinct time period is known as a phase and the overall operation is known as a phased mission. Generally, a phased mission fails immediately if the system fails at any point and is considered a success only if all phases are completed without failure. The work presented in this thesis provides efficient methods for the prediction and optimisation of phased mission reliability. A number of techniques and methods for the analysis of phased mission reliability have been developed previously. Due to the component and system failure time dependencies introduced by the phases, the computational expense of these methods is high, which limits the size of the systems that can be analysed in reasonable time frames on modern computers. Two importance measures, which provide an index of the influence of each component on the system reliability, have also been developed previously. These are useful for the optimisation of the reliability of a phased mission; however, a much larger number have been developed for non-phased missions, and the different perspectives and functions they provide are advantageous. This thesis introduces new methods, as well as improvements and extensions to existing methods, for the analysis of both non-repairable and repairable systems, with an emphasis on improved efficiency in the derivation of phase and mission reliability. New importance measures for phased missions are also presented, including interpretations of those currently available for non-phased missions. These provide a number of interpretations of component importance, allowing those most suitable in a given context to be employed and thus aiding the optimisation of mission reliability. 
    In addition, an extensive computer code has been produced that implements and tests the majority of the newly developed techniques and methods. (EThOS - Electronic Theses Online Service, United Kingdom)
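    An importance measure of the kind discussed above can be sketched for a single-phase system. The structure function and component reliabilities below are hypothetical; the Birnbaum measure shown is one of the classical non-phased measures that the thesis reinterprets for phased missions:

```python
from itertools import product

# Structure function: component 1 in series with the parallel pair (2, 3).
def phi(x1, x2, x3):
    return x1 and (x2 or x3)

def reliability(p):
    """Exact system reliability by enumerating all component state vectors."""
    return sum(
        (p[0] if x1 else 1 - p[0]) *
        (p[1] if x2 else 1 - p[1]) *
        (p[2] if x3 else 1 - p[2])
        for x1, x2, x3 in product((0, 1), repeat=3)
        if phi(x1, x2, x3)
    )

def birnbaum(p, i):
    """Birnbaum importance: I_B(i) = R(p_i = 1) - R(p_i = 0)."""
    hi, lo = list(p), list(p)
    hi[i], lo[i] = 1.0, 0.0
    return reliability(hi) - reliability(lo)

p = [0.9, 0.8, 0.7]
print([round(birnbaum(p, i), 3) for i in range(3)])
```

    Analytically, I_B(1) = 1 - (1 - p2)(1 - p3) = 0.94, while the parallel pair contribute only p1(1 - p3) = 0.27 and p1(1 - p2) = 0.18, so the series component dominates; a phased-mission version must additionally account for the dependencies each phase introduces.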


    The resilience of asset systems to the operational risk of obsolescence: using fuzzy logic to quantify risk profiles

    This thesis sets out to explore possible methodologies to enable proactive obsolescence management for end users within the Built Environment. Obsolescence has been shown to be a growing operational and financial risk as technology is further embedded into our buildings in search of enhanced performance and connectivity. Obsolescence directly impacts the supportability of an asset system, manifesting in obsolescence-driven investments, which are typically managed reactively, causing lifecycle costing complications. Gaps within academic literature and industry guidance have been identified herein and are directly addressed by the research questions. The challenge of researching obsolescence lies in the commercial value of the required datasets, which calls for a novel methodology to address the research problem. Further to this, the multi-stakeholder nature of supply chains, along with the unknown nature of obsolescence, has created a level of ambiguity within the datasets. Fuzzy Logic was adopted, above other options, to create an Obsolescence Impact Tool (OIT) that enables the user to quantify the risk profile of obsolescence within asset systems. This model, along with an enhanced Obsolescence Assessment Tool (OAT), was developed and tested within a two-year case study environment. Additional research questions were answered by analysing reverse-engineered original equipment manufacturer (OEM) sales catalogues. Through the combination of the results from the OIT and OAT, along with the analysis of OEM catalogues, a visualisation of the resilience of asset systems with respect to obsolescence is presented. The findings herein provide evidence for the industrial application of the OIT and OAT through the insights provided by data-driven models. The two models formulate a methodology that enables decision-making and proactive obsolescence management under uncertainty. 
    The results of the OEM analysis provide explicit evidence that can immediately be used by the reader to enhance their obsolescence management plan (OMP). Evidence of the impact of sales strategies, and of how an end user could utilise and reverse engineer the findings, holds potential for all Facilities Management teams. The findings culminate in a wide range of contributions that further the understanding of obsolescence within the Built Environment and, importantly, bridge some of the existing gaps. The Future Works chapter covers both observations made by the author and alternative methodologies that would provide further insight, i.e. Type-2 Fuzzy Sets, Adaptive Learning Techniques, and Markov Chains.
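    A fuzzy inference step of the kind the OIT relies on can be sketched with triangular memberships and two Mamdani-style rules. The linguistic variables, membership parameters, and rule base below are invented and far simpler than the actual tool:

```python
# Triangular membership function on points (a, b, c).
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Inputs on a 0-10 scale: how hard spares are to source, and how critical
# the asset system is to operations.
def obsolescence_risk(supply_difficulty, criticality):
    high_supply = tri(supply_difficulty, 4, 10, 16)   # "supply is difficult"
    high_crit   = tri(criticality, 4, 10, 16)         # "asset is critical"
    low_supply  = tri(supply_difficulty, -6, 0, 6)    # "supply is easy"
    # Two illustrative rules, combined Mamdani-style (min for AND), then
    # defuzzified by a weighted average of crisp risk levels (9 = high, 1 = low).
    r_high = min(high_supply, high_crit)   # difficult supply AND critical -> high risk
    r_low  = low_supply                    # easy supply -> low risk
    if r_high + r_low == 0:
        return 0.0
    return (r_high * 9.0 + r_low * 1.0) / (r_high + r_low)

print(round(obsolescence_risk(8.0, 7.0), 2))  # -> 9.0
```

    The appeal for obsolescence management is that such rules tolerate the ambiguity the thesis notes in supply-chain data: inputs enter as degrees of membership rather than precise measurements.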

    Proceedings / 6th International Symposium of Industrial Engineering - SIE 2015, 24th-25th September, 2015, Belgrade

    editors Vesna Spasojević-Brkić, Mirjana Misita, Dragan D. Milanović
