
    The survival signature for quantifying system reliability: an introductory overview from practical perspective

    The structure function describes the functioning of a system dependent on the states of its components, and is central to the theory of system reliability. The survival signature is a summary of the structure function which is sufficient to derive the system's reliability function. Since its introduction in 2012, the survival signature has received much attention in the literature, with developments on theory, computation, and generalizations. This paper presents an introductory overview of the survival signature, including some recent developments. We discuss challenges for the practical use of survival signatures for large systems.
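    For context, the reliability derivation the abstract refers to has the following well-known form (our notation, following the usual presentation in the survival-signature literature): for a system with K types of components, m_k exchangeable components of type k, and C_k(t) denoting the number of type-k components functioning at time t,

        P(T_S > t) = \sum_{l_1=0}^{m_1} \cdots \sum_{l_K=0}^{m_K} \Phi(l_1, \ldots, l_K) \prod_{k=1}^{K} P(C_k(t) = l_k),

    where \Phi(l_1, \ldots, l_K) is the survival signature, i.e. the probability that the system functions given that exactly l_k components of type k function. For iid type-k failure times with CDF F_k,

        P(C_k(t) = l_k) = \binom{m_k}{l_k} [1 - F_k(t)]^{l_k} [F_k(t)]^{m_k - l_k}.

    The separation of the structure (\Phi) from the component failure models (F_k) is what makes the survival signature a sufficient summary.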

    The joint survival signature of coherent systems with shared components

    The concept of joint bivariate signature, introduced by Navarro et al. [13], is a useful tool for quantifying the reliability of two systems with shared components. As with the univariate system signature, introduced by Samaniego [17], its applications are limited to systems with only one type of components, which restricts its practical use. Coolen and Coolen-Maturi [2] introduced the survival signature, which generalizes Samaniego's signature and can be used for systems with multiple types of components. This paper introduces a joint survival signature for multiple systems with multiple types of components and with some components shared between systems. A particularly important feature is that the functioning of these systems can be considered at different times, enabling computation of relevant conditional probabilities with regard to a system's functioning conditional on the status of another system with which it shares components. Several opportunities for practical application and related challenges for further development of the presented concept are briefly discussed, setting out an important direction for future research.
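    As a rough illustration of the simplest same-time case (our notation and simplification; the paper's general definition allows the systems to be considered at different times and may differ in detail): suppose two systems share m_0 components of one type, with m_1 and m_2 further components exclusive to systems 1 and 2, and let \Phi(l_0, l_1, l_2) denote the probability that both systems function when exactly l_0, l_1, and l_2 components of the respective groups function. For exchangeable components with group CDFs F_g, the joint reliability at a common time t then factorizes analogously to the univariate case:

        P(T_1 > t, T_2 > t) = \sum_{l_0=0}^{m_0} \sum_{l_1=0}^{m_1} \sum_{l_2=0}^{m_2} \Phi(l_0, l_1, l_2) \prod_{g=0}^{2} \binom{m_g}{l_g} [1 - F_g(t)]^{l_g} [F_g(t)]^{m_g - l_g}.

    The conditional probabilities mentioned in the abstract, such as P(T_1 > t_1 | T_2 > t_2), require the more general different-times form introduced in the paper, which this sketch does not cover.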

    Multi-objective System Design Optimization via PPA and a Fuzzy Method

    System design deals with various challenges of targets and resources, such as reliability, availability, maintainability, cost, weight, volume, and configuration. This paper deals with the multi-objective system availability and cost optimization of parallel–series systems by resorting to the multi-objective strawberry algorithm, also known as the Plant Propagation Algorithm (PPA), and a fuzzy method. This is the first application of this optimization algorithm in the literature to this kind of problem for generating the Pareto front. The fuzzy method helps the decision maker select the best compromise solution. A numerical case study involving 10 subsystems highlights the applicability of the proposed approach.
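    One common way to pick a compromise solution from a Pareto front with a fuzzy method is to map each objective value to a linear membership degree and select the solution with the best aggregated membership. The sketch below is a generic textbook-style rule, not necessarily the paper's exact method, and all names and values are illustrative:

        # Minimal sketch of fuzzy compromise selection on a Pareto front.
        # Generic illustration; membership functions and aggregation may
        # differ from the paper's method.

        def fuzzy_compromise(front, senses):
            """front: list of objective tuples; senses: 'min'/'max' per objective."""
            lo = [min(f[j] for f in front) for j in range(len(senses))]
            hi = [max(f[j] for f in front) for j in range(len(senses))]

            def membership(f):
                degs = []
                for j, s in enumerate(senses):
                    if hi[j] == lo[j]:
                        degs.append(1.0)  # objective is constant on the front
                    elif s == 'min':
                        degs.append((hi[j] - f[j]) / (hi[j] - lo[j]))
                    else:
                        degs.append((f[j] - lo[j]) / (hi[j] - lo[j]))
                return min(degs)  # conservative (weakest-objective) aggregation

            return max(range(len(front)), key=lambda i: membership(front[i]))

        # Illustrative front: (availability to maximize, cost to minimize)
        front = [(0.95, 120.0), (0.97, 150.0), (0.99, 240.0)]
        print(fuzzy_compromise(front, ('max', 'min')))  # index of the compromise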

    Predictive inference for system reliability after common-cause component failures

    This paper presents nonparametric predictive inference for system reliability following common-cause failures of components. It is assumed that a single failure event may lead to simultaneous failure of multiple components. Data consist of frequencies of such events involving particular numbers of components. These data are used to predict the number of components that will fail at the next failure event. The effect of failure of one or more components on the system reliability is taken into account through the system's survival signature. The predictive performance of the approach, in which uncertainty is quantified using lower and upper probabilities, is analysed with the use of ROC curves. While this approach is presented for a basic scenario of a system consisting of only a single type of components and without consideration of failure behaviour over time, it provides many opportunities for more general modelling and inference; these are briefly discussed together with the related research challenges.
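    To make the mechanism concrete, here is a minimal precise-probability sketch of ours (the paper itself works with NPI lower and upper probabilities, which this toy version does not reproduce): observed event sizes give a predictive distribution for the number d of components failing at the next common-cause event, and the survival signature converts each d into a system survival probability.

        from collections import Counter

        def post_event_survival(phi, m, event_sizes):
            """phi[l]: P(system functions | exactly l of its m components function);
            event_sizes: observed numbers of components lost per failure event."""
            counts = Counter(event_sizes)
            n = len(event_sizes)
            # naive relative-frequency prediction of the next event size d
            return sum((c / n) * phi[max(m - d, 0)] for d, c in counts.items())

        # 2-out-of-3 system, all 3 components functioning before the event
        phi = [0.0, 0.0, 1.0, 1.0]
        print(post_event_survival(phi, 3, [1, 1, 2, 1, 3]))  # 0.6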

    Efficient resilience analysis and decision-making for complex engineering systems

    Modern societies around the world are increasingly dependent on the smooth functionality of progressively more complex systems, such as infrastructure systems, digital systems like the internet, and sophisticated machinery. They form the cornerstones of our technologically advanced world, and their efficiency is directly related to our well-being and the progress of society. However, these important systems are constantly exposed to a wide range of threats of natural, technological, and anthropogenic origin. The emergence of global crises such as the COVID-19 pandemic and the ongoing threat of climate change have starkly illustrated the vulnerability of these widely ramified and interdependent systems, as well as the impossibility of predicting threats entirely. The pandemic, with its widespread and unexpected impacts, demonstrated how an external shock can bring even the most advanced systems to a standstill, while ongoing climate change continues to produce unprecedented risks to system stability and performance. These global crises underscore the need for systems that can not only withstand disruptions but also recover from them efficiently and rapidly.

    The concept of resilience and related developments encompass these requirements: analyzing, balancing, and optimizing the reliability, robustness, redundancy, adaptability, and recoverability of systems -- from both technical and economic perspectives. This cumulative dissertation therefore focuses on developing comprehensive and efficient tools for resilience-based analysis and decision-making for complex engineering systems. The newly developed resilience decision-making procedure is at the core of these developments. It is based on an adapted systemic risk measure, a time-dependent probabilistic resilience metric, and a grid-search algorithm, and it represents a significant innovation as it enables decision-makers to identify an optimal balance between different types of resilience-enhancing measures while taking monetary aspects into account.

    Increasingly, system components have significant inherent complexity, requiring them to be modeled as systems themselves; this leads to systems-of-systems with a high degree of complexity. To address this challenge, a novel methodology is derived by extending the previously introduced resilience framework to multidimensional use cases and synergistically merging it with an established concept from reliability theory, the survival signature. The new approach combines the advantages of both original components: a direct comparison of different resilience-enhancing measures from a multidimensional search space, leading to an optimal trade-off in terms of system resilience, and a significant reduction in computational effort due to the separation property of the survival signature. Once a subsystem structure has been computed -- a typically computationally expensive process -- any characterization of the probabilistic failure behavior of the components can be validated without having to recompute the structure.

    In reality, measurements, expert knowledge, and other sources of information are subject to multiple uncertainties. To address this, an efficient method based on the combination of the survival signature, fuzzy probability theory, and non-intrusive stochastic simulation (NISS) is proposed. This results in an efficient approach to quantifying the reliability of complex systems while taking the entire uncertainty spectrum into account.
    The new approach, which synergizes the advantageous properties of its original components, achieves a significant decrease in computational effort due to the separation property of the survival signature. In addition, it attains a dramatic reduction in sample size due to the adapted NISS method: only a single stochastic simulation is required to account for the uncertainties. The novel methodology not only represents an innovation in the field of reliability analysis, but can also be integrated into the resilience framework.

    For a resilience analysis of existing systems, the consideration of continuous component functionality is essential. This is addressed in a further novel development: by introducing the continuous survival function and the concept of the Diagonal Approximated Signature as a corresponding surrogate model, the existing resilience framework can be usefully extended without compromising its fundamental advantages.

    In the context of the regeneration of complex capital goods, a comprehensive analytical framework is presented to demonstrate the transferability and applicability of all developed methods to complex systems of any type. The framework integrates the previously developed resilience, reliability, and uncertainty analysis methods. It provides decision-makers with a basis for identifying resilient regeneration paths in two ways: first, in terms of regeneration paths with inherent resilience, and second, in terms of regeneration paths that lead to maximum system resilience, taking into account the technical and monetary factors affecting the complex capital good under analysis.

    In summary, this dissertation offers innovative contributions to efficient resilience analysis and decision-making for complex engineering systems. It presents universally applicable methods and frameworks that are flexible enough to accommodate system types and performance measures of any kind. This is demonstrated in numerous case studies, ranging from arbitrary flow networks and functional models of axial compressors to substructured infrastructure systems with several thousand individual components.
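    The separation property invoked above is straightforward to demonstrate in code. In this minimal sketch (ours, not the dissertation's implementation), the survival signature of a small single-type system is computed once by exhaustively enumerating the structure function, and is then reused to evaluate the reliability under two different component lifetime models without recomputing the structure:

        import math
        from itertools import product

        def survival_signature(structure, m):
            """structure: maps a tuple of m component states (0/1) to 0/1;
            returns phi[l] = P(system works | exactly l components work)."""
            phi = [0.0] * (m + 1)
            for states in product((0, 1), repeat=m):
                phi[sum(states)] += structure(states)
            return [phi[l] / math.comb(m, l) for l in range(m + 1)]

        def reliability(phi, m, F, t):
            """System survival probability at time t, iid components with CDF F."""
            return sum(phi[l] * math.comb(m, l) * (1 - F(t))**l * F(t)**(m - l)
                       for l in range(m + 1))

        # toy structure: series connection of two parallel pairs
        struct = lambda x: max(x[0], x[1]) * max(x[2], x[3])
        phi = survival_signature(struct, 4)            # expensive step, done once
        exponential = lambda t: 1 - math.exp(-0.1 * t)
        weibull = lambda t: 1 - math.exp(-(0.05 * t) ** 1.5)
        print(reliability(phi, 4, exponential, 10.0))  # reuse phi for both
        print(reliability(phi, 4, weibull, 10.0))      # failure models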

    Optimization of systems reliability by metaheuristic approach

    The application of metaheuristic approaches to the optimization of system reliability has attracted growing interest from researchers and designers in recent years. Reliability optimization has become an essential part of the design and operation of large-scale manufacturing systems. This thesis addresses the optimization of system reliability for series–parallel systems, solving redundancy, continuous, and combinatorial optimization problems in reliability engineering by using metaheuristic approaches (MAs). The problem is to select the best redundancy strategy, component, and redundancy level for each subsystem to maximize the system reliability under system-level constraints. This type of problem involves the selection of components with multiple choices and redundancy levels that yield the maximum benefits, subject to cost and weight constraints at the system level. These are very common and realistic problems faced in the conceptual design of numerous engineering systems. The development of efficient solutions to these problems is becoming progressively important because mechanical systems are becoming increasingly complex, while development schedules are shrinking and reliability requirements are changing rapidly and becoming increasingly difficult to meet. An optimal design solution can frequently be obtained more quickly by treating the task as a genetic algorithm redundancy allocation problem (GARAP). In general, redundancy allocation problems (RAPs) are difficult to solve for real cases, especially in large-scale situations. In this study, the reliability optimization of a series–parallel system by using a genetic algorithm (GA) and statistical analysis is considered. The approach discussed herein can be applied to address challenges in system reliability that include the redundant numbers of carefully chosen modules, overall cost, and overall weight. Most related studies have focused only on the single-objective optimization of the RAP; multiobjective optimization has not yet attracted much attention. This research project examines the multiobjective situation, focusing on a multiobjective formulation that maximizes system reliability while simultaneously minimizing system cost and weight. The present study applies a methodology for optimizing the reliability of a series–parallel system based on multiobjective optimization and multistate reliability by using a hybrid GA and a fuzzy function. The study aims to determine the strategy for selecting the degree of redundancy for every subsystem so as to maximize the overall system reliability subject to the overall cost and weight limitations. In addition, the outcomes of the case study on optimizing the reliability of the series–parallel system are presented and related to previously investigated phenomena to assess the performance of the GA under review. Furthermore, this study established a new metaheuristic-based technique for resolving multiobjective optimization challenges, such as the common reliability redundancy allocation problem, and a new simulation process was developed to generate practical tools for designing reliable series–parallel systems. Metaheuristic methods were applied to solve such difficult and complex problems, as they provide a useful compromise between the amount of computation time required and the quality of the approximated solution.

    The industrial challenges include the maximization of system reliability subject to limited system cost and weight; the minimization of system weight subject to limited system cost and given system reliability requirements; and the improvement of component quality through optimization of system reliability. Furthermore, a real-life case study on the safety control of a gas turbine in the overspeed state was explored with the aim of verifying the proposed algorithm in the context of system optimization.
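    To illustrate the kind of GA encoding typically used for a redundancy allocation problem (a generic sketch under our own assumptions, not the thesis implementation; all component data are hypothetical), each gene below is the redundancy level of one subsystem, fitness is the series–parallel system reliability, and the cost and weight limits are enforced by a death penalty on infeasible designs:

        import random

        # hypothetical per-subsystem component data: (reliability, cost, weight)
        COMP = [(0.90, 5.0, 4.0), (0.85, 4.0, 3.0), (0.95, 7.0, 5.0)]
        COST_MAX, WEIGHT_MAX = 40.0, 30.0
        N_MIN, N_MAX = 1, 4   # allowed redundancy levels per subsystem

        def fitness(ind):
            rel, cost, weight = 1.0, 0.0, 0.0
            for n, (r, c, w) in zip(ind, COMP):
                rel *= 1 - (1 - r) ** n      # parallel redundancy in a subsystem
                cost += n * c
                weight += n * w
            if cost > COST_MAX or weight > WEIGHT_MAX:
                return 0.0                   # death penalty: infeasible design
            return rel

        def ga(pop_size=30, gens=100, mut_rate=0.2):
            pop = [[random.randint(N_MIN, N_MAX) for _ in COMP]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                elite = pop[:pop_size // 2]           # truncation selection
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, len(COMP))
                    child = a[:cut] + b[cut:]         # one-point crossover
                    if random.random() < mut_rate:    # random-reset mutation
                        child[random.randrange(len(COMP))] = \
                            random.randint(N_MIN, N_MAX)
                    children.append(child)
                pop = elite + children
            best = max(pop, key=fitness)
            return best, fitness(best)

        print(ga())   # prints the best design found and its reliability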

    Multiple Fault Isolation in Redundant Systems

    Fault diagnosis in large-scale systems that are products of modern technology presents formidable challenges to manufacturers and users, due to the large number of failure sources in such systems and the need to quickly isolate and rectify failures with minimal downtime. In addition, for fault-tolerant systems and systems with infrequent opportunities for maintenance (e.g., the Hubble telescope or the space station), the assumption of at most a single fault in the system is unrealistic. In this project, we have developed novel block and sequential diagnostic strategies to isolate multiple faults in the shortest possible time without making the unrealistic single-fault assumption.
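    As a flavor of sequential multiple-fault isolation (a generic sketch of ours, not the project's actual strategies; the test and fault names are illustrative), one simple update that needs no single-fault assumption is exoneration: a passing test clears every fault it covers, so a greedy strategy can repeatedly run the test covering the most remaining suspects:

        def next_test(tests, suspects):
            """tests: {name: set of faults the test covers}; suspects: faults
            not yet exonerated. Pick the test covering the most suspects."""
            best, best_cov = None, 0
            for name, covers in tests.items():
                cov = len(covers & suspects)
                if cov > best_cov:
                    best, best_cov = name, cov
            return best

        def update(suspects, covers, passed):
            # pass -> all covered faults exonerated; fail -> at least one
            # covered fault is present, so the suspect set is unchanged here
            return suspects - covers if passed else suspects

        tests = {'t1': {'f1', 'f2'}, 't2': {'f2', 'f3'}, 't3': {'f3', 'f4'}}
        suspects = {'f1', 'f2', 'f3', 'f4'}
        t = next_test(tests, suspects)           # 't1': first test covering two
        suspects = update(suspects, tests[t], passed=True)
        print(t, suspects)                       # t1 {'f3', 'f4'}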

    Deployment Policies to Reliably Maintain and Maximize Expected Coverage in a Wireless Sensor Network

    The long-term operation of a wireless sensor network (WSN) requires the deployment of new sensors over time to restore any loss in network coverage and communication ability resulting from sensor failures. Over the course of several deployment actions it is important to consider the cost of maintaining the WSN in addition to any desired performance measures such as coverage, connectivity, or reliability. The resulting problem formulation is approached first through a time-based deployment model in which the network is restored to a fixed size at periodic time intervals. The network destruction spectrum (D-spectrum) has been introduced to estimate reliability and is more commonly applied to a static network, rather than a dynamic network where new sensors are deployed over time. We discuss how the D-spectrum can be incorporated to estimate the reliability of a time-based deployment policy and the features that allow a wide range of deployment policies to be evaluated in an efficient manner. We next focus on a myopic condition-based deployment model in which the network is observed at periodic time intervals and a fixed budget is available to deploy new sensors with each observation. With a limited budget available, the model must address the complexity present in a dynamic network size in addition to a dynamic network topology, and the dependence of network reliability on the deployment action. We discuss how the D-spectrum can be applied to the myopic condition-based deployment problem, illustrating the value of the D-spectrum in a variety of maintenance settings beyond the traditional static network reliability problem. From the insight of the time-based and myopic condition-based deployment models, we present a Markov decision process (MDP) model for the condition-based deployment problem that captures the benefit of an action beyond the current time period. Methodology related to approximate dynamic programming (ADP) and approximate value iteration algorithms is presented to search for high-quality deployment policies. In addition to the time-based and myopic condition-based deployment models, the MDP model is one of the few that address the repeated deployment of new sensors as well as an emphasis on network reliability. For each model we discuss the relevant problem formulation and methodology to estimate network reliability, and we demonstrate the performance on a range of test instances, comparing to alternative policies or models as appropriate. We conclude with a stochastic optimization model focused on a slightly different objective: maximizing expected coverage under uncertainty in where a sensor lands in the network. We discuss a heuristic solution method that seeks to determine an optimal deployment of sensors, present results for a wide range of network sizes, and explore the impact of sensor failures on both the model formulation and the resulting deployment policy.
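    Since the D-spectrum is central here, a minimal Monte Carlo sketch of how it is typically estimated may help (a generic illustration under our own assumptions, not the author's implementation): random failure permutations yield the distribution of the failure count at which the network stops meeting its requirement, and the cumulative spectrum then converts iid sensor-failure probabilities into network reliability:

        import random
        from math import comb

        def d_spectrum(n, works, trials=2000):
            """Estimate the cumulative D-spectrum F[i]: the probability that the
            network is down once a random i-subset of its n sensors has failed.
            works(alive) -> bool encodes the network requirement."""
            F = [0] * (n + 1)
            for _ in range(trials):
                order = random.sample(range(n), n)   # random failure permutation
                alive = set(range(n))
                for i, v in enumerate(order, start=1):
                    alive.discard(v)
                    if not works(alive):
                        for j in range(i, n + 1):    # down from failure i onward
                            F[j] += 1
                        break
            return [f / trials for f in F]

        def reliability(F, n, p):
            """Network survival probability if each sensor fails independently
            with probability p."""
            return 1 - sum(F[i] * comb(n, i) * p**i * (1 - p)**(n - i)
                           for i in range(n + 1))

        # toy requirement: coverage needs at least 3 of 5 sensors alive
        F = d_spectrum(5, lambda alive: len(alive) >= 3)
        print(reliability(F, 5, 0.2))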