9 research outputs found

    New Method for Complex Network Reliability Analysis through Probability Propagation

    Reliability analysis of complex networks is often limited by the increasing dimensionality of the problem as the number of nodes and possible paths in the network grows. This is particularly true for reliability analysis problems whose computational requirements increase exponentially with system size. In this paper, we present a new method for complex network reliability analysis, which we call the probability propagation method (PrPm). The idea originates from the concept of belief propagation for inference in network graphs. In PrPm, the message passed between nodes is a joint probability distribution. At each step, the distribution is updated and passed as the message to the node's direct neighbors. Once the message reaches the terminal node, an estimate of the network reliability is obtained. The method yields an analytical solution for system reliability. We present the derived updating rules for message passing, which involve some approximations, and apply the method to two test applications: a distribution network and a general grid network. Results from the applications show high accuracy for the proposed method compared to exact solutions, where these are available for comparison. In addition, PrPm achieves orders-of-magnitude gains in computational efficiency over existing approaches, reducing the growth of computation time with system size from exponential to quartic. The method enables the accurate and computationally tractable calculation of failure probabilities of large, generally connected systems.
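The abstract above does not reproduce PrPm's updating rules, so the following is only a minimal sketch of the underlying idea: reliability values can be "propagated" through a network by combining them with series/parallel rules, here on a toy network with independent component failures, and checked against exact state enumeration. The network topology and component reliabilities are hypothetical.

```python
from itertools import product

def series(*ps):
    """All components in series must survive."""
    r = 1.0
    for p in ps:
        r *= p
    return r

def parallel(*ps):
    """At least one of the parallel components must survive."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def enumerate_reliability(ps, system_up):
    """Exact reliability by enumerating all 2^n component states."""
    total = 0.0
    for states in product([0, 1], repeat=len(ps)):
        prob = 1.0
        for p, s in zip(ps, states):
            prob *= p if s else 1.0 - p
        if system_up(states):
            total += prob
    return total

# source -> {a (0.9) or b (0.8) in parallel} -> c (0.95) -> terminal
r_prop = series(parallel(0.9, 0.8), 0.95)
r_exact = enumerate_reliability([0.9, 0.8, 0.95],
                                lambda s: (s[0] or s[1]) and s[2])
print(r_prop, r_exact)  # both approximately 0.931
```

The point of propagation-style methods is that the combination rules cost polynomial time, whereas the enumeration used for verification grows as 2^n.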

    System reliability analysis of structures subjected to fatigue induced sequential failures using evolutionary algorithms

    In the past, many catastrophic failures have occurred due to a lack of redundancy and managerial oversight. For example, it was found that local failures of improper welds connecting the suspended truss to the anchor trusses caused the collapse of the Grand Sung-Soo Bridge in Seoul, South Korea on October 21, 1994. Due to a lack of structural redundancy, the initial bridge rib failure was followed by other failures leading to system collapse. With proper system reliability analysis, such cascading failures could be foreseen by stakeholders. To help make better risk-informed decisions, system reliability methods have been developed to analyze general structures subjected to the risk of cascading system-level failures caused by local fatigue-induced failures. For efficient reliability analysis of such complex system problems, many research efforts have sought to identify critical failure sequences with significant likelihoods by an event-tree search coupled with system reliability analyses; however, this approach is time-consuming or intractable due to the repeated calculation of the probabilities of innumerable failure modes, which often necessitates heuristic assumptions or simplifications. Recently, a decoupled approach was proposed: critical failure modes are first identified in the space of random variables without system reliability analyses or an event-tree search, and an efficient system reliability analysis is then performed to compute the system failure probability based on the identified modes. In order to identify critical failure modes in decreasing order of their relative contributions to the system failure probability, a simulation-based selective searching technique was developed using a genetic algorithm. The system failure probability was then computed by a multi-scale system reliability method that accounts for the statistical dependence among the component events as well as among the identified failure modes.
Part of this work presents this decoupled approach in detail and demonstrates its applicability to complex bridge structural systems subjected to the risk of cascading failures induced by fatigue. Using a recursive formulation for describing the limit states of local fatigue cracking, the system failure event is described as a disjoint cut-set event. Critical cut-sets, i.e., failure sequences with significant likelihood, are identified by the selective searching technique using a genetic algorithm. The probabilities of the cut-sets are then computed by crude Monte Carlo simulation. Owing to the mutual exclusiveness of the cut-sets, a lower bound on the system cascading failure probability is obtained by simple addition of the cut-set probabilities. A numerical example of a bridge structure demonstrates that the proposed search method successfully identifies the dominant failure modes contributing most to the system failure probability, and that the system reliability analysis method accurately evaluates the system failure probability with statistical dependence fully considered. An example bridge with approximately 100 truss elements is considered to investigate the applicability of the method to realistic large-size structures. The efficiency and accuracy of the method are demonstrated through comparison with Monte Carlo simulations. The aforementioned system reliability analysis is based on an a priori inspection cycle time and computes the probability that the time until system failure is smaller than the given inspection cycle. Since most field practitioners do not know this value beforehand, a new method has been developed to perform simplified reliability analysis for many performance levels simultaneously. The First-Order Reliability Method (FORM) is often used for structural reliability analysis.
The proposed method uses a multi-objective genetic algorithm, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), to perform many FORM analyses simultaneously and generate a Pareto surface of design points. From this Pareto surface, data on cases of "critical but unlikely failures" for short inspection cycle times and cases of "less-critical but highly likely failures" for long inspection cycle times can be found at once. From the nature of this method, the approach is termed "Multi-Objective" FORM. Part of this work presents Multi-Objective FORM in detail. The applicability of the approach is shown through two numerical examples. The first example is a general situation with few random variables. The second example analyzes a statically indeterminate truss subjected to cyclic loading. Both numerical examples are validated against crude MCS results and show that the method can find a full Pareto surface, which provides reliability analysis results at a range of performance levels along with the probability distribution of the performance quantity.
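The key bookkeeping step above, summing the probabilities of mutually exclusive failure sequences into a lower bound, can be illustrated with a toy sketch. This is not the thesis's GA-based search or its bridge model: two components with hypothetical normal capacities share a random load, and each system failure is attributed to whichever component fails first, so the two "sequences" are disjoint by construction and their crude-MCS probabilities can simply be added.

```python
import random

random.seed(42)
N = 200_000

count_seq1 = count_seq2 = 0
for _ in range(N):
    r1 = random.gauss(5.0, 1.0)   # capacity of component 1 (hypothetical)
    r2 = random.gauss(6.0, 1.0)   # capacity of component 2 (hypothetical)
    s = random.gauss(3.0, 1.0)    # shared load
    if s > r1 and s > r2:         # system fails only if both are overloaded
        if r1 < r2:               # weaker component fails first: sequence 1
            count_seq1 += 1
        else:                     # sequence 2
            count_seq2 += 1

p1, p2 = count_seq1 / N, count_seq2 / N
# the sequences are mutually exclusive, so simple addition gives a
# valid (here exact, in general lower-bound) system failure probability
p_system_lower = p1 + p2
print(p_system_lower)
```

In the thesis the same additivity argument is what makes the identified cut-sets usable without inclusion-exclusion corrections.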

    Advanced methodologies for reliability-based design optimization and structural health prognostics

    Failures of engineered systems can lead to significant economic and societal losses. To minimize these losses, reliability must be ensured throughout the system's lifecycle in the presence of manufacturing variability and uncertain operational conditions. Many reliability-based design optimization (RBDO) techniques have been developed to ensure high reliability of engineered system designs under manufacturing variability. Schedule-based maintenance, although expensive, has been a popular way to maintain highly reliable engineered systems under uncertain operational conditions. However, so far there is no cost-effective and systematic approach that ensures high reliability of engineered systems throughout their lifecycles while accounting for both manufacturing variability and uncertain operational conditions. Inspired by the intrinsic ability of systems in ecology, economics, and other fields to proactively adjust their functioning to avoid potential failures, this dissertation attempts to adaptively manage engineered system reliability over the lifecycle by advancing two essential and interrelated research areas: system RBDO and prognostics and health management (PHM). System RBDO ensures high reliability of an engineered system in the early design stage, whereas capitalizing on PHM technology enables the system to proactively avoid failures in its operation stage. Extensive literature reviews in these areas have identified four key research issues: (1) how system failure modes and their interactions can be analyzed in a statistical sense; (2) how limited data on input manufacturing variability can be used for RBDO; (3) how sensor networks can be designed to effectively monitor system health degradation under highly uncertain operational conditions; and (4) how accurate and timely remaining useful lives of systems can be predicted under highly uncertain operational conditions.
To address these key research issues, this dissertation lays out four research thrusts in the following chapters: Chapter 3 - Complementary Intersection Method for System Reliability Analysis, Chapter 4 - Bayesian Approach to RBDO, Chapter 5 - Sensing Function Design for Structural Health Prognostics, and Chapter 6 - A Generic Framework for Structural Health Prognostics. Multiple engineering case studies are presented to demonstrate the feasibility and effectiveness of the proposed RBDO and PHM techniques for ensuring and improving the reliability of engineered systems throughout their lifecycles.

    An Efficient Simulation-based Approach for Community-level Seismic Risk Assessment

    Master's thesis -- Seoul National University Graduate School, Department of Civil and Environmental Engineering, February 2017. Advisor: Junho Song.

Research is needed to evaluate the seismic risk of a community within a probabilistic framework, because (a) spatially distributed buildings and infrastructure are critical assets of an urban community and (b) there are uncertainties in both the natural hazard and the structural behavior. While Monte Carlo simulation (MCS) provides a straightforward way to evaluate the seismic risk to urban assets, it is not always efficient, requiring a large computational cost to forecast rare events such as severe hazard scenarios. Such catastrophic situations must nevertheless be identified to keep communities sustainable after urban disasters. To overcome this issue, this thesis proposes alternative simulation-based approaches for probabilistic seismic risk assessment at the community level: (a) cross-entropy-based concurrent adaptive importance sampling (CE-CAIS) and (b) a clustering-based approach. These new techniques are designed to establish computationally efficient frameworks for probabilistic seismic risk assessment of an urban community. In Chapter 2, CE-CAIS is introduced to identify seismic risk in multi-state, large-scale systems, together with two dimensionality reduction techniques that expand its applicability. In Chapter 3, the clustering-based approach is demonstrated by forecasting seismic risk for complex urban road networks with reduced computing resources. Numerical examples in each chapter validate the proposals, and further research is expected to build on these results in the study of urban disasters and resilience.

Table of contents:
Chapter 1. Introduction
  1.1. Study Background
  1.2. Purpose of Research
Chapter 2. Probabilistic risk assessment of multi-state, large-scale systems using cross-entropy-based adaptive importance sampling
  2.1. Introduction
  2.2. Overview of fundamental methodologies
    2.2.1. Cross-entropy-based adaptive importance sampling
    2.2.2. Probabilistic risk assessment for urban community
  2.3. Concurrent adaptive importance sampling
  2.4. Post-hazard traffic flow capacity of a hypothetical road network
    2.4.1. Matrix-based system reliability method
    2.4.2. Procedure of efficient sampling
    2.4.3. Results
  2.5. Dimensionality reduction techniques
    2.5.1. Principal component analysis
    2.5.2. Central limit theorem
  2.6. Aggregated regional monetary loss in Shelby County
    2.6.1. Damage factor and loss estimation for individual buildings
    2.6.2. Procedure of efficient sampling
    2.6.3. Results
  2.7. Conclusion
  Appendix 2A. Simulation of seismic hazard on uncorrelated standard normal space
    2A.1. Magnitude-frequency relationship
    2A.2. Realization of rupture surface
  Appendix 2B. Updating rules for CE-AIS-GM
    2B.1. Optimal importance sampling density
    2B.2. Cross-entropy-based updating rule using a Gaussian mixture
    2B.3. Initial parameters to implement CE-CAIS
Chapter 3. Feature selection and clustering-based approach for complex lifeline networks
  3.1. Introduction
  3.2. Probabilistic seismic risk assessment for lifeline networks
    3.2.1. Randomness in ground-motion prediction
    3.2.2. Uncertainties in structural behavior
    3.2.3. Network risk assessment in the context of utility
  3.3. Feature selection and clustering-based approach
    3.3.1. Feature selection: proxy measures
    3.3.2. Clustering-based approach for PRA
  3.4. Post-hazard traffic flow of a hypothetical road network
  3.5. Traffic network in the Bay Area, San Francisco
  3.6. Conclusion
  Appendix 3A. Scenario studies on the Bay Area road network example
Chapter 4. Conclusion
Bibliography
Abstract
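Cross-entropy-based adaptive importance sampling, the building block behind CE-CAIS, can be sketched in a few lines. This is not the thesis's CE-CAIS with Gaussian mixtures or its seismic loss model; it is the textbook single-Gaussian version, estimating the rare-event probability P(X > 4) for a standard normal X by adaptively shifting the mean of the sampling density toward the failure region, then reweighting with the likelihood ratio.

```python
import math
import random

random.seed(1)
threshold = 4.0   # rare-event boundary: P(X > 4) ~ 3.17e-5
n = 5000          # samples per adaptation round
rho = 0.1         # elite fraction driving the CE update
mu = 0.0          # mean of the normal sampling density, adapted below

for _ in range(10):
    xs = sorted(random.gauss(mu, 1.0) for _ in range(n))
    elite = xs[int((1 - rho) * n):]      # top rho fraction of samples
    if elite[0] >= threshold:            # density has reached the target set
        break
    mu = sum(elite) / len(elite)         # CE update for a normal mean

def phi(x, m):
    """Standard-deviation-1 normal density with mean m."""
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2.0 * math.pi)

# final importance-sampling estimate with likelihood-ratio weights
est = 0.0
for _ in range(n):
    x = random.gauss(mu, 1.0)
    if x > threshold:
        est += phi(x, 0.0) / phi(x, mu)  # weight = target / sampling density
est /= n
print(est)   # exact value is about 3.17e-5
```

A crude MCS estimate of the same probability would need on the order of millions of samples for comparable accuracy, which is the efficiency gap the thesis exploits at the scale of regional loss assessment.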

    Stochastic Renewal Process Models for Structural Reliability Analysis

    Reliability analysis in structural engineering is used in the initial design phase, and its application continues throughout the service life of a structural system in the form of maintenance planning and optimization. Engineering structures are usually designed with extremely high reliability and a long service life. However, deterioration over time and exposure to external hazards such as earthquakes and strong winds increase a structure's vulnerability to failure. In structural reliability analysis, stochastic processes have been used to model time-dependent uncertain variations in environmental loads and structural resistance. The Homogeneous Poisson Process (HPP) is most commonly used as the driving process behind environmental hazards and the shocks causing structural deterioration. The HPP model is justified by an asymptotic argument: exceedances of a process over a high threshold during a long lifetime converge to an HPP. This approach serves the purpose at the initial design stage. The combination of stochastic loads is an important part of design load estimation, and current solutions of the load combination problem are likewise based on HPP shock and pulse processes. Deterioration is typically modelled as a random variable problem instead of a stochastic process; among stochastic models of deterioration, the gamma process is widely used. Evaluating reliability by combining a stochastic load process with a stochastic process of deterioration, such as the gamma process, is a very challenging problem, so its discussion in the existing literature is quite limited. For the reliability assessment of existing structures, such as nuclear power plants nearing the end of life, an indiscriminate use of HPP load models becomes questionable, as the asymptotic arguments may not be valid over a short remaining life.
Thus, this thesis aims to generalize the stochastic models used in structural reliability analysis by considering more general models of environmental hazards based on the theory of renewal processes. These models include shock, pulse, and alternating processes. The stochastic load combination problem is also solved in a more general setting by considering a renewal pulse process in combination with a Poisson shock process. The thesis presents a clear exposition of the stochastic load and strength combination problem. Several numerical algorithms have been developed to compute the stochastic reliability solution, and the results have been compared with existing approximations. Naturally, the existing approximations serve adequately in routine design; however, for critical structures with high safety consequences, the proposed methods provide a more realistic assessment of structural reliability. In summary, the results presented in this thesis contribute to the advancement of stochastic modeling in structural reliability analysis.
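The baseline model the thesis generalizes, a homogeneous Poisson shock process, is easy to simulate. The sketch below (with illustrative parameters, not the thesis's) draws exponential inter-arrival times at rate lam, gives each shock a random magnitude, and records failure if any shock exceeds a fixed resistance r. For this simple case the lifetime failure probability has the closed form 1 - exp(-lam * p_exceed * T), which the simulation reproduces.

```python
import math
import random

random.seed(7)
lam, T, r = 2.0, 50.0, 3.0    # shocks/yr, lifetime in yrs, fixed resistance

def one_lifetime():
    """Simulate one lifetime; True if a shock ever exceeds the resistance."""
    t = 0.0
    while True:
        t += random.expovariate(lam)      # exponential inter-arrival time
        if t > T:
            return False                  # survived the design lifetime
        if random.gauss(0.0, 1.0) > r:    # shock magnitude ~ N(0, 1)
            return True                   # first-passage failure

N = 20_000
p_sim = sum(one_lifetime() for _ in range(N)) / N

p_exceed = 0.5 * math.erfc(r / math.sqrt(2.0))   # P(N(0,1) > r)
p_exact = 1.0 - math.exp(-lam * p_exceed * T)    # thinned-HPP closed form
print(p_sim, p_exact)
```

Replacing `random.expovariate` with a non-exponential inter-arrival distribution turns this into a general renewal shock process, the case where no such closed form exists and the thesis's numerical algorithms are needed.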

    Quantification of the Reliability and Component Importance of Infrastructure Systems Considering Natural Hazard Impacts

    The central topic is the quantification of the reliability of infrastructure networks subject to extreme wind loads. Spatio-temporal random fields describe the wind distributions, and specifically calibrated fragility curves yield the failure probabilities of the components as a function of wind speed. The network damage is simulated taking possible cascading component failures into account. Purpose-defined "importance measures" prioritize the components by the strength of their influence on system reliability, providing the basis for measures to improve it.
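A fragility curve of the kind described above maps hazard intensity to component failure probability. The sketch below uses the common lognormal parametric form; the median capacity and dispersion are illustrative placeholders, not the calibrated values from this dissertation.

```python
import math

def fragility(v, median=45.0, beta=0.3):
    """P(component failure | wind speed v in m/s), lognormal-CDF form.

    median : wind speed at which failure probability is 0.5 (assumed)
    beta   : logarithmic dispersion of the capacity (assumed)
    """
    z = (math.log(v) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(fragility(45.0))   # exactly 0.5 at the median capacity
print(fragility(60.0))   # higher failure probability in stronger winds
```

Feeding a sampled wind field through such per-component curves yields the component failure probabilities that the cascading-failure simulation then consumes.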

    Contribution to the Consideration of Dynamic Requirements in the Preliminary Design of Complex Systems

    This thesis deals with the sizing of a complex technical system. The objective is to propose and support a design process in which the static sizing of the initial architecture of a system satisfies the static and dynamic requirements from the outset, with no need for resizing. We therefore propose a new design approach in which static and dynamic requirements are taken into account simultaneously and globally in the preliminary design phase. The approach starts from the requirements to determine admissible solutions and uses set-based methods such as interval computation and constraint propagation. The design variables are expressed as intervals, and the static and dynamic requirements are implemented in a single NCSP model. The dynamic requirements are the more difficult to integrate: they comprise the functional requirements of the system, resonance, and criteria on stability, controllability, and transmittance. First, we integrate the dynamic behavior of a technical system in the form of an interval ordinary differential equation; second, we translate the dynamic requirements into algebraic constraints defined by a set of equations and inequalities. The generated solution is the set of admissible values of the design variables that simultaneously satisfy the imposed static and dynamic requirements. Coupling the static and dynamic sizing steps in the proposed approach avoids over-sizing, since the dynamic requirements inform the choice of safety factors, and avoids resizing loops in case of failure, which saves significant computation time and reduces design cost. The proposed design approach is validated by application to the sizing of a MacPherson active suspension system.
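Interval constraint propagation, one of the set-based methods named above, can be sketched minimally. This is not the thesis's NCSP solver: two hypothetical design variables start as wide intervals, and a single product constraint is used to contract both domains until a fixed point, keeping exactly the admissible values. Intervals are kept positive so division stays simple.

```python
def contract_product(x, y, c):
    """Narrow intervals x and y so that x*y stays inside interval c.

    All intervals are (low, high) with positive bounds (assumption that
    keeps interval division monotone and simple).
    """
    (xl, xu), (yl, yu), (cl, cu) = x, y, c
    # backward propagation: x must allow some y in its domain to reach c
    xl = max(xl, cl / yu)
    xu = min(xu, cu / yl)
    # ...and symmetrically for y, using the already-narrowed x
    yl = max(yl, cl / xu)
    yu = min(yu, cu / xl)
    return (xl, xu), (yl, yu)

# e.g. a stiffness k and a mass m, each initially in [1, 10], constrained
# so that k * m lies in [20, 30] (a hypothetical dynamic requirement)
k, m = (1.0, 10.0), (1.0, 10.0)
for _ in range(5):                 # iterate contraction to a fixed point
    k, m = contract_product(k, m, (20.0, 30.0))
print(k, m)                        # both contract to (2.0, 10.0)
```

Chaining many such contractors over all static and dynamic constraints is what lets the set-based approach discard inadmissible designs without resizing loops.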

    RELIABILITY AND RISK ASSESSMENT OF NETWORKED URBAN INFRASTRUCTURE SYSTEMS UNDER NATURAL HAZARDS

    Modern societies increasingly depend on the reliable functioning of urban infrastructure systems in the aftermath of natural disasters such as hurricanes and earthquakes. Apart from sizable capital for maintenance and expansion, the reliable performance of infrastructure systems under extreme hazards also requires strategic planning and effective resource assignment. Hence, efficient system reliability and risk assessment methods are needed to give system stakeholders insight into infrastructure performance under different hazard scenarios so that they can make informed decisions in response to them. Moreover, efficient assignment of limited financial and human resources to maintenance and retrofit actions requires new methods to identify critical system components under extreme events. Infrastructure systems such as highway bridge networks are spatially distributed systems with many linked components, so network models describing them as mathematical graphs with nodes and links apply naturally to the study of their performance. Owing to their complex topology, general system reliability methods are ineffective for evaluating the reliability of large infrastructure systems. This research develops computationally efficient methods, such as a modified Markov chain Monte Carlo simulation algorithm for network reliability, and proposes a network reliability framework (BRAN: Bridge Reliability Assessment in Networks) that is applicable to large and complex highway bridge systems. Since the responses of system components to hazard scenario events are often correlated, the BRAN framework accounts for correlated component failure probabilities stemming from different correlation sources.
Failure correlations from non-hazard sources are particularly emphasized: they can have a significant impact on network reliability estimates, yet they have often been ignored or only partially considered in the literature on infrastructure system reliability. The developed network reliability framework is also used for probabilistic risk assessment, with network reliability as the network performance metric. Risk analysis studies may require a prohibitively large number of simulations for large and complex infrastructure systems, as they involve evaluating the network reliability for multiple hazard scenarios. This thesis addresses that challenge by developing network surrogate models with statistical learning tools such as random forests. The surrogate models can replace network reliability simulations in a risk analysis framework and significantly reduce computation times. The proposed approach therefore offers an alternative to the established methods of enhancing the computational efficiency of risk assessment: it builds a surrogate model of the complex system at hand rather than reducing the number of analyzed hazard scenarios through hazard-consistent scenario generation or importance sampling. Nevertheless, the application of surrogate models can be combined with scenario reduction methods to improve the analysis efficiency even further. To address the problem of prioritizing system components for maintenance and retrofit actions, two advanced metrics are developed in this research to rank the criticality of system components. Both metrics combine system component fragilities with the topological characteristics of the network, and they provide rankings that are either conditioned on specific hazard scenarios or fully probabilistic, depending on the preference of infrastructure system stakeholders, while offering enhanced efficiency and practical applicability compared to existing methods.
The developed frameworks for network reliability evaluation, risk assessment, and component prioritization are intended to address important gaps in state-of-the-art management and planning for infrastructure systems under natural hazards. Their application can enhance public safety by informing the decision-making process for expansion, maintenance, and retrofit actions for infrastructure systems.
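The core quantity such frameworks evaluate, network (here two-terminal connectivity) reliability, can be estimated by simulation on a toy graph. This sketch is not BRAN: it uses a hypothetical four-node bridge network with independent link failures, whereas the framework's point is precisely to include correlated ones and to scale to large systems.

```python
import random
from collections import deque

EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # small "bridge" network
P_FAIL = 0.1                                       # per-link failure prob.

def connected(surviving, source=0, sink=3, n_nodes=4):
    """True if sink is reachable from source over the surviving links."""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in surviving:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return sink in seen

random.seed(3)
N = 50_000
hits = sum(
    connected([e for e in EDGES if random.random() > P_FAIL])
    for _ in range(N)
)
print(hits / N)   # two-terminal reliability estimate (exact value ~0.9785)
```

Introducing correlation between link failures (e.g. via a shared hazard variable) changes this estimate, which is why the framework's treatment of correlation sources matters.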

    Bayesian networks for the multi-risk assessment of road infrastructure

    The purpose of this study is to develop a methodological framework for the multi-risk assessment of road infrastructure systems. Since the network performance is directly linked to the functional states of its physical elements, most efforts are devoted to the derivation of fragility functions for bridges exposed to potential earthquake, flood and ground failure events. Thus, a harmonization effort is required in order to reconcile fragility models and damage scales from different hazard types. The proposed framework starts with the inventory of the various hazard-specific damaging mechanisms or failure modes that may affect each bridge component (e.g. piers, deck, bearings). Component fragility curves are then derived for each of these component failure modes, while corresponding functional consequences are proposed in a component-level damage-functionality matrix, thanks to an expert-based survey. Functionality-consistent failure modes at the bridge level are then assembled for specific configurations of component damage states. Finally, the development of a Bayesian Network approach enables the robust and efficient derivation of system fragility functions that (i) directly provide probabilities of reaching functionality losses and (ii) account for multiple types of hazard loadings and multi-risk interactions. At the network scale, a fully probabilistic approach is adopted in order to integrate multi-risk interactions at both hazard and fragility levels. A temporal dimension is integrated to account for joint independent hazard events, while the hazard-harmonized fragility models are able to capture cascading failures. The quantification of extreme events cannot be achieved by conventional sampling methods, and therefore the inference ability of Bayesian Networks is investigated as an alternative. 
Elaborate Bayesian Network formulations based on the identification of link sets are benchmarked, demonstrating the computational difficulties that currently arise in treating large and complex systems.
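The inference idea described above can be illustrated on a toy Bayesian network. This is not one of the benchmarked link-set formulations: it is a three-layer sketch with a single hazard node, two bridge-component damage nodes that are conditionally independent given the hazard, and a series-system failure event, all with illustrative probabilities. Exact inference is done by brute-force enumeration, which is exactly the step that stops scaling on large networks.

```python
from itertools import product

p_h = {True: 0.05, False: 0.95}           # P(severe hazard event)
p_c_given_h = {True: 0.40, False: 0.02}   # P(component damaged | hazard)

def p_system_failure():
    """P(system fails) by enumerating all (hazard, c1, c2) states."""
    total = 0.0
    for h, c1, c2 in product([True, False], repeat=3):
        prob = p_h[h]
        prob *= p_c_given_h[h] if c1 else 1.0 - p_c_given_h[h]
        prob *= p_c_given_h[h] if c2 else 1.0 - p_c_given_h[h]
        if c1 or c2:                      # series system: any damage fails it
            total += prob
    return total

print(p_system_failure())
```

Note how the shared hazard parent induces correlation between the two component failures even though they are conditionally independent given the hazard, which is the multi-risk interaction the fully probabilistic network-scale approach is built to capture.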