
    Distribution-free stochastic simulation methodology for model updating under hybrid uncertainties

    In the real world, a significant challenge faced in the safe operation and maintenance of infrastructures is the lack of available information or data. This results in a large degree of uncertainty and the requirement for robust and efficient uncertainty quantification (UQ) tools in order to derive the most realistic estimates of the behavior of structures. While the probabilistic approach has long been utilized as an essential tool for the quantitative mathematical representation of uncertainty, a common criticism is that the approach often involves unsubstantiated subjective assumptions because of the scarcity or imprecision of the available information. To avoid the inclusion of subjectivity, the concepts of imprecise probabilities have been developed, and the distributional probability-box (p-box) has gained the most attention among the various types of imprecise probability models since it straightforwardly provides a clear separation between aleatory and epistemic uncertainty. This thesis concerns the realistic consideration and numerically efficient calibration and propagation of aleatory and epistemic uncertainties (hybrid uncertainties) based on the distributional p-box. Recent developments, including the Bhattacharyya distance-based approximate Bayesian computation (ABC) and non-intrusive imprecise stochastic simulation (NISS) methods, have strengthened the subjective-assumption-free approach for uncertainty calibration and propagation. However, these methods based on the distributional p-box rely on prior knowledge that determines a specific distribution family for the p-box. The target of this thesis is hence to develop a distribution-free approach for the calibration and propagation of hybrid uncertainties, strengthening the subjective-assumption-free UQ approach. To achieve this target, the thesis presents five main developments that improve the Bhattacharyya distance-based ABC and NISS frameworks. The first development improves the scope of application and efficiency of the Bhattacharyya distance-based ABC. A dimension reduction procedure is proposed to evaluate the Bhattacharyya distance when the system under investigation is described by time-domain sequences. Moreover, an efficient Bayesian inference method within the Bayesian updating with structural reliability methods (BUS) framework is developed by combining BUS with an adaptive Kriging-based reliability method, namely AK-MCMC. The second development is a distribution-free stochastic model updating framework based on the combined application of staircase density functions and the Bhattacharyya distance. Staircase density functions can approximate a wide range of distributions arbitrarily closely; hence this development makes it possible to perform the Bhattacharyya distance-based ABC without limiting hypotheses on the distribution families of the parameters to be updated. These two developments are then integrated in the third development to provide a solution to the latest edition (2019) of the NASA UQ challenge problem. The model updating tasks under very challenging conditions, where prior information on the aleatory parameters is limited to a common boundary, are successfully addressed with the above distribution-free stochastic model updating framework. Moreover, the NISS approach, which simplifies the high-dimensional optimization to a set of one-dimensional searches via a first-order high-dimensional model representation (HDMR) decomposition with respect to each design parameter, is developed to efficiently solve the reliability-based design optimization tasks. At the same time, this challenge elucidates the limitations of the current developments; the fourth development therefore addresses the limitation that staircase density functions are designed for univariate random variables and cannot account for parameter dependencies. In order to calibrate the joint distribution of correlated parameters, the distribution-free stochastic model updating framework is extended by characterizing the aleatory parameters with Gaussian copula functions whose marginal distributions are staircase density functions. This further strengthens the assumption-free approach for uncertainty calibration, in which no prior information on the parameter dependencies is required. Finally, the fifth development is a distribution-free uncertainty propagation framework based on another application of the staircase density functions to the NISS class of methods, and it is applied to efficiently solve the reliability analysis subproblem of the NASA UQ challenge 2019. The above five developments successfully strengthen the assumption-free approach for both uncertainty calibration and propagation thanks to the ability of staircase density functions to approximate arbitrary distributions. The efficiency and effectiveness of these developments are demonstrated on real-world applications, including the NASA UQ challenge 2019.
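
    As background for the metric named above, the Bhattacharyya distance is a statistical distance between two probability distributions, and in the ABC setting it is typically estimated from binned samples of the observed and simulated data. The following Python sketch illustrates that idea under stated assumptions (the shared binning scheme, the toy Gaussian model, the candidate parameter and the tolerance are all illustrative, not the thesis implementation):

        # Minimal sketch: Bhattacharyya distance between two data sets via a shared
        # histogram binning, used as a UQ metric in a toy rejection-ABC step.
        import numpy as np

        def bhattacharyya_distance(x, y, n_bins=20):
            """Histogram-based Bhattacharyya distance between samples x and y."""
            lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
            edges = np.linspace(lo, hi, n_bins + 1)
            p, _ = np.histogram(x, bins=edges)
            q, _ = np.histogram(y, bins=edges)
            p = p / p.sum()
            q = q / q.sum()
            bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
            return -np.log(max(bc, 1e-12))       # distance; guard against log(0)

        # Toy rejection-ABC step: accept a candidate parameter if the distance
        # between simulated and observed data falls below a tolerance.
        rng = np.random.default_rng(0)
        observed = rng.normal(1.0, 0.5, size=1000)
        candidate_theta = 0.9                    # hypothetical model parameter
        simulated = rng.normal(candidate_theta, 0.5, size=1000)
        accept = bhattacharyya_distance(observed, simulated) < 0.05
        print(accept)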

    Gaussian process hyper-parameter estimation using parallel asymptotically independent Markov sampling

    Gaussian process emulators of computationally expensive computer codes provide fast statistical approximations of physical processes. The training of these surrogates depends on the set of design points chosen to run the simulator. Due to computational cost, such a training set is bound to be limited, and quantifying the resulting uncertainty in the hyper-parameters of the emulator by uni-modal distributions is likely to induce bias. In order to quantify this uncertainty, this paper proposes a computationally efficient sampler based on an extension of Asymptotically Independent Markov Sampling, a recently developed algorithm for Bayesian inference. Structural uncertainty of the emulator is obtained as a by-product of the Bayesian treatment of the hyper-parameters. Additionally, the user can choose to perform stochastic optimisation to sample from a neighbourhood of the Maximum a Posteriori estimate, even in the presence of multimodality. Model uncertainty is also acknowledged through numerical stabilisation measures, by including a nugget term in the formulation of the probability model. The efficiency of the proposed sampler is illustrated in examples where multi-modal distributions are encountered. For the purposes of reproducibility, further development, and use in other applications, the code used to generate the examples is freely available for download at https://github.com/agarbuno/paims_codes.
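
    As a minimal illustration of the quantity such a sampler explores, the sketch below writes down a log-posterior over Gaussian process hyper-parameters with an explicit nugget term, i.e. the surface whose possible multi-modality motivates the approach. The squared-exponential kernel, the priors on the log hyper-parameters and the toy data are assumptions for illustration, not the paper's exact formulation:

        # Minimal sketch: log-posterior of GP hyper-parameters (log lengthscale,
        # log signal variance, log nugget) for one-dimensional inputs.
        import numpy as np

        def log_posterior(log_params, X, y):
            ell, sig2, nug = np.exp(log_params)
            d2 = (X[:, None] - X[None, :]) ** 2          # squared distances
            K = sig2 * np.exp(-0.5 * d2 / ell**2) + nug * np.eye(len(X))
            L = np.linalg.cholesky(K)                     # nugget keeps K positive definite
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            log_lik = (-0.5 * y @ alpha
                       - np.sum(np.log(np.diag(L)))
                       - 0.5 * len(X) * np.log(2 * np.pi))
            log_prior = -0.5 * np.sum(log_params ** 2)    # assumed N(0,1) priors on log scale
            return log_lik + log_prior

        # Evaluate on toy training data; a sampler (e.g. AIMS) would explore this surface.
        rng = np.random.default_rng(1)
        X = np.linspace(0.0, 1.0, 10)
        y = np.sin(6 * X) + 0.05 * rng.standard_normal(10)
        print(log_posterior(np.log([0.2, 1.0, 1e-4]), X, y))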

    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusion that local structure learning with H2PC, in the form of local neighborhood induction, is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available.
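
    As a minimal illustration of the label powerset idea mentioned above, the sketch below encodes each distinct combination of labels as one class of a multi-class variable; identifying the minimal label powersets graphically under faithfulness, as the paper does, is beyond this snippet. The toy label matrix and the encoding helper are illustrative assumptions (the paper's implementation is in R; Python is used here only for brevity):

        # Minimal sketch: the label powerset encoding, where each distinct row of a
        # binary label matrix becomes one class of a multi-class target variable.
        import numpy as np

        def label_powerset_encode(Y):
            """Map each row of a binary label matrix Y to a single class index."""
            combos, classes = np.unique(Y, axis=0, return_inverse=True)
            return classes, combos   # class index per sample, and the combination table

        # Toy multi-label data: 6 samples, 3 labels.
        Y = np.array([[1, 0, 1],
                      [1, 0, 1],
                      [0, 1, 0],
                      [0, 0, 0],
                      [1, 1, 1],
                      [0, 1, 0]])
        classes, combos = label_powerset_encode(Y)
        print(classes)   # one multi-class target a standard classifier can learn
        print(combos)    # each row is one label combination (a "powerset" class)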

    Rare event simulation in finite-infinite dimensional space

    Modern engineering systems are becoming increasingly complex. Assessing their risk by simulation is intimately related to the efficient generation of rare failure events. Subset Simulation is an advanced Monte Carlo method for risk assessment, and it has been applied in different disciplines. Pivotal to its success is the efficient generation of conditional failure samples, which is generally non-trivial. Conventionally, an independent-component Markov Chain Monte Carlo (MCMC) algorithm is used, which is applicable to high-dimensional problems (i.e., a large number of random variables) without suffering from the 'curse of dimensionality'. Experience suggests that the algorithm may perform even better for high-dimensional problems. Motivated by this, for any given problem we construct an equivalent problem where each random variable is represented by an arbitrary (hence possibly infinite) number of 'hidden' variables. We study analytically the limiting behavior of the algorithm as the number of hidden variables increases indefinitely. This leads to a new algorithm that is more generic and offers greater flexibility and control. It coincides with an algorithm recently suggested by independent researchers, in which a joint Gaussian distribution is imposed between the current sample and the candidate. The present work provides theoretical reasoning and insights into the algorithm.
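
    The jointly Gaussian proposal mentioned in the last sentences can be written down compactly: in standard normal space the candidate is a correlated copy of the current sample, which leaves the standard normal distribution invariant, and the candidate is accepted only if it lies in the conditional (failure) region. The sketch below illustrates one such step; the correlation value, the toy failure region and the seed sample are illustrative assumptions:

        # Minimal sketch: one MCMC step for sampling a standard normal vector u
        # conditional on a rare-event region, using the joint-Gaussian proposal.
        import numpy as np

        def conditional_step(u, in_region, rho, rng):
            """Candidate = rho*u + sqrt(1-rho^2)*xi keeps N(0, I) invariant;
            accept only if the candidate stays inside the conditional region."""
            candidate = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(u.shape)
            return candidate if in_region(candidate) else u   # reject: stay at current

        # Toy conditional region: the rare event {sum of components > 4}.
        rng = np.random.default_rng(2)
        u = np.array([2.5, 2.5])                     # a seed already inside the region
        in_region = lambda v: v.sum() > 4.0
        for _ in range(5):
            u = conditional_step(u, in_region, rho=0.8, rng=rng)
        print(u, in_region(u))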

    Bayesian Network Enhanced with Structural Reliability Methods: Methodology

    We combine Bayesian networks (BNs) and structural reliability methods (SRMs) to create a new computational framework, termed enhanced Bayesian network (eBN), for reliability and risk analysis of engineering structures and infrastructure. BNs are efficient in representing and evaluating complex probabilistic dependence structures, as present in infrastructure and structural systems, and they facilitate Bayesian updating of the model when new information becomes available. On the other hand, SRMs enable accurate assessment of probabilities of rare events represented by computationally demanding, physically based models. By combining the two methods, the eBN framework provides a unified and powerful tool for efficiently computing probabilities of rare events in complex structural and infrastructure systems in which information evolves in time. Strategies for modeling and efficiently analyzing the eBN are described by way of several conceptual examples. The companion paper applies the eBN methodology to example structural and infrastructure systems.
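
    As a minimal illustration of the coupling described above (not the eBN algorithm itself), the sketch below estimates conditional failure probabilities with crude Monte Carlo on a simple limit-state function, uses them as the conditional probability table of a binary failure node with a discrete parent, and then performs Bayesian updating once a failure is observed. The limit-state function, distributions and prior probabilities are illustrative assumptions:

        # Minimal sketch: a structural reliability computation feeding a Bayesian-
        # network node. Failure is defined by g = resistance - load < 0.
        import numpy as np

        rng = np.random.default_rng(3)

        def p_failure(load_mean, n=100_000):
            """Crude Monte Carlo estimate of P(g < 0) for a given load state."""
            resistance = rng.normal(5.0, 0.5, n)     # assumed resistance model
            load = rng.normal(load_mean, 1.0, n)     # assumed load model
            return np.mean(resistance - load < 0.0)

        # Parent node "load condition" with prior probabilities, and the CPT of the
        # binary "failure" node obtained from the reliability analyses.
        p_load = {"normal": 0.95, "extreme": 0.05}
        cpt_failure = {"normal": p_failure(2.0), "extreme": p_failure(4.5)}

        # Exact BN inference: marginal failure probability, then Bayesian updating
        # of the load state given an observed failure.
        p_fail = sum(p_load[s] * cpt_failure[s] for s in p_load)
        p_extreme_given_fail = p_load["extreme"] * cpt_failure["extreme"] / p_fail
        print(p_fail, p_extreme_given_fail)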

    Seismic risk assessment of structures using stochastic simulation techniques

    In this thesis, issues related to seismic risk assessment have been addressed. The first part focuses on the definition of the seismic hazard at a specific site. Its quantification mainly depends on the attenuation relation considered. A probabilistic description is usually associated with the attenuation law, which quantifies its inherent uncertainty. In order to estimate a mathematical expression for the attenuation law and rationally identify its uncertainty, a probabilistic approach has been proposed. The method is based on Bayesian Model Updating and Robust Predictive Analysis. The Bayesian updating problem has been solved using two advanced Markov Chain Monte Carlo methods. Finally, the Robust Predictive Analysis has been implemented to account for all uncertainties involved in the identification of the attenuation law. The second part of the thesis deals with the assessment of the failure probability of linear uncertain structural models subjected to earthquakes. A Bayesian Model Updating technique in the frequency domain with unknown non-stationary input has been employed in order to quantify the posterior probability density function of the structural model parameters. The probability of exceeding a limit state has been evaluated for several damage scenarios. In the third part of the thesis, the seismic risk has been evaluated for two nonlinear structural models, and two techniques have been compared: the first is the IM-based approach, which is typically employed in the probabilistic framework of Performance-Based Seismic Design, whereas the second is Subset Simulation. Both deterministic and uncertain mechanical properties have been considered for the models. The effect of the structural model uncertainty has been successfully investigated.
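
    To make the first part concrete, a ground-motion attenuation law is typically a regression of log intensity on magnitude and distance whose coefficients can be updated in a Bayesian way by MCMC. The sketch below does this with a plain random-walk Metropolis sampler on synthetic records; the functional form, priors, proposal scale and data are illustrative assumptions, not the thesis model or its advanced MCMC schemes:

        # Minimal sketch: Bayesian updating of attenuation-law coefficients
        # theta = (a, b, c, log sigma) in ln(PGA) = a + b*M - c*ln(R) + eps.
        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic "recorded" data: magnitudes M, distances R (km), observed ln(PGA).
        M = rng.uniform(5.0, 7.5, 50)
        R = rng.uniform(10.0, 100.0, 50)
        ln_pga = 1.0 + 0.8 * M - 1.1 * np.log(R) + rng.normal(0.0, 0.3, 50)

        def log_post(theta):
            a, b, c, log_sigma = theta
            sigma = np.exp(log_sigma)
            resid = ln_pga - (a + b * M - c * np.log(R))
            log_lik = -0.5 * np.sum(resid**2) / sigma**2 - len(R) * log_sigma
            log_prior = -0.5 * np.sum(theta**2) / 10.0**2   # weak N(0, 10^2) priors
            return log_lik + log_prior

        theta = np.array([0.0, 1.0, 1.0, 0.0])              # starting point
        samples = []
        for _ in range(5000):
            prop = theta + 0.05 * rng.standard_normal(4)    # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            samples.append(theta)
        print(np.mean(samples[2500:], axis=0))              # posterior-mean coefficients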

    A method for obtaining the preventive maintenance interval in the absence of failure time data

    One of the ways to reduce greenhouse gas emissions and other polluting gases caused by ships is to improve their maintenance operations throughout their life cycle. The maintenance manager usually does not modify the preventive intervals that the equipment manufacturer has designed to reduce failures. However, the conditions of use and maintenance often differ from the design conditions, and in these cases continuing to use the manufacturer's preventive intervals can lead to non-optimal management situations. This article proposes a new method to calculate the preventive interval when failure time data for the assets are unavailable. Two scenarios were created to test the effectiveness and usefulness of the new method: one without the failure hours and the other with the failure hours corresponding to a bypass valve installed in the engine of a maritime transport surveillance vessel. The proposed method allows the maintenance manager to calculate, quickly and easily, the preventive interval for equipment that does not have an instrument installed for measuring operating hours.
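
    The abstract does not detail the calculation itself; as general background, a preventive interval is often obtained by minimising the long-run cost rate of an age-based replacement policy under an assumed lifetime distribution. The sketch below illustrates only that classical formulation; the Weibull parameters and costs are hypothetical, and this is not the method proposed in the article:

        # Minimal sketch: classical age-replacement optimisation. Choose the
        # preventive interval T minimising the long-run cost per unit time
        #   C(T) = (cp * R(T) + cf * (1 - R(T))) / integral_0^T R(t) dt,
        # where R(t) is the Weibull reliability function.
        import numpy as np

        beta, eta = 2.5, 4000.0        # Weibull shape and scale (hours), assumed
        cp, cf = 1.0, 10.0             # preventive vs. corrective replacement cost, assumed

        def reliability(t):
            return np.exp(-(t / eta) ** beta)

        def cost_rate(T, n_grid=2000):
            t = np.linspace(0.0, T, n_grid)
            expected_cycle_length = np.trapz(reliability(t), t)
            expected_cycle_cost = cp * reliability(T) + cf * (1.0 - reliability(T))
            return expected_cycle_cost / expected_cycle_length

        candidates = np.linspace(100.0, 8000.0, 400)
        best_T = candidates[np.argmin([cost_rate(T) for T in candidates])]
        print(f"cost-optimal preventive interval ~ {best_T:.0f} h")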