14 research outputs found

    A probabilistic framework for source localization in anisotropic composite using transfer learning based multi-fidelity physics informed neural network (mfPINN)

    Get PDF
    The practical application of data-driven frameworks such as deep neural networks to acoustic emission (AE) source localization is impeded by the need to collect significant amounts of clean data from the field. The utility of such a framework is governed by the data collected from the site and/or laboratory experiments. Noise, experimental cost, and the time consumed in data collection further worsen the scenario. To address this issue, this work proposes a novel multi-fidelity physics-informed neural network (mfPINN). The proposed framework is best suited for problems such as AE source detection, where the governing physics is known only approximately (low-fidelity model) and one has access to only sparse data measured from experiments (high-fidelity data). This work further extends the governing equation of AE source detection to a probabilistic framework to account for the uncertainty in the sensor measurements. The mfPINN fuses the data-driven and physics-informed deep learning architectures using transfer learning. Results obtained from a data-driven artificial neural network (ANN) and a physics-informed neural network (PINN) are also presented to illustrate the need for a multi-fidelity framework based on transfer learning. In the presence of measurement uncertainties, the proposed method is verified with an experimental procedure on a carbon-fiber-reinforced polymer (CFRP) composite panel instrumented with a sparse array of piezoelectric transducers. The results show that the proposed probabilistic technique can provide a reliable estimate of the AE source location, with confidence intervals that take measurement uncertainties into account.
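    The multi-fidelity idea described above, correcting an approximate physics model with sparse high-fidelity measurements via a transferred, cheaply fitted map, can be sketched in a few lines. Everything below (the 1-D wave-speed stand-in for AE localization, the linear correction) is an illustrative assumption, not the paper's actual mfPINN architecture:

    ```python
    import numpy as np

    # Hypothetical 1-D stand-in for AE source localization: predict source
    # position x from a time-of-arrival measurement t. The models and
    # constants here are illustrative assumptions, not the paper's networks.
    rng = np.random.default_rng(0)

    c_approx, c_true = 5.0, 5.5          # assumed vs. true wave speed (km/s)

    def low_fidelity(t):
        # Approximate governing physics: x = c_approx * t
        return c_approx * t

    # Sparse, noisy "experimental" high-fidelity data
    t_hf = rng.uniform(0.1, 1.0, size=8)
    x_hf = c_true * t_hf + rng.normal(0.0, 0.01, size=8)

    # Transfer step: keep the low-fidelity prediction as a feature and fit
    # only a cheap correction x ~ rho * x_LF + b on the sparse data.
    A = np.column_stack([low_fidelity(t_hf), np.ones_like(t_hf)])
    rho, b = np.linalg.lstsq(A, x_hf, rcond=None)[0]

    def multi_fidelity(t):
        return rho * low_fidelity(t) + b

    print(multi_fidelity(0.5), low_fidelity(0.5), c_true * 0.5)
    ```

    The corrected prediction tracks the true model far more closely than the approximate physics alone, which is the behaviour the multi-fidelity fusion relies on.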

    A Generalized Probabilistic Learning Approach for Multi-Fidelity Uncertainty Propagation in Complex Physical Simulations

    Full text link
    Two of the most significant challenges in uncertainty propagation pertain to the high computational cost of simulating complex physical models and the high dimension of the random inputs. In applications of practical interest, both of these problems are encountered, and standard methods for uncertainty quantification either fail or are not feasible. To overcome the current limitations, we propose a probabilistic multi-fidelity framework that can exploit lower-fidelity model versions of the original problem in a small data regime. The approach circumvents the curse of dimensionality by learning dependencies between the outputs of high-fidelity models and lower-fidelity models instead of explicitly accounting for the high-dimensional inputs. We complement the information provided by a low-fidelity model with a low-dimensional set of informative features of the stochastic input, which are discovered by employing a combination of supervised and unsupervised dimensionality reduction techniques. The goal of our analysis is an efficient and accurate estimation of the full probabilistic response for a high-fidelity model. Despite the incomplete and noisy information that low-fidelity predictors provide, we demonstrate that accurate and certifiable estimates for the quantities of interest can be obtained in the small data regime, i.e., with significantly fewer high-fidelity model runs than state-of-the-art methods for uncertainty propagation. We illustrate our approach by applying it to challenging numerical examples such as Navier-Stokes flow simulations and monolithic fluid-structure interaction problems. (Comment: 31 pages, 14 figures)
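    The core trick, learning a low-dimensional map from low-fidelity outputs to high-fidelity outputs from a handful of paired runs and then propagating a large, cheap low-fidelity sample through it, can be sketched as follows. The toy 50-dimensional problem and the cubic link model are assumptions for illustration, not the paper's Navier-Stokes examples or its Bayesian machinery:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dim = 50                                  # high-dimensional random input

    def hf(xi):                               # "expensive" model (stand-in)
        return np.sin(xi.sum(axis=1) / np.sqrt(dim))

    def lf(xi):                               # cheap lower-fidelity surrogate
        return xi.sum(axis=1) / np.sqrt(dim)

    # Small data regime: only a few paired high-fidelity runs
    xi_train = rng.normal(size=(15, dim))
    y_lf, y_hf = lf(xi_train), hf(xi_train)

    # Fit the output-to-output link y_HF ~ a*y_LF + b*y_LF**3 + c
    # (a guessed low-dimensional model, learned instead of the 50-D input map)
    A = np.column_stack([y_lf, y_lf**3, np.ones_like(y_lf)])
    coef = np.linalg.lstsq(A, y_hf, rcond=None)[0]

    # Propagate uncertainty using many cheap low-fidelity runs only
    xi_big = rng.normal(size=(20000, dim))
    z = lf(xi_big)
    y_pred = np.column_stack([z, z**3, np.ones_like(z)]) @ coef
    print("predicted output mean:", y_pred.mean())
    ```

    Only 15 high-fidelity evaluations are used, yet the 20,000-sample pushforward approximates the high-fidelity output distribution; the paper's framework additionally quantifies the error of this link model probabilistically.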

    Multilevel Monte Carlo approach for estimating reliability of electric distribution systems

    Get PDF
    Most of the power outages experienced by customers are due to failures in the electric distribution system. The ultimate goal of a distribution system is to meet customer electricity demand while maintaining a satisfactory level of reliability, with low interruption frequency and duration as well as low outage costs. Quantitative evaluation of reliability is therefore a significant aspect of the decision-making process in planning and designing network expansion or reinforcement. The simulation approach to reliability evaluation is generally based on the sequential Monte Carlo (MC) method, which can account for the random nature of system components. Using the MC method to obtain accurate reliability estimates can be computationally costly, particularly when dealing with rare events (i.e., when high accuracy is required). This thesis proposes a simple and effective methodology for accelerating MC simulation in distribution-system reliability evaluation. The proposed method is based on a novel Multilevel Monte Carlo (MLMC) simulation approach. MLMC is a variance-reduction technique for MC simulation that can dramatically reduce the computational burden of the MC method while both sampling and discretisation errors are driven to a controllable accuracy level. The idea of MLMC is to consider a hierarchy of computational meshes (levels) instead of the single time-discretisation level used in the MC method. Most of the computational effort in the MLMC method is transferred from the finest level to the coarsest one, leading to substantial computational savings. As the simulations are conducted using multiple approximations, the less accurate estimate on a coarse level can be sequentially corrected by averaging the differences between the estimates on consecutive levels of the hierarchy.
In this dissertation, we answer the following questions: can the MLMC method be used for reliability evaluation? If so, how are MLMC estimators for reliability evaluation constructed? Finally, how much computational saving can we expect from the MLMC method over the MC method? The MLMC approach is implemented by solving the stochastic differential equations of random variables related to the reliability indices. The differential equations are solved using different discretisation schemes; in this work, the performance of two schemes, Euler-Maruyama and Milstein, is investigated for this purpose. We use the benchmark Roy Billinton Test System as the test system. Based on the proposed MLMC method, a number of reliability studies of distribution systems are carried out in this thesis, including customer interruption frequency- and duration-based reliability assessment, cost/benefit estimation, and reliability evaluation incorporating time-varying factors such as weather-dependent failure rates and restoration times of components, time-varying loads, and cost models of supply points. Numerical results demonstrating the computational performance of the proposed method are presented, and the performances of the MLMC and MC methods are compared. The results show that the MLMC method is computationally efficient compared with the standard MC method while retaining an acceptable level of accuracy. The computational tool and examples presented in this thesis will help system planners and utility managers obtain useful information about the reliability of distribution networks, with which they can take the necessary steps to speed up decision-making on reliability improvement.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 201
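    The telescoping MLMC estimator with coupled Euler-Maruyama discretisations can be sketched as below. The geometric-Brownian-motion SDE and the fixed sample sizes per level are illustrative assumptions (the thesis instead works with SDEs tied to reliability indices and chooses sample sizes adaptively), but the level-correction structure is the same:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0   # toy SDE: dS = mu*S dt + sigma*S dW

    def euler_paths(n_paths, n_steps, dW):
        # Euler-Maruyama discretisation driven by supplied Brownian increments
        dt = T / n_steps
        S = np.full(n_paths, S0)
        for k in range(n_steps):
            S = S + mu * S * dt + sigma * S * dW[:, k]
        return S

    def level_correction(level, n_paths):
        nf = 2 ** (level + 1)                # fine time steps on this level
        dWf = rng.normal(0.0, np.sqrt(T / nf), size=(n_paths, nf))
        fine = euler_paths(n_paths, nf, dWf)
        if level == 0:
            return fine.mean()               # coarsest estimator P_0
        # Coarse path reuses the SAME Brownian increments, summed pairwise,
        # so the fine and coarse estimates are tightly correlated.
        dWc = dWf[:, 0::2] + dWf[:, 1::2]
        coarse = euler_paths(n_paths, nf // 2, dWc)
        return (fine - coarse).mean()        # correction E[P_l - P_{l-1}]

    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
    estimate = sum(level_correction(l, 4000) for l in range(5))
    print("MLMC estimate:", estimate, "exact:", S0 * np.exp(mu * T))
    ```

    Because the corrections have small variance, the fine levels need far fewer paths than a plain MC run at the finest discretisation, which is the source of the computational saving the thesis quantifies.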

    MATHICSE Technical Report : Multilevel Monte Carlo approximation of functions

    Get PDF
    Many applications across science and technology require a careful quantification of non-deterministic effects on a system output, for example when evaluating the system's reliability or when gearing it towards more robust operating conditions. At the heart of these considerations lies an accurate characterization of uncertain system outputs. In this work we introduce and analyze novel multilevel Monte Carlo techniques for an efficient characterization of an uncertain system output's distribution. These techniques rely on accurately approximating general parametric expectations, i.e. expectations that depend on a parameter, uniformly on an interval. Applications of interest include, for example, the approximation of the characteristic function and of the cumulative distribution function of an uncertain system output. A further important consequence of the introduced approximation techniques for parametric expectations (i.e. for functions) is that they allow one to construct multilevel Monte Carlo estimators for various robustness indicators, such as a quantile (also known as value-at-risk) and the conditional value-at-risk. These robustness indicators cannot be expressed as moments and are thus not usually easily accessible. In fact, here we provide a framework that allows one to simultaneously estimate a cumulative distribution function, a quantile, and the associated conditional value-at-risk of an uncertain system output at the cost of a single multilevel Monte Carlo simulation, while each estimated quantity satisfies a prescribed tolerance goal.
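    The parametric-expectation idea can be illustrated with plain single-level Monte Carlo: the CDF is the expectation of an indicator that depends on the parameter s, F(s) = E[1{Y <= s}], and the quantile and conditional value-at-risk are read off from the same sample. The multilevel machinery of the paper is omitted here, and the standard-normal output is an assumed stand-in:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    y = rng.normal(size=200_000)         # stand-in uncertain system output

    def cdf(s):
        # Parametric expectation F(s) = E[1{Y <= s}], estimated by MC
        return np.mean(y <= s)

    def quantile(alpha):
        # Quantile, also known as value-at-risk
        return np.quantile(y, alpha)

    def cvar(alpha):
        # Conditional value-at-risk: mean of the tail beyond the quantile
        q = quantile(alpha)
        return y[y >= q].mean()

    print(cdf(0.0), quantile(0.95), cvar(0.95))
    ```

    Note that neither the quantile nor the CVaR is a moment of Y, which is why the paper's uniform-in-the-parameter approximation of F is needed before multilevel estimators for them can be built.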

    How to Rank Answers in Text Mining

    Get PDF
    In this thesis, we mainly focus on case studies about answer ranking. We present the CEW-DTW methodology and assess its ranking quality. We then improve this methodology by combining Kullback-Leibler divergence with CEW-DTW, since the Kullback-Leibler divergence can measure the difference between the probability distributions of two sequences. However, CEW-DTW and KL-CEW-DTW do not account for the effect of noise and keywords from the viewpoint of probability distributions. We therefore develop a new methodology, the General Entropy, to see how the probabilities of noise and keywords affect answer quality. We first analyze some properties of the General Entropy, such as its value range. In particular, we try to find an objective goal that can serve as a standard for assessing answers, and to that end we introduce the maximum general entropy. We use the general entropy methodology to construct an imaginary answer with maximum entropy from a mathematical viewpoint (though this answer may not exist); it can be regarded as an "ideal" answer. Comparing maximum-entropy probabilities with global probabilities of noise and keywords, respectively, we find that the maximum-entropy probability of noise is smaller than the global probability of noise, while the maximum-entropy probabilities of chosen keywords are larger than the global probabilities of those keywords under some conditions. This allows us to select the maximum number of keywords in a determinate way. We also use an Amazon dataset and a small survey group to assess the general entropy. Though these methodologies can analyze answer quality, they do not incorporate the inner connections among keywords and noise. Based on the Markov transition matrix, we develop the Jump Probability Entropy, and again use the Amazon dataset to compare maximum jump-entropy probabilities with global jump probabilities of noise and keywords, respectively.
Finally, we describe the steps for obtaining answers from the Amazon dataset, including extracting the original answers and removing stop words and collinearity. We compare our developed methodologies to check whether they are consistent. We also introduce the Wald–Wolfowitz runs test and compare it with the developed methodologies to verify their relationships. Based on the comparison results, we draw conclusions about the consistency of these methodologies and outline future plans.
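    The two quantities this abstract revolves around, the Shannon entropy that the maximum-entropy "ideal" answer maximises and the Kullback-Leibler divergence used in KL-CEW-DTW, can be sketched directly. The toy keyword/noise distributions below are assumptions, not the thesis's Amazon data:

    ```python
    import math

    def entropy(p):
        # Shannon entropy of a discrete distribution, in nats
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    def kl_divergence(p, q):
        # KL divergence D(p || q); assumes q[i] > 0 wherever p[i] > 0
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    answer_a = [0.5, 0.3, 0.2]   # illustrative keyword/noise proportions
    answer_b = [0.4, 0.4, 0.2]
    uniform  = [1/3, 1/3, 1/3]   # entropy is maximal for the uniform case

    print(entropy(answer_a), entropy(uniform), kl_divergence(answer_a, answer_b))
    ```

    The uniform distribution attains the maximum entropy log(n), which is the sense in which an imaginary maximum-entropy answer bounds the real ones, while a strictly positive KL divergence signals that two answers distribute their keywords and noise differently.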