
    Resampling Strategies to Improve Surrogate Model-based Uncertainty Quantification - Application to LES of LS89

    Uncertainty Quantification (UQ) is receiving increasing attention in engineering applications, in particular from robust optimization. Indeed, running a computer experiment provides only limited knowledge of the uncertainty and variability of the input parameters. These experiments are often computationally expensive, and surrogate models can be constructed to address this issue. The outcome of a UQ study is then directly tied to the surrogate's quality, so attention must be devoted to the Design of Experiments (DoE) to retrieve as much information as possible. This work presents two new strategies for parameter space resampling to improve a Gaussian Process (GP) surrogate model. These techniques improve the predictive quality of the model on high-dimensional analytical input functions. Finally, the methods are successfully applied to a turbine blade Large Eddy Simulation application: the aerothermal flow around the LS89 blade cascade.
    Comment: Accepted in the International Journal for Numerical Methods in Fluids
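    To make the resampling idea concrete, here is a minimal Python sketch of one generic variance-driven strategy: refit a GP surrogate and add the candidate point with the largest predictive standard deviation. This illustrates the mechanism only; the paper's two specific strategies are not reproduced here, and the test function is an assumption.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def f(x):                                    # stand-in for the expensive solver
            return np.sin(3 * x) + 0.5 * x

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 2, size=(5, 1))           # small initial DoE
        y = f(X).ravel()
        candidates = np.linspace(0, 2, 200).reshape(-1, 1)

        for _ in range(10):                          # resampling loop
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            gp.fit(X, y)
            _, std = gp.predict(candidates, return_std=True)
            x_new = candidates[[np.argmax(std)]]     # most uncertain candidate
            X = np.vstack([X, x_new])
            y = np.append(y, f(x_new).ravel())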

    Efficient Construction of Local Parametric Reduced Order Models Using Machine Learning Techniques

    Reduced order models are computationally inexpensive approximations that capture the important dynamical characteristics of large, high-fidelity computer models of physical systems. This paper applies machine learning techniques to improve the design of parametric reduced order models. Specifically, machine learning is used to develop feasible regions in the parameter space where the admissible target accuracy is achieved with a predefined reduced order basis, to construct parametric maps, to choose the two best existing bases for a new parameter configuration from an accuracy point of view, and to pre-select the optimal dimension of the reduced basis so as to meet the desired accuracy. By combining the available information through basis concatenation and interpolation, as well as interpolation of high-fidelity solutions, we are able to build accurate reduced order models for new parameter settings. Promising numerical results with a viscous Burgers model illustrate the potential of machine learning approaches to help design better reduced order models.
    Comment: 28 pages, 15 figures, 6 tables
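    One of the listed ingredients, pre-selecting the reduced-basis dimension to meet a target accuracy, can be sketched in a few lines of Python via the singular-value energy criterion. The random snapshot matrix below is a stand-in assumption, not the paper's viscous Burgers solutions.

        import numpy as np

        def reduced_basis(snapshots, tol=1e-3):
            """POD basis whose truncated singular-value energy stays within tol."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            energy = np.cumsum(s**2) / np.sum(s**2)
            k = int(np.searchsorted(energy, 1.0 - tol)) + 1
            return U[:, :k]

        # Synthetic snapshot matrix (n_dof x n_snapshots) standing in for solver runs
        snapshots = np.random.default_rng(1).standard_normal((400, 50))
        V = reduced_basis(snapshots)
        print(V.shape)        # (400, k), with k the smallest dimension meeting tol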

    Inferring causal impact using Bayesian structural time-series models

    An important problem in econometrics and marketing is to infer the causal impact that a designed market intervention has exerted on an outcome metric over time. This paper proposes to infer causal impact on the basis of a diffusion-regression state-space model that predicts the counterfactual market response in a synthetic control that would have occurred had no intervention taken place. In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including local trends, seasonality, and the time-varying influence of contemporaneous covariates. Using a Markov chain Monte Carlo algorithm for posterior inference, we illustrate the statistical properties of our approach on simulated data. We then demonstrate its practical utility by estimating the causal effect of an online advertising campaign on search-related site visits. We discuss the strengths and limitations of state-space models in enabling causal attribution in those settings where a randomised experiment is unavailable. The CausalImpact R package provides an implementation of our approach.
    Comment: Published at http://dx.doi.org/10.1214/14-AOAS788 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
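    The counterfactual-forecasting idea can be sketched in Python with a local-level structural model plus a regression component; the paper's reference implementation is the CausalImpact R package, and the series and effect size below are synthetic assumptions.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n, t0 = 120, 90                              # t0 = intervention time
        x = rng.standard_normal(n).cumsum()          # unaffected control series
        y = 0.8 * x + rng.normal(0, 0.3, n)          # outcome tracks the control
        y[t0:] += 2.0                                # injected "campaign" effect

        # Fit a local-level model with regression on pre-intervention data only
        model = sm.tsa.UnobservedComponents(y[:t0], level='llevel', exog=x[:t0, None])
        res = model.fit(disp=False)

        # Counterfactual: what the model predicts had no intervention occurred
        counterfactual = res.get_forecast(steps=n - t0, exog=x[t0:, None]).predicted_mean
        pointwise = y[t0:] - counterfactual          # attributable impact per step
        print(pointwise.mean(), pointwise.cumsum()[-1])   # average, cumulative effect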

    A Data-Driven Framework for Assessing Cold Load Pick-up Demand in Service Restoration

    Cold load pick-up (CLPU) has been a critical concern for utilities. Researchers and industry practitioners have underlined the impact of CLPU on distribution system design and service restoration. The recent large-scale deployment of smart meters has provided the industry with a huge amount of data that is highly granular, both temporally and spatially. In this paper, a data-driven framework is proposed for assessing the CLPU demand of residential customers using smart meter data. The proposed framework consists of two interconnected layers: 1) at the feeder level, a nonlinear auto-regression model is applied to estimate the diversified demand during system restoration and calculate the CLPU demand ratio; 2) at the customer level, Gaussian Mixture Models (GMM) and probabilistic reasoning are used to quantify the CLPU demand increase. The proposed methodology has been verified using real smart meter data and outage cases.
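    As a rough illustration of the customer-level layer, the sketch below fits a Gaussian mixture to a customer's historical demand and compares a post-restoration reading against the expected (diversified) demand. The data and the reading are synthetic assumptions, not the paper's smart-meter records.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Historical hourly demand (kW) for one customer: two usage regimes
        history = np.concatenate([rng.normal(1.2, 0.2, 300),
                                  rng.normal(2.5, 0.4, 100)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(history)
        diversified = float(gmm.weights_ @ gmm.means_.ravel())   # expected demand

        post_restoration = 3.4                   # hypothetical CLPU reading (kW)
        print(f"estimated CLPU increase: {post_restoration - diversified:.2f} kW")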

    Plasticity models of material variability based on uncertainty quantification techniques

    The advent of fabrication techniques such as additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response together with recently developed uncertainty quantification (UQ) techniques. We demonstrate that the new method provides predictive realizations that are superior to those of more traditional approaches, and we show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.
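    A minimal sketch of the underlying idea, assuming a 1-D linear-hardening plasticity model and an illustrative yield-stress distribution (neither taken from the paper), is to propagate parameter scatter through the constitutive response to obtain an ensemble of realizations.

        import numpy as np

        def stress(strain, E, sigma_y, H):
            """1-D plasticity with linear isotropic hardening (monotonic loading)."""
            post_yield = sigma_y + H * (strain - sigma_y / E)
            return np.where(E * strain <= sigma_y, E * strain, post_yield)

        strain = np.linspace(0.0, 0.02, 100)
        E, H = 200e3, 2e3                         # MPa, assumed steel-like values
        rng = np.random.default_rng(4)
        # Ensemble: yield stress drawn from an assumed calibrated distribution
        ensemble = np.array([stress(strain, E, sy, H)
                             for sy in rng.normal(250.0, 15.0, 20)])
        print(ensemble.mean(axis=0)[-1], ensemble.std(axis=0)[-1])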

    Quantifying Uncertainties in Fault Slip Distribution during the Tōhoku Tsunami using Polynomial Chaos

    An efficient method for inferring Manning's n coefficients using water surface elevation data was presented in Sraj et al. (2014), focusing on a test case based on data collected during the Tōhoku earthquake and tsunami. Polynomial chaos expansions were used to build an inexpensive surrogate for the numerical model Geoclaw, which was then used to perform a sensitivity analysis in addition to the inversion. In this paper, a new analysis is performed with the goal of inferring the fault slip distribution of the Tōhoku earthquake using a similar problem setup. The same approach to constructing the PC surrogate did not lead to a converging expansion; however, an alternative approach based on Basis-Pursuit DeNoising was found to be suitable. Our results show that the fault slip distribution can be inferred using water surface elevation data, with the inferred values minimizing the error between the observations and the numerical model. The numerical approach and the resulting inversion are presented in this work.
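    The sparse-recovery step can be sketched in Python on a toy function; scikit-learn's Lasso is used here merely as a convenient sparsity-promoting stand-in for Basis-Pursuit DeNoising, and the basis, noise level, and data are assumptions, not Geoclaw outputs.

        import numpy as np
        from numpy.polynomial.legendre import legvander
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        xi = rng.uniform(-1, 1, 40)                      # standardized random input
        y = 1.0 + 0.5 * xi**3 + rng.normal(0, 0.01, 40)  # noisy model evaluations

        Psi = legvander(xi, deg=10)                      # Legendre PC basis, order 10
        coeffs = Lasso(alpha=1e-3, fit_intercept=False).fit(Psi, y).coef_
        print(np.flatnonzero(np.abs(coeffs) > 1e-3))     # recovered sparse support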

    Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with a parameter space of up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
    Comment: Preprint, 29 pages, 10 figures (26 small figures); v1 submitted to the AIAA Journal on May 3, 2017; v2 submitted on September 17, 2017. v2 changes: (a) addition of flowcharts in Figures 4 and 5 to summarize the tools used; (b) edits to clarify and reorganize certain parts. v3 submitted on February 7, 2018. v3 changes: (a) title; (b) minor edits
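    The first step, variance-based global sensitivity analysis, can be sketched with SALib on a cheap analytical stand-in (the Ishigami function) instead of the large-eddy simulations used in the paper; the three-parameter setup below is an assumption.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Three uncertain inputs on [-pi, pi]; the paper's case has up to 24
        problem = {
            'num_vars': 3,
            'names': ['x1', 'x2', 'x3'],
            'bounds': [[-np.pi, np.pi]] * 3,
        }

        X = saltelli.sample(problem, 1024)           # Saltelli/Sobol' design
        # Ishigami function as a cheap stand-in for the flow simulations
        Y = (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
             + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

        Si = sobol.analyze(problem, Y)
        print(Si['S1'], Si['ST'])                    # first-order and total indices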

    Surrogate-based global sensitivity analysis for turbulence and fire-spotting effects in regional-scale wildland fire modeling

    In the presence of strong winds, wildfires feature nonlinear behavior, possibly inducing fire-spotting. We present a global sensitivity analysis of a new sub-model for turbulence and fire-spotting included in a wildfire spread model based on a stochastic representation of the fireline. To limit the number of model evaluations, fast surrogate models based on generalized Polynomial Chaos (gPC) and Gaussian Processes are used to identify the key parameters affecting the topology and size of the burnt area. This study investigates the application of these surrogates to compute Sobol' sensitivity indices in an idealized test case. Wind is known to drive fire propagation; the results show that it is also the leading factor governing the generation of secondary fires. This study also compares the performance of the surrogates for varying sizes and types of training sets, as well as for varying parameterizations and choices of algorithms. The best performance was achieved using a gPC strategy based on sparse least-angle regression (LAR) and a low-discrepancy Halton sequence. Still, the LAR-based gPC surrogate tends to filter out the information coming from parameters with large length-scales, which is not the case for the cleaning-based gPC surrogate. For both algorithms, sparsity ensures that a surrogate can be built using an affordable number of forward model evaluations, even when the model response is highly multi-scale and nonlinear. Using a sparse surrogate is thus a promising strategy for analyzing new models and their dependency on input parameters in wildfire applications.
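    A minimal sketch of the best-performing combination reported here, a gPC surrogate fit by least-angle regression on a Halton training design, might look as follows in Python; the 2-D toy response and the total-degree-4 Legendre basis are assumptions, not the wildfire model.

        import numpy as np
        from itertools import product
        from numpy.polynomial.legendre import legval
        from scipy.stats import qmc
        from sklearn.linear_model import Lars

        def legendre_basis(X, max_deg=4):
            """Tensorized Legendre basis, total degree <= max_deg, X in [-1, 1]^2."""
            cols = []
            for i, j in product(range(max_deg + 1), repeat=2):
                if i + j <= max_deg:
                    ci = np.zeros(i + 1); ci[i] = 1.0
                    cj = np.zeros(j + 1); cj[j] = 1.0
                    cols.append(legval(X[:, 0], ci) * legval(X[:, 1], cj))
            return np.column_stack(cols)

        X = 2.0 * qmc.Halton(d=2, seed=0).random(64) - 1.0   # Halton design
        y = np.exp(-X[:, 0] ** 2) * np.cos(2.0 * X[:, 1])    # stand-in response

        lar = Lars(n_nonzero_coefs=10, fit_intercept=False).fit(legendre_basis(X), y)
        print(np.count_nonzero(lar.coef_))                   # sparse gPC coefficients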

    Bayesian inference and non-linear extensions of the CIRCE method for quantifying the uncertainty of closure relationships integrated into thermal-hydraulic system codes

    Uncertainty Quantification of closure relationships integrated into thermal-hydraulic system codes is a critical prerequisite in applying the Best-Estimate Plus Uncertainty (BEPU) methodology for nuclear safety and licensing processes. The purpose of the CIRCE method is to estimate the (log-)Gaussian probability distribution of a multiplicative factor applied to a reference closure relationship in order to assess its uncertainty. Even though this method has been applied with success in numerous physical scenarios, it still suffers from substantial limitations, such as the linearity assumption and the difficulty of properly taking into account the inherent statistical uncertainty. In this paper, we extend the CIRCE method in two directions. On the one hand, we adopt a Bayesian setting, placing prior probability distributions on the parameters of the (log-)Gaussian distribution. The posterior distribution of the parameters is then computed with respect to an experimental database by means of Markov Chain Monte Carlo (MCMC) algorithms. On the other hand, we tackle the more general setting where the simulations do not vary linearly with the multiplicative factor(s). MCMC algorithms then become prohibitively slow when the thermal-hydraulic simulations exceed a few minutes. This obstacle is overcome by using Gaussian process (GP) emulators, which can yield both reliable and fast predictions of the simulations. The GP-based MCMC algorithms are applied to quantify the uncertainty of two condensation closure relationships at a safety injection with respect to a database of experimental tests. The thermal-hydraulic simulations are run with the CATHARE 2 computer code.
    Comment: 37 pages, 5 figures
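    A heavily simplified sketch of the GP-accelerated Bayesian step follows: a GP emulator stands in for the expensive simulator inside a random-walk Metropolis loop over the (mu, sigma) hyper-parameters of the log-Gaussian multiplicative factor. The simulator, noise level, data, and priors are toy assumptions, not CATHARE 2 or the paper's database.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(6)

        def simulator(lam):                      # stand-in for the expensive code
            return 10.0 * np.sqrt(lam)

        # Train a fast GP emulator of the simulator on a handful of runs
        lam_train = np.linspace(0.5, 2.0, 12).reshape(-1, 1)
        gp = GaussianProcessRegressor().fit(lam_train, simulator(lam_train.ravel()))

        # Synthetic experiments generated with a "true" log-Gaussian factor
        y_obs = simulator(rng.lognormal(0.1, 0.15, 20)) + rng.normal(0, 0.2, 20)

        def log_post(mu, sigma, n_mc=200):
            if sigma <= 0:
                return -np.inf                   # flat prior; sigma must be positive
            lam = rng.lognormal(mu, sigma, n_mc).reshape(-1, 1)
            pred = gp.predict(lam)               # emulator calls, not the code
            # Monte Carlo estimate of each observation's marginal likelihood
            dens = np.exp(-0.5 * ((y_obs[None, :] - pred[:, None]) / 0.2) ** 2).mean(0)
            return float(np.sum(np.log(dens + 1e-300)))

        theta = np.array([0.0, 0.3])             # chain state: (mu, sigma)
        lp = log_post(*theta)
        for _ in range(2000):                    # random-walk Metropolis
            prop = theta + rng.normal(0, 0.05, 2)
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        print(theta)                             # posterior sample of (mu, sigma)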

    Software quality and reliability prediction using Dempster-Shafer theory

    As software systems are increasingly deployed in mission-critical applications, accurate quality and reliability predictions are becoming a necessity. Most accurate prediction models require extensive testing effort, implying increased cost and a slower development life cycle. We developed two novel statistical models based on Dempster-Shafer theory, which provide accurate predictions from relatively small data sets of direct and indirect software reliability and quality predictors. The models are flexible enough to incorporate information generated throughout the development life cycle to improve the prediction accuracy.

    Our first contribution is an original algorithm for building Dempster-Shafer Belief Networks using prediction logic. This model has been applied to software quality prediction. We demonstrated that the prediction accuracy of Dempster-Shafer Belief Networks is higher than that achieved by logistic regression, discriminant analysis, random forests, and the algorithms in two machine learning software packages, See5 and WEKA. The performance advantage of the Dempster-Shafer Belief Networks over the other methods is statistically significant.

    Our second contribution is also based on a practical extension of Dempster-Shafer theory. The major limitation of Dempster's rule and other known rules of evidence combination is their inability to handle information coming from correlated sources. Motivated by the inherently high correlations between early life-cycle predictors of software reliability, we extended Murphy's rule of combination to account for these correlations. When used as part of a methodology that fuses various software reliability prediction systems, this rule provided more accurate predictions than previously reported methods. In addition, we proposed an algorithm that defines the upper and lower bounds of the belief function of the combination results. To demonstrate its generality, we successfully applied it in the design of the Online Safety Monitor, which fuses multiple correlated, time-varying estimates of the convergence of neural network learning in an intelligent flight control system.
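    For reference, the classical building block that both contributions extend, Dempster's rule of combination for two independent mass functions, fits in a few lines of Python; the two-hypothesis frame and the mass values are illustrative, and the correlation-aware extension itself is not reproduced here.

        from itertools import product

        def dempster_combine(m1, m2):
            """Dempster's rule for two mass functions; keys are frozensets."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb          # mass falling on the empty set
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        FAULTY, OK = frozenset({'faulty'}), frozenset({'ok'})
        EITHER = FAULTY | OK                     # full frame of discernment
        m1 = {FAULTY: 0.6, EITHER: 0.4}          # evidence from one predictor
        m2 = {FAULTY: 0.5, OK: 0.2, EITHER: 0.3} # evidence from another
        print(dempster_combine(m1, m2))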