20 research outputs found

    A Generalized Bayesian Approach to Model Calibration

    Full text link
    In model development, model calibration and validation play complementary roles toward learning reliable models. In this article, we expand the Bayesian Validation Metric framework into a general calibration and validation framework by inverting the validation mathematics into a generalized Bayesian method for model calibration and regression. We perform Bayesian regression based on a user's definition of model-data agreement. This allows for model selection on any type of data distribution, unlike standard and Bayesian regression techniques, which can fail in some cases. We show that our tool can represent and combine least squares, likelihood-based, and Bayesian calibration techniques in a single framework while generalizing aspects of these methods. The tool also offers new insight into the interpretation of predictive envelopes (also known as confidence bands) while giving the analyst more control over them. We demonstrate the validity of our method with three numerical examples that calibrate different models, including a model for energy dissipation in lap joints under impact loading. By calibrating models against the validation metrics one ultimately wants them to pass, reliability and safety metrics may be integrated into, and automatically adopted by, the model during the calibration phase.
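
    The article's mathematics are not reproduced here, but the core idea, Bayesian regression driven by a user-defined model-data agreement function in place of a fixed likelihood, can be sketched. In the minimal Python sketch below, the linear toy model, the tolerance-based agreement function, and the random-walk sampler settings are all illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a "true" linear model with noise
x = np.linspace(0, 1, 30)
y_obs = 2.0 * x + 0.5 + rng.normal(0, 0.1, x.size)

def model(theta, x):
    """Candidate model y = a*x + b, theta = (a, b)."""
    a, b = theta
    return a * x + b

def agreement(theta, eps=0.15):
    """One possible user-defined agreement score: the fraction of
    points whose residual falls within a tolerance eps."""
    r = np.abs(model(theta, x) - y_obs)
    return np.mean(r < eps)

# Random-walk Metropolis-style sampler, using the agreement score
# as an unnormalized target in place of a likelihood
theta = np.array([1.0, 0.0])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    a_new, a_old = agreement(prop), agreement(theta)
    if a_old == 0 or rng.random() < a_new / a_old:
        theta = prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])  # discard burn-in
print("posterior mean (a, b):", samples.mean(axis=0))
```

    Swapping in a different agreement function (e.g., a likelihood or a least-squares kernel) changes the calibration behavior without changing the sampler, which is the sense in which such a framework can subsume several calibration techniques.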

    Sensitivity of Stormwater Management Solutions to Spatial Scale

    Get PDF
    Urbanization has considerably altered the natural hydrology of urban watersheds by increasing runoff volume, producing higher and faster peak flows, and reducing water quality. Efforts to minimize or avoid these impacts, for example by implementing low impact development (LID) practices, are gaining momentum. Designing effective and economical stormwater management practices at a watershed scale is challenging; LIDs are commonly designed at site scales, considering local hydrologic conditions (i.e., one LID at a time). A number of empirical studies have documented hydrologic and water quality improvements achieved by LIDs. However, the watershed-scale effectiveness of LIDs has not been well studied. Considering cost, effort, and practicality, computer modeling is the only viable approach to assessing LID performance at a watershed scale. As such, the United States Environmental Protection Agency's Stormwater Management Model (SWMM) was selected for this study. It is well recognized that model predictions are plagued by uncertainties arising from the lack of quality data and from the model's inadequacy in accurately simulating the watershed. To scrutinize the sensitivity of prediction accuracy to spatial resolution, four SWMM models of differing spatial detail were developed for the Ballona Creek watershed, a highly urbanized watershed in the Los Angeles Basin, as a case study. Detailed uncertainty analyses were carried out for each model to quantify prediction uncertainty and to examine whether a more detailed model improves prediction accuracy. Results show that there is a limit to the prediction accuracy achievable with detailed models: three of the four models (all but the least detailed) produced comparable prediction accuracy. This implies that devoting substantial resources to collecting very detailed data and building fine-resolution watershed models may not be necessary, as models of moderate detail could suffice. If confirmed in other urban watersheds, this result could benefit stormwater managers and modelers. All four SWMM models were then used to evaluate the hydrologic effectiveness of implementing bioretention cells at a watershed scale. For event-based analyses, 24-hour storms with 1-, 2-, 5-, and 10-year return periods were considered, and data from October 2005 to March 2010 were used for a continuous simulation. The runoff volume reductions achieved by implementing bioretention cells were not substantial for the event storms; for the continuous simulation, however, reductions of about twenty percent in runoff volume were predicted. These results are in line with previous studies that have reported the ineffectiveness of LIDs in reducing runoff volume and peak flow for less frequent but high-intensity storm events.
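
    For reference, the reported percent reduction in runoff volume is simply the integrated difference between the baseline and LID hydrographs divided by the baseline volume. A minimal sketch, with hypothetical hydrograph values and a 5-minute time step standing in for SWMM output:

```python
import numpy as np

def runoff_volume(flow_cms, dt_s=300.0):
    """Total volume (m^3) of a hydrograph in m^3/s sampled every
    dt_s seconds (rectangle rule)."""
    return float(np.sum(flow_cms) * dt_s)

# Hypothetical hydrographs for one storm, without and with bioretention
baseline = np.array([0.0, 1.2, 3.5, 5.0, 3.1, 1.0, 0.2])   # m^3/s
with_lid = np.array([0.0, 0.9, 2.7, 4.1, 2.6, 0.9, 0.2])   # m^3/s

v0, v1 = runoff_volume(baseline), runoff_volume(with_lid)
print(f"runoff volume reduction: {100 * (v0 - v1) / v0:.1f}%")
```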

    A New Calibration Metric, the Probability Residual, and Statistical Validation of a Journal Bearing Rotor System Dynamics Model Using the Probability Residual

    Get PDF
    Master's thesis, Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering (Mechanical Engineering major), August 2016. Advisor: Byeng D. Youn. In constructing computational models of engineered systems such as journal bearing rotor systems, statistical model calibration is often used, since a statistical model emulates the actual behavior of the engineered system under uncertainty. A calibration metric, which quantifies the degree of agreement or disagreement between computational and experimental results, is one of the key components of statistical model calibration. However, existing calibration metrics such as the log-likelihood and the Kullback-Leibler divergence (KLD) have limitations when constructing an accurate computational model. To overcome these problems, this study proposes a new calibration metric, the probability residual (PR). The PR metric is defined as the sum of the products of a scale factor and the square of the residuals. The scale factor scales the PDF over a specific range, which improves calibration efficiency; the squared residuals make the PR convex, which guarantees the existence of a global optimum. To evaluate the performance of the PR metric, this study uses mathematical models and employs statistical models of the journal bearing rotor system appropriate to the normal and rubbing states. The PR metric performed better than other metrics, including the log-likelihood and KLD, in terms of calibration accuracy and efficiency, and the journal bearing rotor model calibrated with the PR was proven valid by hypothesis testing. In summary, the proposed PR metric is promising for building accurate computational models.

    Table of contents: Chapter 1. Introduction (1.1 Background and Motivation; 1.2 Organization of Thesis). Chapter 2. Literature Review (2.1 Statistical Model Validation; 2.1.1 Model Uncertainties; 2.1.2 Statistical Model Calibration; 2.1.3 Validity Check; 2.2 Fault Diagnosis of a Journal Bearing Rotor System). Chapter 3. A New Calibration Metric: Probability Residual (PR) (3.1 Review of Existing Calibration Metrics; 3.1.1 Log-likelihood; 3.1.2 Kullback-Leibler Divergence (KLD); 3.1.3 Limitations of the Log-likelihood and KLD; 3.2 Proposed Calibration Metric: Probability Residual (PR); 3.2.1 Scale Factor; 3.2.2 Square of Residuals; 3.3 Performance Evaluation of the Calibration Metrics; 3.3.1 Comparative Study of Calibration Metrics in Terms of Accuracy; 3.3.2 Comparative Study of Calibration Metrics in Terms of Efficiency). Chapter 4. Case Study: Statistical Model Validation of a Journal Bearing Rotor System (4.1 Hierarchical Framework for Statistical Model Validation; 4.1.1 Description of a Computational Model; 4.1.2 Statistical Model Calibration in Normal State; 4.1.3 Statistical Model Validation in Rubbing State; 4.2 Discussion). Chapter 5. Conclusions (5.1 Contributions; 5.2 Future Works). Bibliography. Abstract in Korean.
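
    The abstract defines the PR metric only in words (a sum of scale factors times squared residuals between PDFs). A rough Python sketch of that structure follows; the kernel density estimates and the placeholder unit scale factor are assumptions, and the thesis's actual scale factor is not reproduced here:

```python
import numpy as np
from scipy import stats

def probability_residual(obs, sim, n_grid=200, scale=None):
    """PR-style metric sketch: weighted sum of squared differences
    between experimental and computational PDFs on a common grid."""
    lo = min(obs.min(), sim.min())
    hi = max(obs.max(), sim.max())
    grid = np.linspace(lo, hi, n_grid)
    p_exp = stats.gaussian_kde(obs)(grid)   # experimental PDF estimate
    p_sim = stats.gaussian_kde(sim)(grid)   # computational PDF estimate
    if scale is None:
        scale = np.ones_like(grid)          # placeholder for the thesis's scale factor
    return float(np.sum(scale * (p_exp - p_sim) ** 2))

rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, 500)             # synthetic "experimental" data
sim = rng.normal(0.2, 1.1, 500)             # synthetic "computational" data
print("PR (illustrative):", probability_residual(obs, sim))
```

    Because the metric is a sum of squares, minimizing it over calibration parameters is a convex-friendly objective, which is the property the thesis credits for guaranteeing a global optimum.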

    Statistical Model Validation for Reliable Design of Engineering Products

    Full text link
    Computer models have been used to simulate engineering product and system performance in applications such as vehicle crashworthiness, structural safety, and thermal response. If their predictions are accurate across the product and system design space, these models can help shorten the product development cycle, cut the cost of physical tests, and identify the optimal design. However, models are built on assumptions and simplifications, so model predictions can be problematic without reference to corresponding test data; more importantly, model error can induce design errors. Model validation determines the degree to which a model is an accurate representation of the real world from the perspective of its intended uses, and it is a critical process for improving design efficiency and accuracy while minimizing overall design cost. The objective of this dissertation is to develop a systematic and practical model validation framework for the design of engineering products. To achieve this goal, five research thrusts are developed. First, a copula-based model bias characterization approach is developed to capture the relationship between model inputs, outputs, and model bias. The contribution is to overcome the limitations of regression-based model bias approaches, including i) the curse of dimensionality, ii) assumed regression forms, and iii) low accuracy for model outputs whose bias is left partly unexplained by the model parameters. Second, an adaptive copula-based model bias characterization approach is developed to further enhance the accuracy of the copula-based approach with the aid of clustering analysis. Third, a novel validation metric for dynamic responses under uncertainty is developed so that model accuracy for dynamic responses can be quantitatively assessed with limited test data. Fourth, a stochastic model bias calibration and approximation approach is proposed, with the aid of the developed dynamic validation metric, for reliability analysis. Finally, reliability-based design optimization is integrated with the proposed model uncertainty characterization approach for the reliable design of various engineering products. Numerical examples and practical engineering problems demonstrate the proposed model validation framework.

    Ph.D. dissertation, CECS Automotive Systems Engineering, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/134042/1/Hao Pan Final Dissertation.pdf
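
    The dissertation's methods are not reproduced here, but the first thrust, capturing the dependence between a model's output and its bias with a copula, can be illustrated. The sketch below fits a Gaussian copula by rank transformation on synthetic data and conditionally samples bias for a new prediction; the data, the Gaussian copula family, and the helper sample_bias are illustrative assumptions, not the author's implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic paired data: model prediction y_m and bias d = y_test - y_m
y_m = rng.uniform(0, 10, 200)
bias = 0.3 * y_m + rng.normal(0, 0.5, 200)

# Gaussian-copula dependence: rank-transform margins to standard normals
u = stats.rankdata(y_m) / (len(y_m) + 1)
v = stats.rankdata(bias) / (len(bias) + 1)
z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
rho = np.corrcoef(z1, z2)[0, 1]
print(f"estimated copula correlation: {rho:.2f}")

def sample_bias(y_new, n=1000):
    """Draw bias samples conditional on a new prediction, using the
    fitted copula and the empirical bias margin."""
    u_new = np.searchsorted(np.sort(y_m), y_new) / (len(y_m) + 1)
    u_new = min(max(u_new, 1e-6), 1 - 1e-6)
    z = stats.norm.ppf(u_new)
    z_b = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    return np.quantile(bias, stats.norm.cdf(z_b))

print("median bias at y_m = 8:", np.median(sample_bias(8.0)))
```

    Because the copula separates the dependence structure from the marginal distributions, no regression form has to be assumed for the bias, which is the limitation of regression-based approaches the abstract highlights.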

    Verification and validation using state of the art measures and modular uncertainty techniques

    Get PDF
    As quantitative validation measures have become available, so has controversy regarding their construction. The complexity of the physical processes involved is compounded by uncertainties introduced by model inputs, experimental errors, and modeling assumptions, to name a few, and how these uncertainties are treated is of major importance. In this dissertation, the issues associated with several state-of-the-art quantitative validation metrics are discussed in detail. A basic Verification and Validation (V&V) framework is introduced, outlining areas where some agreement has been reached in the engineering community. In addition, carefully constructed examples are used to shed light on differences among the state-of-the-art validation metrics. The results show that the univariate validation metric fails to account for the correlation structure that common systematic error sources induce in the comparison error results, and that the confidence interval metric is an inadequate measure of the noise level of the validation exercise; the multivariate validation metric should therefore be used whenever possible. End-to-end examples of the V&V effort are provided using the multivariate and univariate validation metrics. A methodology is introduced that uses Monte Carlo analysis to construct the covariance matrix for the multivariate validation metric when nonlinear sensitivities exist, and the examples show how multiple iterations of the validation exercise can lead to a successful validation effort. Finally, modular uncertainty techniques are introduced for the uncertainty analysis of large systems in which many data reduction equations or models are used to examine multiple outputs of interest. The modular uncertainty methodology is shown to be equivalent to the traditional propagation-of-errors approach with a drastic reduction in computational effort, and it has the further advantage of giving insight into the relationship between the uncertainties of the quantities of interest. An extension of the modular uncertainty methodology to cover full-scale V&V exercises is also introduced.
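
    The multivariate validation metric and the Monte Carlo construction of its covariance matrix can be illustrated in a few lines. The sketch below propagates an uncertain input through a toy nonlinear model to build the covariance, forms the comparison error vector, and compares the quadratic metric against a chi-square reference; the model, measurement values, and uncertainty magnitudes are hypothetical, not taken from the dissertation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def model(x, theta):
    """Toy nonlinear model with an uncertain input theta."""
    return np.exp(-theta * x)

x = np.array([0.5, 1.0, 1.5, 2.0])
y_test = np.array([0.61, 0.36, 0.24, 0.13])   # hypothetical measurements
sigma_test = 0.02                              # hypothetical test uncertainty

# Monte Carlo over the uncertain input builds the simulation covariance;
# this captures the correlation that linear sensitivities would miss
theta_samples = rng.normal(1.0, 0.05, 5000)
Y = np.array([model(x, t) for t in theta_samples])
V = np.cov(Y, rowvar=False) + sigma_test**2 * np.eye(len(x))

E = Y.mean(axis=0) - y_test                    # comparison error vector
r2 = E @ np.linalg.solve(V, E)                 # multivariate metric
p = 1 - stats.chi2.cdf(r2, df=len(x))
print(f"r^2 = {r2:.2f}, p = {p:.3f}")
```

    A univariate metric would test each component of E against its own variance and ignore the off-diagonal terms of V, which is precisely the failure mode the dissertation demonstrates.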

    Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design

    Get PDF
    Traditional uncertainty quantification techniques in simulation-based analysis and design focus on the quantification of parametric uncertainties, the inherent natural variations of the input variables. This is done by developing a representation of the uncertainties in the parameters and then efficiently propagating this information through the modeling process to develop distributions or metrics for the output responses of interest. However, in problems with complex or newer modeling methodologies, the variability induced by the modeling process itself, known collectively as model-form and predictive uncertainty, can become a significant, if not greater, source of uncertainty. For efficient and accurate uncertainty measurement, it is therefore necessary to consider the effects of these two additional forms of uncertainty alongside the inherent parametric uncertainty. However, current methods for parametric uncertainty quantification are not necessarily viable or applicable for quantifying model-form or predictive uncertainties. Moreover, quantifying these two additional forms of uncertainty can require introducing additional data into the problem, such as experimental data, which may not be available for particular designs and configurations, especially in the early design stage. Methods must therefore be developed for the efficient quantification of uncertainties from all sources, as well as from all permutations of sources, to handle problems where a full array of input data is unavailable. This work develops and applies methods for the quantification of these uncertainties, with specific application to the simulation-based analysis of aeroelastic structures.
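
    One common way to treat model-form uncertainty alongside parametric uncertainty is to propagate the input variability through several plausible model forms and pool the results according to model weights. The sketch below does exactly that on synthetic models; the model forms, uniform weights, and input distribution are illustrative assumptions, not the approach developed in this work:

```python
import numpy as np

rng = np.random.default_rng(4)

# Parametric uncertainty: the input q is a random variable
q = rng.normal(10.0, 1.0, 20000)

# Model-form uncertainty: several plausible forms for the same response
# (weights would come from data or expert judgment; uniform here)
models = [lambda q: 2.0 * q,
          lambda q: 1.9 * q + 0.02 * q**2,
          lambda q: 2.1 * q - 0.5]
weights = np.full(len(models), 1.0 / len(models))

# Propagate the parametric uncertainty through each form, then pool:
# draw one model per sample according to its weight (model averaging)
choice = rng.choice(len(models), size=q.size, p=weights)
y = np.empty_like(q)
for k, m in enumerate(models):
    mask = choice == k
    y[mask] = m(q[mask])

print(f"pooled mean = {y.mean():.2f}, pooled std = {y.std():.2f}")
print(f"parametric-only std (model 0): {models[0](q).std():.2f}")
```

    Comparing the pooled spread against the single-model spread separates the contribution of model-form uncertainty from the parametric contribution, which is the kind of decomposition this work pursues for aeroelastic analysis.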