
    Bayesian calibration and assessment of gas-surface interaction models and experiments for atmospheric entry plasmas

    The investigation of gas-surface interaction phenomena for atmospheric entry vehicles relies on the development of predictive theoretical models and on the capabilities of current experimental facilities. However, due to the complexity of the physics and the variety of phenomena that must be investigated in ground-testing facilities, both numerical and experimental processes generate data subject to uncertainties. Nevertheless, it remains common practice in aerothermodynamics to resort to calibration and validation methods that are not suited to rigorous uncertainty treatment.

    This thesis investigates the process of scientific inference and its ramifications for selected gas-surface interaction experiments. Its main contributions are the improvement and reformulation of model calibrations as statistical inverse problems, with the consequent extension of current databases for catalysis and ablation. The model calibrations are posed in the Bayesian formalism, in which a complete characterization of the posterior probability distributions of the selected parameters is computed.

    The first part of the thesis presents a review of the theoretical models, experiments, and numerical codes used to study catalysis and ablation in the context of the von Karman Institute's Plasmatron wind tunnel. This part ends with a summary of the potential uncertainty sources present in both theoretical-numerical and experimental data. The methods used to treat these uncertainty sources are then introduced in detail.

    The second part presents the original contributions of this thesis. For catalytic materials, an optimal-likelihood framework for Bayesian calibration is proposed. This methodology offers a complete uncertainty characterization of catalytic parameters, with a 20% decrease in standard deviation with respect to previous works. Building on this framework, a testing strategy that produces the most informative catalysis experiments to date is proposed. Experiments and subsequent stochastic analyses are performed, enriching existing catalysis experimental databases for ceramic matrix composites with accurate uncertainty estimates.

    The last contribution deals with the reformulation of the inference problem for nitridation reaction efficiencies of a graphite ablative material from plasma wind tunnel data. This is the first contribution in the literature in which different measurements of the same flowfield are used jointly to assess their consistency and the resulting ablation parameters. An Arrhenius law is calibrated using all available data, extending the range of conditions to lower surface temperatures for which no reliable experimental data are found. Epistemic uncertainties affecting the model definition and the ablative wall conditions are gauged through hypothesis-testing studies. The final account of the nitridation reaction efficiency uncertainties is given by averaging the results obtained under the different models.

    This thesis highlights that the process of scientific inference can carry deep assumptions about the nature of the problem and can influence how researchers reach conclusions about their work. Ultimately, it contributes to the early efforts of introducing accurate and rigorous uncertainty quantification techniques into atmospheric entry research. The methodologies presented here are in line with developing predictive models with estimated confidence levels.
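The Bayesian calibrations described in this abstract treat model parameters as random variables and characterize their full posterior distributions. As a minimal sketch of the idea (not the thesis's actual models, data, or calibrated values), the following fits a two-parameter Arrhenius law gamma(T) = A * exp(-Ta/T) to synthetic noisy efficiency data with a random-walk Metropolis-Hastings sampler; every number here is an illustrative assumption.

```python
import numpy as np

# Hypothetical Arrhenius law for a reaction efficiency: gamma(T) = A * exp(-Ta / T).
# The "true" parameters and data below are synthetic, for illustration only.
def arrhenius(T, logA, Ta):
    return np.exp(logA) * np.exp(-Ta / T)

rng = np.random.default_rng(0)

# Synthetic "wind tunnel" data: efficiencies at several surface temperatures,
# observed on a log scale with additive Gaussian noise.
T_data = np.array([1500.0, 1700.0, 1900.0, 2100.0, 2300.0])
true_logA, true_Ta = 1.0, 7000.0
sigma = 0.1  # noise std on log-efficiency
y_data = np.log(arrhenius(T_data, true_logA, true_Ta)) + sigma * rng.normal(size=T_data.size)

def log_posterior(theta):
    logA, Ta = theta
    # Flat priors on plausible ranges; -inf outside the support.
    if not (-5.0 < logA < 5.0 and 1000.0 < Ta < 20000.0):
        return -np.inf
    resid = y_data - np.log(arrhenius(T_data, logA, Ta))
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis-Hastings over (logA, Ta).
def metropolis(theta0, steps, prop_scale):
    theta = np.array(theta0)
    lp = log_posterior(theta)
    chain = []
    for _ in range(steps):
        prop = theta + prop_scale * rng.normal(size=2)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis([0.0, 5000.0], 20000, np.array([0.1, 200.0]))
post = chain[5000:]  # discard burn-in
print("posterior mean (logA, Ta):", post.mean(axis=0))
```

The posterior samples, rather than a single best-fit point, are what allow the uncertainty statements (standard deviations, model averaging) the abstract refers to.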

    Optimal Data Split Methodology for Model Validation

    The decision to incorporate cross-validation into validation processes of mathematical models raises an immediate question: how should one partition the data into calibration and validation sets? We answer this question systematically: we present an algorithm to find the optimal partition of the data subject to certain constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to predictions of a given quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This framework also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and the computational scientists, who strive to determine whether the model satisfies both the modeler's and the decision-maker's requirements. We also note that our framework is quite general and may be applied to a wide range of problems. Here, we illustrate it through a specific example involving a data reduction model for an ICCD camera from a shock-tube experiment located at the NASA Ames Research Center (ARC).
    Comment: Submitted to International Conference on Modeling, Simulation and Control 2011 (ICMSC'11), San Francisco, USA, 19-21 October, 2011
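The abstract above describes searching over calibration/validation splits for the partition whose validation set most challenges the model while the calibration set still informs it. The toy sketch below enumerates all splits of a small synthetic dataset; the scoring and constraint functions are placeholders of my own, not the authors' actual criteria.

```python
import numpy as np
from itertools import combinations

# Synthetic linear data; the "model" is a least-squares line fit.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + 0.5 + 0.05 * rng.normal(size=x.size)

def fit(idx):
    # Calibrate: least-squares line fit on the calibration subset.
    A = np.vstack([x[idx], np.ones(len(idx))]).T
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef

def challenge(cal_idx, val_idx):
    # Placeholder score: mean distance of each validation point from the
    # calibration set (farther extrapolation = bigger challenge).
    return np.mean([np.min(np.abs(x[v] - x[list(cal_idx)])) for v in val_idx])

best = None
n_val = 3
for val_idx in combinations(range(x.size), n_val):
    cal_idx = tuple(i for i in range(x.size) if i not in val_idx)
    # Placeholder constraint: calibration set must keep the model
    # identifiable (at least two distinct x locations).
    if len(set(x[list(cal_idx)])) < 2:
        continue
    score = challenge(cal_idx, val_idx)
    if best is None or score > best[0]:
        best = (score, cal_idx, val_idx)

score, cal_idx, val_idx = best
coef = fit(list(cal_idx))
print("validation indices:", val_idx, "challenge score:", round(score, 3))
print("calibration fit slope:", round(coef[0], 2))
```

Exhaustive search is only feasible for tiny datasets; the paper's contribution is an algorithmic treatment of this partition problem under real constraints.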

    Bayesian information-theoretic calibration of patient-specific radiotherapy sensitivity parameters for informing effective scanning protocols in cancer

    With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients' parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of n high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.
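The sequential protocol above picks each new measurement time to maximize the reduction in parameter uncertainty. A minimal sketch of one such selection step, using a toy exponential-growth model with an assumed prior, noise level, and candidate schedule (none of which are taken from the paper): each candidate time is scored by its preposterior expected posterior variance, averaging over data simulated from the current prior.

```python
import numpy as np

# Toy "tumor volume" model V(t) = V0 * exp(k * t); the goal is to learn k.
rng = np.random.default_rng(2)
V0, sigma = 1.0, 0.2
k_grid = np.linspace(0.05, 0.5, 91)               # discretized parameter space
prior = np.full(k_grid.size, 1.0 / k_grid.size)   # uniform prior pmf

def model(k, t):
    return V0 * np.exp(k * t)

def posterior(prior_pmf, t, y):
    # Bayes' rule on the grid with Gaussian measurement noise.
    like = np.exp(-0.5 * ((y - model(k_grid, t)) / sigma) ** 2)
    post = prior_pmf * like
    return post / post.sum()

def expected_post_var(prior_pmf, t, n_mc=200):
    # Preposterior analysis: average the posterior variance of k over
    # hypothetical data drawn from the prior predictive at time t.
    total = 0.0
    for _ in range(n_mc):
        k = rng.choice(k_grid, p=prior_pmf)
        y = model(k, t) + sigma * rng.normal()
        post = posterior(prior_pmf, t, y)
        mean = np.sum(post * k_grid)
        total += np.sum(post * (k_grid - mean) ** 2)
    return total / n_mc

candidate_times = np.array([1.0, 3.0, 5.0, 8.0, 12.0])
scores = [expected_post_var(prior, t) for t in candidate_times]
best_t = candidate_times[int(np.argmin(scores))]
print("most informative next measurement time:", best_t)
```

In a sequential design the chosen measurement would be taken, the posterior would replace the prior, and the scoring would be repeated for the next slot in the budget.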

    Gradient-based stochastic optimization methods in Bayesian experimental design

    Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this context is the expected information gain in model parameters, which in general can only be estimated using Monte Carlo methods. Maximizing this objective thus becomes a stochastic optimization problem. This paper develops gradient-based stochastic optimization methods for the design of experiments on a continuous parameter space. Given a Monte Carlo estimator of expected information gain, we use infinitesimal perturbation analysis to derive gradients of this estimator. We are then able to formulate two gradient-based stochastic optimization approaches: (i) Robbins-Monro stochastic approximation, and (ii) sample average approximation combined with a deterministic quasi-Newton method. A polynomial chaos approximation of the forward model accelerates objective and gradient evaluations in both cases. We discuss the implementation of these optimization methods, then conduct an empirical comparison of their performance. To demonstrate design in a nonlinear setting with partial differential equation forward models, we use the problem of sensor placement for source inversion. Numerical results yield useful guidelines on the choice of algorithm and sample sizes, assess the impact of estimator bias, and quantify tradeoffs of computational cost versus solution quality and robustness.
    United States. Air Force Office of Scientific Research (Computational Mathematics Program); National Science Foundation (U.S.) (Award ECCS-1128147)
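The Robbins-Monro approach named above iterates on a design variable using noisy gradient estimates with a decaying step-size sequence. A bare-bones sketch on a stand-in quadratic objective (the paper's actual objective is a Monte Carlo estimator of expected information gain, and its gradients come from infinitesimal perturbation analysis):

```python
import numpy as np

# Stand-in design objective U(d) = -(d - 2)^2, observed only through a
# gradient estimator corrupted by Monte Carlo-like noise.
rng = np.random.default_rng(3)

def noisy_grad(d):
    return -2.0 * (d - 2.0) + rng.normal(scale=1.0)

# Robbins-Monro stochastic approximation: ascend the noisy gradient with
# step sizes a_n = 1/n, satisfying sum(a_n) = inf and sum(a_n^2) < inf.
d = 0.0
for n in range(1, 2001):
    a_n = 1.0 / n
    d += a_n * noisy_grad(d)
print("converged design:", round(d, 2))
```

The decaying steps average out the estimator noise, so the iterate settles near the optimum d = 2 even though no single gradient evaluation is accurate.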

    Simulation-based optimal Bayesian experimental design for nonlinear systems

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information-theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics.
    Comment: Preprint, 53 pages, 17 figures (54 small figures). v1 submitted to the Journal of Computational Physics on August 4, 2011; v2 submitted on August 12, 2012. v2 changes: (a) addition of Appendix B and Figure 17 to address the bias in the expected utility estimator; (b) minor language edits; v3 submitted on November 30, 2012. v3 changes: minor edits
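The two-stage Monte Carlo evaluation of expected information gain mentioned above can be sketched as a double loop: an outer average over prior draws and simulated data, and an inner average estimating the evidence p(y|d). The toy forward model and all numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Double-loop estimator of expected information gain:
#   U(d) ~ (1/N) sum_i [ log p(y_i | theta_i, d) - log( (1/M) sum_j p(y_i | theta_j, d) ) ]
# with theta drawn from the prior and y from the likelihood. Gaussian
# normalization constants cancel in the difference, so they are omitted.
rng = np.random.default_rng(4)
sigma = 0.1  # observation noise std

def forward(theta, d):
    # Toy nonlinear forward model whose informativeness depends on the design d.
    return theta**2 * d + theta * np.exp(-abs(0.2 - d))

def eig(d, N=500, M=500):
    thetas = rng.uniform(0.0, 1.0, size=N)   # outer prior samples
    inner = rng.uniform(0.0, 1.0, size=M)    # inner prior samples (reused)
    total = 0.0
    for th in thetas:
        y = forward(th, d) + sigma * rng.normal()
        log_like = -0.5 * ((y - forward(th, d)) / sigma) ** 2
        # Evidence estimated by the inner Monte Carlo average over the prior.
        ev = np.mean(np.exp(-0.5 * ((y - forward(inner, d)) / sigma) ** 2))
        total += log_like - np.log(ev)
    return total / N

designs = [0.1, 0.5, 1.0]
utilities = {d: eig(d) for d in designs}
best = max(utilities, key=utilities.get)
print("EIG estimates:", utilities, "best design:", best)
```

The nested structure is what makes this estimator expensive (N*M model evaluations per design), which is why the paper pairs it with polynomial chaos surrogates and stochastic approximation for the outer optimization.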