
    The aerospace plane design challenge: Credible computational fluid dynamics results

    Computational fluid dynamics (CFD) is necessary in the design processes of all current aerospace plane programs. Single-stage-to-orbit (SSTO) aerospace planes with air-breathing supersonic combustion are going to be largely designed by means of CFD. The challenge of aerospace plane design is to provide credible CFD results to work from, to assess the risk associated with the use of those results, and to certify CFD codes that produce credible results. To establish the credibility of CFD results used in design, the following topics are discussed: CFD validation vis-a-vis measurable fluid dynamics (MFD) validation; responsibility for credibility; credibility requirements; and a guide for establishing credibility. Quantification of CFD uncertainties helps to assess success risk and safety risks, and the development of CFD as a design tool requires code certification. This challenge is managed by designing the designers to use CFD effectively, by ensuring quality control, and by balancing the design process. For designing the designers, the following topics are discussed: how CFD design technology is developed; the reasons Japanese companies, by and large, produce goods of higher quality than their U.S. counterparts; teamwork as a new way of doing business; and how ideas, quality, and teaming can be brought together. Quality control for reducing the loss imparted to society begins with the quality of the CFD results used in the design process, and balancing the design process means using a judicious balance of CFD and MFD.

    Gaussian Process Regression for In-situ Capacity Estimation of Lithium-ion Batteries

    Accurate on-board capacity estimation is of critical importance in lithium-ion battery applications. Battery charging/discharging often occurs under a constant current load, and hence voltage vs. time measurements under this condition may be accessible in practice. This paper presents a data-driven diagnostic technique, Gaussian Process regression for In-situ Capacity Estimation (GP-ICE), which estimates battery capacity using voltage measurements over short periods of galvanostatic operation. Unlike previous works, GP-ICE does not rely on interpreting the voltage-time data as Incremental Capacity (IC) or Differential Voltage (DV) curves. This overcomes the need to differentiate the voltage-time data (a process which amplifies measurement noise), and the requirement that the range of voltage measurements encompasses the peaks in the IC/DV curves. GP-ICE is applied to two datasets, consisting of 8 and 20 cells respectively. In each case, within certain voltage ranges, as little as 10 seconds of galvanostatic operation enables capacity estimates with approximately 2-3% RMSE. (Comment: 12 pages, 10 figures, submitted to IEEE Transactions on Industrial Informatics.)
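
    A minimal sketch of the regression step described above, assuming scikit-learn and purely synthetic data; the feature construction (raw voltage samples over a short constant-current window) and every number are illustrative stand-ins, not the GP-ICE pipeline itself:

    # Sketch: map a short galvanostatic voltage segment to capacity with a GP.
    # Synthetic data stands in for the cycling datasets used in the paper.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    n_cells, n_samples = 40, 10                  # 10 voltage samples per short window
    capacity = rng.uniform(0.8, 1.0, n_cells)    # normalized capacity (hypothetical)
    t = np.linspace(0.0, 1.0, n_samples)
    # Toy voltage traces whose shape depends weakly on capacity, plus sensor noise.
    X = 3.6 + 0.2 * np.outer(capacity, t) + 0.002 * rng.standard_normal((n_cells, n_samples))
    y = capacity

    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:30], y[:30])
    mean, std = gp.predict(X[30:], return_std=True)   # estimate with predictive uncertainty
    rmse = np.sqrt(np.mean((mean - y[30:]) ** 2))
    print(f"RMSE: {rmse:.4f}, mean predictive std: {std.mean():.4f}")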

    Calibration Probe Uncertainty and Validation for the Hypersonic Material Environmental Test System

    This paper presents an uncertainty analysis of the stagnation-point calibration probe surface predictions for conditions that span the performance envelope of the Hypersonic Materials Environmental Test System facility located at NASA Langley Research Center. A second-order stochastic expansion was constructed over 47 uncertain parameters to evaluate the sensitivities, identify the most significant uncertain variables, and quantify the uncertainty in the stagnation-point heat flux and pressure predictions of the calibration probe for a low- and a high-enthalpy test condition. A sensitivity analysis showed that measurement bias uncertainty is the most significant contributor to the stagnation-point pressure and heat flux variance for the low-enthalpy condition. For the high-enthalpy condition, a paradigm shift in sensitivities revealed the computational fluid dynamics model input uncertainty as the main contributor. A comparison between the prediction and measurement of the stagnation-point conditions under uncertainty showed evidence of statistical disagreement. A validation metric was proposed and applied to the prediction uncertainty to account for the statistical disagreement when compared to the possible stagnation-point heat flux and pressure measurements.
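
    A minimal sketch of the variance-based ranking step, assuming a throwaway three-parameter heat-flux function in place of the paper's 47-parameter stochastic expansion; the estimator is the standard Saltelli pick-and-freeze formula for first-order Sobol indices:

    # Sketch: rank uncertain inputs of a toy heat-flux model by first-order Sobol indices.
    import numpy as np

    rng = np.random.default_rng(1)
    d, N = 3, 20_000

    def heat_flux(x):
        # hypothetical surrogate response (bias, enthalpy, model input), purely illustrative
        return 50.0 * x[:, 0] + 10.0 * x[:, 1] ** 2 + 5.0 * x[:, 0] * x[:, 2]

    A = rng.uniform(0.0, 1.0, (N, d))
    B = rng.uniform(0.0, 1.0, (N, d))
    fA, fB = heat_flux(A), heat_flux(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # pick-and-freeze: swap column i
        S1 = np.mean(fB * (heat_flux(ABi) - fA)) / var
        print(f"first-order Sobol index, parameter {i}: {S1:.3f}")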

    Distribution-free stochastic simulation methodology for model updating under hybrid uncertainties

    In the real world, a significant challenge faced in the safe operation and maintenance of infrastructures is the lack of available information or data. This results in a large degree of uncertainty and the requirement for robust and efficient uncertainty quantification (UQ) tools in order to derive the most realistic estimates of the behavior of structures. While the probabilistic approach has long been utilized as an essential tool for the quantitative mathematical representation of uncertainty, a common criticism is that the approach often involves unsubstantiated subjective assumptions because of the scarcity or imprecision of available information. To avoid the inclusion of subjectivity, the concepts of imprecise probabilities have been developed, and the distributional probability-box (p-box) has gained the most attention among various types of imprecise probability models since it can straightforwardly provide a clear separation between aleatory and epistemic uncertainty. This thesis concerns the realistic consideration and numerically efficient calibration and propagation of aleatory and epistemic uncertainties (hybrid uncertainties) based on the distributional p-box. Recent developments, including the Bhattacharyya distance-based approximate Bayesian computation (ABC) and non-intrusive imprecise stochastic simulation (NISS) methods, have strengthened the subjective assumption-free approach for uncertainty calibration and propagation. However, these methods based on the distributional p-box rely on prior knowledge determining a specific distribution family for the p-box. The target of this thesis is hence to develop a distribution-free approach for the calibration and propagation of hybrid uncertainties, strengthening the subjective assumption-free UQ approach. To achieve the above target, this thesis presents five main developments to improve the Bhattacharyya distance-based ABC and NISS frameworks. The first development improves the scope of application and efficiency of the Bhattacharyya distance-based ABC. A dimension reduction procedure is proposed to evaluate the Bhattacharyya distance when the system under investigation is described by time-domain sequences. Moreover, an efficient Bayesian inference method within the Bayesian updating with structural reliability methods (BUS) framework is developed by combining BUS with the adaptive Kriging-based reliability method, namely AK-MCMC. The second development, a distribution-free stochastic model updating framework, is based on the combined application of the staircase density functions and the Bhattacharyya distance. The staircase density functions can approximate a wide range of distributions arbitrarily closely; hence this development makes it possible to perform the Bhattacharyya distance-based ABC without limiting hypotheses on the distribution families of the parameters to be updated. The aforementioned two developments are then integrated in the third development to provide a solution to the latest edition (2019) of the NASA UQ challenge problem. The model updating tasks under very challenging conditions, where prior information on the aleatory parameters is extremely limited other than a common boundary, are successfully addressed based on the above distribution-free stochastic model updating framework. Moreover, the NISS approach, which simplifies the high-dimensional optimization to a set of one-dimensional searches by a first-order high-dimensional model representation (HDMR) decomposition with respect to each design parameter, is developed to efficiently solve the reliability-based design optimization tasks. This challenge, at the same time, elucidates the limitations of the current developments; hence the fourth development aims at addressing the limitation that the staircase density functions are designed for univariate random variables and cannot account for parameter dependencies. In order to calibrate the joint distribution of correlated parameters, the distribution-free stochastic model updating framework is extended by characterizing the aleatory parameters using Gaussian copula functions with the staircase density functions as marginal distributions. This further strengthens the assumption-free approach for uncertainty calibration in which no prior information on the parameter dependencies is required. Finally, the fifth development, a distribution-free uncertainty propagation framework, is based on another application of the staircase density functions to the NISS class of methods, and it is applied to efficiently solve the reliability analysis subproblem of the NASA UQ challenge 2019. The above five developments have successfully strengthened the assumption-free approach for both uncertainty calibration and propagation thanks to the ability of the staircase density functions to approximate arbitrary distributions. The efficiency and effectiveness of these developments are demonstrated on real-world applications including the NASA UQ challenge 2019.
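
    A minimal sketch of the core metric behind the Bhattacharyya distance-based ABC, assuming binned (histogram) estimates of the measured and simulated output distributions; the bin count, sample generation and numbers are illustrative only, not the thesis setup:

    # Sketch: binned Bhattacharyya distance between measured and simulated samples.
    import numpy as np

    def bhattacharyya_distance(samples_obs, samples_sim, n_bins=20):
        lo = min(samples_obs.min(), samples_sim.min())
        hi = max(samples_obs.max(), samples_sim.max())
        edges = np.linspace(lo, hi, n_bins + 1)
        p, _ = np.histogram(samples_obs, bins=edges)
        q, _ = np.histogram(samples_sim, bins=edges)
        p = p / p.sum()
        q = q / q.sum()
        bc = np.sum(np.sqrt(p * q))               # Bhattacharyya coefficient
        return -np.log(max(bc, 1e-12))            # distance; 0 when the histograms coincide

    rng = np.random.default_rng(2)
    observed = rng.normal(1.0, 0.2, 500)           # stand-in for measured responses
    simulated = rng.normal(1.1, 0.25, 500)         # stand-in for model predictions
    print(f"Bhattacharyya distance: {bhattacharyya_distance(observed, simulated):.4f}")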

    Stochastic Model Updating with Uncertainty Quantification: An Overview and Tutorial

    This paper presents an overview of the theoretic framework of stochastic model updating, including critical aspects of model parameterisation, sensitivity analysis, surrogate modelling, test-analysis correlation, parameter calibration, etc. Special attention is paid to uncertainty analysis, which extends model updating from the deterministic domain to the stochastic domain. This extension is significantly promoted by uncertainty quantification metrics, which no longer describe the model parameters as unknown-but-fixed constants but as random variables with uncertain distributions, i.e. imprecise probabilities. As a result, stochastic model updating no longer aims at a single model prediction with maximum fidelity to a single experiment, but rather at a reduced uncertainty space of the simulation enveloping the complete scatter of multiple experimental data sets. Quantification of such an imprecise probability requires a dedicated uncertainty propagation process to investigate how the uncertainty space of the input is propagated via the model to the uncertainty space of the output. The two key aspects, forward uncertainty propagation and inverse parameter calibration, along with key techniques such as P-box propagation, statistical distance-based metrics, Markov chain Monte Carlo sampling, and Bayesian updating, are elaborated in this tutorial. The overall technical framework is demonstrated by solving the NASA Multidisciplinary UQ Challenge 2014, with the purpose of encouraging readers to reproduce the result by following this tutorial. The second practical demonstration is performed on a newly designed benchmark testbed, where a series of lab-scale aeroplane models are manufactured with varying geometry sizes, following pre-defined probabilistic distributions, and tested in terms of their natural frequencies and mode shapes. Such a measurement database naturally contains not only measurement errors but also, more importantly, controllable uncertainties from the pre-defined distributions of the structure geometry. Finally, open questions are discussed to fulfil the motivation of this tutorial in providing researchers, especially beginners, with further directions on stochastic model updating from an uncertainty treatment perspective.
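
    A minimal sketch of the P-box propagation idea mentioned above, assuming a toy response model and a single epistemic interval on the mean of one input; a double loop (epistemic outer, aleatory inner) produces bounds on an exceedance probability. Everything here is illustrative, not the tutorial's actual example:

    # Sketch: double-loop propagation of a distributional p-box.
    import numpy as np

    rng = np.random.default_rng(3)

    def response(x):
        return x ** 2 + 0.5 * x                  # stand-in simulation model

    threshold = 3.0
    mu_interval = (0.8, 1.2)                     # epistemic interval on the mean
    sigma = 0.3                                  # aleatory scatter, assumed known

    probs = []
    for mu in np.linspace(*mu_interval, 21):     # outer (epistemic) loop
        x = rng.normal(mu, sigma, 10_000)        # inner (aleatory) Monte Carlo loop
        probs.append(np.mean(response(x) > threshold))

    print(f"P(response > {threshold}) in [{min(probs):.4f}, {max(probs):.4f}]")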

    Computational methods for system optimization under uncertainty

    In this paper, Subproblems A, B and C of the NASA Langley Uncertainty Quantification (UQ) Challenge on Optimization Under Uncertainty are addressed. Subproblem A deals with the model calibration and (aleatory and epistemic) uncertainty quantification of a subsystem, where a characterization of the parameters of the subsystem is sought by resorting to a limited number (100) of observations. Bayesian inversion is proposed here to address this task. Subproblem B requires the identification and ranking of those (epistemic) parameters that are most effective in improving the predictive ability of the computational model of the subsystem (and, thus, that deserve a refinement of their uncertainty model). Two approaches are compared: the first is based on a sensitivity analysis within a factor prioritization setting, whereas the second employs the Energy Score (ES) as a multivariate generalization of the Continuous Ranked Probability Score (CRPS). Since the output of the subsystem is a function of time, both subproblems are addressed in the space defined by the orthonormal bases resulting from a Singular Value Decomposition (SVD) of the subsystem observations: in other words, a multivariate dynamic problem in the real domain is translated into a multivariate static problem in the SVD space. Finally, Subproblem C requires identifying the (epistemic) bounds on the reliability (resp., failure probability) of a given system design point. The issue is addressed by an efficient combination of: (i) Monte Carlo Simulation (MCS) to propagate the aleatory uncertainty described by probability distributions; and (ii) Genetic Algorithms (GAs) to solve the optimization problems related to the propagation of epistemic uncertainty by interval analysis.
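
    A minimal sketch of the MCS-plus-optimizer combination for Subproblem C, using scipy's differential evolution as a stand-in for the genetic algorithm; the limit-state function, the epistemic intervals and the fixed aleatory sample are illustrative only, not the challenge model:

    # Sketch: bound a failure probability over epistemic intervals, with Monte Carlo
    # handling the aleatory variables (common random numbers across evaluations).
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(4)
    u = rng.standard_normal((20_000, 2))          # fixed aleatory standard-normal sample

    def failure_probability(theta):
        # theta: epistemic parameters shifting the aleatory distributions
        x = u * np.array([0.5, 0.8]) + np.array(theta)
        g = 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2    # failure when g < 0
        return np.mean(g < 0.0)

    bounds = [(-0.5, 0.5), (-0.5, 0.5)]           # epistemic intervals
    lower = differential_evolution(failure_probability, bounds, seed=1).fun
    upper = -differential_evolution(lambda t: -failure_probability(t), bounds, seed=1).fun
    print(f"failure probability bounds: [{lower:.4f}, {upper:.4f}]")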

    Optimal Data Split Methodology for Model Validation

    The decision to incorporate cross-validation into validation processes of mathematical models raises an immediate question: how should one partition the data into calibration and validation sets? We answer this question systematically: we present an algorithm to find the optimal partition of the data subject to certain constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to predictions of a given quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This framework also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and the computational scientists, who strive to determine whether the model satisfies both the modeler's and the decision-maker's requirements. We also note that our framework is quite general and may be applied to a wide range of problems. Here, we illustrate it through a specific example involving a data reduction model for an ICCD camera from a shock-tube experiment located at the NASA Ames Research Center (ARC). (Comment: Submitted to the International Conference on Modeling, Simulation and Control 2011 (ICMSC'11), San Francisco, USA, 19-21 October 2011.)
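
    A minimal sketch of the idea of scoring candidate calibration/validation partitions, not the paper's actual optimization algorithm: a simple model is fit on each candidate calibration set, and the split that most challenges the model (largest validation residual) while keeping the calibration fit adequate is retained. The linear model, data and thresholds are illustrative assumptions:

    # Sketch: search over random partitions for a "challenging but admissible" split.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 1.0, 30)
    y = 2.0 * x + 0.3 * x ** 2 + 0.05 * rng.standard_normal(30)

    best = None
    for _ in range(200):                                  # candidate random partitions
        idx = rng.permutation(30)
        cal, val = idx[:20], idx[20:]
        coeffs = np.polyfit(x[cal], y[cal], deg=1)        # calibrate a simple model
        cal_err = np.sqrt(np.mean((np.polyval(coeffs, x[cal]) - y[cal]) ** 2))
        val_err = np.sqrt(np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2))
        if cal_err < 0.1 and (best is None or val_err > best[0]):
            best = (val_err, cal, val)                    # most challenging admissible split

    print(f"selected split challenges the model with validation RMSE {best[0]:.3f}")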

    A Computationally-Efficient Probabilistic Approach to Model-Based Damage Diagnosis

    This work presents a computationally-efficient, probabilistic approach to model-based damage diagnosis. Given measurement data, probability distributions of unknown damage parameters are estimated using Bayesian inference and Markov chain Monte Carlo (MCMC) sampling. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. While the formulation is general for arbitrary component geometry, damage type, and sensor data, it is applied to the problem of strain-based crack characterization and experimentally validated using full-field strain data from digital image correlation (DIC). Access to full-field DIC data facilitates the study of the effectiveness of strain-based diagnosis as the distance between the location of damage and strain measurements is varied. The ability of the framework to accurately estimate the crack parameters and effectively capture the uncertainty due to measurement proximity and experimental error is demonstrated. Furthermore, surrogate modeling is shown to enable diagnoses on the order of seconds and minutes rather than the several days required with the FE model.
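
    A minimal sketch of the Bayesian estimation step, assuming a throwaway analytical "surrogate" in place of the FE-trained surrogate, a single crack-length parameter, and synthetic strain data; noise level and priors are illustrative assumptions:

    # Sketch: Metropolis-Hastings sampling of a damage parameter given strain data.
    import numpy as np

    rng = np.random.default_rng(6)

    def surrogate_strain(crack_length, x_sensors):
        # hypothetical surrogate: strain concentration decaying with sensor distance
        return 1.0 + crack_length / (0.1 + x_sensors ** 2)

    x_sensors = np.linspace(0.1, 1.0, 8)
    true_a = 0.05
    data = surrogate_strain(true_a, x_sensors) + 0.02 * rng.standard_normal(8)

    def log_post(a):
        if not 0.0 < a < 0.2:                        # uniform prior bounds
            return -np.inf
        resid = data - surrogate_strain(a, x_sensors)
        return -0.5 * np.sum((resid / 0.02) ** 2)    # Gaussian likelihood

    chain, a = [], 0.1
    for _ in range(20_000):
        prop = a + 0.01 * rng.standard_normal()      # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(a):
            a = prop
        chain.append(a)

    samples = np.array(chain[5_000:])                # discard burn-in
    print(f"crack length estimate: {samples.mean():.3f} +/- {samples.std():.3f}")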

    Management of SSME hardware life utilization

    Statistical and probabilistic reliability methodologies were developed for the determination of hardware life limits for the Space Shuttle Main Engine (SSME). Both methodologies require that a mathematical reliability model of the engine (system) performance be developed as a function of the reliabilities of the components and parts. The system reliability model should be developed from the Failure Modes and Effects Analysis/Critical Items List. The statistical reliability methodology establishes hardware life limits directly from the failure distributions of the components and parts obtained from statistically designed testing. The probabilistic reliability methodology establishes hardware life limits from a decision analysis methodology which incorporates the component/part reliabilities obtained from a probabilistic structural analysis, a calibrated maintenance program, inspection techniques, and fabrication procedures. Probabilistic structural analysis is recommended as a tool to prioritize upgrading of the components and parts. The Weibull probability distribution is presently being investigated by NASA/MSFC to characterize the failure distribution of the SSME hardware from a limited database of failures.
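
    A minimal sketch of the Weibull life characterization and the component-to-system reliability roll-up, assuming SciPy, synthetic failure-time samples, and a simple series system; the component names, data and target life are hypothetical, not SSME values:

    # Sketch: fit Weibull life distributions and combine component reliabilities
    # into a series-system reliability at a target life.
    import numpy as np
    from scipy.stats import weibull_min

    target_life = 10.0                                   # hypothetical target life units

    # hypothetical failure-time samples for two components
    component_failures = [
        weibull_min.rvs(2.0, scale=30.0, size=25, random_state=1),
        weibull_min.rvs(1.5, scale=45.0, size=25, random_state=2),
    ]

    system_reliability = 1.0
    for failures in component_failures:
        shape, _, scale = weibull_min.fit(failures, floc=0)   # fix location at zero
        r = weibull_min.sf(target_life, shape, scale=scale)   # component reliability at target life
        system_reliability *= r                               # series system: all components survive
        print(f"shape={shape:.2f}, scale={scale:.1f}, R(target)={r:.3f}")

    print(f"system reliability at life {target_life}: {system_reliability:.3f}")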