
    Surrogate modelling and uncertainty quantification based on multi-fidelity deep neural network

    To reduce training costs, several deep neural networks (DNNs) that can learn from a small set of high-fidelity (HF) data and a sufficient number of low-fidelity (LF) data have been proposed. In these established networks, a parallel structure is commonly adopted to separately approximate the non-linear and linear correlations between the HF and LF data. In this paper, a new architecture of multi-fidelity deep neural network (MF-DNN) is proposed in which a single subnetwork approximates both the non-linear and linear correlations simultaneously. Rather than manually allocating output weights to parallel linear and non-linear correction networks, the proposed MF-DNN can autonomously learn an arbitrary correlation. The prediction accuracy of the proposed MF-DNN was first demonstrated by approximating 1-, 32- and 100-dimensional benchmark functions with either linear or non-linear correlations. The surrogate modelling results revealed that the MF-DNN exhibited excellent approximation capabilities for the test functions. Subsequently, the MF-DNN was deployed to simulate the 1-, 32- and 100-dimensional aleatory uncertainty propagation process under either uniform or Gaussian distributions of the input uncertainties. The uncertainty quantification (UQ) results validated that the MF-DNN efficiently predicted the probability density distributions of the quantities of interest (QoI) as well as their statistical moments without a significant compromise in accuracy. The MF-DNN was also deployed to model the physical flow of the turbine vane LS89. The distributions of the isentropic Mach number were well predicted by the MF-DNN based on the 2D Euler flow field and a few experimental measurement data points. The proposed MF-DNN should be promising for solving UQ and robust optimization problems in practical engineering applications with multi-fidelity data sources.
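    To make the single-correction-subnetwork idea concrete, the sketch below shows a minimal two-stage multi-fidelity DNN in PyTorch: a low-fidelity network is fitted on abundant LF samples, and one correction subnetwork maps (x, y_LF(x)) to the HF output, so the linear and non-linear parts of the LF-HF correlation are learned jointly by the same network. The toy 1-D functions, layer widths, sample sizes and training settings are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal multi-fidelity DNN sketch (assumed toy setup, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)

def f_low(x):   # cheap low-fidelity model (illustrative toy function)
    return 0.5 * torch.sin(8 * x) + 0.2 * x

def f_high(x):  # expensive high-fidelity model (illustrative toy function)
    return torch.sin(8 * x) + x ** 2

class MLP(nn.Module):
    def __init__(self, d_in, d_out, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, d_out),
        )
    def forward(self, x):
        return self.net(x)

# Stage 1: fit the LF surrogate on plentiful LF samples.
x_lf = torch.rand(200, 1)
lf_net = MLP(1, 1)
opt = torch.optim.Adam(lf_net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(lf_net(x_lf), f_low(x_lf))
    loss.backward()
    opt.step()

# Stage 2: a single correction subnetwork takes (x, y_LF) and predicts y_HF,
# so it can represent any mix of linear and non-linear correlation.
x_hf = torch.rand(20, 1)            # scarce HF samples
corr_net = MLP(2, 1)
opt = torch.optim.Adam(corr_net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    y_lf = lf_net(x_hf).detach()    # freeze the LF surrogate
    pred = corr_net(torch.cat([x_hf, y_lf], dim=1))
    loss = nn.functional.mse_loss(pred, f_high(x_hf))
    loss.backward()
    opt.step()

# Multi-fidelity prediction at new points.
x_test = torch.linspace(0, 1, 5).unsqueeze(1)
y_pred = corr_net(torch.cat([x_test, lf_net(x_test)], dim=1))
print(y_pred.squeeze().tolist())
```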

    Multi-fidelity modeling with different input domain definitions using Deep Gaussian Processes

    Multi-fidelity approaches improve prediction accuracy by combining models built on a scarce but accurate dataset (the high-fidelity dataset) with models built on a large but approximate one (the low-fidelity dataset). Gaussian processes (GPs) are among the most popular approaches for capturing the correlations between these fidelity levels. Deep Gaussian processes (DGPs), which are functional compositions of GPs, have also been adapted to the multi-fidelity setting through the multi-fidelity deep Gaussian process model (MF-DGP). This model increases the expressive power compared to GPs by considering non-linear correlations between fidelities within a Bayesian framework. However, these multi-fidelity methods consider only the case where the inputs of the different fidelity models are defined over the same domain (e.g., the same variables and the same dimensions). In practice, due to simplifications in the low-fidelity modeling, some variables may be omitted or a different parametrization may be used compared to the high-fidelity model. In this paper, the MF-DGP model is extended to the case where a different input parametrization is used for each fidelity level. The performance of the proposed multi-fidelity modeling technique is assessed on analytical test cases and on real structural and aerodynamic physical problems.
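    The sketch below illustrates the different-input-domain issue using a simple recursive Gaussian-process scheme in scikit-learn rather than the Bayesian MF-DGP of the paper: the low-fidelity model is parametrized by fewer variables than the high-fidelity one, so high-fidelity inputs are mapped onto the low-fidelity domain (here, simply by dropping a variable) before the low-fidelity prediction is fed to the high-fidelity GP as an extra feature. The toy functions, the omitted variable and the domain mapping are illustrative assumptions.

```python
# Recursive multi-fidelity GP sketch with mismatched input parametrizations
# (assumed toy setup; not the paper's MF-DGP implementation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def f_low(x1):            # LF model parametrized by x1 only
    return np.sin(2 * np.pi * x1)

def f_high(x1, x2):       # HF model depends on both x1 and x2
    return np.sin(2 * np.pi * x1) + 0.5 * x2 ** 2

# Abundant LF data on the LF input domain.
x_lf = rng.uniform(0, 1, size=(100, 1))
gp_lf = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp_lf.fit(x_lf, f_low(x_lf[:, 0]))

# Scarce HF data on the HF input domain (x1, x2).  HF inputs are mapped to
# the LF domain (drop x2), and the LF prediction becomes an extra feature
# for the HF-level GP.
x_hf = rng.uniform(0, 1, size=(15, 2))
y_hf = f_high(x_hf[:, 0], x_hf[:, 1])
lf_feature = gp_lf.predict(x_hf[:, [0]]).reshape(-1, 1)
gp_hf = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp_hf.fit(np.hstack([x_hf, lf_feature]), y_hf)

# Multi-fidelity prediction at new HF points.
x_new = rng.uniform(0, 1, size=(5, 2))
lf_new = gp_lf.predict(x_new[:, [0]]).reshape(-1, 1)
print(gp_hf.predict(np.hstack([x_new, lf_new])))
```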