
    Kernel-based stochastic collocation for the random two-phase Navier-Stokes equations

    In this work, we apply stochastic collocation methods with radial kernel basis functions to the uncertainty quantification of the random incompressible two-phase Navier-Stokes equations. Our approach is non-intrusive: we use the existing fluid dynamics solver NaSt3DGPF to solve the incompressible two-phase Navier-Stokes equations for each given realization. We show empirically that the resulting kernel-based stochastic collocation is highly competitive in this setting and even outperforms some other standard methods.
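    The non-intrusive setup described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the expensive NaSt3DGPF solver is replaced by a cheap stand-in function, and the kernel choice and sampling scheme below are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def solver_qoi(xi):
    # Stand-in for a scalar quantity of interest returned by the flow solver
    # for one realization xi of the random input (illustrative assumption).
    return np.sin(3.0 * xi[..., 0]) + 0.5 * xi[..., 1] ** 2

# Collocation nodes in a 2D random parameter space
nodes = rng.uniform(-1.0, 1.0, size=(50, 2))
values = solver_qoi(nodes)

# Radial kernel surrogate interpolating the collocation data
surrogate = RBFInterpolator(nodes, values, kernel="thin_plate_spline")

# Statistics of the QoI are then estimated cheaply from the surrogate
samples = rng.uniform(-1.0, 1.0, size=(10000, 2))
mean_estimate = surrogate(samples).mean()
```

    Any radial kernel supported by `RBFInterpolator` (e.g. `"gaussian"`, with a shape parameter) could be swapped in; the paper's specific kernel is not reproduced here.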

    Robustness analysis of VEGA launcher model based on effective sampling strategy

    An efficient robustness analysis of the VEGA launch vehicle is essential to minimize potential system failures during the ascent phase. Monte Carlo sampling is usually considered a reliable strategy in industry, provided the sample size is large enough. However, due to the large number of uncertainties and the long response time of a single simulation, sufficiently exploring the entire uncertainty space with Monte Carlo sampling is impractical for the VEGA launch vehicle. To make the robustness analysis more efficient when the number of simulations is limited, quasi-Monte Carlo methods (Sobol, Faure, and Halton sequences) and a heuristic algorithm (Differential Evolution) are proposed. Nevertheless, the feasible number of simulations remains much smaller than the minimal number of samples required for sufficient exploration. To further improve efficiency, redundant uncertainties are screened out by sensitivity analysis, and only the dominant uncertainties are retained in the robustness analysis. Since all simulation samples are discrete, many regions of the uncertainty space are never explored with respect to the objective function by sampling or optimization methods. To exploit this latent information, a meta-model trained by Gaussian process regression is introduced. Based on the meta-model, the expected maximum objective value and the expected sensitivity of each uncertainty can be computed, making the robustness analysis much more efficient without much loss of accuracy.
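    The gap between plain Monte Carlo and the quasi-Monte Carlo sequences mentioned above can be illustrated on a toy integrand. The VEGA simulation itself is of course not reproduced; the smooth objective below is an assumption standing in for one simulation response.

```python
import numpy as np
from scipy.stats import qmc

def objective(x):
    # Toy smooth response over the unit hypercube (stand-in for a simulation)
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

d, n = 5, 1024                                  # 5 uncertainties, 1024 runs
rng = np.random.default_rng(1)

mc_points = rng.random((n, d))                                # plain Monte Carlo
sobol_points = qmc.Sobol(d, scramble=True, seed=1).random(n)  # quasi-Monte Carlo

mc_estimate = objective(mc_points).mean()
qmc_estimate = objective(sobol_points).mean()
# The exact mean of this objective is 1.0; at the same budget the scrambled
# Sobol estimate is typically much closer to it than the Monte Carlo one.
```

    `scipy.stats.qmc.Halton` provides the Halton sequence through the same interface; the low-discrepancy structure of these point sets is what buys accuracy at small, fixed simulation budgets.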

    Discrepancy-based Inference for Intractable Generative Models using Quasi-Monte Carlo

    Intractable generative models are models for which the likelihood is unavailable but sampling is possible. Most approaches to parameter inference in this setting require the computation of some discrepancy between the data and the generative model. This is, for example, the case for minimum distance estimation and approximate Bayesian computation. These approaches require sampling a large number of realisations from the model for different parameter values, which can be a significant challenge when simulating is an expensive operation. In this paper, we propose to enhance this approach by enforcing "sample diversity" in simulations of our models, implemented through the use of quasi-Monte Carlo (QMC) point sets. Our key results are sample complexity bounds which demonstrate that, under smoothness conditions on the generator, QMC can significantly reduce the number of samples required to obtain a given level of accuracy when using three of the most common discrepancies: the maximum mean discrepancy, the Wasserstein distance, and the Sinkhorn divergence. This is complemented by a simulation study which highlights that improved accuracy is sometimes also possible in settings not covered by the theory.
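    A minimal sketch of the idea, using a simple location model in place of a genuinely intractable generator: uniform QMC points are pushed through the generator and compared to data via the maximum mean discrepancy. The generator, kernel bandwidth, and grid search below are all illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.stats import norm, qmc

def generator(theta, u):
    # Toy "simulator": push uniform points through the normal quantile function
    return theta + norm.ppf(u)

def mmd_sq(x, y, bandwidth=1.0):
    # Biased (V-statistic) squared MMD between 1D samples, Gaussian kernel
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=256)        # observed data, true theta = 1.5

# One fixed set of scrambled Sobol points reused across all parameter values
u = qmc.Sobol(1, scramble=True, seed=2).random(256).ravel()

# Minimum-MMD estimation over a parameter grid
thetas = np.linspace(0.0, 3.0, 31)
losses = [mmd_sq(generator(t, u), data) for t in thetas]
theta_hat = thetas[int(np.argmin(losses))]
```

    Reusing one low-discrepancy point set across parameter values is what enforces the "sample diversity" of the simulations; with plain i.i.d. uniforms the loss surface would be noisier at the same sample size.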

    Surrogate Modeling of Aerodynamic Simulations for Multiple Operating Conditions Using Machine Learning

    This paper describes a methodology, called the local decomposition method, which aims at building a surrogate model based on steady turbulent aerodynamic fields at multiple operating conditions. The varied shapes taken by the aerodynamic fields across the operating conditions pose real challenges, as does the computational cost of the high-fidelity simulations. The developed strategy mitigates these issues by combining traditional surrogate models and machine learning. The central idea is to separate the solutions with a subsonic behavior from the transonic and high-gradient solutions. First, a shock sensor extracts a feature corresponding to the presence of discontinuities, easing the clustering of the simulations by an unsupervised learning algorithm. Second, a supervised learning algorithm divides the parameter space into subdomains associated with different flow regimes. Local reduced-order models are built on each subdomain using proper orthogonal decomposition coupled with a multivariate interpolation tool. Finally, an improved resampling technique taking advantage of the subdomain decomposition minimizes the redundancy of sampling. The methodology is assessed on the turbulent two-dimensional flow around the RAE2822 transonic airfoil and exhibits a significant improvement in prediction accuracy compared with the classical method of surrogate modeling.
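    The pipeline above (shock sensor, clustering, then local POD bases) can be sketched on synthetic one-dimensional "snapshots". Everything below, from the tanh shock profiles to the two-cluster threshold, is an illustrative assumption rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic snapshot matrix: 40 "flow fields" of 200 points each; half smooth
# (subsonic-like), half with a sharp jump (transonic-like).
x = np.linspace(0.0, 1.0, 200)
smooth = np.array([np.sin(2 * np.pi * f * x) for f in rng.uniform(1, 2, 20)])
shocked = np.array([np.tanh(80 * (x - c)) for c in rng.uniform(0.3, 0.7, 20)])
snapshots = np.vstack([smooth, shocked])

# Crude shock sensor: maximum spatial gradient of each snapshot
sensor = np.abs(np.diff(snapshots, axis=1)).max(axis=1)
labels = (sensor > np.median(sensor)).astype(int)   # two clusters by threshold

def pod_basis(S, r=3):
    # Local POD: r dominant spatial modes of the centered snapshot cluster
    _, _, vt = np.linalg.svd(S - S.mean(axis=0), full_matrices=False)
    return vt[:r]

bases = {c: pod_basis(snapshots[labels == c]) for c in (0, 1)}
```

    In the paper the sensor feeds an unsupervised clustering and a supervised classifier splits the parameter space; the simple median threshold here stands in for both, and a multivariate interpolator over the POD coefficients would complete each local reduced-order model.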