
    Generalized Spectral Decomposition for Stochastic Non Linear Problems

    We present an extension of the Generalized Spectral Decomposition method to the solution of non-linear stochastic problems. The method consists in constructing a reduced basis approximation of the Galerkin solution and is independent of the stochastic discretization selected (polynomial chaos, stochastic multi-elements, or multiwavelets). Two algorithms are proposed for the sequential construction of the successive generalized spectral modes. They involve decoupled solutions of a series of deterministic and low-dimensional stochastic problems. Compared to the classical Galerkin method, the algorithms allow for significant computational savings and require only minor adaptations of the deterministic codes. The methodology is detailed and tested on two model problems: the one-dimensional steady viscous Burgers equation and a two-dimensional non-linear diffusion problem. These examples demonstrate the effectiveness of the proposed algorithms, whose convergence rates with the number of modes depend essentially on the spectrum of the stochastic solution but are independent of the dimension of the stochastic approximation space.
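
    The alternating structure of such algorithms can be illustrated on a toy problem. The sketch below is an assumption of this write-up, not the authors' code: it applies a power-type Generalized Spectral Decomposition to a parametrized linear system A(xi) u = b with A(xi) = A0 + xi*A1, handling the stochastic dimension by simple collocation on samples of xi. The matrices, sample count, and number of modes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # deterministic dimension (illustrative)
A0 = np.diag(2.0 + np.arange(n))         # mean operator (diagonal SPD, for simplicity)
A1 = np.diag(np.linspace(0.0, 1.0, n))   # operator multiplying the random parameter
b = np.ones(n)
xi = rng.uniform(-1.0, 1.0, 200)         # collocation samples of the random parameter

def approx_at_samples(U_modes, lam_modes):
    """Current rank-m approximation u(xi_k) = sum_i lam_i(xi_k) U_i, per sample."""
    if not U_modes:
        return np.zeros((xi.size, n))
    return sum(l[:, None] * u[None, :] for l, u in zip(lam_modes, U_modes))

U_modes, lam_modes = [], []
for _ in range(5):                       # number of generalized spectral modes
    cur = approx_at_samples(U_modes, lam_modes)
    res = np.array([b - (A0 + x * A1) @ cur[k] for k, x in enumerate(xi)])  # deflated residual
    lam = np.ones(xi.size)
    for _ in range(10):                  # power-type alternating iterations
        # deterministic problem: one averaged linear system, lam fixed
        A_bar = np.mean(lam**2) * A0 + np.mean(lam**2 * xi) * A1
        U = np.linalg.solve(A_bar, np.mean(lam[:, None] * res, axis=0))
        U /= np.linalg.norm(U)
        # stochastic problem: one scalar Galerkin equation per sample, U fixed
        lam = np.array([(U @ res[k]) / (U @ ((A0 + x * A1) @ U))
                        for k, x in enumerate(xi)])
    U_modes.append(U)
    lam_modes.append(lam)

final = approx_at_samples(U_modes, lam_modes)
res_norm = np.mean([np.linalg.norm((A0 + x * A1) @ final[k] - b) for k, x in enumerate(xi)])
print(f"mean residual norm with {len(U_modes)} modes: {res_norm:.3e}")
```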

    Adaptive Anisotropic Spectral Stochastic Methods for Uncertain Scalar Conservation Laws

    This paper deals with the design of adaptive anisotropic discretization schemes for conservation laws with stochastic parameters. A Finite Volume scheme is used for the deterministic discretization, while a piecewise polynomial representation is used at the stochastic level. The methodology is designed in the context of intrusive Galerkin projection methods with a Roe-type solver. The adaptation aims at selecting the stochastic resolution level based on the local smoothness of the solution in the stochastic domain. In addition, because the stochastic features of the solution vary greatly in space and time, the constructed stochastic approximation space is allowed to depend on space and time. The dynamically evolving stochastic discretization uses a tree-structure representation that allows for the efficient implementation of the various operators needed to perform the anisotropic multiresolution analysis. The efficiency of the overall adaptive scheme is assessed on the stochastic traffic equation with uncertain initial conditions and velocity, leading to expansion waves and shocks that propagate with random velocities. Numerical tests highlight the computational savings achieved as well as the benefit of using anisotropic discretizations when dealing with problems involving a larger number of stochastic parameters.
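
    A rough illustration of the anisotropic, tree-structured adaptation follows. It is a sketch of the general idea only, not the paper's Finite Volume/Galerkin implementation: each leaf cell of the parameter domain is split only along the direction whose half-cell "detail" exceeds a tolerance, so a shock-like dependence on one parameter is refined much more than a smooth dependence on another. The response function, tolerance, and depth limit are assumptions.

```python
import numpy as np

def cell_average(f, lo, hi, n=8):
    """Tensor-product midpoint estimate of the average of f over the box [lo, hi]."""
    axes = [lo[d] + (hi[d] - lo[d]) * (np.arange(n) + 0.5) / n for d in range(lo.size)]
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, lo.size)
    return np.mean([f(p) for p in pts])

def refine(f, lo, hi, tol, max_depth, depth=0):
    """Return leaves (lo, hi, average) of an anisotropically refined binary tree."""
    avg = cell_average(f, lo, hi)
    if depth == max_depth:
        return [(lo, hi, avg)]
    details = []                               # per-direction jump between half-cell averages
    for d in range(lo.size):
        mid = 0.5 * (lo[d] + hi[d])
        left_hi, right_lo = hi.copy(), lo.copy()
        left_hi[d], right_lo[d] = mid, mid
        details.append(abs(cell_average(f, lo, left_hi) - cell_average(f, right_lo, hi)))
    d = int(np.argmax(details))
    if details[d] < tol:                       # locally smooth: keep this leaf
        return [(lo, hi, avg)]
    mid = 0.5 * (lo[d] + hi[d])                # split only along the least-smooth direction
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[d], right_lo[d] = mid, mid
    return (refine(f, lo, left_hi, tol, max_depth, depth + 1)
            + refine(f, right_lo, hi, tol, max_depth, depth + 1))

# Toy response: shock-like in xi_1, nearly flat in xi_2, so the tree refines xi_1 only.
f = lambda xi: np.tanh(20.0 * (xi[0] - 0.5)) + 0.05 * xi[1]
leaves = refine(f, np.zeros(2), np.ones(2), tol=0.05, max_depth=8)
print(f"{len(leaves)} leaves in the adapted anisotropic partition")
```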

    Study of overland flow with uncertain infiltration using stochastic tools

    The saturated hydraulic conductivity is one of the key parameters in the modelling of overland flow water fluxes. In this study, this parameter is treated as stochastic, idealized as a piecewise-constant random field with uniform distribution. The paper aims at investigating the effects of the spatial and temporal scales on uncertainty propagation within overland flow models, and at identifying, through sensitivity analysis, the location of the most influential saturated hydraulic conductivity. The results show that the influence of the saturated hydraulic conductivity depends on the soil saturation and on its spatial location. For instance, for weakly saturated soils the most influential parameter is the one located downslope, whereas for highly saturated soils the most influential one is either the most infiltrating or the intermediate one. The results indicate where efforts should be concentrated when collecting input parameters in order to reduce modelling uncertainties.
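
    A minimal sketch of this kind of study, with an assumed toy runoff cascade standing in for the actual overland flow model: the saturated hydraulic conductivity Ks is a piecewise-constant, uniformly distributed random variable over three zones (upslope, intermediate, downslope), and first-order Sobol sensitivity indices of the runoff are estimated with a standard pick-freeze estimator. The rainfall rate, Ks bounds, and cascade model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
rain = 30.0                                   # assumed rainfall rate on each zone (mm/h)

def runoff(ks):
    """Toy cascade: runon from upslope adds to rain; the excess over Ks runs off."""
    flow = 0.0
    for k in ks:                              # zones ordered upslope -> downslope
        flow = max(flow + rain - k, 0.0)      # infiltration capped at Ks in each zone
    return flow

def sobol_first_order(n=20000, lo=5.0, hi=40.0):
    """First-order Sobol indices of the runoff w.r.t. the three zonal Ks values."""
    A = rng.uniform(lo, hi, (n, 3))           # uniform Ks samples, one column per zone
    B = rng.uniform(lo, hi, (n, 3))
    yA = np.array([runoff(x) for x in A])
    var = yA.var()
    S = []
    for i in range(3):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                   # freeze zone i, resample the other zones
        yABi = np.array([runoff(x) for x in ABi])
        S.append(np.mean(yA * (yABi - yA.mean())) / var)
    return S

print(dict(zip(["upslope", "intermediate", "downslope"],
               np.round(sobol_first_order(), 3))))
```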

    Roe Solver with Entropy Corrector for Uncertain Hyperbolic Systems

    This paper deals with intrusive Galerkin projection methods with a Roe-type solver for uncertain hyperbolic systems, using a finite volume discretization in physical space and a piecewise continuous representation at the stochastic level. The aim of the paper is to design a cost-effective adaptation of the deterministic Dubois and Mehlman corrector to avoid entropy-violating shocks in the presence of sonic points. The adaptation relies on an estimate of the eigenvalues and eigenvectors of the Galerkin Jacobian matrix of the deterministic system governing the stochastic modes of the solution, and on a correspondence between these approximate eigenvalues and eigenvectors for the intermediate states considered at the interface. Indicators are derived to decide where a correction is needed, thereby considerably reducing the computational costs. The effectiveness of the proposed corrector is assessed on the Burgers and Euler equations, including cases with sonic points.
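
    The deterministic ingredient can be sketched as follows. This is an assumed illustration on the scalar Burgers equation using a Harten-type entropy fix in place of the Dubois and Mehlman corrector, and it does not reproduce the paper's actual contribution, namely the extension of such a corrector to the eigenstructure of the stochastic Galerkin Jacobian.

```python
import numpy as np

f = lambda u: 0.5 * u**2                       # Burgers flux, with wave speed f'(u) = u

def roe_flux(uL, uR):
    """Roe flux for Burgers with a Harten-type entropy correction near sonic points."""
    a = 0.5 * (uL + uR)                        # Roe-averaged wave speed
    visc = abs(a)
    if uL < 0.0 < uR:                          # sonic point: |a| alone is not dissipative enough
        delta = max(uR - uL, 1e-12)
        visc = (a * a + delta * delta) / (2.0 * delta)
    return 0.5 * (f(uL) + f(uR)) - 0.5 * visc * (uR - uL)

# Transonic rarefaction test (uL = -1, uR = +1): without the fix the scheme keeps an
# entropy-violating stationary shock; with it, the state spreads into a fan.
x = np.linspace(-1.0, 1.0, 200)
dx = x[1] - x[0]
dt = 0.4 * dx
u = np.where(x < 0.0, -1.0, 1.0)
for _ in range(200):
    F = np.array([roe_flux(u[i], u[i + 1]) for i in range(u.size - 1)])
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])
print("value near x=0 after t=%.2f: %.3f (close to 0 for the entropic fan)"
      % (200 * dt, u[u.size // 2]))
```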

    Numerical approximation of poroelasticity with random coefficients using Polynomial Chaos and Hybrid High-Order methods

    In this work, we consider the Biot problem with uncertain poroelastic coefficients. The uncertainty is modelled using a finite set of parameters with prescribed probability distribution. We present the variational formulation of the stochastic partial differential system and establish its well-posedness. We then discuss the approximation of the parameter-dependent problem by non-intrusive techniques based on Polynomial Chaos decompositions. We specifically focus on sparse spectral projection methods, which essentially amount to performing an ensemble of deterministic model simulations to estimate the expansion coefficients. The deterministic solver is based on a Hybrid High-Order discretization supporting general polyhedral meshes and arbitrary approximation orders. We numerically investigate the convergence of the probability error of the Polynomial Chaos approximation with respect to the level of the sparse grid. Finally, we assess the propagation of the input uncertainty onto the solution by considering an injection-extraction problem.
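
    The non-intrusive projection step can be sketched in a few lines. The sketch below assumes, for illustration, a single uniform parameter and a plain Gauss-Legendre rule instead of a sparse grid, and a scalar stand-in for the Hybrid High-Order solve of the Biot problem: each deterministic run is performed at a quadrature node, and the Polynomial Chaos coefficients are recovered as weighted sums.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def model(xi):
    """Placeholder for one deterministic solver run at parameter value xi in [-1, 1]."""
    return np.exp(0.3 * xi) / (1.1 + xi)

order = 8
nodes, weights = leggauss(order + 1)          # Gauss-Legendre rule on [-1, 1]
evals = np.array([model(x) for x in nodes])   # ensemble of deterministic simulations

# Pseudo-spectral projection onto Legendre polynomials (orthogonal for a uniform input).
coeffs = []
for k in range(order + 1):
    Pk = Legendre.basis(k)
    norm2 = 2.0 / (2 * k + 1)                 # integral of Pk^2 over [-1, 1]
    coeffs.append(np.sum(weights * evals * Pk(nodes)) / norm2)
coeffs = np.array(coeffs)

# First two moments follow directly from the coefficients (density 1/2 on [-1, 1]).
mean = coeffs[0]                                                    # E[P0] = 1, E[Pk] = 0 otherwise
var = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))   # E[Pk^2] = 1/(2k+1)
print(f"PC mean = {mean:.5f}, PC std = {np.sqrt(var):.5f}")
```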

    Model reduction based on proper generalized decomposition for the steady incompressible Navier-Stokes equations

    In this paper we consider a Proper Generalized Decomposition method to solve the steady incompressible Navier–Stokes equations with random Reynolds number and forcing term. The aim of this technique is to compute a low-cost reduced basis approximation of the full Stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of pre-existing deterministic Navier–Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis, rather than the M coupled problems of the full Stochastic Galerkin approximation space, with m << M (up to one order of magnitude for the problem considered in this work).
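
    A schematic version of the decoupling, on an assumed parametrized linear system rather than the Navier–Stokes equations: an Arnoldi-like recurrence builds the deterministic reduced basis with one uncoupled solve (with the mean operator) per mode, after which the parameter dependence is recovered from small m-by-m reduced Galerkin systems. The matrices and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 6
Q = rng.standard_normal((n, n))
A0 = Q @ Q.T + n * np.eye(n)              # mean operator (SPD)
A1 = np.diag(rng.uniform(0.0, 1.0, n))    # parametric perturbation
b = rng.standard_normal(n)

# Arnoldi-like construction of an m-dimensional deterministic basis: each new mode
# costs one deterministic solve with the mean operator A0.
basis = []
v = np.linalg.solve(A0, b)
for _ in range(m):
    for w in basis:                       # Gram-Schmidt orthonormalization
        v = v - (w @ v) * w
    v /= np.linalg.norm(v)
    basis.append(v)
    v = np.linalg.solve(A0, A1 @ v)       # next search direction
V = np.array(basis).T                     # n x m reduced basis

# Stochastic part: a small m x m reduced Galerkin system per parameter value.
for xi in (-0.9, 0.0, 0.9):
    A = A0 + xi * A1
    c = np.linalg.solve(V.T @ A @ V, V.T @ b)
    err = np.linalg.norm(A @ (V @ c) - b) / np.linalg.norm(b)
    print(f"xi = {xi:+.1f}: relative residual of the reduced solution = {err:.2e}")
```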

    Bayesian Inference of Model Error in Imprecise Models

    Modern science makes use of computer models to reproduce and predict complex physical systems. Every model involves parameters, which can either be measured experimentally (e.g., the mass of a solid) or not (e.g., the coefficients of the k-ε turbulence model). The latter can be inferred from experimental data through a procedure called calibration of the computer model. However, some models may not be able to represent reality accurately because of their limited structure: this is the definition of model error. The "best value" of the parameters of a model is traditionally defined as the best fit to the data. It depends on the experiment, on the quantities of interest considered, and also on the assumed underlying statistical structure of the error. Bayesian methods allow the model to be calibrated while taking its error into account. The fit to the data is balanced against the complexity of the model, following Occam's principle. Kennedy and O'Hagan's innovative method [1], which represents model error with a Gaussian process, is a reference in this field. Recently, Tuo and Wu [3] proposed a frequentist addition to this method to deal with the identifiability problem between model error and calibration error. Plumlee [2] applied the method to simple situations and demonstrated the potential of the approach. In this work, we compare Kennedy and O'Hagan's method with its frequentist version, which involves an optimization problem, on several numerical examples with varying degrees of model error. The calibration provides estimates of the model parameters and model predictions, while also inferring the model error within observed and unobserved parts of the experimental design space. The case of non-linear, costly computer models is also considered, and we propose a new algorithm to reduce the numerical complexity of Bayesian calibration techniques.
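
    The core of the Kennedy and O'Hagan formulation can be sketched on an assumed one-dimensional toy problem with fixed Gaussian-process hyperparameters: observations are modelled as computer model plus GP discrepancy plus noise, the discrepancy is marginalized analytically, and the posterior of the calibration parameter is evaluated on a grid under a flat prior. All names and values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 15)                       # experimental design points
model = lambda x, theta: theta * x                  # imprecise computer model
truth = lambda x: 2.0 * x + 0.3 * np.sin(4.0 * x)   # "reality" = model + structural error
sigma_n = 0.05
y = truth(x) + sigma_n * rng.standard_normal(x.size)

def sq_exp_kernel(x1, x2, var=0.2**2, ell=0.3):     # GP prior on the model discrepancy
    return var * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ell) ** 2)

K = sq_exp_kernel(x, x) + sigma_n**2 * np.eye(x.size)
Kinv = np.linalg.inv(K)
logdetK = np.linalg.slogdet(K)[1]

def log_marginal_likelihood(theta):
    """log p(y | theta) with the Gaussian-process discrepancy integrated out."""
    r = y - model(x, theta)
    return -0.5 * (r @ Kinv @ r + logdetK + x.size * np.log(2.0 * np.pi))

thetas = np.linspace(1.0, 3.0, 201)                 # flat prior on this range
logpost = np.array([log_marginal_likelihood(t) for t in thetas])
post = np.exp(logpost - logpost.max())
dtheta = thetas[1] - thetas[0]
post /= post.sum() * dtheta                         # normalize the gridded posterior
mean_theta = np.sum(thetas * post) * dtheta
print(f"posterior mean of theta = {mean_theta:.3f} (data generated with slope 2 plus model error)")
```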

    A Bayesian Approach for Quantile Optimization Problems with High-Dimensional Uncertainty Sources

    Robust optimization strategies typically aim at minimizing some statistic of the uncertain objective function and can be expensive to solve when that statistic is costly to estimate at each design point. Surrogate models of the uncertain objective function can be used to reduce this computational cost. However, such surrogate approaches classically require a low-dimensional parametrization of the uncertainties, limiting their applicability. This work concentrates on the minimization of the quantile and on the direct construction of a quantile regression model over the design space from a limited number of training samples. A Bayesian quantile regression procedure is employed to construct the full posterior distribution of the quantile model. By sampling this distribution, we can assess the estimation error and adjust the complexity of the regression model to the available data. The Bayesian regression is embedded in a Bayesian optimization procedure, which sequentially generates new samples to improve the determination of the minimum of the quantile. Specifically, the sample infill strategy uses optimal points of a sample set of the quantile estimator. The optimization method is tested on simple analytical functions to demonstrate its convergence to the global optimum. The robust design of an airfoil’s shock control bump under high-dimensional geometrical and operational uncertainties demonstrates the capability of the method to handle problems of industrial relevance. Finally, we provide recommendations for future developments and improvements of the method.
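
    The central ingredient, a quantile model fitted directly over the design space from scattered evaluations, can be sketched as follows. This is a frequentist pinball-loss fit on an assumed toy objective with a one-dimensional design variable and a 50-dimensional uncertainty; the paper's Bayesian posterior on the quantile model and its infill strategy are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
alpha = 0.9                                          # quantile level to be minimized

def objective(d, xi):
    """Uncertain objective: scalar design d, high-dimensional uncertainty vector xi."""
    return (d - 0.3) ** 2 + 0.5 * np.abs(d) * xi.sum() / np.sqrt(xi.size)

# Limited training set: random designs, each with a single draw of the uncertainties.
designs = rng.uniform(-1.0, 1.0, 80)
samples = np.array([objective(d, rng.standard_normal(50)) for d in designs])

def features(d):
    """Simple polynomial regression basis over the design variable."""
    return np.column_stack([np.ones_like(d), d, d**2, d**3, d**4])

def pinball_loss(beta):
    r = samples - features(designs) @ beta
    return np.mean(np.maximum(alpha * r, (alpha - 1.0) * r))

beta = minimize(pinball_loss, np.zeros(5), method="Nelder-Mead",
                options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10}).x

grid = np.linspace(-1.0, 1.0, 401)
q_model = features(grid) @ beta                      # fitted quantile model over the design space
print(f"design minimizing the alpha={alpha} quantile model: d = {grid[np.argmin(q_model)]:.3f}")
```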

    Experimental and numerical trimming optimizations for a mainsail in upwind conditions

    This paper investigates the use of meta-models for optimizing sail trimming. A Gaussian process is used to robustly approximate the dependence of the performance on the trimming parameters to be optimized. The Gaussian process construction uses a limited number of performance observations at carefully selected trimming points, potentially enabling the optimization of complex sail systems with multiple trimming parameters. We test the optimization procedure on the two-parameter trimming of a scaled IMOCA mainsail in upwind conditions. To assess the robustness of the Gaussian process approach, in particular its sensitivity to error and noise in the performance estimation, we contrast the direct optimization of the physical system with the optimization of its numerical model. For the physical system, the optimization procedure was fed with wind tunnel measurements, while the numerical model relied on a fully non-linear Fluid-Structure Interaction solver. The results show good agreement between the optimized trimming parameters of the physical and numerical models, despite the inherent errors of the numerical model and the measurement uncertainties. In addition, the number of performance estimations was found to be affordable and comparable in the two cases, demonstrating the effectiveness of the approach.
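
    The surrogate-based trimming loop can be sketched with an assumed analytical performance function standing in for the wind tunnel or Fluid-Structure Interaction evaluations: a Gaussian process is fit to the observed trimmings and an expected-improvement criterion (a standard EGO-type infill, implemented here with scikit-learn) selects the next trimming point. The two parameters, their ranges, and the noise level are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(6)

def performance(trim):
    """Stand-in for a noisy measured/computed drive force at a given trimming."""
    sheet, traveller = trim
    return -((sheet - 0.6) ** 2 + 0.5 * (traveller - 0.4) ** 2) + 0.01 * rng.standard_normal()

X = rng.uniform(0.0, 1.0, (8, 2))          # initial design of experiments (two trim parameters)
y = np.array([performance(t) for t in X])
cand = rng.uniform(0.0, 1.0, (2000, 2))    # candidate trimming points

for _ in range(15):                        # sequential infill iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True,
                                  alpha=1e-4).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    imp = mu - y.max()                     # improvement over the current best (maximization)
    z = imp / np.maximum(sd, 1e-12)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    x_new = cand[np.argmax(ei)]            # next trimming: maximum expected improvement
    X = np.vstack([X, x_new])
    y = np.append(y, performance(x_new))

print("best trimming found:", np.round(X[np.argmax(y)], 3),
      "with performance:", round(float(y.max()), 4))
```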