
    Adaptive selection of sampling points for uncertainty quantification

    We present a simple and robust strategy for selecting sampling points in Uncertainty Quantification. The goal is to achieve the fastest possible convergence in the cumulative distribution function of a stochastic output of interest. We assume that the output of interest is the outcome of a computationally expensive nonlinear mapping of an input random variable whose probability density function is known. We use a basis of radial functions to construct an accurate interpolant of the mapping. This strategy enables new sampling points to be added one at a time, adaptively, taking full account of previous evaluations of the target nonlinear function. We present comparisons with a stochastic collocation method based on the Clenshaw-Curtis quadrature rule and with an adaptive method based on hierarchical surplus, showing that the new method often yields a large computational saving.

    Comment: 22 pages, 15 figures; to appear in Int. J. Uncertainty Quantification
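    The adaptive idea described above can be sketched in a few lines: fit a radial-basis-function interpolant to the evaluations made so far, then add one point at a time where the surrogate seems least trustworthy. This is a minimal illustration, not the paper's method: the Gaussian kernel, the `eps` width, and the error indicator (disagreement between two nested interpolants, standing in for the paper's CDF-convergence criterion) are all assumptions made for the sketch.

    ```python
    import numpy as np

    def rbf_interpolant(x_train, y_train, eps=2.0):
        """Gaussian radial basis function interpolant (illustrative kernel choice)."""
        K = np.exp(-(eps * (x_train[:, None] - x_train[None, :])) ** 2)
        # Tiny ridge term keeps the linear solve stable when points cluster
        w = np.linalg.solve(K + 1e-10 * np.eye(len(x_train)), y_train)
        def f(x):
            x = np.atleast_1d(x)
            Kx = np.exp(-(eps * (x[:, None] - x_train[None, :])) ** 2)
            return Kx @ w
        return f

    def adaptive_sample(target, x_init, n_add, candidates):
        """Add sampling points one at a time where two nested interpolants
        disagree most (a simple stand-in for a CDF-convergence criterion)."""
        x = np.array(x_init, dtype=float)
        y = target(x)
        for _ in range(n_add):
            f_full = rbf_interpolant(x, y)
            f_half = rbf_interpolant(x[::2], y[::2])  # coarser surrogate
            err = np.abs(f_full(candidates) - f_half(candidates))
            # Never re-select a point we already evaluated
            taken = np.min(np.abs(candidates[:, None] - x[None, :]), axis=1) < 1e-9
            err[taken] = 0.0
            x_new = candidates[np.argmax(err)]
            x = np.append(x, x_new)
            y = np.append(y, target(np.array([x_new]))[0])
        return x, y

    # Toy "expensive" nonlinear mapping of a Uniform(0, 1) input
    target = lambda x: np.tanh(5.0 * (np.asarray(x) - 0.5))
    candidates = np.linspace(0.0, 1.0, 201)
    x, y = adaptive_sample(target, np.linspace(0.0, 1.0, 5), 6, candidates)
    ```

    Each iteration reuses every previous evaluation of `target`, which is the point of the approach: the expensive function is never re-sampled blindly, as it would be in plain Monte Carlo.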

    Probabilistic Godunov-type hydrodynamic modelling under multiple uncertainties: robust wavelet-based formulations

    Intrusive stochastic Galerkin methods propagate uncertainties in a single model run, eliminating the repeated sampling required by conventional Monte Carlo methods. However, an intrusive formulation has yet to be developed for probabilistic hydrodynamic modelling that incorporates robust wetting-and-drying and stable friction integration under joint uncertainties in topography, roughness, and inflow. Robustness measures are well developed in deterministic models, but they rely on local, nonlinear operations that can introduce additional stochastic errors that destabilise an intrusive model. This paper formulates an intrusive hydrodynamic model using a multidimensional tensor product of Haar wavelets to capture fine-scale variations in joint probability distributions and to extend the validity of robustness measures from the underlying deterministic discretisation. Probabilistic numerical tests are designed to verify intrusive model robustness and to compare accuracy and efficiency against a conventional Monte Carlo approach and two other alternatives: a nonintrusive stochastic collocation formulation sharing the same tensor product wavelet basis, and an intrusive formulation that truncates the basis to gain efficiency under multiple uncertainties. Tests reveal that: (i) a full tensor product basis is required to preserve intrusive model robustness, while the nonintrusive counterpart achieves identically accurate results at a reduced computational cost; and (ii) the Haar wavelet basis requires at least three levels of refinement per uncertainty dimension to reliably capture complex probability distributions. Accompanying model software and simulation data are openly available online.
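    The Haar wavelet machinery underlying both the intrusive and nonintrusive formulations can be illustrated in one dimension. The sketch below is not the paper's Galerkin scheme: it is a minimal, nonintrusive decomposition of a toy discontinuous output (the kind of sharp feature that motivates Haar wavelets) into a global mean plus per-level detail coefficients, with three refinement levels as the abstract recommends. The function `f` and the level count are assumptions for illustration only.

    ```python
    import numpy as np

    def haar_decompose(v):
        """Haar averaging/differencing of 2**L cell values on [0, 1]."""
        a, details = v.astype(float), []
        while len(a) > 1:
            details.append((a[0::2] - a[1::2]) / 2.0)  # finest level first
            a = (a[0::2] + a[1::2]) / 2.0
        return a[0], details  # global mean, per-level detail coefficients

    def haar_reconstruct(mean, details):
        """Exact inverse of haar_decompose."""
        a = np.array([mean])
        for det in reversed(details):
            out = np.empty(2 * len(a))
            out[0::2] = a + det
            out[1::2] = a - det
            a = out
        return a

    # Toy discontinuous output versus a Uniform(0, 1) input -- a stand-in
    # for, e.g., a water depth depending on an uncertain inflow
    f = lambda u: np.where(u < 0.3, 0.5, 1.0 + u)

    L = 3                                     # three refinement levels
    u = (np.arange(2 ** L) + 0.5) / 2 ** L    # dyadic cell midpoints
    mean, details = haar_decompose(f(u))

    # Zeroing the finest detail level mimics the cheaper truncated-basis
    # variant discussed in the abstract: the surrogate coarsens to constants
    # on pairs of cells, but the mean is preserved exactly
    truncated = [np.zeros_like(details[0])] + details[1:]
    coarse = haar_reconstruct(mean, truncated)
    ```

    The round trip `haar_reconstruct(mean, details)` recovers the cell values exactly, and the truncation step shows concretely what is lost when the basis is cut back for efficiency, as in the paper's truncated intrusive formulation.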