
    Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins

    In this work, we propose an Uncertainty Quantification methodology for the evolution of sedimentary basins under mechanical and geochemical compaction processes, which we model as a coupled, time-dependent, non-linear, monodimensional (depth-only) system of PDEs with uncertain parameters. While in previous works (Formaggia et al., 2013; Porta et al., 2014) we assumed a simplified depositional history with only one material, in this work we consider multi-layered basins, in which each layer is characterized by a different material and hence by different properties. This setting requires several improvements with respect to our earlier works, concerning both the deterministic solver and the stochastic discretization. On the deterministic side, we replace the previous fixed-point iterative solver with a more efficient Newton solver at each step of the time discretization. On the stochastic side, the multi-layered structure gives rise to discontinuities in the dependence of the state variables on the uncertain parameters, which require appropriate treatment for surrogate modeling techniques, such as sparse grids, to be effective. To this end, we propose a methodology that relies on a change of coordinate system to align the discontinuities of the target function within the random parameter space. The reference coordinate system is built by exploiting physical features of the problem at hand: we employ the locations of material interfaces, which display a smooth dependence on the random parameters and are therefore amenable to sparse grid polynomial approximations. We showcase the capabilities of our numerical methodologies through two synthetic test cases. In particular, we show that our methodology reproduces with high accuracy the multi-modal probability density functions displayed by target state variables (e.g., porosity). Comment: 25 pages, 30 figures.
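    The alignment step can be pictured with a toy model: the interface depth is smooth in the uncertain parameter, so a cheap polynomial surrogate of it defines an aligned coordinate in which the porosity jump no longer moves. The sketch below uses hypothetical closed-form stand-ins (interface_depth, porosity) for the basin simulator, and one-dimensional Chebyshev interpolation in place of a sparse grid.

```python
# Toy illustration of the coordinate alignment; interface_depth and
# porosity are hypothetical stand-ins for the basin simulator.
import numpy as np
from numpy.polynomial import chebyshev as C

def interface_depth(theta):
    # Interface location: smooth in the uncertain parameter theta.
    return 0.5 + 0.1 * np.sin(2.0 * theta)

def porosity(z, theta):
    # State variable: jumps across the material interface.
    return (0.40 if z < interface_depth(theta) else 0.25) - 0.05 * z

# 1) Surrogate of the smooth interface location from a few "solver runs"
#    at Chebyshev nodes (a sparse grid would play this role in higher
#    parameter dimensions).
nodes = np.cos(np.pi * np.arange(9) / 8)
iface_fit = C.chebfit(nodes, [interface_depth(t) for t in nodes], 8)

thetas = np.linspace(-1, 1, 5)
# 2) At a fixed physical depth, porosity is discontinuous in theta ...
print([porosity(0.55, t) for t in thetas])
# ... but at a fixed offset s from the predicted interface (the aligned
# coordinate), the dependence on theta is smooth and surrogate-friendly.
s = 0.05
print([porosity(C.chebval(t, iface_fit) + s, t) for t in thetas])
```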

    Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario

    A variety of methods is available to quantify uncertainties arising within the modeling of flow and transport in carbon dioxide storage, but there is a lack of thorough comparisons. Usually, raw data from such storage sites can hardly be described by theoretical statistical distributions, since only very limited data are available. Hence, exact information on distribution shapes for all uncertain parameters is very rare in realistic applications. We discuss and compare four different methods tested for data-driven uncertainty quantification based on a benchmark scenario of carbon dioxide storage. In the benchmark, for which we provide data and code, carbon dioxide is injected into a saline aquifer modeled by the nonlinear capillarity-free fractional flow formulation for two incompressible fluid phases, namely carbon dioxide and brine. To cover different aspects of uncertainty quantification, we incorporate various sources of uncertainty, such as uncertainty of boundary conditions, of conceptual model definitions, and of material properties. We consider recent versions of the following non-intrusive and intrusive uncertainty quantification methods: arbitrary polynomial chaos, spatially adaptive sparse grids, kernel-based greedy interpolation, and hybrid stochastic Galerkin. The performance of each approach is demonstrated by assessing the expected value and standard deviation of the carbon dioxide saturation against a reference statistic based on Monte Carlo sampling. We compare the convergence of all methods, reporting on accuracy with respect to the number of model runs and resolution. Finally, we offer suggestions about the methods' advantages and disadvantages that can guide the modeler for uncertainty quantification in carbon dioxide storage and beyond.
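    The comparison protocol can be sketched in a few lines: estimate the expected value and standard deviation of the quantity of interest with a surrogate built from few model runs, and measure the error against a Monte Carlo reference. The snippet below is schematic only: model is a hypothetical placeholder for the two-phase flow simulator, and a global least-squares polynomial stands in for the four methods compared in the paper.

```python
# Schematic mean/std comparison against a Monte Carlo reference; the
# "simulator" and the surrogate are placeholder stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    # Hypothetical placeholder for one run of the two-phase flow simulator;
    # returns a scalar "CO2 saturation" per parameter sample.
    return 1.0 / (1.0 + np.exp(-5.0 * (theta[:, 0] + 0.5 * theta[:, 1])))

def design(theta, deg=4):
    # Total-degree polynomial features in the two uncertain parameters.
    return np.column_stack([theta[:, 0]**i * theta[:, 1]**j
                            for i in range(deg) for j in range(deg - i)])

# Monte Carlo reference statistics from many model runs.
theta_mc = rng.uniform(-1, 1, size=(100_000, 2))
y_mc = model(theta_mc)
ref_mean, ref_std = y_mc.mean(), y_mc.std()

# Surrogate from 50 runs: a global least-squares polynomial, standing in
# for aPC / adaptive sparse grids / kernel interpolation / stochastic
# Galerkin in the actual comparison.
theta_tr = rng.uniform(-1, 1, size=(50, 2))
coef, *_ = np.linalg.lstsq(design(theta_tr), model(theta_tr), rcond=None)
y_sur = design(theta_mc) @ coef

print("error in mean:", abs(y_sur.mean() - ref_mean))
print("error in std: ", abs(y_sur.std() - ref_std))
```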

    Cluster, Classify, Regress: A General Method For Learning Discontinuous Functions

    This paper presents a method for solving the supervised learning problem in which the output is highly nonlinear and discontinuous. It is proposed to solve this problem in three stages: (i) cluster the pairs of input-output data points, resulting in a label for each point; (ii) classify the data, where the corresponding label is the output; and finally (iii) perform one separate regression for each class, where the training data correspond to the subset of the original input-output pairs which have that label according to the classifier. It has not yet been proposed to combine these three fundamental building blocks of machine learning in this simple and powerful fashion. This can be viewed as a form of deep learning, where any of the intermediate layers can itself be deep. The utility and robustness of the methodology are illustrated on some toy problems, including one example problem arising from the simulation of plasma fusion in a tokamak. Comment: 12 files, 6 figures.
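    The three stages map directly onto off-the-shelf estimators. The sketch below is one possible instantiation (the paper does not prescribe these particular models): k-means for stage (i), an RBF support vector classifier for stage (ii), and per-class linear regressors for stage (iii), applied to a toy discontinuous function.

```python
# Minimal cluster-classify-regress pipeline on a toy discontinuous target.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), 2.0 + X[:, 0] ** 2)

# (i) Cluster the joint input-output pairs to obtain a label per point.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.column_stack([X[:, 0], y]))

# (ii) Classify: learn the label as a function of the input only.
clf = SVC(kernel="rbf").fit(X, labels)

# (iii) Regress separately inside each class.
regs = {k: LinearRegression().fit(X[labels == k], y[labels == k])
        for k in np.unique(labels)}

def predict(X_new):
    k = clf.predict(X_new)
    out = np.empty(len(X_new))
    for c in np.unique(k):             # route each point to its regressor
        out[k == c] = regs[c].predict(X_new[k == c])
    return out

print(predict(np.array([[-0.5], [0.5]])))
```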

    An adaptive minimum spanning tree multi-element method for uncertainty quantification of smooth and discontinuous responses

    A novel approach for non-intrusive uncertainty propagation is proposed. Our approach overcomes the limitation of many traditional methods, such as generalised polynomial chaos methods, which may lack sufficient accuracy when the quantity of interest depends discontinuously on the input parameters. As a remedy, we propose an adaptive sampling algorithm based on minimum spanning trees, combined with a domain decomposition method based on support vector machines. The minimum spanning tree determines new sample locations based on both the probability density of the input parameters and the gradient in the quantity of interest. The support vector machine efficiently decomposes the random space into multiple elements, avoiding the appearance of Gibbs phenomena near discontinuities. On each element, local approximations are constructed by means of least orthogonal interpolation, in order to produce stable interpolation on the unstructured sample set. The resulting minimum spanning tree multi-element method does not require initial knowledge of the behaviour of the quantity of interest and automatically detects whether discontinuities are present. We present several numerical examples that demonstrate the accuracy, efficiency, and generality of the method. Comment: 20 pages, 18 figures.
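    A simplified reading of the MST-driven refinement rule: weight edges by distance and by the jump in the quantity of interest, then bisect the heaviest edge of the minimum spanning tree, so new samples concentrate near the discontinuity. The sketch below uses a hypothetical step-function QoI and omits the input-density weighting, the SVM decomposition, and the least orthogonal interpolation.

```python
# MST-guided refinement toward a discontinuity (simplified edge weights).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def qoi(x):                                  # hypothetical discontinuous QoI
    return np.where(x[:, 0] + x[:, 1] > 0.0, 1.0, 0.0)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(20, 2))

for _ in range(30):
    y = qoi(X)
    D = squareform(pdist(X))                 # pairwise distances
    J = np.abs(y[:, None] - y[None, :])      # pairwise QoI jumps
    W = D * (1.0 + 10.0 * J)                 # heavier where the QoI jumps
    mst = minimum_spanning_tree(W).tocoo()
    k = np.argmax(mst.data)                  # heaviest MST edge
    i, j = mst.row[k], mst.col[k]
    X = np.vstack([X, 0.5 * (X[i] + X[j])])  # refine at the edge midpoint

print(f"{len(X)} samples, last added near the discontinuity: {X[-1]}")
```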

    Numerical smoothing with hierarchical adaptive sparse grids and quasi-Monte Carlo methods for efficient option pricing

    When approximating the expectation of a functional of a stochastic process, the efficiency and performance of deterministic quadrature methods, such as sparse grid quadrature and quasi-Monte Carlo (QMC) methods, may critically depend on the regularity of the integrand. To overcome this issue and reveal the available regularity, we consider cases in which analytic smoothing cannot be performed and introduce a novel numerical smoothing approach that combines a root-finding algorithm with one-dimensional integration with respect to a single, well-selected variable. We prove that, under appropriate conditions, the resulting function of the remaining variables is highly smooth, potentially affording the improved efficiency of adaptive sparse grid quadrature (ASGQ) and QMC methods, particularly when combined with hierarchical transformations (i.e., the Brownian bridge and Richardson extrapolation on the weak error). This approach facilitates the effective treatment of high dimensionality. Our study is motivated by option pricing problems, and our focus is on dynamics where discretization of the asset price is necessary. Based on our analysis and numerical experiments, we show the advantages of combining numerical smoothing with the ASGQ and QMC methods over ASGQ and QMC methods without smoothing and over the Monte Carlo approach.
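    The numerical smoothing step itself is compact: for fixed values of the remaining variables, a root finder locates the kink in the selected variable, and a piecewise one-dimensional quadrature integrates that variable out, leaving a smooth function of the rest. The sketch below uses a hypothetical call-style payoff as a stand-in for the discretized option payoff.

```python
# Numerical smoothing in one variable: root finding + 1D pre-integration.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def payoff(y1, y_rest):
    # Non-smooth in y1 (a call-style kink), smooth in y_rest (hypothetical).
    return np.maximum(np.exp(0.3 * y1 + 0.1 * y_rest) - 1.0, 0.0)

def smoothed(y_rest, n_gauss=16):
    # 1) Root finding: locate the kink in the selected variable y1.
    root = brentq(lambda y1: np.exp(0.3 * y1 + 0.1 * y_rest) - 1.0, -20, 20)
    # 2) Integrate y1 ~ N(0,1) piecewise around the kink; here the payoff
    #    vanishes below the root, so only the upper piece contributes.
    t, w = np.polynomial.legendre.leggauss(n_gauss)
    a, b = root, 8.0                    # truncate the Gaussian tail at 8
    y1 = 0.5 * (b - a) * t + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * payoff(y1, y_rest) * norm.pdf(y1))

# smoothed(.) is a smooth function of the remaining variable(s), suitable
# for ASGQ or QMC in those variables.
print(smoothed(0.0), smoothed(0.5))
```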

    Efficient Localization of Discontinuities in Complex Computational Simulations

    Surrogate models for computational simulations are input-output approximations that allow computationally intensive analyses, such as uncertainty propagation and inference, to be performed efficiently. When a simulation output does not depend smoothly on its inputs, the error and convergence rate of many approximation methods deteriorate substantially. This paper details a method for efficiently localizing discontinuities in the input parameter domain, so that the model output can be approximated as a piecewise smooth function. The approach comprises an initialization phase, which uses polynomial annihilation to assign function values to different regions and thus seed an automated labeling procedure, followed by a refinement phase that adaptively updates a kernel support vector machine representation of the separating surface via active learning. The overall approach avoids structured grids and exploits any available simplicity in the geometry of the separating surface, thus reducing the number of model evaluations required to localize the discontinuity. The method is illustrated on examples of up to eleven dimensions, including algebraic models and ODE/PDE systems, and demonstrates improved scaling and efficiency over other discontinuity localization approaches.
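    The refinement phase is the easiest part to sketch: given seed labels (standing in here for the polynomial-annihilation initialization), refit a kernel SVM and query the model at the candidate point closest to the current separating surface. Everything below, including the true_label oracle, is a hypothetical toy setup.

```python
# Active-learning refinement of an SVM separating surface (toy setup).
import numpy as np
from sklearn.svm import SVC

def true_label(X):                     # hypothetical expensive model oracle
    return (X[:, 0] ** 2 + X[:, 1] > 0.2).astype(int)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(30, 2))   # seed set from the initialization phase
y = true_label(X)

for _ in range(20):                    # active-learning loop
    svm = SVC(kernel="rbf", C=10.0).fit(X, y)
    cand = rng.uniform(-1, 1, size=(2000, 2))
    scores = np.abs(svm.decision_function(cand))
    x_new = cand[np.argmin(scores)]    # closest to the separating surface
    X = np.vstack([X, x_new])
    y = np.append(y, true_label(x_new[None, :]))

print("labeled points:", len(X))
```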