Numerical approximation of statistical solutions of scalar conservation laws
We propose efficient numerical algorithms for approximating statistical
solutions of scalar conservation laws. The proposed algorithms combine finite
volume spatio-temporal approximations with Monte Carlo and multi-level Monte
Carlo discretizations of the probability space. Both sets of methods are proved
to converge to the entropy statistical solution. We also prove that there is a
considerable gain in efficiency resulting from the multi-level Monte Carlo
method over the standard Monte Carlo method. Numerical experiments illustrating
the ability of both methods to accurately compute multi-point statistical
quantities of interest are also presented.
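The multi-level Monte Carlo idea described above rests on a telescoping sum over mesh levels, with most samples spent on cheap coarse solves and only a few on expensive fine corrections. The following is a minimal sketch of such an estimator; the `sample_qoi(level, omega)` interface and the toy quantity of interest are assumptions for illustration, not the paper's finite volume solver.

```python
import numpy as np

def mlmc_estimate(sample_qoi, num_samples, rng):
    """Multi-level Monte Carlo estimator via the telescoping sum
    E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}].
    `sample_qoi(level, omega)` is a hypothetical interface returning the
    quantity of interest on mesh level `level` for random draw `omega`."""
    estimate = 0.0
    for level, n in enumerate(num_samples):
        draws = rng.random(n)                  # same draw feeds both levels
        fine = np.array([sample_qoi(level, w) for w in draws])
        if level == 0:
            estimate += fine.mean()            # cheap base level, many samples
        else:
            coarse = np.array([sample_qoi(level - 1, w) for w in draws])
            estimate += (fine - coarse).mean() # small corrections, few samples
    return estimate

# Toy model: the level-l approximation is Q_l(omega) = omega + 2**(-l),
# so the estimator should recover E[omega] = 0.5 plus the bias 2**(-3)
# of the finest level used.
rng = np.random.default_rng(0)
est = mlmc_estimate(lambda l, w: w + 2.0 ** (-l), [4000, 1000, 250, 60], rng)
```

Note how the sample counts decrease with level; the efficiency gain proved in the abstract comes from exactly this decay.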
A posteriori error analysis and adaptive non-intrusive numerical schemes for systems of random conservation laws
In this article we consider one-dimensional random systems of hyperbolic
conservation laws. We first establish existence and uniqueness of random
entropy admissible solutions for initial value problems of conservation laws
which involve random initial data and random flux functions. Based on these
results we present an a posteriori error analysis for a numerical approximation
of the random entropy admissible solution. For the stochastic discretization,
we consider a non-intrusive approach, the Stochastic Collocation method. The
spatio-temporal discretization relies on the Runge--Kutta Discontinuous
Galerkin method. We derive the a posteriori estimator using continuous
reconstructions of the discrete solution. Combined with the relative entropy
stability framework this yields computable error bounds for the entire
space-stochastic discretization error. The estimator admits a splitting into a
stochastic and a deterministic (space-time) part, allowing for a novel
residual-based space-stochastic adaptive mesh refinement algorithm. We conclude
with various numerical examples investigating the scaling properties of the
residuals and illustrating the efficiency of the proposed adaptive algorithm.
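The Stochastic Collocation method mentioned above is non-intrusive: the deterministic solver is treated as a black box, run once per collocation node, and the results are combined by quadrature. A minimal sketch for a single uniform random parameter follows; the `solver` callable stands in for one deterministic (e.g. RKDG) run and is an assumption for illustration.

```python
import numpy as np

def collocation_mean(solver, num_nodes):
    """Non-intrusive stochastic collocation for one random parameter
    xi ~ U(-1, 1): evaluate the deterministic solver at Gauss-Legendre
    nodes and combine the results with the quadrature weights."""
    nodes, weights = np.polynomial.legendre.leggauss(num_nodes)
    # the U(-1, 1) density is 1/2, folded into the quadrature weights here
    return 0.5 * sum(w * solver(x) for x, w in zip(nodes, weights))

# With 3 nodes the rule is exact for polynomials up to degree 5,
# so E[xi**2] = 1/3 is recovered exactly for this toy "solver".
mean_est = collocation_mean(lambda xi: xi ** 2, 3)
```

The stochastic part of the a posteriori estimator in the article controls precisely the error introduced by truncating this quadrature.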
Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario
A variety of methods is available to quantify uncertainties arising within
the modeling of flow and transport in carbon dioxide storage, but there is a
lack of thorough comparisons. Usually, raw data from such storage sites can
hardly be described by theoretical statistical distributions since only very
limited data is available. Hence, exact information on distribution shapes for
all uncertain parameters is very rare in realistic applications. We discuss and
compare four different methods tested for data-driven uncertainty
quantification based on a benchmark scenario of carbon dioxide storage. In the
benchmark, for which we provide data and code, carbon dioxide is injected into
a saline aquifer modeled by the nonlinear capillarity-free fractional flow
formulation for two incompressible fluid phases, namely carbon dioxide and
brine. To cover different aspects of uncertainty quantification, we incorporate
various sources of uncertainty such as uncertainty of boundary conditions, of
conceptual model definitions and of material properties. We consider recent
versions of the following non-intrusive and intrusive uncertainty
quantification methods: arbitrary polynomial chaos, spatially adaptive sparse
grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The
performance of each approach is demonstrated by assessing the expectation value
and standard deviation of the carbon dioxide saturation against a reference
statistic based on Monte Carlo sampling. We compare the convergence of all
methods reporting on accuracy with respect to the number of model runs and
resolution. Finally, we offer suggestions about the methods' advantages and
disadvantages that can guide the modeler in uncertainty quantification for
carbon dioxide storage and beyond.
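The reference statistic in the comparison above is plain Monte Carlo: sample the uncertain inputs, run the forward model, and report mean and standard deviation of the output. A minimal sketch under that reading follows; the one-parameter `model` callable is a stand-in assumption, not the benchmark's two-phase flow model.

```python
import numpy as np

def mc_reference(model, num_samples, rng):
    """Monte Carlo reference statistic: sample the uncertain input, run
    the forward model, and report mean and (unbiased) standard deviation
    of the scalar output."""
    outputs = np.array([model(rng.standard_normal()) for _ in range(num_samples)])
    return outputs.mean(), outputs.std(ddof=1)

# Linear toy model: output = 2*p + 1 with p ~ N(0, 1),
# so the mean tends to 1 and the standard deviation to 2.
mean_ref, std_ref = mc_reference(lambda p: 2.0 * p + 1.0, 50000,
                                 np.random.default_rng(1))
```

The slow O(N^{-1/2}) convergence of this reference is what makes the accuracy-versus-model-runs comparison in the benchmark meaningful.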
A Fully Parallelized and Budgeted Multi-level Monte Carlo Framework for Partial Differential Equations: From Mathematical Theory to Automated Large-Scale Computations
All collected data on any physical, technical or economic process is subject to uncertainty. Incorporating this uncertainty into the model and propagating it through the system allows the data error to be controlled, which makes the predictions of the system more trustworthy and reliable. The multi-level Monte Carlo (MLMC) method has proven to be an effective uncertainty quantification tool, requiring little knowledge about the problem while being highly performant.
In this doctoral thesis we analyse, implement, develop and apply the MLMC method to partial differential equations (PDEs) subject to high-dimensional random input data. We set up a unified framework based on the software M++ to approximate solutions to elliptic and hyperbolic PDEs with a large selection of finite element methods. We combine this setup with a new variant of the MLMC method. In particular, we propose a budgeted MLMC (BMLMC) method which is capable of optimally investing reserved computing resources in order to minimize the model error while exhausting a given computational budget. This is achieved by developing a new parallelism based on a single distributed data structure, employing ideas of the continuation MLMC method and utilizing dynamic programming techniques. The final method is theoretically motivated, analyzed, and numerically well-tested in an automated benchmarking workflow on highly challenging problems such as the approximation of wave equations in randomized media.
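Investing a fixed budget across MLMC levels can be sketched with the standard allocation rule, where the per-level sample count scales like the square root of variance over cost. The sketch below shows only this classical rule; the thesis's BMLMC additionally schedules the work in parallel via a distributed data structure, which is not modeled here.

```python
import math

def budgeted_samples(variances, costs, budget):
    """Standard MLMC sample allocation under a fixed budget: N_l is
    proportional to sqrt(V_l / C_l), scaled so that the total cost
    sum(N_l * C_l) does not exceed the budget."""
    ratios = [math.sqrt(v / c) for v, c in zip(variances, costs)]
    scale = budget / sum(r * c for r, c in zip(ratios, costs))
    return [max(1, int(scale * r)) for r in ratios]

# Level variances decay while per-sample costs grow, so most samples
# land on the cheap coarse level.
n_l = budgeted_samples([1.0, 0.25, 0.0625], [1.0, 4.0, 16.0], 1000.0)
```

Rounding down and enforcing at least one sample per level keeps the allocation within budget while preserving the telescoping structure.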