A Toolkit for Generating Scalable Stochastic Multiobjective Test Problems
Real-world optimization problems typically include uncertainties over various aspects of the problem formulation. Some existing algorithms are designed to cope with stochastic multiobjective optimization problems, but in order to benchmark them, a proper framework still needs to be established. This paper presents a novel toolkit that generates scalable, stochastic, multiobjective optimization problems. A stochastic problem is generated by transforming the objective vectors of a given deterministic test problem into random vectors. All random objective vectors are bounded by the feasible objective space, defined by the deterministic problem. Therefore, the global solution for the deterministic problem can also serve as a reference for the stochastic problem. A simple parametric distribution for the random objective vector is defined in a radial coordinate system, allowing for direct control over the dual challenges of convergence towards the true Pareto front and diversity across the front. An example of a stochastic test problem generated by the toolkit is provided.
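The radial idea in the abstract can be sketched for a bi-objective minimization problem: express a deterministic objective vector in polar coordinates about an ideal point, then perturb the radius (convergence noise) and the angle (diversity noise). This is a minimal illustrative sketch; the function and parameter names are assumptions, not the toolkit's API.

```python
import numpy as np

def stochastify(f_det, ideal=(0.0, 0.0), sigma_r=0.05, sigma_theta=0.02, rng=None):
    """Turn a deterministic bi-objective vector into a random one.

    Hypothetical sketch of the radial construction described in the
    abstract: the radius about an ideal point carries convergence
    noise, the angle carries diversity noise.  Names are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(f_det, float) - np.asarray(ideal, float)
    r, theta = np.hypot(v[0], v[1]), np.arctan2(v[1], v[0])
    # The radial perturbation only increases r, so the deterministic
    # front remains a reference (lower bound) for the stochastic problem.
    r_new = r * (1.0 + np.abs(rng.normal(0.0, sigma_r)))
    # Angular noise, clipped to the first quadrant (minimization setting).
    theta_new = np.clip(theta + rng.normal(0.0, sigma_theta), 0.0, np.pi / 2)
    return np.asarray(ideal) + r_new * np.array([np.cos(theta_new), np.sin(theta_new)])
```

Sampling the same deterministic point repeatedly then yields a cloud of random objective vectors whose spread is controlled separately in the radial and angular directions.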
A fast Monte-Carlo method with a Reduced Basis of Control Variates applied to Uncertainty Propagation and Bayesian Estimation
The Reduced-Basis Control-Variate Monte-Carlo method was introduced recently
in [S. Boyaval and T. Lelièvre, CMS, 8 2010] as an improved Monte-Carlo
method, for the fast estimation of many parametrized expected values at many
parameter values. We provide here a more complete analysis of the method
including precise error estimates and convergence results. We also numerically
demonstrate its usefulness in several parametrized settings from
Uncertainty Quantification, in particular (i) the case where the parametrized
expectation is a scalar output of the solution to a Partial Differential
Equation (PDE) with stochastic coefficients (an Uncertainty Propagation
problem), and (ii) the case where the parametrized expectation is the Bayesian
estimator of a scalar output in a similar PDE context. Moreover, in each case,
a PDE has to be solved many times for many values of its coefficients. This is
costly and we also use a reduced basis of PDE solutions like in [S. Boyaval, C.
Le Bris, Nguyen C., Y. Maday and T. Patera, CMAME, 198 2009]. This is the first
combination of various Reduced-Basis ideas to our knowledge, here with a view
to reducing as much as possible the computational cost of a simple approach to
Uncertainty Quantification.
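The core mechanism behind the method, a control variate with known expectation, can be sketched in a few lines. This is a plain single-variate estimator for illustration, not the paper's reduced-basis construction, and all names are assumptions.

```python
import numpy as np

def cv_estimate(f_samples, g_samples, g_mean):
    """Control-variate Monte-Carlo estimator (single variate).

    Minimal sketch of the principle: subtract a correlated quantity g
    whose expectation g_mean is known, with the coefficient beta chosen
    to minimize the variance of the corrected samples.
    """
    f = np.asarray(f_samples, float)
    g = np.asarray(g_samples, float)
    beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)   # variance-optimal coefficient
    # E[f - beta*(g - g_mean)] = E[f], but with (much) smaller variance
    # when f and g are strongly correlated.
    return np.mean(f - beta * (g - g_mean))
```

For example, estimating E[exp(X)] for standard normal X with the control variate g = X (known mean 0) already reduces the variance substantially; the reduced-basis variant of the paper builds such variates systematically across many parameter values.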
Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format
We apply the Tensor Train (TT) decomposition to construct the tensor product
Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic
elliptic diffusion PDE with the stochastic Galerkin discretization, and to
compute some quantities of interest (mean, variance, exceedance probabilities).
We assume that the random diffusion coefficient is given as a smooth
transformation of a Gaussian random field. In this case, the PCE is delivered
by a complicated formula, which lacks an analytic TT representation. To
construct its TT approximation numerically, we develop the new block TT cross
algorithm, a method that computes the whole TT decomposition from a few
evaluations of the PCE formula. The new method is conceptually similar to the
adaptive cross approximation in the TT format, but is more efficient when
several tensors must be stored in the same TT representation, which is the case
for the PCE. Besides, we demonstrate how to assemble the stochastic Galerkin
matrix and to compute the solution of the elliptic equation and its
post-processing, staying in the TT format.
We compare our technique with the traditional sparse polynomial chaos and the
Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial
degree is bounded for each random variable independently. This provides higher
accuracy than the sparse polynomial set or the Monte Carlo method, but the
cardinality of the tensor product set grows exponentially with the number of
random variables. However, when the PCE coefficients are implicitly
approximated in the TT format, the computations with the full tensor product
polynomial set become possible. In the numerical experiments, we confirm that
the new methodology is competitive in a wide range of parameters, especially
where high accuracy and high polynomial degrees are required.
Comment: This is a major revision of the manuscript arXiv:1406.2816 with
significantly extended numerical experiments. Some unused material is removed.
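The TT format itself can be illustrated with the classical TT-SVD, which factorizes a d-way array into a chain of three-way cores by sequential truncated SVDs. Note this is not the paper's block cross algorithm (which avoids ever forming the full tensor); it is only a minimal sketch of the representation being discussed.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way array into tensor-train (TT) cores by
    sequential truncated SVDs (classical TT-SVD sketch)."""
    shape = tensor.shape
    cores, r = [], 1
    mat = np.asarray(tensor, float)
    for n in shape[:-1]:
        mat = mat.reshape(r * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))       # truncation rank
        cores.append(u[:, :rank].reshape(r, n, rank))    # three-way TT core
        mat = s[:rank, None] * vt[:rank]                 # carry remainder forward
        r = rank
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor (for verification)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=1)
    return out.reshape([core.shape[1] for core in cores])
```

The storage of the cores scales linearly in the number of dimensions (for bounded TT ranks), which is what makes the full tensor-product polynomial set tractable when the PCE coefficients are kept in this format.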
Equation-free implementation of statistical moment closures
We present a general numerical scheme for the practical implementation of
statistical moment closures suitable for modeling complex, large-scale,
nonlinear systems. Building on recently developed equation-free methods, this
approach numerically integrates the closure dynamics, the equations of which
may not even be available in closed form. Although closure dynamics introduce
statistical assumptions of unknown validity, they can have significant
computational advantages as they typically have fewer degrees of freedom and
may be much less stiff than the original detailed model. The closure method can
in principle be applied to a wide class of nonlinear problems, including
strongly-coupled systems (either deterministic or stochastic) for which there
may be no scale separation. We demonstrate the equation-free approach for
implementing entropy-based Eyink-Levermore closures on a nonlinear stochastic
partial differential equation.
Comment: 7 pages, 2 figures.
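The equation-free lift/run/restrict/project loop can be sketched on a toy fine-scale model. Here the microscale simulator is an Ornstein-Uhlenbeck SDE and the closure variable is the ensemble mean; the SDE, parameter values, and function names are all assumptions made for illustration, not the Eyink-Levermore closure of the paper.

```python
import numpy as np

def micro_step(x, dt, rng, sigma=0.5):
    """One Euler-Maruyama step of the toy fine-scale SDE dx = -x dt + sigma dW.
    Stands in for the detailed model, of which only samples are available."""
    return x - x * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)

def projective_step(m, rng, n_ens=4000, dt=1e-3, burst=20, leap=0.2):
    """One equation-free coarse step for the closure variable m = E[x].

    Lift:     build an ensemble consistent with the moment m.
    Run:      a short burst of the microscale simulator.
    Restrict: re-estimate the moment along the burst.
    Project:  extrapolate m over a large step, with no closed-form
              moment equation ever written down.
    """
    x = m + 0.1 * rng.normal(size=n_ens)            # lift
    moments = []
    for _ in range(burst):                          # short microscale burst
        x = micro_step(x, dt, rng)
        moments.append(x.mean())                    # restrict at each step
    slope = (moments[-1] - moments[0]) / ((burst - 1) * dt)
    return moments[-1] + leap * slope               # projective leap
```

Repeated projective steps let the coarse moment decay on the slow time scale while the expensive microscale simulator is only run in short bursts, which is the computational advantage the abstract describes.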