21 research outputs found
Transport map unadjusted Langevin algorithms: learning and discretizing perturbed samplers
Langevin dynamics are widely used in sampling high-dimensional, non-Gaussian
distributions whose densities are known up to a normalizing constant. In
particular, there is strong interest in unadjusted Langevin algorithms (ULA),
which directly discretize Langevin dynamics to estimate expectations over the
target distribution. We study the use of transport maps that approximately
normalize a target distribution as a way to precondition and accelerate the
convergence of Langevin dynamics. We show that in continuous time, when a
transport map is applied to Langevin dynamics, the result is a Riemannian
manifold Langevin dynamics (RMLD) with metric defined by the transport map. We
also show that applying a transport map to an irreversibly perturbed ULA
results in a geometry-informed irreversible perturbation (GiIrr) of the
original dynamics. These connections suggest more systematic ways of learning
metrics and perturbations, and also yield alternative discretizations of the
RMLD described by the map, which we study. Under appropriate conditions, these
discretized processes can be endowed with non-asymptotic bounds describing
convergence to the target distribution in 2-Wasserstein distance. Illustrative
numerical results complement our theoretical claims. (28 pages, 12 figures.)
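As a concrete illustration of the preconditioning idea, the sketch below runs ULA in the reference coordinates of a linear transport map. The anisotropic Gaussian target, step size, and Cholesky-factor map are illustrative assumptions for this sketch, not the paper's specific construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative anisotropic Gaussian target pi(x) ~ exp(-x^T Sigma^{-1} x / 2);
# the paper treats more general non-Gaussian targets.
Sigma = np.diag([1.0, 100.0])
Sigma_inv = np.linalg.inv(Sigma)

def grad_log_pi(x):
    return -Sigma_inv @ x

# Linear transport map x = L @ y with L = chol(Sigma): it pushes a standard
# normal reference forward to the target, so ULA in y-coordinates is
# well conditioned despite the anisotropy of pi.
L = np.linalg.cholesky(Sigma)

def transport_ula(n_steps=50_000, h=1e-2):
    y = np.zeros(2)
    xs = np.empty((n_steps, 2))
    for k in range(n_steps):
        # Pushforward gradient: grad_y log pi_Y(y) = L^T grad_x log pi(L y);
        # the log-Jacobian term is constant for a linear map and drops out.
        drift = L.T @ grad_log_pi(L @ y)
        y = y + h * drift + np.sqrt(2.0 * h) * rng.standard_normal(2)
        xs[k] = L @ y  # map reference-space samples back to target space
    return xs

samples = transport_ula()
```

The mapped dynamics coincide with a Riemannian manifold Langevin scheme whose metric is determined by `L`, which is the continuous-time connection the abstract describes.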
Subset simulation for probabilistic computer models
Reliability analysis can be performed efficiently through subset simulation. Through Markov chain Monte Carlo, subset simulation progressively samples from the input domain of a performance function (typically a computer model) to find the failure domain, that is, the set of input configurations that result in an output higher than a prescribed threshold. Recently, a probabilistic framework for numerical analysis was proposed, whereby computation is treated as a statistical inference problem. The framework, called probabilistic numerics, treats the output of a computer code as a random variable. This paper presents a generalisation of subset simulation, which enables reliability analysis for probabilistic numerical models. The advantages and challenges of the method are discussed, and an example with an industrial application is presented.
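The standard (non-probabilistic) subset simulation scheme that the paper generalises can be sketched as follows. The performance function, input dimension, failure threshold, and random-walk Metropolis kernel below are simplified illustrative choices; practical implementations typically use the component-wise "modified Metropolis" sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical performance function standing in for an expensive computer
# model; failure is the rare event g(x) > b under a standard-normal input.
def g(x):
    return x.sum(axis=-1)

def subset_simulation(g, dim=2, b=5.0, n=2000, p0=0.1, max_levels=10):
    """Basic subset simulation: the failure probability is factored into a
    product of conditional probabilities, each around p0, estimated level
    by level with MCMC samples conditioned on exceeding an intermediate
    threshold."""
    x = rng.standard_normal((n, dim))
    p = 1.0
    for _ in range(max_levels):
        vals = g(x)
        c = np.quantile(vals, 1.0 - p0)  # intermediate threshold
        if c >= b:
            return p * np.mean(vals > b)
        p *= p0
        seeds = x[vals > c]
        # Grow a Metropolis chain from each seed, restricted to {g > c},
        # until we again have n conditional samples.
        chains = []
        per_seed = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur = s.copy()
            for _ in range(per_seed):
                prop = cur + 0.8 * rng.standard_normal(dim)
                # Accept w.r.t. the standard-normal prior, restricted to g > c.
                log_a = 0.5 * (cur @ cur - prop @ prop)
                if np.log(rng.random()) < log_a and g(prop) > c:
                    cur = prop
                chains.append(cur.copy())
        x = np.array(chains)[:n]
    return p * np.mean(g(x) > b)

p_hat = subset_simulation(g)
```

For this toy problem the exact answer is P(N(0, 2) > 5) which is roughly 2e-4, so the estimate should land within an order of magnitude of that despite the rarity of the event.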
Context-aware model hierarchies for higher-dimensional uncertainty quantification
We formulate four novel context-aware algorithms based on model hierarchies, designed to enable efficient quantification of uncertainty in complex, computationally expensive problems, such as fluid-structure interaction and plasma microinstability simulations. Our results show that our algorithms are more efficient than standard approaches and are able to cope with the challenges of quantifying uncertainty in higher-dimensional, complex problems.
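The four algorithms themselves are not spelled out in the abstract. As a generic illustration of how a model hierarchy can accelerate uncertainty quantification, here is a two-level control-variate Monte Carlo estimator; the two models are toy stand-ins, not the fluid-structure or plasma simulations referenced above.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Expensive" high-fidelity model and a cheap, correlated low-fidelity
# surrogate (both hypothetical placeholders).
def f_hi(x):
    return np.sin(x) + 0.1 * x**2

def f_lo(x):
    return np.sin(x)

# Few high-fidelity evaluations, many cheap low-fidelity evaluations.
n_hi, n_lo = 100, 10_000
x_hi = rng.standard_normal(n_hi)
x_lo = rng.standard_normal(n_lo)

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)  # low-fidelity model on the same inputs

# Optimal control-variate weight from the sample covariance.
c = np.cov(y_hi, y_lo_paired)
alpha = c[0, 1] / c[1, 1]

# High-fidelity mean, corrected by the cheap model's estimated bias.
est = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_paired.mean())
```

Because the surrogate absorbs most of the variance, the corrected estimator attains an accuracy that plain Monte Carlo with only `n_hi` expensive samples cannot match; the context-aware algorithms in the paper decide how to allocate work across such hierarchies.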
Efficient PDE-Constrained optimization under high-dimensional uncertainty using derivative-informed neural operators
We propose a novel machine learning framework for solving optimization
problems governed by large-scale partial differential equations (PDEs) with
high-dimensional random parameters. Such optimization under uncertainty (OUU)
problems may be computationally prohibitive using classical methods, particularly
when a large number of samples is needed to evaluate risk measures at every
iteration of an optimization algorithm, where each sample requires the solution
of an expensive-to-solve PDE. To address this challenge, we propose a new
neural operator approximation of the PDE solution operator that has the
combined merits of (1) accurate approximation of not only the map from the
joint inputs of random parameters and optimization variables to the PDE state,
but also its derivative with respect to the optimization variables, (2)
efficient construction of the neural network using reduced basis architectures
that are scalable to high-dimensional OUU problems, and (3) requiring only a
limited amount of training data to achieve high accuracy for both the PDE
solution and the OUU solution. We refer to such neural operators as multi-input
reduced basis derivative-informed neural operators (MR-DINOs). We demonstrate
the accuracy and efficiency of our approach through several numerical
experiments, namely the risk-averse control of a semilinear elliptic PDE and the steady-state
Navier--Stokes equations in two and three spatial dimensions, each involving
random field inputs. Across the examples, MR-DINOs offer -- reductions in
execution time and produce OUU solutions of accuracy comparable to those from
standard PDE-based solutions, while being over -- more cost-efficient once the
cost of construction is factored in.
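Architecture details aside, the core "derivative-informed" idea, training a surrogate to match both outputs and their derivatives with respect to the optimization variables, can be illustrated with a toy least-squares fit. The cubic map and polynomial basis below are illustrative assumptions standing in for the PDE solution operator and the neural network.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the map from optimization variable z to a PDE quantity.
def u_true(z):
    return z**3 - z

z = rng.uniform(-1.5, 1.5, 40)
u = u_true(z)
du = 3 * z**2 - 1  # exact derivatives, e.g. from an adjoint solve

# Polynomial surrogate u(z) ~ sum_k c_k z^k, trained on values AND
# derivatives by stacking both residuals into one least-squares system.
deg = 5
V = np.vander(z, deg + 1, increasing=True)         # value rows: z^k
dV = np.hstack([np.zeros((len(z), 1)),
                V[:, :-1] * np.arange(1, deg + 1)])  # derivative rows: k z^(k-1)
A = np.vstack([V, dV])
b = np.concatenate([u, du])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Matching derivatives makes the surrogate's gradient with respect to `z` accurate as well, which is what allows a derivative-informed operator surrogate to drive a gradient-based OUU optimization loop reliably.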