26 research outputs found
A hybrid FETI-DP method for non-smooth random partial differential equations
A domain decomposition approach exploiting the localization of random parameters in high-dimensional random PDEs is presented. For high efficiency, surrogate models in multi-element representations are computed locally when possible. This makes use of a stochastic Galerkin FETI-DP formulation of the underlying problem with localized representations of the involved input random fields. The local parameter space associated with a subdomain is explored by a subdivision into regions where the parametric surrogate accuracy can be trusted and regions where Monte Carlo sampling has to be employed instead. A heuristic adaptive algorithm carries out a problem-dependent hp refinement in a stochastic multi-element sense, enlarging the trusted surrogate region in local parametric space as far as possible. This results in an efficient global parameter-to-solution sampling scheme that exploits local parametric smoothness in the surrogate construction. Adequately structured problems for this scheme occur naturally when uncertainties are defined on subdomains, e.g. in a multi-physics setting, or when the Karhunen-Loève expansion of a random field can be localized. The efficiency of this hybrid technique is demonstrated with numerical benchmark problems illustrating the identification of trusted (possibly higher order) surrogate regions and non-trusted sampling regions.
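To illustrate the hybrid sampling idea, the following minimal Python sketch shows how a global parameter sample could be evaluated subdomain by subdomain: the local surrogate is used whenever the drawn parameter lies in a trusted region, and a direct (Monte Carlo style) solve is used otherwise. All names (trusted_regions, surrogate, direct_solve, the subdomain attributes) are illustrative assumptions rather than identifiers from the paper, and the FETI-DP coupling of the local contributions is omitted.

import numpy as np

def sample_local_solution(y_local, trusted_regions, surrogate, direct_solve):
    """Evaluate one subdomain for the local parameter sample y_local."""
    for lower, upper in trusted_regions:          # boxes in local parameter space
        if np.all(y_local >= lower) and np.all(y_local <= upper):
            return surrogate(y_local)             # cheap trusted-region surrogate
    return direct_solve(y_local)                  # fall back to direct sampling

def sample_global_solution(y, subdomains):
    # Assemble the global sample from independent local evaluations; the
    # FETI-DP coupling of the subdomain contributions is not shown here.
    return [sample_local_solution(y[s.indices], s.trusted_regions,
                                  s.surrogate, s.direct_solve)
            for s in subdomains]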
A local hybrid surrogate-based finite element tearing interconnecting dual-primal method for nonsmooth random partial differential equations
A domain decomposition approach for high-dimensional random partial differential equations exploiting the localization of random parameters is presented. To obtain high efficiency, surrogate models in multielement representations in the parameter space are constructed locally when possible. The method makes use of a stochastic Galerkin finite element tearing interconnecting dual-primal formulation of the underlying problem with localized representations of the involved input random fields. Each local parameter space associated with a subdomain is explored by a subdivision into regions where either the parametric surrogate accuracy can be trusted or where one instead has to resort to Monte Carlo sampling. A heuristic adaptive algorithm carries out a problem-dependent hp-refinement in a stochastic multielement sense, anisotropically enlarging the trusted surrogate region as far as possible. This results in an efficient global parameter-to-solution sampling scheme making use of local parametric smoothness exploration for the surrogate construction. Adequately structured problems for this scheme occur naturally when uncertainties are defined on subdomains, for example, in a multiphysics setting, or when the Karhunen-Loève expansion of a random field can be localized. The efficiency of the proposed hybrid technique is assessed with numerical benchmark problems illustrating the identification of trusted (possibly higher order) surrogate regions and nontrusted sampling regions.
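The heuristic hp-refinement can be pictured as a simple recursive loop; the Python fragment below is a rough sketch under my own assumptions (an element object with lower/upper/dim/split, a fit_surrogate routine, a few verification solves per element) and does not reproduce the anisotropic refinement indicators of the paper.

import numpy as np

def refine_element(element, fit_surrogate, direct_solve, tol,
                   max_degree=5, max_depth=4, depth=0, n_check=8):
    """Return a list of (element, surrogate) pairs judged trustworthy."""
    rng = np.random.default_rng(depth)
    # verification points drawn uniformly inside the element box
    pts = rng.uniform(element.lower, element.upper, size=(n_check, element.dim))
    ref = np.array([direct_solve(p) for p in pts])
    for degree in range(1, max_degree + 1):            # p-refinement
        surrogate = fit_surrogate(element, degree)
        err = np.max(np.abs(np.array([surrogate(p) for p in pts]) - ref))
        if err < tol:
            return [(element, surrogate)]              # element marked trusted
    if depth < max_depth:                              # h-refinement by splitting
        return [pair for child in element.split()
                for pair in refine_element(child, fit_surrogate, direct_solve,
                                           tol, max_degree, max_depth, depth + 1)]
    return []                                          # left to Monte Carlo sampling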
Less interaction with forward models in Langevin dynamics
Ensemble methods have become ubiquitous for the solution of Bayesian inference problems. State-of-the-art Langevin samplers such as the Ensemble Kalman Sampler (EKS), Affine Invariant Langevin Dynamics (ALDI), or its extension using weighted covariance estimates rely on successive evaluations of the forward model or its gradient. A main drawback of these methods is hence the vast number of required forward calls, as well as a possible lack of convergence for more involved posterior measures such as multimodal distributions. The goal of this paper is to address these challenges to some extent. First, several adaptive ensemble enrichment strategies are discussed that successively enlarge the number of particles in the underlying Langevin dynamics and thereby lead to a significant reduction of the total number of forward calls. Second, analytical consistency guarantees of the ensemble enrichment method are provided for linear forward models. Third, to address more involved target distributions, the method is extended by applying adapted Langevin dynamics based on a homotopy formalism, for which convergence is proved. Finally, numerical investigations of several benchmark problems illustrate the possible gain of the proposed method in comparison to state-of-the-art Langevin samplers.
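For orientation, a minimal sketch of one affine invariant ensemble Langevin step together with a naive ensemble enrichment is given below. It follows the common ALDI-type Euler-Maruyama discretisation but is a deliberate simplification; the enrichment rule (resampling from the current empirical Gaussian) is only one conceivable strategy and not necessarily the one analysed in the paper.

import numpy as np

def ensemble_langevin_step(X, grad_log_post, dt, rng):
    """One Euler-Maruyama step for an ensemble X of shape (J, d)."""
    J, d = X.shape
    mean = X.mean(axis=0)
    A = (X - mean) / np.sqrt(J)                     # square-root factor, C = A.T @ A
    C = A.T @ A                                     # empirical covariance
    drift = np.array([C @ grad_log_post(x) for x in X]) \
            + (d + 1) / J * (X - mean)              # finite-ensemble correction term
    noise = np.sqrt(2.0 * dt) * rng.standard_normal((J, J)) @ A
    return X + dt * drift + noise

def enrich_ensemble(X, n_new, rng):
    # naive enrichment: draw extra particles from the current Gaussian estimate
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    return np.vstack([X, rng.multivariate_normal(mean, cov, size=n_new)])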
Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion
A novel method for the accurate functional approximation of possibly highly concentrated probability densities is developed. It is based on the combination of several modern techniques such as transport maps and nonintrusive reconstructions of low-rank tensor representations. The central idea is to carry out computations for statistical quantities of interest such as moments with a convenient reference measure which is approximated by a numerical transport, leading to a perturbed prior. Subsequently, a coordinate transformation leads to a beneficial setting for the further function approximation. An efficient layer-based transport construction is realized by using the Variational Monte Carlo (VMC) method. The convergence analysis covers all terms introduced by the different (deterministic and statistical) approximations in the Hellinger distance and the Kullback-Leibler divergence. Important applications are presented, and in particular the context of Bayesian inverse problems is illuminated, which is a central motivation for the developed approach. Several numerical examples illustrate the efficacy with densities of different complexity.
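The change-of-variables idea behind such transport constructions can be illustrated in a few lines of Python: samples of a standard normal reference are pushed through an approximate transport map, and self-normalised importance weights account for the transport being only approximate. The map T, its log-Jacobian-determinant, the unnormalised log_target and the Gaussian reference are assumptions chosen purely for the illustration.

import numpy as np
from scipy.stats import multivariate_normal

def target_moment(quantity, T, log_det_jac_T, log_target, dim, n=10_000, seed=0):
    """Estimate E_target[quantity] with reference samples pushed through T."""
    reference = multivariate_normal(mean=np.zeros(dim), cov=np.eye(dim))
    y = reference.rvs(size=n, random_state=seed)      # reference samples
    x = np.array([T(yi) for yi in y])                 # pushed-forward samples
    # weights correct for T being only an approximate transport to the target
    log_w = (np.array([log_target(xi) for xi in x])
             + np.array([log_det_jac_T(yi) for yi in y])
             - reference.logpdf(y))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                      # self-normalised weights
    q = np.array([np.atleast_1d(quantity(xi)) for xi in x])
    return np.sum(w[:, None] * q, axis=0)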
Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion
This paper presents a novel method for the accurate functional approximation of possibly highly concentrated probability densities. It is based on the combination of several modern techniques such as transport maps and low-rank approximations via a nonintrusive tensor train reconstruction. The central idea is to carry out computations for statistical quantities of interest such as moments based on a convenient representation of a reference density for which accurate numerical methods can be employed. Since the transport from target to reference can usually not be determined exactly, one has to cope with a perturbed reference density due to a numerically approximated transport map. By the introduction of a layered approximation and appropriate coordinate transformations, the problem is split into a set of independent approximations in separately chosen orthonormal basis functions, combining the notions of h- and p-refinement (i.e. "mesh size" and polynomial degree). An efficient low-rank representation of the perturbed reference density is achieved via the Variational Monte Carlo method. This nonintrusive regression technique reconstructs the map in the tensor train format. An a priori convergence analysis with respect to the error terms introduced by the different (deterministic and statistical) approximations in the Hellinger distance and the Kullback-Leibler divergence is derived. Important applications are presented, and in particular the context of Bayesian inverse problems is illuminated, which is a main motivation for the developed approach. Several numerical examples illustrate the efficacy with densities of different complexity and degrees of perturbation of the transport to the reference density. The (superior) convergence is demonstrated in comparison to Monte Carlo and Markov chain Monte Carlo methods.
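In its simplest form, the nonintrusive regression behind a Variational Monte Carlo reconstruction reduces to a linear least-squares fit of coefficients in an orthonormal basis from point evaluations. The fragment below illustrates this with a normalised Hermite product basis and a dense coefficient vector; the actual method works with a low-rank tensor train parametrisation, which is not reproduced here.

import numpy as np
from itertools import product
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def fit_orthonormal_surrogate(samples, values, degree):
    """Least-squares fit in a normalised (probabilists') Hermite product basis."""
    n, d = samples.shape
    multi_indices = [m for m in product(range(degree + 1), repeat=d)
                     if sum(m) <= degree]
    def basis(x, m):                                   # product of 1D Hermite terms
        return np.prod([hermeval(x[k], np.eye(m[k] + 1)[m[k]])
                        / np.sqrt(factorial(m[k])) for k in range(d)])
    G = np.array([[basis(x, m) for m in multi_indices] for x in samples])
    coeffs, *_ = np.linalg.lstsq(G, values, rcond=None)
    return multi_indices, coeffs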
Low rank surrogates for polymorphic fields with application to fuzzy-stochastic partial differential equations
We consider a general form of fuzzy-stochastic PDEs depending on the interaction of probabilistic and non-probabilistic ("possibilistic") influences. Such a combined modelling of aleatoric and epistemic uncertainties can, for instance, be applied beneficially in an engineering context for real-world applications where probabilistic modelling and expert knowledge have to be accounted for. We examine existence and well-definedness of polymorphic PDEs in appropriate function spaces. The fuzzy-stochastic dependence is described in a high-dimensional parameter space, thus easily leading to an exponential complexity in practical computations. To alleviate this severe obstacle in practice, a compressed low-rank approximation of the problem formulation and the solution is derived. This is based on the Hierarchical Tucker format, which is constructed from solution samples by a non-intrusive tensor reconstruction algorithm. The performance of the proposed model order reduction approach is demonstrated with two examples. One of these is the ubiquitous groundwater flow model with a Karhunen-Loève coefficient field, which is generalized by a fuzzy correlation length.
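A common way to propagate such a fuzzy parameter (e.g. the fuzzy correlation length) through a stochastic computation is the alpha-cut construction sketched below: for each membership level, interval bounds of a statistic of the stochastic solution are obtained by optimising over the cut, here over a cheap surrogate of that statistic. The triangular membership function and the function names are assumptions made for the illustration, not details taken from the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def alpha_cut_triangular(left, peak, right, alpha):
    """Interval [a, b] of a triangular fuzzy number at membership level alpha."""
    return left + alpha * (peak - left), right - alpha * (right - peak)

def propagate_fuzzy(statistic, left, peak, right, alphas):
    """Lower/upper bounds of a (surrogate-based) statistic per alpha-cut."""
    cuts = []
    for alpha in alphas:
        a, b = alpha_cut_triangular(left, peak, right, alpha)
        if b - a < 1e-12:                             # degenerate cut at the peak
            value = statistic(a)
            cuts.append((alpha, value, value))
            continue
        low = minimize_scalar(statistic, bounds=(a, b), method="bounded")
        high = minimize_scalar(lambda p: -statistic(p), bounds=(a, b), method="bounded")
        cuts.append((alpha, low.fun, -high.fun))
    return cuts

Calling propagate_fuzzy with a surrogate for, say, the mean hydraulic head and a grid of alpha levels would return the nested intervals that make up the fuzzy membership of that statistic.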
Numerical upscaling of parametric microstructures in a possibilistic uncertainty framework with tensor trains
A fuzzy arithmetic framework for the efficient possibilistic propagation of shape uncertainties based on a novel fuzzy edge detection method is introduced. The shape uncertainties stem from a blurred image that encodes the distribution of two phases in a composite material. The proposed framework employs computational homogenisation to upscale the shape uncertainty to an effective material with fuzzy material properties. For this, many samples of a linear elasticity problem have to be computed, which is significantly sped up by a highly accurate low-rank tensor surrogate. To ensure the continuity of the underlying mapping from shape parametrisation to the upscaled material behaviour, a diffeomorphism is constructed by generating an appropriate family of meshes via transformation of a reference mesh. The shape uncertainty is then propagated to measure the distance of the upscaled material to the isotropic and orthotropic material classes. Finally, the fuzzy effective material is used to compute bounds for the average displacement of a non-homogenised material with uncertain star-shaped inclusion shapes.
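One of the post-processing steps mentioned above, measuring how far an upscaled stiffness tensor is from the isotropic class, can be written down compactly when the effective tensor is available as a full fourth-order array. The sketch below uses the standard Frobenius-norm projection onto the volumetric and deviatoric projectors in 3D; it is a generic illustration rather than the paper's exact implementation (which may work in 2D and also treat the orthotropic class).

import numpy as np

def isotropy_distance(C):
    """Relative Frobenius distance of a 3x3x3x3 stiffness tensor C to isotropy."""
    I2 = np.eye(3)
    # symmetric fourth-order identity and volumetric/deviatoric projectors
    I_sym = 0.5 * (np.einsum('ik,jl->ijkl', I2, I2)
                   + np.einsum('il,jk->ijkl', I2, I2))
    P_vol = np.einsum('ij,kl->ijkl', I2, I2) / 3.0
    P_dev = I_sym - P_vol
    three_kappa = np.einsum('ijkl,ijkl->', C, P_vol)        # <C, P_vol>, projector rank 1
    two_mu = np.einsum('ijkl,ijkl->', C, P_dev) / 5.0       # <C, P_dev>, projector rank 5
    C_iso = three_kappa * P_vol + two_mu * P_dev            # closest isotropic tensor
    return np.linalg.norm(C - C_iso) / np.linalg.norm(C)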