109 research outputs found

    A strongly convergent numerical scheme from Ensemble Kalman inversion

    The Ensemble Kalman methodology in an inverse problems setting can be viewed as an iterative scheme, which is a weakly tamed discretization scheme for a certain stochastic differential equation (SDE). Assuming a suitable approximation result, dynamical properties of the SDE can be rigorously pulled back via the discrete scheme to the original Ensemble Kalman inversion. The results of this paper take a step towards closing the gap of the missing approximation result by proving strong convergence for a simplified model given by a scalar stochastic differential equation. We focus here on a toy model with properties similar to those arising in the context of the Ensemble Kalman filter. The proposed model can be interpreted as a single-particle filter for a linear map and thus forms the basis for further analysis. The difficulty in the analysis arises from the formally derived limiting SDE, whose nonlinearities are not globally Lipschitz continuous in either the drift or the diffusion. Here the standard Euler-Maruyama scheme may fail to be strongly convergent, and taming is necessary. In contrast to the strong taming usually used, the method presented here provides a weaker form of taming. We present a strong convergence analysis by first proving convergence on a domain of high probability using a cut-off, or localisation; combined with moment bounds for both the SDE and the numerical scheme, a bootstrapping argument then yields strong convergence.
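    As an illustrative sketch of the taming idea, the following is a minimal tamed Euler-Maruyama step for a toy SDE with a non-globally Lipschitz (cubic) drift. The specific SDE, function names, and parameters below are assumptions for illustration, not the scheme analyzed in the paper:

```python
import numpy as np

def tamed_euler_maruyama(a, b, x0, T, n_steps, rng):
    """Tamed Euler-Maruyama for dX = a(X) dt + b(X) dW.

    The drift increment is damped by the factor 1 / (1 + h*|a(X)|), so a
    superlinearly growing drift cannot blow up the discrete scheme the way
    it can for the untamed Euler-Maruyama method.
    """
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h))
        x = x + a(x) * h / (1.0 + h * abs(a(x))) + b(x) * dw
    return x

rng = np.random.default_rng(0)
# Toy SDE with cubic (non-globally Lipschitz) drift: dX = -X^3 dt + X dW.
x_T = tamed_euler_maruyama(lambda x: -x**3, lambda x: x, 1.0, 1.0, 1000, rng)
```

    The taming factor vanishes as the step size h tends to zero, so on bounded regions the scheme agrees with Euler-Maruyama to leading order.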

    On the Convergence of the Laplace Approximation and Noise-Level-Robustness of Laplace-based Monte Carlo Methods for Bayesian Inverse Problems

    The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters and models. We are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in case of concentrated posterior measures. The concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. However, it can pose a computational challenge for numerical methods based on the prior or reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a posteriori estimate with covariance matrix depending on the log-posterior density. We discuss convergence results for the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling and Laplace-based quasi-Monte Carlo methods are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based importance sampling and plain quasi-Monte Carlo are not. Numerical experiments are presented to illustrate the theoretical findings.
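    A minimal scalar sketch of Laplace-based importance sampling in the spirit described above: sample from the Gaussian Laplace approximation and reweight by the ratio of target to proposal. The Gaussian sanity-check example and all names below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def laplace_importance_sampling(neg_log_post, map_point, hess, n_samples, rng):
    """Estimate the normalizing constant Z = integral of exp(-neg_log_post(x))
    by importance sampling from the scalar Laplace approximation
    N(map_point, 1/hess), where hess is the second derivative of
    neg_log_post at the MAP point."""
    sigma = 1.0 / np.sqrt(hess)
    xs = rng.normal(map_point, sigma, n_samples)
    # log-density of the Gaussian proposal q
    log_q = -0.5 * ((xs - map_point) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    log_w = -neg_log_post(xs) - log_q  # importance weights p/q on the log scale
    return float(np.mean(np.exp(log_w)))

rng = np.random.default_rng(1)
# Sanity check on a Gaussian "posterior": neg log density 0.5*x**2 has
# Z = sqrt(2*pi), and here the Laplace proposal matches the target exactly.
Z_hat = laplace_importance_sampling(lambda x: 0.5 * x**2, 0.0, 1.0, 10_000, rng)
```

    When the posterior concentrates around the MAP point, the proposal tracks the target, which is the intuition behind the noise-level-robustness results discussed in the abstract.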

    MAP estimators for nonparametric Bayesian inverse problems in Banach spaces

    In order to rigorously define maximum a posteriori estimators for nonparametric Bayesian inverse problems with general Banach space valued parameters, we derive and prove certain previously postulated but unproven bounds on small ball probabilities. This allows us to prove the existence of MAP estimators in the Banach space setting under very mild assumptions on the log-likelihood. Since a comparable statement has so far (to the author's knowledge) existed only in the Hilbert space setting, this closes an important gap in the literature.

    Well Posedness and Convergence Analysis of the Ensemble Kalman Inversion

    The ensemble Kalman inversion is widely used in practice to estimate unknown parameters from noisy measurement data. Its low computational cost, straightforward implementation, and non-intrusive nature make the method appealing in various areas of application. We present a complete analysis of the ensemble Kalman inversion with perturbed observations for a fixed ensemble size when applied to linear inverse problems. The well-posedness and convergence results are based on continuous-time scaling limits of the method. The resulting coupled system of stochastic differential equations allows us to derive estimates on the long-time behaviour and provides insights into the convergence properties of the ensemble Kalman inversion. We view the method as a derivative-free optimization method for the least-squares misfit functional, which opens up the perspective of using the method in various areas of application, such as imaging, groundwater flow problems, and biological problems, as well as in the training of neural networks.
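    A minimal sketch of one ensemble Kalman inversion step with perturbed observations for a linear forward map y = A u + noise. The specific matrices, ensemble size, and number of iterations below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

def eki_step(ensemble, A, y, Gamma, rng):
    """One ensemble Kalman inversion step with perturbed observations.

    ensemble is a (J, d) array of particles; each member is nudged by a
    Kalman-type gain built from the empirical ensemble covariances.
    """
    preds = ensemble @ A.T                      # forward map of each member
    du = ensemble - ensemble.mean(axis=0)       # parameter deviations
    dp = preds - preds.mean(axis=0)             # prediction deviations
    J = ensemble.shape[0]
    C_up = du.T @ dp / J                        # parameter/prediction cross-covariance
    C_pp = dp.T @ dp / J                        # prediction covariance
    gain = C_up @ np.linalg.inv(C_pp + Gamma)   # Kalman-type gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return ensemble + (y_pert - preds) @ gain.T

rng = np.random.default_rng(2)
A = np.array([[1.0, 0.5], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
Gamma = 0.01 * np.eye(2)
y = A @ u_true
ensemble = rng.normal(0.0, 1.0, size=(50, 2))
for _ in range(30):
    ensemble = eki_step(ensemble, A, y, Gamma, rng)
u_hat = ensemble.mean(axis=0)   # ensemble mean approaches a least-squares fit
```

    Note that the update only evaluates the forward map, never its derivative, which is the non-intrusive, derivative-free character mentioned in the abstract.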

    Maximum a posteriori estimators in ℓp are well-defined for diagonal Gaussian priors

    We prove that maximum a posteriori estimators are well-defined for diagonal Gaussian priors μ on ℓp under common assumptions on the potential Φ. Further, we show connections to the Onsager-Machlup functional and provide a corrected and strongly simplified proof in the Hilbert space case p = 2, previously established by Dashti et al (2013 Inverse Problems 29 095017) and Kretschmann (2019 PhD Thesis). These corrections do not generalize to the setting 1 ≤ p < ∞, which requires a novel convexification result for the difference between the Cameron-Martin norm and the p-norm.

    Pricing Processes for Component Manufacturers in Mechanical and Plant Engineering


    Nested Sampling for Uncertainty Quantification and Rare Event Estimation

    Nested Sampling is a method for computing the Bayesian evidence, also called the marginal likelihood, which is the integral of the likelihood with respect to the prior. More generally, it is a numerical probabilistic quadrature rule. The main idea of Nested Sampling is to replace a high-dimensional likelihood integral over parameter space with an integral over the unit interval by employing a push-forward with respect to a suitable transformation. In practice, a set of active samples ascends the level sets of the integrand, with the measure contraction of the super-level sets estimated statistically. We justify the validity of this approach for integrands with non-negligible plateaus and demonstrate Nested Sampling's practical effectiveness in estimating the (log-)probability of rare events.
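    A minimal one-dimensional sketch of the Nested Sampling idea: a set of live points ascends the likelihood level sets, and the prior volume of each super-level set is estimated to shrink geometrically. The uniform prior on [0, 1], rejection-sampling step, and all names and parameters below are illustrative assumptions:

```python
import numpy as np

def nested_sampling(loglike, n_live, n_iter, rng):
    """Minimal Nested Sampling with a uniform prior on [0, 1].

    Each iteration discards the live point with the lowest likelihood,
    credits it with the prior volume shed at that step, and replaces it by
    a fresh prior draw above the likelihood threshold (here via simple
    rejection sampling). The super-level-set volume is estimated to shrink
    by a factor n_live / (n_live + 1) per iteration.
    """
    live = rng.uniform(0.0, 1.0, n_live)
    ll = np.array([loglike(x) for x in live])
    log_X = 0.0          # log of the current estimated prior volume
    Z = 0.0
    for _ in range(n_iter):
        i = int(np.argmin(ll))
        w = np.exp(log_X) / (n_live + 1.0)   # volume shed by this shell
        Z += np.exp(ll[i]) * w
        log_X += np.log(n_live / (n_live + 1.0))
        while True:                          # constrained prior draw
            x = rng.uniform(0.0, 1.0)
            if loglike(x) > ll[i]:
                live[i], ll[i] = x, loglike(x)
                break
    Z += np.exp(ll).mean() * np.exp(log_X)   # remaining live points
    return Z

rng = np.random.default_rng(3)
# Likelihood L(x) = 2x on [0, 1]: the evidence is the integral of 2x dx = 1.
Z_hat = nested_sampling(lambda x: np.log(2.0 * x), 100, 600, rng)
```

    For rare-event estimation, the same geometric volume contraction lets the scheme reach likelihood thresholds whose prior probability is far too small for plain prior sampling.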