Convergence Analysis of Ensemble Kalman Inversion: The Linear, Noisy Case
We present an analysis of ensemble Kalman inversion, based on the continuous
time limit of the algorithm. The analysis of the dynamical behaviour of the
ensemble allows us to establish well-posedness and convergence results for a
fixed ensemble size. We will build on the results presented in [26] and
generalise them to the case of noisy observational data, in particular the
influence of the noise on the convergence will be investigated, both
theoretically and numerically. We focus on linear inverse problems, where a complete theoretical analysis is possible.
Analysis of the ensemble Kalman filter for inverse problems
The ensemble Kalman filter (EnKF) is a widely used methodology for state
estimation in partial, noisily observed dynamical systems, and for parameter
estimation in inverse problems. Despite its widespread use in the geophysical
sciences, and its gradual adoption in many other areas of application, analysis
of the method is in its infancy. Furthermore, much of the existing analysis
deals with the large ensemble limit, far from the regime in which the method is
typically used. The goal of this paper is to analyze the method when applied to
inverse problems with fixed ensemble size. A continuous-time limit is derived
and the long-time behavior of the resulting dynamical system is studied. Most
of the rigorous analysis is confined to the linear forward problem, where we
demonstrate that the continuous time limit of the EnKF corresponds to a set of
gradient flows for the data misfit in each ensemble member, coupled through a
common pre-conditioner which is the empirical covariance matrix of the
ensemble. Numerical results demonstrate that the conclusions of the analysis
extend beyond the linear inverse problem setting. Numerical experiments are
also given which demonstrate the benefits of various extensions of the basic
methodology.
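The gradient-flow structure described in this abstract can be illustrated with a short sketch (a toy linear problem of our own choosing, not the authors' code): each ensemble member performs gradient descent on its data misfit, preconditioned by the empirical covariance of the whole ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A u (illustrative; all sizes are our own choices).
d, m, K = 5, 3, 10                 # parameter dim, data dim, ensemble size
A = rng.normal(size=(m, d))        # linear forward operator
u_true = rng.normal(size=d)
y = A @ u_true                     # noise-free data, identity noise covariance

U = rng.normal(size=(K, d))        # initial ensemble, one member per row

def misfit(U):
    """Mean data misfit 0.5 |A u_j - y|^2 over the ensemble."""
    r = U @ A.T - y
    return 0.5 * np.mean(np.sum(r ** 2, axis=1))

h = 0.01                           # Euler step for the continuous-time limit
m_init = misfit(U)
for _ in range(2000):
    ubar = U.mean(axis=0)
    C = (U - ubar).T @ (U - ubar) / K   # empirical covariance = preconditioner
    grad = (U @ A.T - y) @ A            # misfit gradient A^T (A u_j - y), row-wise
    U = U - h * grad @ C                # coupled preconditioned gradient flows
m_final = misfit(U)                     # should be well below m_init
```

Because the covariance C couples the members, the ensemble stays in the affine span of its initial positions, one of the structural facts exploited in the analysis.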
A strongly convergent numerical scheme from Ensemble Kalman inversion
The Ensemble Kalman methodology in an inverse problems setting can be viewed
as an iterative scheme, which is a weakly tamed discretization scheme for a
certain stochastic differential equation (SDE). Assuming a suitable
approximation result, dynamical properties of the SDE can be rigorously pulled
back via the discrete scheme to the original Ensemble Kalman inversion.
The results of this paper make a step towards closing the gap of the missing
approximation result by proving a strong convergence result in a simplified
model of a scalar stochastic differential equation. We focus here on a toy
model with properties similar to those of the equation arising in the context
of the Ensemble Kalman filter. The proposed model can be interpreted as a single-particle
filter for a linear map and thus forms the basis for further analysis. The
difficulty in the analysis arises from the formally derived limiting SDE with
non-globally Lipschitz continuous nonlinearities both in the drift and in the
diffusion. Here the standard Euler-Maruyama scheme might fail to provide a
strongly convergent numerical scheme and taming is necessary. In contrast to
the strong taming usually used, the method presented here provides a weaker
form of taming.
We present a strong convergence analysis: we first prove convergence on a
domain of high probability via a cut-off (localisation) argument; combined
with moment bounds for both the SDE and the numerical scheme, a bootstrapping
argument then yields strong convergence.
On the Convergence of the Laplace Approximation and Noise-Level-Robustness of Laplace-based Monte Carlo Methods for Bayesian Inverse Problems
The Bayesian approach to inverse problems provides a rigorous framework for
the incorporation and quantification of uncertainties in measurements,
parameters and models. We are interested in designing numerical methods which
are robust w.r.t. the size of the observational noise, i.e., methods which
behave well in case of concentrated posterior measures. The concentration of
the posterior is a highly desirable situation in practice, since it relates to
informative or large data. However, it can pose a computational challenge for
numerical methods based on the prior or reference measure. We propose to employ
the Laplace approximation of the posterior as the base measure for numerical
integration in this context. The Laplace approximation is a Gaussian measure
centered at the maximum a-posteriori (MAP) estimate, with covariance matrix
given by the inverse Hessian of the negative log-posterior density. We discuss
convergence results of the
Laplace approximation in terms of the Hellinger distance and analyze the
efficiency of Monte Carlo methods based on it. In particular, we show that
Laplace-based importance sampling and Laplace-based quasi-Monte-Carlo methods
are robust w.r.t. the concentration of the posterior for large classes of
posterior distributions and integrands whereas prior-based importance sampling
and plain quasi-Monte Carlo are not. Numerical experiments are presented to
illustrate the theoretical findings.
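The Laplace-based importance sampling idea can be sketched in one dimension (a toy linear model of our own, for which the Laplace approximation happens to be exact, so the importance weights come out flat; for nonlinear forward maps the weights correct the Gaussian bias):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D Bayesian inverse problem (illustrative): y = u + eta,
# prior N(0, 1), noise N(0, sigma^2); small sigma -> concentrated posterior.
sigma, y = 0.05, 1.0
log_post = lambda u: -0.5 * u ** 2 - 0.5 * (y - u) ** 2 / sigma ** 2  # unnormalized

# Laplace approximation: Gaussian at the MAP with covariance equal to the
# inverse Hessian of the negative log-posterior (closed form for this model).
u_map = y / (1 + sigma ** 2)
hess = 1 + 1 / sigma ** 2
lap_std = 1 / np.sqrt(hess)

# Self-normalized importance sampling with the Laplace measure as base.
n = 2000
u = rng.normal(u_map, lap_std, size=n)
log_q = -0.5 * ((u - u_map) / lap_std) ** 2   # Laplace log-density (unnormalized)
logw = log_post(u) - log_q
w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = np.sum(w * u)   # close to u_map even though sigma is tiny
```

A prior-based sampler would place almost all of its N(0, 1) draws far from the concentrated posterior, so its weights would degenerate as sigma shrinks; this is precisely the robustness gap the abstract describes.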
Sampling Sup-Normalized Spectral Functions for Brown-Resnick Processes
Sup-normalized spectral functions form building blocks of max-stable and
Pareto processes and therefore play an important role in modeling spatial
extremes. For one of the most popular examples, the Brown-Resnick process,
simulation is not straightforward. In this paper, we generalize two approaches
for simulation via Markov Chain Monte Carlo methods and rejection sampling by
introducing new classes of proposal densities. In both cases, we provide an
optimal choice of the proposal density with respect to sampling efficiency. The
performance of the procedures is demonstrated in an example.
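As background for the rejection-sampling route, here is a generic rejection sampler (illustrative only; the paper designs proposal densities tailored to Brown-Resnick spectral functions, which we do not reproduce here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Target density f(x) proportional to exp(-x^4), Gaussian proposal
# g(x) proportional to exp(-x^2 / 2). Rejection sampling needs M with
# f <= M g everywhere; here max over x of (-x^4 + x^2/2) = 1/16 (at x^2 = 1/4),
# so log M = 1/16.
log_M = 1.0 / 16.0

def sample(n):
    out = []
    while len(out) < n:
        x = rng.normal()                                 # draw from the proposal
        log_accept = (-x ** 4 + 0.5 * x ** 2) - log_M    # log of f / (M g)
        if np.log(rng.uniform()) < log_accept:
            out.append(x)
    return np.array(out)

xs = sample(5000)   # draws from the target; sample mean should be near 0
```

Choosing the proposal to minimize M, and hence maximize the acceptance probability, is exactly the kind of optimality criterion the paper analyzes for its new proposal classes.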
Ensemble Kalman filter for neural network based one-shot inversion
We study the use of novel techniques arising in machine learning for inverse
problems. Our approach replaces the complex forward model by a neural network,
which is trained simultaneously in a one-shot sense when estimating the unknown
parameters from data, i.e., the neural network is trained only for the
particular unknown parameter being estimated. By establishing a link to the
Bayesian approach to inverse problems,
an algorithmic framework is developed which ensures the feasibility of the
parameter estimate w.r.t. the forward model. We propose an efficient,
derivative-free optimization method based on variants of the ensemble Kalman
inversion. Numerical experiments show that the ensemble Kalman filter for
neural network based one-shot inversion is a promising direction combining
optimization and machine learning techniques for inverse problems.
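A minimal sketch of the derivative-free idea, with a toy one-hidden-layer network and sizes of our own choosing (not the authors' architecture): ensemble Kalman inversion updates an ensemble of weight vectors using only forward evaluations, with no backpropagation.

```python
import numpy as np

rng = np.random.default_rng(5)

x = np.linspace(-1.0, 1.0, 20)

def net(w, x):
    # Toy network with 3 tanh hidden units: w = (a, b, c) flattened.
    a, b, c = w[:3], w[3:6], w[6:9]
    return np.tanh(np.outer(x, a) + b) @ c

w_true = rng.normal(size=9)
y = net(w_true, x)                    # synthetic training data

K = 50
W = rng.normal(size=(K, 9))           # ensemble of candidate weight vectors
Gamma = 1e-2 * np.eye(len(x))         # regularizing "noise" covariance

def loss(W):
    return np.mean([np.sum((net(w, x) - y) ** 2) for w in W])

loss_init = loss(W)
for _ in range(40):
    G = np.array([net(w, x) for w in W])       # forward evaluations only
    Wb, Gb = W.mean(axis=0), G.mean(axis=0)
    Cwg = (W - Wb).T @ (G - Gb) / K            # weight-output covariance
    Cgg = (G - Gb).T @ (G - Gb) / K            # output-output covariance
    W = W + (y - G) @ np.linalg.solve(Cgg + Gamma, Cwg.T)
loss_final = loss(W)                           # should drop well below loss_init
```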
Quantification of airfoil geometry-induced aerodynamic uncertainties - comparison of approaches
Uncertainty quantification in aerodynamic simulations calls for efficient
numerical methods since it is computationally expensive, especially for the
uncertainties caused by random geometry variations which involve a large number
of variables. This paper compares five methods, including quasi-Monte Carlo
quadrature, polynomial chaos with coefficients determined by sparse quadrature
and gradient-enhanced version of Kriging, radial basis functions and point
collocation polynomial chaos, in their efficiency in estimating statistics of
aerodynamic performance upon random perturbation to the airfoil geometry which
is parameterized by 9 independent Gaussian variables. The results show that
gradient-enhanced surrogate methods achieve better accuracy than direct
integration methods at the same computational cost.
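As a flavour of the quasi-Monte Carlo ingredient, here is a sketch (with a cheap analytic stand-in for the aerodynamic response, not the actual CFD model): a Halton sequence in [0,1)^9 is mapped through the inverse normal CDF to obtain low-discrepancy points for the 9 Gaussian geometry variables.

```python
import numpy as np
from statistics import NormalDist

def halton(n, dim):
    """First n points of the Halton sequence in [0,1)^dim (radical inverse)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
    pts = np.empty((n, dim))
    for j in range(dim):
        base = primes[j]
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            pts[i, j] = x
    return pts

# Map to 9 standard Gaussians and average a smooth stand-in response
# f(xi) = exp(0.1 * sum(xi)); for iid N(0,1) its exact mean is exp(9 * 0.01 / 2).
inv_cdf = np.vectorize(NormalDist().inv_cdf)
xi = inv_cdf(halton(4096, 9))
estimate = np.mean(np.exp(0.1 * xi.sum(axis=1)))
exact = np.exp(9 * 0.01 / 2)       # about 1.046; estimate should be very close
```

For smooth integrands such as this one, the low-discrepancy points typically beat plain Monte Carlo at the same sample count, which is the regime the comparison in the paper probes.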
Well Posedness and Convergence Analysis of the Ensemble Kalman Inversion
The ensemble Kalman inversion is widely used in practice to estimate unknown
parameters from noisy measurement data. Its low computational costs,
straightforward implementation, and non-intrusive nature make the method
appealing in various areas of application. We present a complete analysis of
the ensemble Kalman inversion with perturbed observations for a fixed ensemble
size when applied to linear inverse problems. The well-posedness and
convergence results are based on the continuous time scaling limits of the
method. The resulting coupled system of stochastic differential equations
allows us to derive estimates on the long-time behaviour and provides insights
into the convergence properties of the ensemble Kalman inversion. We view the
method as a derivative free optimization method for the least-squares misfit
functional, which opens up the perspective of using the method in various areas
of application such as imaging, groundwater flow, and biological problems, as
well as in the training of neural networks.
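The perturbed-observation iteration the abstract analyzes can be sketched as follows (a small linear toy problem of our own; the notation mirrors the usual Kalman update):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear inverse problem (illustrative sizes): y = A u + noise.
d, m, K = 4, 6, 20
A = rng.normal(size=(m, d))
u_true = rng.normal(size=d)
gamma = 0.01
y = A @ u_true + rng.normal(scale=np.sqrt(gamma), size=m)
Gamma = gamma * np.eye(m)

U = rng.normal(size=(K, d))                    # initial ensemble

def eki_step(U):
    G = U @ A.T                                # forward evaluations, one row per member
    Ub, Gb = U.mean(axis=0), G.mean(axis=0)
    Cup = (U - Ub).T @ (G - Gb) / K            # parameter-observation covariance
    Cpp = (G - Gb).T @ (G - Gb) / K            # observation-observation covariance
    Y = y + rng.normal(scale=np.sqrt(gamma), size=(K, m))  # perturbed observations
    return U + (Y - G) @ np.linalg.solve(Cpp + Gamma, Cup.T)

def data_misfit(U):
    return np.mean(np.sum((U @ A.T - y) ** 2, axis=1))

misfit_init = data_misfit(U)
for _ in range(30):
    U = eki_step(U)
misfit_final = data_misfit(U)   # decays towards the noise level
```

Only forward evaluations of A enter the update, which is why the abstract can view the iteration as a derivative-free optimizer for the least-squares misfit.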
How Self-Published Content Reaches the Public Library
When the e-book reader was popularized in the late 2000s, the book industry as a whole
soon had to adapt to mass readership of e-books. The rise of the e-book also meant the
rise of new sources of content – in particular, digitally self-published works. In the United
States, public libraries quickly established cooperative infrastructures that offered patrons
standardized access to self-published e-books. These new library infrastructures were
developed in the hopes of fostering a greater democratization of public writing and reading,
and have also had far-reaching consequences for library licensing practices
that persist to the present day.
Based on a series of interviews with pioneers involved in the process of bringing self-published
content into the public library, this work is a contribution to early internet studies and traces
the emergence of innovative digital infrastructures in the public library.