On dimension reduction in Gaussian filters
A priori dimension reduction is a widely adopted technique for reducing the
computational complexity of stationary inverse problems. In this setting, the
solution of an inverse problem is parameterized by a low-dimensional basis that
is often obtained from the truncated Karhunen-Loève expansion of the prior
distribution. For high-dimensional inverse problems equipped with smoothing
priors, this technique can lead to drastic reductions in parameter dimension
and significant computational savings.
In this paper, we extend the concept of a priori dimension reduction to
non-stationary inverse problems, in which the goal is to sequentially infer the
state of a dynamical system. Our approach proceeds in an offline-online
fashion. We first identify a low-dimensional subspace in the state space before
solving the inverse problem (the offline phase), using either the method of
"snapshots" or regularized covariance estimation. Then this subspace is used to
reduce the computational complexity of various filtering algorithms - including
the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within
a novel subspace-constrained Bayesian prediction-and-update procedure (the
online phase). We demonstrate the performance of our new dimension reduction
approach on various numerical examples. In some test cases, our approach
reduces the dimensionality of the original problem by orders of magnitude and
yields up to two orders of magnitude in computational savings.
Optimal low-rank approximations of Bayesian linear inverse problems
In the Bayesian approach to inverse problems, data are often informative,
relative to the prior, only on a low-dimensional subspace of the parameter
space. Significant computational savings can be achieved by using this subspace
to characterize and approximate the posterior distribution of the parameters.
We first investigate approximation of the posterior covariance matrix as a
low-rank update of the prior covariance matrix. We prove optimality of a
particular update, based on the leading eigendirections of the matrix pencil
defined by the Hessian of the negative log-likelihood and the prior precision,
for a broad class of loss functions. This class includes the Förstner
metric for symmetric positive definite matrices, as well as the
Kullback-Leibler divergence and the Hellinger distance between the associated
distributions. We also propose two fast approximations of the posterior mean
and prove their optimality with respect to a weighted Bayes risk under
squared-error loss. These approximations are deployed in an offline-online
manner, where a more costly but data-independent offline calculation is
followed by fast online evaluations. As a result, these approximations are
particularly useful when repeated posterior mean evaluations are required for
multiple data sets. We demonstrate our theoretical results with several
numerical examples, including high-dimensional X-ray tomography and an inverse
heat conduction problem. In both of these examples, the intrinsic
low-dimensional structure of the inference problem can be exploited while
producing results that are essentially indistinguishable from solutions
computed in the full space.
Computational methods for large-scale inverse problems: a survey on hybrid projection methods
This paper surveys an important class of methods that combine iterative projection methods and variational regularization methods for large-scale inverse problems. Iterative methods such as Krylov subspace methods are invaluable in the numerical linear algebra community and have proved important in solving inverse problems due to their inherent regularizing properties and their ability to handle large-scale problems. Variational regularization describes a broad and important class of methods that are used to obtain reliable solutions to inverse problems, whereby one solves a modified problem that incorporates prior knowledge. Hybrid projection methods combine iterative projection methods with variational regularization techniques in a synergistic way, providing researchers with a powerful computational framework for solving very large inverse problems. Although the idea of a hybrid Krylov method for linear inverse problems goes back to the 1980s, several recent advances on new regularization frameworks and methodologies have made this field ripe for extensions, further analyses, and new applications. In this paper, we provide a practical and accessible introduction to hybrid projection methods in the context of solving large (linear) inverse problems.
Enabling and interpreting hyper-differential sensitivity analysis for Bayesian inverse problems
Inverse problems constrained by partial differential equations (PDEs) play a
critical role in model development and calibration. In many applications, there
are multiple uncertain parameters in a model which must be estimated. Although
the Bayesian formulation is attractive for such problems, computational cost
and high dimensionality frequently prohibit a thorough exploration of the
parametric uncertainty. A common approach is to reduce the dimension by fixing
some parameters (which we will call auxiliary parameters) to a best estimate
and using techniques from PDE-constrained optimization to approximate
properties of the Bayesian posterior distribution. For instance, the maximum a
posteriori probability (MAP) and the Laplace approximation of the posterior
covariance can be computed. In this article, we propose using
hyper-differential sensitivity analysis (HDSA) to assess the sensitivity of the
MAP point to changes in the auxiliary parameters. We establish an
interpretation of HDSA as correlations in the posterior distribution.
Foundational assumptions for HDSA require that the optimality conditions be
satisfied, which is not always feasible or appropriate as a result of
ill-posedness in the inverse problem. We introduce novel theoretical and
computational approaches to justify and enable HDSA for ill-posed inverse
problems by projecting the sensitivities onto likelihood-informed subspaces and
defining a posteriori updates. Our proposed framework is demonstrated on a
nonlinear multi-physics inverse problem motivated by estimation of spatially
heterogeneous material properties in the presence of spatially distributed
parametric modeling uncertainties.