Optimal low-rank approximations of Bayesian linear inverse problems
In the Bayesian approach to inverse problems, data are often informative,
relative to the prior, only on a low-dimensional subspace of the parameter
space. Significant computational savings can be achieved by using this subspace
to characterize and approximate the posterior distribution of the parameters.
We first investigate approximation of the posterior covariance matrix as a
low-rank update of the prior covariance matrix. We prove optimality of a
particular update, based on the leading eigendirections of the matrix pencil
defined by the Hessian of the negative log-likelihood and the prior precision,
for a broad class of loss functions. This class includes the Förstner
metric for symmetric positive definite matrices, as well as the
Kullback-Leibler divergence and the Hellinger distance between the associated
distributions. We also propose two fast approximations of the posterior mean
and prove their optimality with respect to a weighted Bayes risk under
squared-error loss. These approximations are deployed in an offline-online
manner, where a more costly but data-independent offline calculation is
followed by fast online evaluations. As a result, these approximations are
particularly useful when repeated posterior mean evaluations are required for
multiple data sets. We demonstrate our theoretical results with several
numerical examples, including high-dimensional X-ray tomography and an inverse
heat conduction problem. In both of these examples, the intrinsic
low-dimensional structure of the inference problem can be exploited while
producing results that are essentially indistinguishable from solutions
computed in the full space.
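The covariance update described above can be written down concretely from the generalized eigenpairs of the pencil named in the abstract. The following is a minimal dense-matrix sketch, assuming the Hessian H of the negative log-likelihood and the prior covariance Gamma_pr are formed explicitly (at the scale of the paper's examples both would be handled matrix-free); the function name and the truncation rank r are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def lowrank_posterior_cov(H, Gamma_pr, r):
    """Approximate the posterior covariance (H + Gamma_pr^{-1})^{-1}
    by a rank-r negative update of the prior covariance, built from
    the leading eigenpairs of the pencil (H, Gamma_pr^{-1})."""
    Gamma_pr_inv = np.linalg.inv(Gamma_pr)
    # eigh(A, B) solves A w = lam * B w, with eigenvectors normalized
    # so that W^T B W = I; eigenvalues are returned in ascending order.
    lam, W = eigh(H, Gamma_pr_inv)
    lam, W = lam[::-1][:r], W[:, ::-1][:, :r]   # keep the r leading pairs
    # rank-r correction: subtract sum_i lam_i/(1+lam_i) * w_i w_i^T
    return Gamma_pr - (W * (lam / (1.0 + lam))) @ W.T
```

With all eigenpairs retained the update reproduces the posterior covariance exactly, so truncation is accurate precisely when the data are informative, relative to the prior, only on a low-dimensional subspace.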
Some matrix nearness problems suggested by Tikhonov regularization
The numerical solution of linear discrete ill-posed problems typically
requires regularization, i.e., replacement of the available ill-conditioned
problem by a nearby better conditioned one. The most popular regularization
methods for problems of small to moderate size are Tikhonov regularization and
truncated singular value decomposition (TSVD). By considering matrix nearness
problems related to Tikhonov regularization, several novel regularization
methods are derived. These methods share properties with both Tikhonov
regularization and TSVD, and can give approximate solutions of higher quality
than either of these methods.
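For concreteness, the two baselines named in the abstract can be stated in a few lines; this is a minimal sketch for small dense problems only, and the paper's nearness-derived methods are not reproduced here.

```python
import numpy as np

def tikhonov(A, b, mu):
    """Tikhonov solution: x = argmin ||A x - b||^2 + mu * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

def tsvd(A, b, k):
    """Truncated-SVD solution: invert only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

Both replace the ill-conditioned problem by a nearby better-conditioned one: Tikhonov damps the contribution of small singular values smoothly, while TSVD discards them outright.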
An Efficient Approach for Computing Optimal Low-Rank Regularized Inverse Matrices
Standard regularization methods that are used to compute solutions to
ill-posed inverse problems require knowledge of the forward model. In many
real-life applications, the forward model is not known, but training data is
readily available. In this paper, we develop a new framework that uses training
data, as a substitute for knowledge of the forward model, to compute an optimal
low-rank regularized inverse matrix directly, allowing for very fast
computation of a regularized solution. We consider a statistical framework
based on Bayes and empirical Bayes risk minimization to analyze theoretical
properties of the problem. We propose an efficient rank update approach for
computing an optimal low-rank regularized inverse matrix for various error
measures. Numerical experiments demonstrate the benefits and potential
applications of our approach to problems in signal and image processing.
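One standard way to formalize "training data as a substitute for the forward model" is a reduced-rank least-squares fit over training pairs. The sketch below implements that generic formulation, not the paper's own rank-update algorithm or its alternative error measures; the variable layout (columns of X and B as matched signal/data pairs) and the function name are assumptions.

```python
import numpy as np

def learned_lowrank_inverse(X, B, r):
    """Fit a rank-r reconstruction matrix Z minimizing the empirical
    risk sum_i ||Z b_i - x_i||^2 over training pairs.

    X : n x m array, columns are training signals x_i
    B : p x m array, columns are the corresponding data b_i
    r : target rank of Z
    """
    # Project the targets onto the row space of the data matrix, then
    # take the best rank-r fit (reduced-rank regression, Eckart-Young).
    B_pinv = np.linalg.pinv(B)
    Y = X @ B_pinv @ B                   # X projected onto rowspace(B)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_r = (U[:, :r] * s[:r]) @ Vt[:r]    # best rank-r approximation
    return Y_r @ B_pinv                  # rank-r mapping: data -> signal

# usage for new data b_new: x_hat = Z @ b_new, a single matrix-vector
# product, which is what makes the online reconstruction fast.
```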
A GCV based Arnoldi-Tikhonov regularization method
For the solution of linear discrete ill-posed problems, in this paper we
consider the Arnoldi-Tikhonov method coupled with Generalized Cross
Validation (GCV) for the computation of the regularization parameter at each
iteration. We study the convergence behavior of the Arnoldi method and its
properties for the approximation of the (generalized) singular values, under
the hypothesis that the Picard condition is satisfied. Numerical experiments on
classical test problems and on image restoration are presented.
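The two ingredients named in the abstract, the Arnoldi projection and the GCV-based parameter choice, can be sketched as follows, assuming a square matrix A. This compresses the method into a single projection of fixed dimension m with a grid search over mu, whereas the paper updates the parameter adaptively at each iteration; breakdown handling is omitted.

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process (modified Gram-Schmidt): A V[:, :m] = V H, with
    V n x (m+1) orthonormal and H (m+1) x m upper Hessenberg."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)   # assumed nonzero (no breakdown)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov_gcv(A, b, m, mus=np.logspace(-12, 2, 200)):
    """Solve the projected problem min ||H y - beta e1||^2 + mu ||y||^2,
    choosing mu as the minimizer of the GCV function on a grid."""
    V, H = arnoldi(A, b, m)
    c = np.zeros(m + 1)
    c[0] = np.linalg.norm(b)              # beta * e1
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    Utc = U.T @ c
    c_perp2 = c @ c - Utc @ Utc           # residual outside range(H)
    def gcv(mu):
        f = s**2 / (s**2 + mu)            # Tikhonov filter factors
        res2 = c_perp2 + np.sum(((1.0 - f) * Utc) ** 2)
        return res2 / (len(c) - np.sum(f)) ** 2
    mu = min(mus, key=gcv)
    y = np.linalg.solve(H.T @ H + mu * np.eye(m), H.T @ c)
    return V[:, :m] @ y                   # regularized approximate solution
```

Because the Hessenberg matrix is only (m+1) x m, evaluating the GCV function on the projected problem is cheap, which is what makes the per-iteration parameter update in the paper affordable.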