Efficient approximation of functions of some large matrices by partial fraction expansions
Some important applied problems require the evaluation of functions
of large and sparse and/or \emph{localized} matrices $A$. Popular and
interesting techniques for computing $f(A)$ and $f(A)v$, where $v$
is a vector, are based on partial fraction expansions. However,
some of these techniques require solving several linear systems whose matrices
differ from $A$ by a complex multiple of the identity matrix in order to compute
$f(A)v$, or require inverting sequences of matrices with the same
characteristics in order to compute $f(A)$. Here we study the use and the
convergence of a recent technique for generating sequences of incomplete
factorizations of matrices in order to address both these issues. The
solution of the sequences of linear systems and the approximate matrix inversions
above can be computed efficiently provided that $A$ exhibits certain decay
properties. These strategies have good potential for parallelism. Our claims are
confirmed by numerical tests.
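As a toy illustration of the idea, consider a rational function with a known partial fraction expansion, r(x) = 1/((x-1)(x-2)) = 1/(x-2) - 1/(x-1): evaluating r(A)v then reduces to independent shifted solves. This is only a sketch; the dense solves below stand in for the incomplete factorizations studied in the paper, and the test matrix is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic SPD test matrix with spectrum in [3, 10], well away from the poles.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(3.0, 10.0, n)) @ Q.T
v = rng.standard_normal(n)

# Partial fraction expansion r(x) = 1/((x-1)(x-2)) = 1/(x-2) - 1/(x-1).
poles = np.array([2.0, 1.0])
weights = np.array([1.0, -1.0])

# r(A)v as a sum of shifted linear solves; each solve is independent of the
# others, which is the source of the parallel potential noted in the abstract.
rv = sum(w * np.linalg.solve(A - z * np.eye(n), v)
         for w, z in zip(weights, poles))

# Reference: form r(A)v directly from the definition.
ref = np.linalg.solve((A - np.eye(n)) @ (A - 2.0 * np.eye(n)), v)
assert np.allclose(rv, ref)
```

Each shifted matrix A - z_k I has the same sparsity pattern as A, which is what makes reusing (incomplete) factorization information across the sequence attractive.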
A nested Krylov subspace method to compute the sign function of large complex matrices
We present an acceleration of the well-established Krylov-Ritz methods to
compute the sign function of large complex matrices, as needed in lattice QCD
simulations involving the overlap Dirac operator at both zero and nonzero
baryon density. Krylov-Ritz methods approximate the sign function using a
projection on a Krylov subspace. To achieve a high accuracy this subspace must
be taken quite large, which makes the method too costly. The new idea is to
make a further projection on an even smaller, nested Krylov subspace. If
additionally an intermediate preconditioning step is applied, this projection
can be performed without affecting the accuracy of the approximation, and a
substantial gain in efficiency is achieved for both Hermitian and non-Hermitian
matrices. The numerical efficiency of the method is demonstrated on lattice
configurations of sizes ranging from 4^4 to 10^4, and the new results are
compared with those obtained with rational approximation methods.
Comment: 17 pages, 12 figures, minor corrections, extended analysis of the
preconditioning step
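The single-level Krylov-Ritz projection that the paper accelerates can be sketched in a few lines: project onto a Krylov subspace, apply the sign function to the small projected matrix, and lift the result back. The nested inner projection and the intermediate preconditioning step are omitted here, and the matrix sizes, test spectrum, and function names are illustrative rather than taken from the paper:

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of the Krylov subspace K_m(A, v) and projection H."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # full orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def sign_times_vector(A, v, m):
    """Krylov-Ritz approximation of sign(A) v for Hermitian A: project,
    take the sign of the small projected matrix, and lift back."""
    V, H = arnoldi(A, v, m)
    lam, U = np.linalg.eigh(H)          # H is symmetric tridiagonal here
    signH = U @ np.diag(np.sign(lam)) @ U.T
    return np.linalg.norm(v) * (V @ signH[:, 0])

rng = np.random.default_rng(1)
n, m = 100, 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
# Hermitian test matrix with spectrum in [-3,-1] U [1,3], away from zero.
eigs = np.concatenate([np.linspace(-3, -1, n // 2), np.linspace(1, 3, n // 2)])
A = Q @ np.diag(eigs) @ Q.T
v = rng.standard_normal(n)

exact = Q @ np.diag(np.sign(eigs)) @ Q.T @ v
approx = sign_times_vector(A, v, m)
assert np.linalg.norm(approx - exact) / np.linalg.norm(exact) < 1e-4
```

The cost driver is visible here: the subspace dimension `m` must grow with the desired accuracy, which is exactly what the nested second projection in the paper is designed to mitigate.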
A Fast Algorithm for Parabolic PDE-based Inverse Problems Based on Laplace Transforms and Flexible Krylov Solvers
We consider the problem of estimating parameters in large-scale weakly
nonlinear inverse problems for which the underlying governing equation is a
linear, time-dependent, parabolic partial differential equation. A major
challenge in solving these inverse problems using Newton-type methods is the
computational cost associated with solving the forward problem and with
repeated construction of the Jacobian, which represents the sensitivity of the
measurements to the unknown parameters. Forming the Jacobian can be
prohibitively expensive because it requires repeated solutions of the forward
and adjoint time-dependent parabolic partial differential equations
corresponding to multiple sources and receivers. We propose an efficient method
based on a Laplace transform exponential time integrator, combined with a
flexible Krylov subspace approach for solving the resulting shifted systems of
equations. Our proposed solver speeds up the computation of the
forward and adjoint problems, thus yielding significant speedup in total
inversion time. We consider an application from Transient Hydraulic Tomography
(THT), which is an imaging technique to estimate hydraulic parameters related
to the subsurface from pressure measurements obtained by a series of pumping
tests. The algorithms discussed are applied to a synthetic example taken from
THT to demonstrate the computational gains of the proposed method.
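The structural fact behind such shifted-system solvers is that Krylov subspaces are shift-invariant: K_m(A + sigma*I, b) = K_m(A, b), so a single Arnoldi basis can serve every shift. A minimal sketch of this reuse, with a small synthetic SPD matrix standing in for the discretized parabolic operator and a plain FOM-style projected solve rather than the authors' flexible solver:

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V of K_m(A, b) and the projected matrix H."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(2)
n, m = 150, 50
# Synthetic symmetric positive definite stand-in for the PDE operator.
A = np.diag(np.linspace(1.0, 5.0, n))
E = rng.standard_normal((n, n))
A += 0.01 * (E + E.T) / 2
b = rng.standard_normal(n)

# One Arnoldi basis serves every shift, since K_m(A + s*I, b) = K_m(A, b).
V, H = arnoldi(A, b, m)
beta_e1 = np.zeros(m)
beta_e1[0] = np.linalg.norm(b)

for sigma in [0.5, 1.0, 2.0]:
    # Projected (FOM-style) solve: only the small m x m system changes per shift.
    x_krylov = V @ np.linalg.solve(H + sigma * np.eye(m), beta_e1)
    x_direct = np.linalg.solve(A + sigma * np.eye(n), b)
    assert np.linalg.norm(x_krylov - x_direct) < 1e-8 * np.linalg.norm(x_direct)
```

For each additional shift only an m-by-m system is solved, which is why amortizing one basis over the many shifts produced by the Laplace transform integrator pays off.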
Optimal low-rank approximations of Bayesian linear inverse problems
In the Bayesian approach to inverse problems, data are often informative,
relative to the prior, only on a low-dimensional subspace of the parameter
space. Significant computational savings can be achieved by using this subspace
to characterize and approximate the posterior distribution of the parameters.
We first investigate approximation of the posterior covariance matrix as a
low-rank update of the prior covariance matrix. We prove optimality of a
particular update, based on the leading eigendirections of the matrix pencil
defined by the Hessian of the negative log-likelihood and the prior precision,
for a broad class of loss functions. This class includes the F\"{o}rstner
metric for symmetric positive definite matrices, as well as the
Kullback-Leibler divergence and the Hellinger distance between the associated
distributions. We also propose two fast approximations of the posterior mean
and prove their optimality with respect to a weighted Bayes risk under
squared-error loss. These approximations are deployed in an offline-online
manner, where a more costly but data-independent offline calculation is
followed by fast online evaluations. As a result, these approximations are
particularly useful when repeated posterior mean evaluations are required for
multiple data sets. We demonstrate our theoretical results with several
numerical examples, including high-dimensional X-ray tomography and an inverse
heat conduction problem. In both of these examples, the intrinsic
low-dimensional structure of the inference problem can be exploited while
producing results that are essentially indistinguishable from solutions
computed in the full space.