On the decay of the inverse of matrices that are sum of Kronecker products
Decay patterns of matrix inverses have recently attracted considerable
interest, due to their relevance in numerical analysis, and in applications
requiring matrix function approximations. In this paper we analyze the decay
pattern of the inverse of banded matrices in the form $A = M \otimes I + I \otimes M$, where $M$ is
tridiagonal, symmetric and positive definite, $I$ is
the identity matrix, and $\otimes$ stands for the Kronecker product. It is well
known that the inverses of banded matrices exhibit an exponential decay pattern
away from the main diagonal. However, the entries in $A^{-1}$ show a
non-monotonic decay, which is not caught by classical bounds. By using an
alternative expression for $A^{-1}$, we derive computable upper bounds that
closely capture the actual behavior of its entries. We also show that similar
estimates can be obtained when $M$ has a larger bandwidth, or when the sum of
Kronecker products involves two different matrices. Numerical experiments
illustrating the new bounds are also reported.
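The non-monotonic decay described above is easy to observe numerically. Below is a minimal sketch (not taken from the paper) that assembles $A = M \otimes I + I \otimes M$ for a tridiagonal SPD $M$, inverts it, and prints one row of $A^{-1}$; the choice of $M$ as a 1D Laplacian-like stencil and the size n are illustrative assumptions.

```python
# Illustrative sketch only: observe the non-monotonic decay in the inverse
# of a Kronecker sum. The stencil for M and the size n are assumptions.
import numpy as np

n = 10
# Tridiagonal, symmetric positive definite M (1D Laplacian-like stencil).
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)

# Kronecker sum: A = M (x) I + I (x) M, a banded SPD matrix of size n^2 x n^2.
A = np.kron(M, I) + np.kron(I, M)
Ainv = np.linalg.inv(A)

# Entries of one row of A^{-1}: they decay away from the diagonal, but not
# monotonically -- local bumps recur every n entries, reflecting the
# two-dimensional structure hidden in the Kronecker sum.
row = n**2 // 2
print(np.abs(Ainv[row, row:row + 3 * n]))
```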
Trilogy on Computing Maximal Eigenpair
The eigenpair here means the pair consisting of an eigenvalue and its eigenvector.
This paper introduces the three steps of our study on computing the maximal
eigenpair. In the first two steps, we construct efficient initials for a known
but dangerous algorithm, first for tridiagonal matrices and then for
irreducible matrices having nonnegative off-diagonal elements. In the third
step, we present two global algorithms that are still efficient and work well
for quite a large class of matrices, even complex ones.
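As an illustrative baseline only, and not one of the algorithms constructed in the paper, the sketch below approximates the maximal eigenpair of a small tridiagonal matrix with nonnegative off-diagonal elements by a shifted power iteration; the shift rule, the tolerance, and the function name maximal_eigenpair are assumptions.

```python
# Illustrative sketch: shifted power iteration for the maximal eigenpair of a
# matrix with nonnegative off-diagonal elements. This is a plain baseline, not
# the efficient-initial or global algorithms developed in the paper.
import numpy as np

def maximal_eigenpair(A, tol=1e-10, max_iter=10_000):
    n = A.shape[0]
    # Shift so the matrix is entrywise nonnegative with positive diagonal;
    # eigenvectors are unchanged and eigenvalues move by the shift.
    shift = abs(A.diagonal().min()) + 1.0
    B = A + shift * np.eye(n)
    v = np.ones(n) / np.sqrt(n)          # positive initial vector
    lam_old = np.inf
    for _ in range(max_iter):
        w = B @ v
        lam = np.linalg.norm(w)
        v = w / lam
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam - shift, v                # undo the shift

# Example: tridiagonal matrix with nonnegative off-diagonal elements.
T = (np.diag([-2.0, -3.0, -1.0])
     + np.diag([1.0, 0.5], k=1)
     + np.diag([1.0, 0.5], k=-1))
lam, v = maximal_eigenpair(T)
print(lam, v)
```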
Localization for MCMC: sampling high-dimensional posterior distributions with local structure
We investigate how ideas from covariance localization in numerical weather
prediction can be used in Markov chain Monte Carlo (MCMC) sampling of
high-dimensional posterior distributions arising in Bayesian inverse problems.
To localize an inverse problem is to enforce an anticipated "local" structure
by (i) neglecting small off-diagonal elements of the prior precision and
covariance matrices; and (ii) restricting the influence of observations to
their neighborhood. For linear problems we can specify the conditions under
which posterior moments of the localized problem are close to those of the
original problem. We explain physical interpretations of our assumptions about
local structure and discuss the notion of high dimensionality in local
problems, which is different from the usual notion of high dimensionality in
function space MCMC. The Gibbs sampler is a natural choice of MCMC algorithm
for localized inverse problems and we demonstrate that its convergence rate is
independent of dimension for localized linear problems. Nonlinear problems can
also be tackled efficiently by localization and, as a simple illustration of
these ideas, we present a localized Metropolis-within-Gibbs sampler. Several
linear and nonlinear numerical examples illustrate localization in the context
of MCMC samplers for inverse problems.
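A minimal sketch of the two localization ideas, under assumptions not taken from the paper: the prior precision is localized by zeroing small off-diagonal entries, and a single-site Metropolis-within-Gibbs sampler targets the resulting Gaussian posterior. The helper names (localize, mwg_sample), the toy exponential covariance, and the observation model are all illustrative choices.

```python
# Illustrative sketch of localization for MCMC: (i) drop small off-diagonal
# entries of the prior precision, (ii) run single-site Metropolis-within-Gibbs.
import numpy as np

rng = np.random.default_rng(0)

def localize(P, threshold):
    """Zero out off-diagonal entries of the precision matrix below threshold."""
    L = P.copy()
    mask = np.abs(L) < threshold
    np.fill_diagonal(mask, False)
    L[mask] = 0.0
    return L

def log_post(x, P, y, H, noise_var):
    """Gaussian prior with precision P; linear observations y = H x + noise."""
    return -0.5 * x @ P @ x - 0.5 * np.sum((y - H @ x) ** 2) / noise_var

def mwg_sample(x0, P, y, H, noise_var, n_steps=2000, step=0.2):
    """Metropolis-within-Gibbs: propose and accept one coordinate at a time."""
    x = x0.copy()
    lp = log_post(x, P, y, H, noise_var)
    samples = []
    for _ in range(n_steps):
        for i in range(x.size):
            prop = x.copy()
            prop[i] += step * rng.standard_normal()
            lp_prop = log_post(prop, P, y, H, noise_var)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy problem: prior with rapidly decaying correlations, observed through the
# identity with additive noise (an "anticipated local structure").
d = 20
dist = np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
C = np.exp(-dist / 2.0)                   # prior covariance
P = localize(np.linalg.inv(C), 1e-2)      # localized (banded) prior precision
H = np.eye(d)
x_true = rng.standard_normal(d)
y = H @ x_true + 0.1 * rng.standard_normal(d)
samples = mwg_sample(np.zeros(d), P, y, H, noise_var=0.01)
print(samples.mean(axis=0))
```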