Quasi-Monte Carlo Methods for some Linear Algebra Problems. Convergence and Complexity
We present quasi-Monte Carlo analogs of Monte Carlo methods
for some linear algebra problems: solving systems of linear equations,
computing extreme eigenvalues, and matrix inversion. Reformulating the
problems as solving integral equations with special kernels and domains
permits us to analyze the quasi-Monte Carlo methods with bounds from
numerical integration. Standard Monte Carlo methods for integration provide
a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo
methods use quasirandom sequences with the resulting convergence rate for
numerical integration as good as O((log N)^k N^(−1)). We have shown theoretically
and through numerical tests that the use of quasirandom sequences
improves both the magnitude of the error and the convergence rate of the
considered Monte Carlo methods. We also analyze the complexity of the considered
quasi-Monte Carlo algorithms and compare it to the complexity
of the analogous Monte Carlo and deterministic algorithms.

* This work is supported by the National Science Fund of Bulgaria under Grant No. D002-146/16.12.2008.
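The convergence contrast the abstract describes can be sketched numerically. The snippet below estimates the integral of f(x) = x^2 on [0, 1] (exact value 1/3) with plain Monte Carlo and with a scrambled Sobol sequence; the integrand, sample size, and seeds are illustrative choices, not taken from the paper.

```python
# Sketch of the MC vs. quasi-MC contrast: plain Monte Carlo error decays
# like O(N^(-1/2)), while a Sobol (quasirandom) sequence achieves roughly
# O((log N)^k / N) for smooth integrands. Toy integrand, not the paper's.
import numpy as np
from scipy.stats import qmc

f = lambda x: x ** 2
exact = 1.0 / 3.0
m = 10                      # 2^10 = 1024 sample points

# Plain Monte Carlo with pseudorandom uniforms.
rng = np.random.default_rng(0)
x_mc = rng.random(2 ** m)
mc_est = f(x_mc).mean()

# Quasi-Monte Carlo with a scrambled Sobol sequence.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
x_qmc = sobol.random_base2(m=m).ravel()
qmc_est = f(x_qmc).mean()

print("MC error: ", abs(mc_est - exact))
print("QMC error:", abs(qmc_est - exact))
```

With the fixed seeds above, the quasirandom estimate is typically one to two orders of magnitude closer to 1/3 than the pseudorandom one at the same sample count, mirroring the error improvement the abstract reports.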
Proposals which speed up function-space MCMC
Inverse problems lend themselves naturally to a Bayesian formulation, in
which the quantity of interest is a posterior distribution of state and/or
parameters given some uncertain observations. For the common case in which the
forward operator is smoothing, the inverse problem is ill-posed.
Well-posedness is imposed via regularisation in the form of a prior, which is
often Gaussian. Under quite general conditions, it can be shown that the
posterior is absolutely continuous with respect to the prior and it may be
well-defined on function space in terms of its density with respect to the
prior. In this case, by constructing a proposal for which the prior is
invariant, one can define Metropolis-Hastings schemes for MCMC which are
well-defined on function space, and hence do not degenerate as the dimension of
the underlying quantity of interest increases to infinity, e.g. under mesh
refinement when approximating PDEs in finite dimensions. However, in practice,
despite the attractive theoretical properties of the currently available
schemes, they may still suffer from long correlation times, particularly if the
data is very informative about some of the unknown parameters. In fact, in this
case it may be the directions of the posterior that coincide with the (already
known) prior that decorrelate most slowly. The information incorporated into
the posterior through the data is often contained within some
finite-dimensional subspace, in an appropriate basis, perhaps even one defined
by eigenfunctions of the prior. We aim to exploit this fact and improve the
mixing time of function-space MCMC by careful rescaling of the proposal. To
this end, we introduce two new basic methods of increasing complexity,
involving (i) characteristic-function truncation of high frequencies and (ii)
Hessian information to interpolate between low and high frequencies.
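The standard example of a proposal for which a Gaussian prior is invariant, and hence of a Metropolis-Hastings scheme well-defined on function space, is the preconditioned Crank-Nicolson (pCN) proposal. The sketch below shows pCN in a finite-dimensional surrogate; the prior covariance, likelihood potential Phi, and step size beta are illustrative assumptions, and this is the baseline scheme the abstract refers to, not the new rescaled proposals it introduces.

```python
# Minimal finite-dimensional sketch of a prior-invariant (pCN) proposal:
#   u' = sqrt(1 - beta^2) * u + beta * xi,   xi ~ N(0, C),
# which leaves the prior N(0, C) invariant, so the accept/reject step
# depends only on the likelihood potential Phi. C, Phi, and beta are
# toy choices, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
d = 50
# Prior covariance square root with decaying spectrum, mimicking a
# smoothing (e.g. Laplacian-inverse) covariance under mesh refinement.
C_sqrt = np.diag(1.0 / np.arange(1, d + 1))

def phi(u):
    # Toy negative log-likelihood: a noisy observation of one component.
    return 0.5 * (u[0] - 1.0) ** 2 / 0.1 ** 2

beta = 0.2
u = C_sqrt @ rng.standard_normal(d)      # start from a prior draw
accepts = 0
n_steps = 2000
for _ in range(n_steps):
    xi = C_sqrt @ rng.standard_normal(d)
    u_prop = np.sqrt(1.0 - beta ** 2) * u + beta * xi
    # Prior-invariance cancels the prior terms in the MH ratio.
    if np.log(rng.random()) < phi(u) - phi(u_prop):
        u = u_prop
        accepts += 1

print("acceptance rate:", accepts / n_steps)
```

Because the acceptance probability involves only Phi, the scheme's behaviour does not degenerate as d grows, which is the dimension-robustness property the abstract describes.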
Multiple Extremal Eigenpairs by the Power Method
We report the production and benchmarking of several refinements of the power
method that enable the computation of multiple extremal eigenpairs of very
large matrices. In these refinements we used an observation by Booth that has
made possible the calculation of up to the 10th eigenpair for simple test
problems simulating the transport of neutrons in the steady state of a nuclear
reactor. Here, we summarize our techniques and efforts to-date on determining
mainly just the two largest or two smallest eigenpairs. To illustrate the
effectiveness of the techniques, we determined the two extremal eigenpairs of a
cyclic matrix, the transfer matrix of the two-dimensional Ising model, and the
Hamiltonian matrix of the one-dimensional Hubbard model.

Comment: 29 pages, no figures
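The refinements reported above (building on Booth's observation) go beyond it, but the underlying technique for two extremal eigenpairs can be illustrated by plain power iteration plus deflation on a small symmetric matrix. The test matrix below is made up for illustration and is not one of the paper's benchmark matrices.

```python
# Baseline sketch: power iteration, then deflation of the converged
# eigenpair, to obtain the two largest eigenpairs of a symmetric matrix.
# The paper's refined methods are more sophisticated; this only shows
# the power method they build on. The 3x3 matrix is a toy example.
import numpy as np

def power_method(A, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
    lam = v @ A @ v          # Rayleigh quotient
    return lam, v

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam1, v1 = power_method(A)
# Deflate the dominant eigenpair, then iterate again for the second one.
A_defl = A - lam1 * np.outer(v1, v1)
lam2, v2 = power_method(A_defl)
print(lam1, lam2)
```

For the smallest eigenpairs one would instead iterate with a shifted or inverted operator; the deflation step is the same.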
The performance of the quantum adiabatic algorithm on random instances of two optimization problems on regular hypergraphs
In this paper we study the performance of the quantum adiabatic algorithm on
random instances of two combinatorial optimization problems, 3-regular 3-XORSAT
and 3-regular Max-Cut. The cost functions associated with these two
clause-based optimization problems are similar as they are both defined on
3-regular hypergraphs. For 3-regular 3-XORSAT the clauses contain three
variables and for 3-regular Max-Cut the clauses contain two variables. The
quantum adiabatic algorithms we study for these two problems use interpolating
Hamiltonians which are stoquastic and therefore amenable to sign-problem free
quantum Monte Carlo and quantum cavity methods. Using these techniques we find
that the quantum adiabatic algorithm fails to solve either of these problems
efficiently, although for different reasons.

Comment: 20 pages, 15 figures
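The stoquasticity property that makes these interpolating Hamiltonians amenable to sign-problem-free quantum Monte Carlo can be checked directly: a Hamiltonian is stoquastic in a given basis if all its off-diagonal matrix elements there are real and non-positive. The snippet below verifies this for a generic transverse-field interpolating Hamiltonian on a tiny made-up instance, not one of the paper's 3-regular instances.

```python
# Illustrative stoquasticity check for an interpolating Hamiltonian
#   H(s) = -(1 - s) * sum_i X_i + s * diag(cost),
# where the transverse-field driver has non-positive off-diagonal
# elements and the cost term is diagonal. Toy cost function, not the
# paper's 3-XORSAT or Max-Cut instances.
import numpy as np
from functools import reduce

n = 3
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
I = np.eye(2)

def x_on(i):
    # Pauli X acting on qubit i of an n-qubit register.
    ops = [X if j == i else I for j in range(n)]
    return reduce(np.kron, ops)

# Made-up diagonal cost: number of 1-bits of each basis state.
cost = np.diag([float(bin(b).count("1")) for b in range(2 ** n)])

def H(s):
    driver = -sum(x_on(i) for i in range(n))
    return (1.0 - s) * driver + s * cost

def is_stoquastic(M, tol=1e-12):
    off = M - np.diag(np.diag(M))
    return bool(np.all(off <= tol))

print(all(is_stoquastic(H(s)) for s in np.linspace(0.0, 1.0, 11)))
```

Since every off-diagonal element of H(s) comes from the driver and carries a factor -(1 - s) <= 0, the check passes along the whole interpolation path, which is what permits the quantum Monte Carlo and quantum cavity analyses.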