Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors
The computational complexity of MCMC methods for the exploration of complex
probability measures is a challenging and important problem. A challenge of
particular importance arises in Bayesian inverse problems where the target
distribution may be supported on an infinite dimensional space. In practice
this involves the approximation of measures defined on sequences of spaces of
increasing dimension. Motivated by an elliptic inverse problem with
non-Gaussian prior, we study the design of proposal chains for the
Metropolis-Hastings algorithm with dimension independent performance.
Dimension-independent bounds on the Monte-Carlo error of MCMC sampling for
Gaussian prior measures have already been established. In this paper we provide
a simple recipe to obtain these bounds for non-Gaussian prior measures. To
illustrate the theory we consider an elliptic inverse problem arising in
groundwater flow. We explicitly construct an efficient Metropolis-Hastings
proposal based on local proposals, and we provide numerical evidence which
supports the theory.
Comment: 26 pages, 7 figures
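As a rough illustration of the Gaussian-prior case this abstract builds on (not the paper's elliptic construction), a preconditioned Crank-Nicolson (pCN) proposal is the standard dimension-robust move for a Gaussian prior; the likelihood, step size, and dimension below are assumptions chosen for the sketch:

```python
import numpy as np

def pcn_mh(log_likelihood, dim, beta=0.2, n_steps=1000, rng=None):
    """Metropolis-Hastings with a preconditioned Crank-Nicolson (pCN)
    proposal under a standard Gaussian prior N(0, I).

    The pCN move  u' = sqrt(1 - beta^2) * u + beta * xi,  xi ~ N(0, I),
    is reversible with respect to the prior, so the acceptance ratio
    involves only the likelihood and the prior terms cancel. This is the
    mechanism behind dimension-independent acceptance rates for
    Gaussian priors.
    """
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(dim)
    ll = log_likelihood(u)
    samples = []
    for _ in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
        ll_v = log_likelihood(v)
        # Accept with probability min(1, exp(ll_v - ll)).
        if np.log(rng.random()) < ll_v - ll:
            u, ll = v, ll_v
        samples.append(u.copy())
    return np.array(samples)

# Toy example (assumed, not from the paper): observe y = u[0] + noise,
# so the posterior tilts the first coordinate of the prior toward y.
y_obs = 1.0
chain = pcn_mh(lambda u: -0.5 * (u[0] - y_obs)**2, dim=50,
               n_steps=2000, rng=0)
```

Because the acceptance probability does not degrade as `dim` grows, the same `beta` keeps working when the discretization of the underlying function space is refined.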
Surprise probabilities in Markov chains
In a Markov chain started at a state $x$, the hitting time $\tau_y$ is the
first time that the chain reaches another state $y$. We study the probability
$\Pr_x[\tau_y = t]$ that the first visit to $y$ occurs precisely at a
given time $t$. Informally speaking, the event that a new state is visited at a
large time $t$ may be considered a "surprise". We prove the following three
bounds:
1) In any Markov chain with $n$ states, $\Pr_x[\tau_y = t] = O(n/t)$.
2) In a reversible chain with $n$ states, $\Pr_x[\tau_y = t] = O(\sqrt{n}/t)$
for $t = \Omega(n)$.
3) For random walk on a simple graph with $n$ vertices,
$\Pr_x[\tau_y = t] = O(\log n / t)$.
We construct examples showing that these bounds are close to optimal. The
main feature of our bounds is that they require very little knowledge of the
structure of the Markov chain.
To prove the bound for random walk on graphs, we establish the following
estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication):
For random walk on an $n$-vertex graph, for every initial vertex $x$,
\[ \sum_y \left( \sup_{t \ge 0} p^t(x, y) \right) = O(\log n). \]
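The quantity studied here can be estimated empirically. A minimal sketch, assuming a lazy random walk on a small cycle (the chain, states, and time are illustrative choices, not examples from the paper), approximates the "surprise" probability that the first visit to $y$ lands exactly at time $t$:

```python
import numpy as np

def first_hit_time(P, x, y, rng, t_max=10_000):
    """Return the first time a chain with transition matrix P,
    started at x, reaches y (None if not reached within t_max steps)."""
    state = x
    for t in range(1, t_max + 1):
        state = rng.choice(len(P), p=P[state])
        if state == y:
            return t
    return None

# Lazy random walk on a cycle of n states: stay put with probability
# 1/2, otherwise step to the left or right neighbor uniformly.
n = 8
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

rng = np.random.default_rng(0)
hits = [first_hit_time(P, x=0, y=n // 2, rng=rng) for _ in range(2000)]

# Empirical estimate of Pr_x[tau_y = t] for one chosen time t.
t = 10
p_hat = sum(h == t for h in hits) / len(hits)
```

Repeating this for a range of $t$ traces out the full distribution of $\tau_y$, whose tail the paper's $O(\cdot/t)$ bounds control.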
Stochastic Online Shortest Path Routing: The Value of Feedback
This paper studies online shortest path routing over multi-hop networks. Link
costs or delays are time-varying and modeled by independent and identically
distributed random processes, whose parameters are initially unknown. The
parameters, and hence the optimal path, can only be estimated by routing
packets through the network and observing the realized delays. Our aim is to
find a routing policy that minimizes the regret (the cumulative difference of
expected delay) between the path chosen by the policy and the unknown optimal
path. We formulate the problem as a combinatorial bandit optimization problem
and consider several scenarios that differ in where routing decisions are made
and in the information available when making the decisions. For each scenario,
we derive a tight asymptotic lower bound on the regret that has to be satisfied
by any online routing policy. These bounds help us to understand the
performance improvements we can expect when (i) taking routing decisions at
each hop rather than at the source only, and (ii) observing per-link delays
rather than end-to-end path delays. In particular, we show that (i) is of no
use while (ii) can have a spectacular impact. Three algorithms, with a
trade-off between computational complexity and performance, are proposed. The
regret upper bounds of these algorithms improve over those of the existing
algorithms, and they significantly outperform state-of-the-art algorithms in
numerical experiments.
Comment: 18 pages
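The value of per-link (semi-bandit) feedback described in point (ii) can be illustrated with a toy epsilon-greedy routing loop; this is a sketch under assumed parameters (a two-path network, exponential link delays, epsilon = 0.1), not one of the paper's proposed algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two candidate source-destination paths over four links (assumed toy
# network); true mean delays make path B optimal (1.2 vs 2.0).
paths = {"A": [0, 1], "B": [2, 3]}
true_mean = np.array([1.0, 1.0, 0.6, 0.6])

counts = np.zeros(4)   # packets observed per link
est = np.zeros(4)      # empirical mean delay per link
eps = 0.1
total_delay = 0.0

for t in range(2000):
    if rng.random() < eps or counts.min() == 0:
        choice = rng.choice(list(paths))   # explore a random path
    else:
        # Exploit: pick the path whose estimated total delay is lowest.
        choice = min(paths, key=lambda p: est[paths[p]].sum())
    for link in paths[choice]:
        d = rng.exponential(true_mean[link])        # realized link delay
        counts[link] += 1
        est[link] += (d - est[link]) / counts[link]  # running mean update
        total_delay += d
```

With per-link observations, every traversed link refreshes its own estimate, so information is shared across all paths containing that link; with only end-to-end feedback the learner would have to disentangle the per-link means from path sums, which is the gap the paper's lower bounds quantify.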