
    Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors

    The computational complexity of MCMC methods for the exploration of complex probability measures is a challenging and important problem. A challenge of particular importance arises in Bayesian inverse problems, where the target distribution may be supported on an infinite dimensional space. In practice this involves the approximation of measures defined on sequences of spaces of increasing dimension. Motivated by an elliptic inverse problem with non-Gaussian prior, we study the design of proposal chains for the Metropolis-Hastings algorithm with dimension-independent performance. Dimension-independent bounds on the Monte Carlo error of MCMC sampling for Gaussian prior measures have already been established. In this paper we provide a simple recipe to obtain these bounds for non-Gaussian prior measures. To illustrate the theory we consider an elliptic inverse problem arising in groundwater flow. We explicitly construct an efficient Metropolis-Hastings proposal based on local proposals, and we provide numerical evidence which supports the theory. Comment: 26 pages, 7 figures.
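    For Gaussian priors, the dimension-independent baseline the abstract alludes to is commonly realized by the preconditioned Crank-Nicolson (pCN) proposal. The sketch below is my own illustration of that Gaussian baseline, not the paper's construction for non-Gaussian priors; the forward map `A`, data `y`, noise level `sigma`, and step size `beta` are placeholder assumptions.

```python
# Minimal sketch (assumption: Gaussian prior N(0, I_d), pCN proposal).
# This is not the paper's non-Gaussian construction or its groundwater model.
import numpy as np

def pcn_mcmc(log_likelihood, d, n_steps, beta=0.2, seed=None):
    """Metropolis-Hastings with pCN proposals for a N(0, I_d) prior."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(d)                     # start from a prior draw
    ll = log_likelihood(u)
    samples, accepts = [], 0
    for _ in range(n_steps):
        xi = rng.standard_normal(d)
        v = np.sqrt(1.0 - beta**2) * u + beta * xi  # pCN proposal (prior-reversible)
        ll_v = log_likelihood(v)
        # Acceptance ratio involves only the likelihood, independent of d.
        if np.log(rng.uniform()) < ll_v - ll:
            u, ll = v, ll_v
            accepts += 1
        samples.append(u.copy())
    return np.array(samples), accepts / n_steps

# Toy linear inverse problem: observe y = A u + noise, infer u.
d, sigma = 50, 0.1
A = np.random.default_rng(0).standard_normal((5, d)) / np.sqrt(d)
y = A @ np.ones(d) + sigma * np.random.default_rng(1).standard_normal(5)
log_lik = lambda u: -0.5 * np.sum((y - A @ u) ** 2) / sigma**2

samples, acc_rate = pcn_mcmc(log_lik, d, n_steps=5000, seed=2)
print("acceptance rate:", acc_rate)
```

    Because the pCN acceptance ratio depends only on the likelihood, its acceptance rate does not degenerate as the discretization dimension grows, which is the notion of dimension independence the abstract extends to non-Gaussian priors.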

    Surprise probabilities in Markov chains

    In a Markov chain started at a state $x$, the hitting time $\tau(y)$ is the first time that the chain reaches another state $y$. We study the probability $\mathbf{P}_x(\tau(y) = t)$ that the first visit to $y$ occurs precisely at a given time $t$. Informally speaking, the event that a new state is visited at a large time $t$ may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$. 2) In a reversible chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{\sqrt{2n}}{t}$ for $t \ge 4n + 4$. 3) For random walk on a simple graph with $n \ge 2$ vertices, $\mathbf{P}_x(\tau(y) = t) \le \frac{4e \log n}{t}$. We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): for random walk on an $n$-vertex graph, for every initial vertex $x$, \[ \sum_y \Big( \sup_{t \ge 0} p^t(x, y) \Big) = O(\log n). \]
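    As a quick numerical illustration of the first bound (not taken from the paper), the sketch below estimates $\mathbf{P}_x(\tau(y) = t)$ by Monte Carlo for a lazy random walk on an $n$-cycle and compares it with $n/t$; the chain, the states $x$ and $y$, and the time $t$ are arbitrary toy choices.

```python
# Toy Monte Carlo check of P_x(tau(y) = t) <= n/t on an assumed example chain
# (lazy random walk on the n-cycle); not an experiment from the paper.
import numpy as np

def hit_exactly_at(P, x, y, t, n_trials=50_000, seed=None):
    """Fraction of trials in which the chain started at x first hits y at time t."""
    rng = np.random.default_rng(seed)
    m = P.shape[0]
    count = 0
    for _ in range(n_trials):
        state, hit_time = x, None
        for s in range(1, t + 1):
            state = rng.choice(m, p=P[state])
            if state == y:
                hit_time = s
                break
        if hit_time == t:
            count += 1
    return count / n_trials

n, t = 10, 15
# Lazy simple random walk on the n-cycle (a reversible chain).
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

est = hit_exactly_at(P, x=0, y=n // 2, t=t, seed=0)
print(f"estimated P_x(tau(y) = {t}) = {est:.4f}   vs   bound n/t = {n/t:.4f}")
```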

    Stochastic Online Shortest Path Routing: The Value of Feedback

    This paper studies online shortest path routing over multi-hop networks. Link costs or delays are time-varying and modeled by independent and identically distributed random processes, whose parameters are initially unknown. The parameters, and hence the optimal path, can only be estimated by routing packets through the network and observing the realized delays. Our aim is to find a routing policy that minimizes the regret (the cumulative difference of expected delay) between the path chosen by the policy and the unknown optimal path. We formulate the problem as a combinatorial bandit optimization problem and consider several scenarios that differ in where routing decisions are made and in the information available when making the decisions. For each scenario, we derive a tight asymptotic lower bound on the regret that has to be satisfied by any online routing policy. These bounds help us understand the performance improvements we can expect when (i) taking routing decisions at each hop rather than at the source only, and (ii) observing per-link delays rather than end-to-end path delays. In particular, we show that (i) is of no use while (ii) can have a spectacular impact. Three algorithms, with a trade-off between computational complexity and performance, are proposed. The regret upper bounds of these algorithms improve over those of the existing algorithms, and they significantly outperform state-of-the-art algorithms in numerical experiments. Comment: 18 pages.
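    Purely to illustrate the combinatorial-bandit framing with per-link (semi-bandit) feedback, the sketch below runs a generic optimistic index policy over an enumerated set of candidate paths on a tiny assumed network; it is not one of the paper's three algorithms, and the path set, delay distributions, and confidence-width constant are all illustrative choices.

```python
# Generic optimistic (UCB-style) path selection with per-link feedback on an
# assumed toy network; not the paper's proposed algorithms.
import numpy as np

# Three candidate source-destination paths over 5 links (indices 0..4).
paths = [[0, 1], [0, 2, 3], [4, 3]]
true_mean_delay = np.array([0.3, 0.5, 0.2, 0.4, 0.6])   # per-link Bernoulli delays
T, n_links = 5000, 5
rng = np.random.default_rng(0)

counts = np.zeros(n_links)      # number of times each link was traversed
sums = np.zeros(n_links)        # accumulated observed delay per link
regret = 0.0
best = min(true_mean_delay[p].sum() for p in paths)

for t in range(1, T + 1):
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    # Lower confidence bound on each link delay (optimism, since we minimize).
    lcb = means - np.sqrt(1.5 * np.log(t) / np.maximum(counts, 1))
    lcb[counts == 0] = -np.inf                  # force unexplored links to be tried
    chosen = min(paths, key=lambda p: lcb[p].sum())
    delays = rng.binomial(1, true_mean_delay[chosen])    # per-link (semi-bandit) feedback
    counts[chosen] += 1
    sums[chosen] += delays
    regret += true_mean_delay[chosen].sum() - best

print(f"cumulative regret after {T} rounds: {regret:.1f}")
```

    With end-to-end feedback only, the per-link statistics above would have to be replaced by per-path estimates, which is exactly the informational gap whose impact the paper quantifies.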