2,336 research outputs found

    Eager Markov Chains

    Get PDF
    Abstract. We consider infinite-state discrete Markov chains which are eager: the probability of avoiding a defined set of final states for more than n steps is bounded by some exponentially decreasing function f(n). We prove that eager Markov chains include those induced by Probabilistic Lossy Channel Systems, Probabilistic Vector Addition Systems with States, and Noisy Turing Machines, and that the bounding function f(n) can be effectively constructed for them. Furthermore, we study the problem of computing the expected reward (or cost) of runs until reaching the final states, where rewards are assigned to individual runs by computable reward functions. For eager Markov chains, an effective path exploration scheme, based on forward reachability analysis, can be used to approximate the expected reward up to an arbitrarily small error.
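
    The path-exploration idea can be made concrete with a small sketch. The following is a minimal illustration (ours, not the paper's algorithm), assuming the chain is presented by a finite-branching successor function, rewards accumulate per step, and an eagerness bound P(final states not reached within n steps) <= c * rho^n is known, so that truncating the exploration at a computable depth keeps the neglected probability mass below a chosen eps:

```python
import math
from typing import Callable, Hashable, Iterable, Tuple

State = Hashable

def expected_reward_truncated(
    initial: State,
    successors: Callable[[State], Iterable[Tuple[State, float]]],  # (next_state, probability) pairs
    is_final: Callable[[State], bool],
    step_reward: Callable[[State, State], float],
    c: float, rho: float,   # eagerness bound: P(not absorbed within n steps) <= c * rho**n
    eps: float,
) -> float:
    """Approximate the expected reward accumulated until reaching a final state
    by exhaustive forward exploration of all runs of length at most N, where N
    is chosen so the probability mass of longer runs is below eps."""
    N = max(1, math.ceil(math.log(eps / c) / math.log(rho)))
    total = 0.0
    # Frontier of partial runs: (current state, probability of this prefix, reward so far).
    frontier = [(initial, 1.0, 0.0)]
    for _ in range(N):
        nxt_frontier = []
        for state, prob, reward in frontier:
            if is_final(state):
                total += prob * reward          # run has ended; bank its reward
            else:
                for nxt, p in successors(state):
                    nxt_frontier.append((nxt, prob * p, reward + step_reward(state, nxt)))
        frontier = nxt_frontier
    for state, prob, reward in frontier:        # runs reaching a final state exactly at depth N
        if is_final(state):
            total += prob * reward
    # The remaining, non-final runs carry probability mass at most eps; if rewards
    # are bounded, this yields an explicit error bound on `total`.
    return total
```

    With bounded rewards the neglected runs contribute at most eps times the reward bound, which is how the approximation error can be driven below any threshold in this sketch.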

    Limiting Behavior of Markov Chains with Eager Attractors

    Get PDF
    We consider discrete infinite-state Markov chains which contain an eager finite attractor. A finite attractor is a finite subset of states that is eventually reached with probability 1 from every other state, and the eagerness condition requires that the probability of avoiding the attractor in n or more steps after leaving it is exponentially bounded in n. Examples of such Markov chains are those induced by probabilistic lossy channel systems and similar systems. We show that the expected residence time (a generalization of the steady state distribution) exists for Markov chains with eager attractors and that it can be effectively approximated to arbitrary precision. Furthermore, arbitrarily close approximations of the limiting average expected reward, with respect to state-based bounded reward functions, are also computable.
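
    As an aside, the expected residence time lends itself to a simulation-based approximation (distinct from the paper's effective approximation scheme): regenerate each time the chain re-enters a designated attractor state and, by the renewal-reward theorem, average the time spent in each state per regeneration cycle. A minimal sketch under these assumptions:

```python
import random
from collections import Counter

def estimate_residence_times(step, regeneration_state, num_cycles=10_000):
    """Regenerative-simulation sketch: estimate the limiting fraction of time
    spent in each state for a chain that returns to `regeneration_state` with
    probability 1 (e.g. a state inside a finite attractor). `step(state)`
    samples the next state."""
    visits = Counter()
    total_steps = 0
    state = regeneration_state
    for _ in range(num_cycles):
        # One regeneration cycle: from the regeneration state until it is hit again.
        while True:
            visits[state] += 1
            total_steps += 1
            state = step(state)
            if state == regeneration_state:
                break
    # By the renewal-reward theorem these ratios converge to the limiting
    # residence-time distribution as the number of cycles grows.
    return {s: v / total_steps for s, v in visits.items()}

# Illustrative chain: a random walk with downward drift reflected at 0, so {0} is an eager attractor.
walk = lambda x: x + 1 if random.random() < 0.4 else max(x - 1, 0)
print(estimate_residence_times(walk, regeneration_state=0))
```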

    Approximately Sampling Elements with Fixed Rank in Graded Posets

    Full text link
    Graded posets frequently arise throughout combinatorics, where it is natural to try to count the number of elements of a fixed rank. These counting problems are often #P-complete, so we consider approximation algorithms for counting and uniform sampling. We show that for certain classes of posets, biased Markov chains that walk along edges of their Hasse diagrams allow us to approximately generate samples with any fixed rank in expected polynomial time. Our arguments do not rely on the typical proofs of log-concavity, which are used to construct a stationary distribution with a specific mode in order to give a lower bound on the probability of outputting an element of the desired rank. Instead, we infer this directly from bounds on the mixing time of the chains through a method we call balanced bias. A noteworthy application of our method is sampling restricted classes of integer partitions of $n$. We give the first provably efficient Markov chain algorithm to uniformly sample integer partitions of $n$ from general restricted classes. Several observations allow us to improve the efficiency of this chain to require $O(n^{1/2}\log(n))$ space, and for unrestricted integer partitions, expected $O(n^{9/4})$ time. Related applications include sampling permutations with a fixed number of inversions and lozenge tilings on the triangular lattice with a fixed average height. Comment: 23 pages, 12 figures
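
    To illustrate the flavor of a biased walk along Hasse-diagram edges, here is a minimal sketch on a deliberately simple graded poset, the Boolean lattice of subsets of {0, ..., m-1} graded by cardinality (a toy stand-in, not the partition chains analyzed in the paper). A Metropolis bias pulls the walk toward the target rank k, and by the permutation symmetry of this particular lattice the element recorded at rank k is uniform over that rank:

```python
import random

def sample_fixed_rank_subset(m, k, bias=2.0, steps=20_000):
    """Biased Metropolis walk along Hasse-diagram edges of the Boolean lattice
    (add or remove a single element). The stationary weight bias**(-|rank - k|)
    concentrates the walk near rank k; the values of `bias` and `steps` are
    arbitrary illustrative choices."""
    def weight(size):
        return bias ** (-abs(size - k))

    state = set()
    for t in range(steps):
        x = random.randrange(m)                 # propose toggling one element (a Hasse edge)
        proposed_size = len(state) - 1 if x in state else len(state) + 1
        # Metropolis acceptance favours moves toward the target rank k.
        if random.random() < min(1.0, weight(proposed_size) / weight(len(state))):
            state.symmetric_difference_update({x})
        if t > steps // 2 and len(state) == k:  # crude burn-in, then record a rank-k element
            return frozenset(state)
    return frozenset(state)                     # fallback if rank k was never hit after burn-in

print(sample_fixed_rank_subset(m=30, k=5))
```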

    Simple bounds for queueing systems with breakdowns

    Get PDF
    Computationally attractive and intuitively obvious simple bounds are proposed for finite service systems which are subject to random breakdowns. The services are assumed to be exponential. The up and down periods are allowed to be generally distributed. The bounds are based on product-form modifications and depend only on means. A formal proof is presented; this proof is of interest in itself. Numerical support indicates potential usefulness for quick engineering and performance evaluation purposes.
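
    The model class can be sanity-checked numerically against such mean-based bounds with a short simulation. A minimal sketch (ours, purely illustrative, and not the paper's bounds): a single-server finite-capacity queue with Poisson arrivals, exponential services, and a server that alternates between generally distributed up and down periods; service is suspended during down periods:

```python
import math
import random

def simulate_finite_queue_with_breakdowns(lam, mu, K, up_sampler, down_sampler,
                                           horizon=200_000.0):
    """Illustrative discrete-event sketch (not the paper's bounds): a single-server
    queue with capacity K, Poisson arrivals (rate lam), exponential services (rate mu),
    and a server alternating between generally distributed up and down periods.
    Service is restarted after a breakdown (one possible convention).
    Returns the estimated fraction of arrivals lost."""
    t, queue, lost, arrivals = 0.0, 0, 0, 0
    server_up = True
    next_arrival = random.expovariate(lam)
    next_service = math.inf
    next_toggle = up_sampler()                  # end of the current up period
    while t < horizon:
        t = min(next_arrival, next_service, next_toggle)
        if t == next_arrival:
            arrivals += 1
            if queue >= K:
                lost += 1                       # blocked: system full
            else:
                queue += 1
                if queue == 1 and server_up:
                    next_service = t + random.expovariate(mu)
            next_arrival = t + random.expovariate(lam)
        elif t == next_service:
            queue -= 1
            next_service = t + random.expovariate(mu) if (queue > 0 and server_up) else math.inf
        else:                                   # up/down toggle
            server_up = not server_up
            if server_up:
                next_toggle = t + up_sampler()
                next_service = t + random.expovariate(mu) if queue > 0 else math.inf
            else:
                next_toggle = t + down_sampler()
                next_service = math.inf         # breakdown: service suspended
    return lost / max(arrivals, 1)

# Example usage: exponential up periods (mean 50) and uniform down periods (mean 5).
print(simulate_finite_queue_with_breakdowns(
    lam=0.8, mu=1.0, K=10,
    up_sampler=lambda: random.expovariate(1 / 50),
    down_sampler=lambda: random.uniform(0, 10)))
```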

    Large deviation asymptotics and control variates for simulating large functions

    Full text link
    Consider the normalized partial sums of a real-valued function $F$ of a Markov chain, $\phi_n := n^{-1}\sum_{k=0}^{n-1}F(\Phi(k))$, $n\ge1$. The chain $\{\Phi(k): k\ge0\}$ takes values in a general state space $\mathsf{X}$, with transition kernel $P$, and it is assumed that the Lyapunov drift condition holds: $PV\le V-W+b\mathbb{I}_C$, where $V:\mathsf{X}\to(0,\infty)$, $W:\mathsf{X}\to[1,\infty)$, the set $C$ is small and $W$ dominates $F$. Under these assumptions, the following conclusions are obtained: 1. It is known that this drift condition is equivalent to the existence of a unique invariant distribution $\pi$ satisfying $\pi(W)<\infty$, and the law of large numbers holds for any function $F$ dominated by $W$: $\phi_n\to\phi:=\pi(F)$ a.s. as $n\to\infty$. 2. The lower error probability defined by $\mathsf{P}\{\phi_n\le c\}$, for $c<\phi$, $n\ge1$, satisfies a large deviation limit theorem when the function $F$ satisfies a monotonicity condition. Under additional minor conditions an exact large deviations expansion is obtained. 3. If $W$ is near-monotone, then control variates are constructed based on the Lyapunov function $V$, providing a pair of estimators that together satisfy nontrivial large deviation asymptotics for the lower and upper error probabilities. In an application to simulation of queues it is shown that exact large deviation asymptotics are possible even when the estimator does not satisfy a central limit theorem. Comment: Published at http://dx.doi.org/10.1214/105051605000000737 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
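
    The control-variate construction in item 3 admits a compact illustration: whenever $\pi(V)<\infty$, the invariance identity $\pi(PV)=\pi(V)$ makes $PV-V$ a zero-mean function under $\pi$, so a multiple of it can be added to the running average of $F$ without changing the limit, ideally reducing the variance. A minimal sketch with a reflected random walk and a quadratic Lyapunov function (our choices for illustration, not the paper's queueing example):

```python
import random

p = 0.3                                   # upward step probability; downward drift keeps the chain stable
step = lambda x: x + 1 if random.random() < p else max(x - 1, 0)

F = lambda x: x                           # function whose stationary mean pi(F) we estimate
V = lambda x: x * x                       # Lyapunov function (an illustrative choice)

def PV(x):
    """One-step conditional expectation E[V(Phi(k+1)) | Phi(k) = x], available in closed form here."""
    return p * V(x + 1) + (1 - p) * V(max(x - 1, 0))

def estimate(n=200_000, theta=1.0, seed=0):
    random.seed(seed)
    x = 0
    plain = controlled = 0.0
    for _ in range(n):
        plain += F(x)
        # (PV - V) integrates to zero against the invariant distribution, so adding a
        # multiple of it leaves the limit unchanged while reducing the variance.
        controlled += F(x) + theta * (PV(x) - V(x))
        x = step(x)
    return plain / n, controlled / n

print(estimate())   # both converge to pi(F); the controlled estimator typically fluctuates less
```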

    Statistically-secure ORAM with $\tilde{O}(\log^2 n)$ Overhead

    Full text link
    We demonstrate a simple, statistically secure ORAM with computational overhead $\tilde{O}(\log^2 n)$; previous ORAM protocols achieve only computational security (under computational assumptions) or require $\tilde{\Omega}(\log^3 n)$ overhead. An additional benefit of our ORAM is its conceptual simplicity, which makes it easy to implement in both software and (commercially available) hardware. Our construction is based on recent ORAM constructions due to Shi, Chan, Stefanov, and Li (Asiacrypt 2011) and Stefanov and Shi (ArXiv 2012), but with some crucial modifications in the algorithm that simplify the ORAM and enable our analysis. A central component in our analysis is reducing the analysis of our algorithm to a "supermarket" problem; of independent interest (and of importance to our analysis), we provide an upper bound on the rate of "upset" customers in the "supermarket" problem.
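
    For background, in the classical "supermarket" model each arriving customer samples d queues uniformly at random and joins the shortest, while every busy server works at unit rate. The sketch below simulates that standard model and, as a stand-in for the paper's notion, counts customers who join a queue already holding at least a threshold number of jobs; both the model details and this definition of an "upset" customer are assumptions made here for illustration, not the paper's exact formulation:

```python
import random

def supermarket_upset_rate(n_queues=200, d=2, lam=0.9, threshold=2,
                           n_arrivals=100_000, seed=0):
    """Sketch of the standard supermarket (d-choice) model, simulated via its
    embedded jump chain: each event is an arrival or a departure, chosen with
    probability proportional to its rate. A customer counts as "upset" if the
    queue it joins already holds >= threshold jobs (an ad-hoc definition used
    only for this illustration)."""
    random.seed(seed)
    queues = [0] * n_queues
    busy = 0                        # number of non-empty queues (each serves at rate 1)
    upset = arrivals = 0
    while arrivals < n_arrivals:
        # Total arrival rate is lam * n_queues; total service rate is `busy`.
        if random.random() < lam * n_queues / (lam * n_queues + busy):
            # Arrival: join the shortest of d uniformly sampled queues.
            choice = min((random.randrange(n_queues) for _ in range(d)),
                         key=lambda i: queues[i])
            if queues[choice] >= threshold:
                upset += 1          # joins an already long queue
            if queues[choice] == 0:
                busy += 1
            queues[choice] += 1
            arrivals += 1
        else:
            # Departure: a uniformly random busy queue completes one job.
            while True:
                i = random.randrange(n_queues)
                if queues[i] > 0:
                    break
            queues[i] -= 1
            if queues[i] == 0:
                busy -= 1
    return upset / n_arrivals

print(supermarket_upset_rate())
```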