Eager Markov Chains
Abstract. We consider infinite-state discrete Markov chains which are eager: the probability of avoiding a defined set of final states for more than n steps is bounded by some exponentially decreasing function f(n). We prove that eager Markov chains include those induced by Probabilistic Lossy Channel Systems, Probabilistic Vector Addition Systems with States, and Noisy Turing Machines, and that the bounding function f(n) can be effectively constructed for them. Furthermore, we study the problem of computing the expected reward (or cost) of runs until reaching the final states, where rewards are assigned to individual runs by computable reward functions. For eager Markov chains, an effective path exploration scheme, based on forward reachability analysis, can be used to approximate the expected reward up to an arbitrarily small error.
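The truncated forward exploration described above can be sketched on a toy eager chain. Everything below (the chain, the reward, the function names) is an illustrative invention, not taken from the paper: state 1 reaches the final state 0 with probability 1/2 per step, so the probability of avoiding {0} for n steps is 2^-n, and the mass missed by truncating at depth N decays exponentially in N.

```python
# Sketch: approximate the expected reward of runs until reaching final states
# by enumerating all runs up to a fixed depth (forward reachability).
# Toy eager chain (hypothetical): 1 -> 0 w.p. 0.5 (final), 1 -> 1 w.p. 0.5.

def expected_reward(transitions, start, final, reward, depth):
    """Sum prob(run) * reward(run) over runs of length <= depth that end in a
    final state. For an eager chain the truncated tail is exponentially small."""
    total = 0.0
    frontier = [(start, 1.0, [start])]  # (state, prob of run so far, run)
    for _ in range(depth):
        nxt = []
        for s, p, run in frontier:
            for t, q in transitions[s]:
                run2 = run + [t]
                if t in final:
                    total += p * q * reward(run2)
                else:
                    nxt.append((t, p * q, run2))
        frontier = nxt
    return total

chain = {1: [(0, 0.5), (1, 0.5)]}
# Reward of a run = its number of steps, so we approximate the expected
# hitting time of {0} from state 1 (exact value: 2).
est = expected_reward(chain, start=1, final={0},
                      reward=lambda run: len(run) - 1, depth=60)
print(est)
```

With depth 60 the truncation error is on the order of 60 * 2^-60, so the estimate agrees with the exact expected hitting time to floating-point precision.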
Limiting Behavior of Markov Chains with Eager Attractors
We consider discrete infinite-state Markov chains which contain an eager finite attractor. A finite attractor is a finite subset of states that is eventually reached with probability 1 from every other state, and the eagerness condition requires that the probability of avoiding the attractor in n or more steps after leaving it is exponentially bounded in n. Examples of such Markov chains are those induced by probabilistic lossy channel systems and similar systems. We show that the expected residence time (a generalization of the steady state distribution) exists for Markov chains with eager attractors and that it can be effectively approximated to arbitrary precision. Furthermore, arbitrarily close approximations of the limiting average expected reward, with respect to state-based bounded reward functions, are also computable.
Approximately Sampling Elements with Fixed Rank in Graded Posets
Graded posets frequently arise throughout combinatorics, where it is natural
to try to count the number of elements of a fixed rank. These counting problems
are often #P-complete, so we consider approximation algorithms for
counting and uniform sampling. We show that for certain classes of posets,
biased Markov chains that walk along edges of their Hasse diagrams allow us to
approximately generate samples with any fixed rank in expected polynomial time.
Our arguments do not rely on the typical proofs of log-concavity, which are
used to construct a stationary distribution with a specific mode in order to
give a lower bound on the probability of outputting an element of the desired
rank. Instead, we infer this directly from bounds on the mixing time of the
chains through a method introduced here.
A noteworthy application of our method is sampling restricted classes of
integer partitions of n. We give the first provably efficient Markov chain
algorithm to uniformly sample integer partitions of n from general restricted
classes. Several observations allow us to reduce the space this chain
requires and, for unrestricted integer partitions, its expected running
time. Related applications include sampling permutations
with a fixed number of inversions and lozenge tilings on the triangular lattice
with a fixed average height.
Comment: 23 pages, 12 figures
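A minimal sketch of a chain in this spirit, uniform sampling of integer partitions of n via single-box moves on Young diagrams. This is an illustrative construction under my own assumptions, not the paper's chain: a proposal removes a box from a random removable corner and re-adds it at a random addable corner, and a Metropolis-Hastings correction (the proposal probabilities of a move and its reverse differ only through the counts of removable corners) makes the stationary distribution uniform over partitions of n.

```python
import random

# Hypothetical sketch (not the paper's algorithm): Metropolis-corrected
# corner-move chain on Young diagrams with n boxes; stationary distribution
# is uniform over the partitions of n.

def removable(lam):
    # rows whose last box is a removable corner
    return [i for i in range(len(lam))
            if lam[i] > (lam[i + 1] if i + 1 < len(lam) else 0)]

def addable(lam):
    # rows (including a new bottom row) where a corner box can be added
    return [i for i in range(len(lam)) if i == 0 or lam[i] < lam[i - 1]] + [len(lam)]

def step(lam, rng):
    r = rng.choice(removable(lam))
    mu = tuple(v for v in lam[:r] + (lam[r] - 1,) + lam[r + 1:] if v > 0)
    a = rng.choice(addable(mu))
    new = mu + (1,) if a == len(mu) else mu[:a] + (mu[a] + 1,) + mu[a + 1:]
    # Accept with min(1, |removable(lam)| / |removable(new)|): the shared
    # intermediate diagram mu cancels from the Hastings ratio.
    if rng.random() < len(removable(lam)) / len(removable(new)):
        return new
    return lam

rng = random.Random(0)
state, counts = (5,), {}
for _ in range(20_000):
    state = step(state, rng)
    counts[state] = counts.get(state, 0) + 1
print(sorted(counts))  # the 7 partitions of 5, each visited roughly equally
```

Corner moves connect all partitions of a fixed n, and the self-loop (re-adding the removed box) makes the chain aperiodic, so the Metropolis correction suffices for uniformity on this toy state space.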
Simple bounds for queueing systems with breakdowns
Computationally attractive and intuitively obvious simple bounds are proposed for finite service systems which are subject to random breakdowns. The services are assumed to be exponential; the up and down periods are allowed to be generally distributed. The bounds are based on product-form modifications and depend only on means. A formal proof is presented, which is of interest in itself. Numerical support indicates potential usefulness for quick engineering and performance evaluation purposes.
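The flavor of such mean-based bounds can be illustrated with a small simulation. The parameters and the specific bound below are my own illustration, not the paper's product-form bounds: a breakdown-prone server's throughput can never exceed its service rate times its availability, and availability depends only on the mean up and down periods.

```python
import random

# Illustrative slotted simulation (invented parameters): single server with
# breakdowns. Per slot: arrival w.p. 0.3; if the server is up and the queue is
# nonempty, a service completes w.p. 0.6; the server fails w.p. 0.1 when up
# and recovers w.p. 0.2 when down, giving availability 0.2/(0.1+0.2) = 2/3.

rng = random.Random(1)
queue, up, departures, slots = 0, True, 0, 400_000
for _ in range(slots):
    if rng.random() < 0.3:
        queue += 1
    if up and queue > 0 and rng.random() < 0.6:
        queue -= 1
        departures += 1
    up = rng.random() >= 0.1 if up else rng.random() < 0.2

throughput = departures / slots
# Mean-based capacity bound: service rate x availability.
capacity_bound = 0.6 * (0.2 / (0.1 + 0.2))  # = 0.4
print(throughput, capacity_bound)
```

Since the arrival rate 0.3 is below the effective capacity 0.4, the system is stable and the simulated throughput settles near the arrival rate, comfortably under the bound.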
Large deviation asymptotics and control variates for simulating large functions
Consider the normalized partial sums of a real-valued function F of a Markov
chain, μ_n(F) := n^{-1} Σ_{k=0}^{n-1} F(Φ(k)), n ≥ 1. The chain
Φ = {Φ(k) : k ≥ 0} takes values in a general state space X, with transition
kernel P, and it is assumed that the Lyapunov drift condition holds:
PV ≤ V - W + b·1_C, where V : X → (0, ∞), W : X → [1, ∞), b < ∞, the set C is
small and W dominates F. Under these assumptions, the following conclusions
are obtained:
1. It is known that this drift condition is equivalent to the existence of a
unique invariant distribution π satisfying π(W) < ∞, and the law of large
numbers holds for any function F dominated by W: μ_n(F) → π(F) a.s. as n → ∞.
2. The lower error probability, defined by P{μ_n(F) ≤ c} for c < π(F),
satisfies a large deviation limit theorem when the function F satisfies a
monotonicity condition. Under additional minor conditions an exact large
deviations expansion is obtained.
3. If F is near-monotone, then control variates are constructed based on the
Lyapunov function V, providing a pair of estimators that together satisfy
nontrivial large deviation asymptotics for the lower and upper error
probabilities. In an application to simulation of queues it is shown that
exact large deviation asymptotics are possible even when the estimator does
not satisfy a central limit theorem.
Comment: Published at http://dx.doi.org/10.1214/105051605000000737 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org)
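The control-variate idea can be sketched on a toy chain. The chain, the Lyapunov function, and the coefficient choice below are my own illustration, not the paper's estimators: for any V with π(V) finite, π(V - PV) = 0, so h := V - PV is a zero-mean function computable from one-step conditional means and can be subtracted from the running average to reduce variance.

```python
import random

# Illustrative sketch (invented chain): reflected random walk
# X' = max(X + s, 0), s = +1 w.p. 0.4 else -1. Target: pi(F) for F(x) = x,
# which equals 2 for this walk. Lyapunov function V(x) = x^2.

def h(x):
    # h = V - PV in closed form: for x >= 1,
    # PV(x) = 0.4(x+1)^2 + 0.6(x-1)^2 = x^2 - 0.4x + 1, so h(x) = 0.4x - 1;
    # at x = 0, PV(0) = 0.4, so h(0) = -0.4. By construction pi(h) = 0.
    return 0.4 * x - 1.0 if x >= 1 else -0.4

rng = random.Random(7)
x, fs, hs = 0, [], []
for _ in range(200_000):
    fs.append(float(x))
    hs.append(h(x))
    x = max(x + (1 if rng.random() < 0.4 else -1), 0)

n = len(fs)
mf, mh = sum(fs) / n, sum(hs) / n
cov = sum((f - mf) * (g - mh) for f, g in zip(fs, hs)) / n
var = sum((g - mh) ** 2 for g in hs) / n
theta = cov / var           # variance-minimizing coefficient, from the run
cv_estimate = mf - theta * mh
print(mf, cv_estimate)      # both estimate pi(F) = 2
```

Because h is strongly correlated with F here, the adjusted estimator tracks π(F) much more tightly than the plain running average across repeated runs.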
Statistically-secure ORAM with Õ(log² n) Overhead
We demonstrate a simple, statistically secure, ORAM with computational
overhead Õ(log² n); previous ORAM protocols achieve only computational
security (under computational assumptions) or require greater overhead. An
additional benefit of our ORAM is its conceptual simplicity, which makes it
easy to implement in both software and (commercially available) hardware.
Our construction is based on recent ORAM constructions due to Shi, Chan,
Stefanov, and Li (Asiacrypt 2011) and Stefanov and Shi (ArXiv 2012), but with
some crucial modifications in the algorithm that simplify the ORAM and enable
our analysis. A central component of our analysis is reducing the analysis of
our algorithm to a "supermarket" problem. Of independent interest (and of
importance to our analysis), we provide an upper bound on the rate of "upset"
customers in the "supermarket" problem.
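The supermarket model itself is easy to simulate. The parameters and the "upset" threshold below are illustrative assumptions, not the paper's: n queues, arrivals at rate λn, unit-rate service at each nonempty queue; each arrival samples two queues and joins the shorter, and we call a customer "upset" if even the shorter sampled queue already holds at least a threshold number of customers.

```python
import random

# Illustrative simulation of the "supermarket" (power-of-two-choices) model;
# parameters are invented. Simulates the embedded jump chain: each event is an
# arrival w.p. lam*n / (lam*n + busy), else a departure from a random
# nonempty queue.

def upset_rate(n, lam, events, threshold, seed):
    rng = random.Random(seed)
    queues = [0] * n
    arrivals = upset = 0
    for _ in range(events):
        busy = sum(1 for q in queues if q > 0)
        if rng.random() < lam * n / (lam * n + busy):
            i, j = rng.randrange(n), rng.randrange(n)
            short = i if queues[i] <= queues[j] else j
            arrivals += 1
            if queues[short] >= threshold:
                upset += 1       # even the shorter sampled queue is long
            queues[short] += 1
        elif busy:
            k = rng.choice([i for i, q in enumerate(queues) if q > 0])
            queues[k] -= 1
    return upset / arrivals

r2 = upset_rate(n=100, lam=0.7, events=200_000, threshold=2, seed=3)
r3 = upset_rate(n=100, lam=0.7, events=200_000, threshold=3, seed=3)
print(r2, r3)  # the upset rate falls off sharply as the threshold grows
```

The sharp drop between the two thresholds reflects the doubly-exponential queue-length decay of the two-choice model, which is what makes bounding the rate of upset customers so effective.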