Extreme Quantum Advantage for Rare-Event Sampling
We introduce a quantum algorithm for efficient biased sampling of the rare
events generated by classical memoryful stochastic processes. We show that this
quantum algorithm gives an extreme advantage over known classical biased
sampling algorithms in terms of the memory resources required. The quantum
memory advantage ranges from polynomial to exponential, and when sampling the
rare equilibrium configurations of spin systems the quantum advantage diverges.
Comment: 11 pages, 9 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/eqafbs.ht
Strong memoryless times and rare events in Markov renewal point processes
Let W be the number of points in (0,t] of a stationary finite-state Markov
renewal point process. We derive a bound for the total variation distance
between the distribution of W and a compound Poisson distribution. For any
nonnegative random variable \zeta, we construct a ``strong memoryless time''
\hat \zeta such that \zeta-t is exponentially distributed conditional on {\hat
\zeta\leq t, \zeta>t}, for each t. This is used to embed the Markov renewal
point process into another such process whose state space contains a frequently
observed state which represents loss of memory in the original process. We then
write W as the accumulated reward of an embedded renewal reward process, and
use a compound Poisson approximation error bound for this quantity by
Erhardsson. For a renewal process, the bound depends in a simple way on the
first two moments of the interrenewal time distribution, and on two constants
obtained from the Radon-Nikodym derivative of the interrenewal time
distribution with respect to an exponential distribution.
For a Poisson process, the bound is 0.
Comment: Published by the Institute of Mathematical Statistics
(http://www.imstat.org) in the Annals of Probability
(http://www.imstat.org/aop/) at http://dx.doi.org/10.1214/00911790400000005
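The limiting case stated at the end ("for a Poisson process, the bound is 0") can be checked numerically. A minimal Python sketch (the function names are mine, not the paper's): with Exponential(1) interrenewal times the renewal process is a Poisson process, so the count W in (0, t] is exactly Poisson(t) and its empirical distribution should be close to the Poisson law.

```python
import math
import random

def renewal_count(t, sampler):
    """Number of renewal points in (0, t] for i.i.d. interrenewal times."""
    n, s = 0, sampler()
    while s <= t:
        n += 1
        s += sampler()
    return n

def total_variation(p, q):
    """Total variation distance between two discrete distributions (dicts)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

random.seed(0)
t, trials = 5.0, 20000
# Exponential(1) interrenewals make the renewal process a Poisson process,
# so W ~ Poisson(t) exactly and the compound Poisson approximation is exact.
counts = [renewal_count(t, lambda: random.expovariate(1.0)) for _ in range(trials)]
emp = {}
for c in counts:
    emp[c] = emp.get(c, 0.0) + 1.0 / trials
pois = {k: math.exp(-t) * t**k / math.factorial(k) for k in range(30)}
print(total_variation(emp, pois))  # small, shrinking with more trials
```

The residual distance here is pure Monte Carlo error; for non-exponential interrenewals the paper's bound quantifies how far W can be from a compound Poisson law.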
Variable length Markov chains and dynamical sources
Infinite random sequences of letters can be viewed as stochastic chains or as
strings produced by a source, in the sense of information theory. The
relationship between Variable Length Markov Chains (VLMC) and probabilistic
dynamical sources is studied. We establish a probabilistic frame for context
trees and VLMC and we prove that any VLMC is a dynamical source for which we
explicitly build the mapping. On two examples, the ``comb'' and the ``bamboo
blossom'', we find a necessary and sufficient condition for the existence and
the uniqueness of a stationary probability measure for the VLMC. These two
examples are detailed in order to provide the associated Dirichlet series as
well as the generating functions of word occurrences.
Comment: 45 pages, 15 figures
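The VLMC idea can be illustrated with a toy sampler (a hypothetical two-symbol context tree, NOT the paper's "comb" or "bamboo blossom" examples): the law of the next symbol depends only on the longest suffix of the past that appears as a context in the tree.

```python
import random

# Hypothetical context tree over {0, 1}: the next-symbol distribution is
# attached to the longest matching suffix -- here "1", "10", or "00".
CONTEXTS = {
    "1":  {1: 0.5},   # past ends in 1   -> P(next = 1) = 0.5
    "10": {1: 0.1},   # past ends in 10  -> P(next = 1) = 0.1
    "00": {1: 0.8},   # past ends in 00  -> P(next = 1) = 0.8
}

def longest_context(history):
    """Longest suffix of the history that is a leaf of the context tree."""
    for k in (2, 1):
        suffix = "".join(map(str, history[-k:]))
        if suffix in CONTEXTS:
            return suffix
    raise ValueError("history does not match any context")

def sample_vlmc(n, seed=0):
    rng = random.Random(seed)
    hist = [1, 1]  # arbitrary initial context
    for _ in range(n):
        p1 = CONTEXTS[longest_context(hist)][1]
        hist.append(1 if rng.random() < p1 else 0)
    return hist[2:]

seq = sample_vlmc(10000)
print(sum(seq) / len(seq))  # empirical frequency of symbol 1
```

Because the three contexts cover every possible past, the chain is well defined; the paper's examples study when such chains admit a unique stationary measure.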
BioSimulator.jl: Stochastic simulation in Julia
Biological systems with intertwined feedback loops pose a challenge to
mathematical modeling efforts. Moreover, rare events, such as mutation and
extinction, complicate system dynamics. Stochastic simulation algorithms are
useful in generating time-evolution trajectories for these systems because they
can adequately capture the influence of random fluctuations and quantify rare
events. We present a simple and flexible package, BioSimulator.jl, for
implementing the Gillespie algorithm, τ-leaping, and related stochastic
simulation algorithms. The objective of this work is to provide scientists
across domains with fast, user-friendly simulation tools. We used the
high-performance programming language Julia because of its emphasis on
scientific computing. Our software package implements a suite of stochastic
simulation algorithms based on Markov chain theory. We provide the ability to
(a) diagram Petri Nets describing interactions, (b) plot average trajectories
and attached standard deviations of each participating species over time, and
(c) generate frequency distributions of each species at a specified time.
BioSimulator.jl's interface allows users to build models programmatically
within Julia. A model is then passed to the simulate routine to generate
simulation data. The built-in tools allow one to visualize results and compute
summary statistics. Our examples highlight the broad applicability of our
software to systems of varying complexity from ecology, systems biology,
chemistry, and genetics. The user-friendly nature of BioSimulator.jl encourages
the use of stochastic simulation, minimizes tedious programming efforts, and
reduces errors during model specification.
Comment: 27 pages, 5 figures, 3 tables
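The direct-method Gillespie algorithm that underlies such packages fits in a few lines. A minimal Python sketch with an invented interface (BioSimulator.jl's actual Julia API differs), applied to a birth-death process:

```python
import random

def gillespie(rates_fn, update_fns, state, t_max, seed=0):
    """Direct-method Gillespie SSA: exponential waiting times, then pick
    which reaction fired in proportion to its rate."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, dict(state))]
    while t < t_max:
        rates = rates_fn(state)
        total = sum(rates)
        if total == 0:
            break                            # no reaction can fire
        t += rng.expovariate(total)          # time to next reaction
        r, acc = rng.random() * total, 0.0   # choose the reaction
        for rate, update in zip(rates, update_fns):
            acc += rate
            if r < acc:
                update(state)
                break
        path.append((t, dict(state)))
    return path

# Birth-death process: X -> X+1 at rate b, X -> X-1 at rate d*X.
b, d = 5.0, 0.1
path = gillespie(
    rates_fn=lambda s: [b, d * s["X"]],
    update_fns=[lambda s: s.update(X=s["X"] + 1),
                lambda s: s.update(X=s["X"] - 1)],
    state={"X": 0}, t_max=200.0,
)
print(path[-1][1]["X"])  # fluctuates around b/d = 50 at stationarity
```

τ-leaping replaces the one-reaction-at-a-time loop with Poisson-distributed batches of reactions over a fixed step τ, trading exactness for speed.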
Variance Reduction Techniques in Monte Carlo Methods
Monte Carlo methods are simulation algorithms that estimate a numerical quantity in a statistical model of a real system; these algorithms are executed by computer programs. Variance reduction techniques (VRT) have been needed ever since the introduction of computers, even though computer speed has been increasing dramatically. This increased computing power has stimulated simulation analysts to develop ever more realistic models, so the net result has not been faster execution of simulation experiments; e.g., some modern simulation models need hours or days for a single 'run' (one replication of one scenario, or combination of simulation input values). Moreover, some simulation models represent rare events, which have extremely small probabilities of occurrence, so even modern computers would take 'forever' (centuries) to execute a single run, were it not that special VRT can reduce these excessively long runtimes to practical magnitudes.
Keywords: common random numbers; antithetic random numbers; importance sampling; control variates; conditioning; stratified sampling; splitting; quasi Monte Carlo
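One of the listed techniques, antithetic random numbers, is easy to demonstrate. A minimal Python sketch (function names are mine, not the article's): pairing each uniform draw U with 1-U yields negatively correlated estimates of E[e^U] = e - 1, which lowers the estimator's variance at the same sampling budget.

```python
import math
import random

def mc_estimate(n, seed=0):
    """Plain Monte Carlo for E[e^U], U ~ Uniform(0,1); true value is e - 1."""
    rng = random.Random(seed)
    xs = [math.exp(rng.random()) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, var / n  # estimate and its estimated variance

def antithetic_estimate(n, seed=0):
    """Antithetic variates: average e^U with e^(1-U) for each draw.
    The pair is negatively correlated, so the paired mean has lower variance."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n // 2):  # same total budget of n uniform evaluations
        u = rng.random()
        xs.append(0.5 * (math.exp(u) + math.exp(1 - u)))
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
    return mean, var / len(xs)

m1, v1 = mc_estimate(100000)
m2, v2 = antithetic_estimate(100000)
print(f"plain: {m1:.4f} (var {v1:.2e})  antithetic: {m2:.4f} (var {v2:.2e})")
```

For this monotone integrand the antithetic estimator's variance is far smaller than the plain estimator's; the other listed techniques (importance sampling, splitting, etc.) target rare-event settings where plain Monte Carlo would need the 'centuries-long' runs mentioned above.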
On the Performance of Short Block Codes over Finite-State Channels in the Rare-Transition Regime
As the mobile application landscape expands, wireless networks are tasked
with supporting different connection profiles, including real-time traffic and
delay-sensitive communications. Among many ensuing engineering challenges is
the need to better understand the fundamental limits of forward error
correction in non-asymptotic regimes. This article characterizes the
performance of random block codes over finite-state channels and evaluates
their queueing performance under maximum-likelihood decoding. In particular,
classical results from information theory are revisited in the context of
channels with rare transitions, and bounds on the probabilities of decoding
failure are derived for random codes. This creates an analysis framework where
channel dependencies within and across codewords are preserved. Such results
are subsequently integrated into a queueing problem formulation. For instance,
it is shown that, for random coding on the Gilbert-Elliott channel, the
performance analysis based on upper bounds on error probability provides very
good estimates of system performance and optimum code parameters. Overall, this
study offers new insights about the impact of channel correlation on the
performance of delay-aware, point-to-point communication links. It also
provides novel guidelines on how to select code rates and block lengths for
real-time traffic over wireless communication infrastructures.
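The Gilbert-Elliott channel referenced above is simple to simulate. A minimal Python sketch (parameter values are illustrative, chosen in the rare-transition regime where state changes are infrequent relative to the symbol rate):

```python
import random

def gilbert_elliott(n_bits, p_gb, p_bg, eps_good, eps_bad, seed=0):
    """Simulate bit errors on a two-state Gilbert-Elliott channel.

    p_gb, p_bg: transition probabilities good->bad and bad->good
                (rare-transition regime: both are small).
    eps_good, eps_bad: bit-flip probability in each state.
    """
    rng = random.Random(seed)
    state, errors = "good", []
    for _ in range(n_bits):
        eps = eps_good if state == "good" else eps_bad
        errors.append(1 if rng.random() < eps else 0)
        flip = p_gb if state == "good" else p_bg
        if rng.random() < flip:
            state = "bad" if state == "good" else "good"
    return errors

errs = gilbert_elliott(200000, p_gb=0.001, p_bg=0.01,
                       eps_good=1e-4, eps_bad=0.1)
print(sum(errs) / len(errs))  # near the stationary average error rate
```

Because transitions are rare, errors arrive in long bursts rather than independently, which is exactly the channel dependency within and across codewords that the article's analysis framework preserves.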
Strong and Weak Optimizations in Classical and Quantum Models of Stochastic Processes
Among the predictive hidden Markov models that describe a given stochastic
process, the {\epsilon}-machine is strongly minimal in that it minimizes every
R\'enyi-based memory measure. Quantum models can be smaller still. In contrast
with the {\epsilon}-machine's unique role in the classical setting, however,
among the class of processes described by pure-state hidden quantum Markov
models, there are those for which there does not exist any strongly minimal
model. Quantum memory optimization then depends on which memory measure best
matches a given problem circumstance.
Comment: 14 pages, 14 figures;
http://csc.ucdavis.edu/~cmg/compmech/pubs/uemum.ht
Dynamic Resource Management in Clouds: A Probabilistic Approach
Dynamic resource management has become an active area of research in the
Cloud Computing paradigm. The cost of resources varies significantly depending
on how they are configured and used. Hence, efficient resource management is of
prime interest to both Cloud Providers and Cloud Users. In this work we suggest
a probabilistic resource provisioning approach that can be exploited as the
input of a dynamic resource management scheme. Using a Video on Demand use case
to justify our claims, we propose an analytical model, inspired by standard
models developed for epidemic spreading, to represent sudden and intense
workload variations. We show that the resulting model verifies a Large
Deviation Principle that statistically characterizes extreme rare events, such
as the ones produced by "buzz/flash crowd effects" that may cause workload
overflow in the VoD context. This analysis provides valuable insight into the
abnormal behaviors such systems can be expected to exhibit. We exploit the
information obtained from the Large Deviation Principle to define policies
(Service Level Agreements) for the proposed Video on Demand use case. We
believe these policies for elastic resource provisioning and usage may be of
interest to all stakeholders in the emerging context of cloud networking.
Comment: IEICE Transactions on Communications (2012). arXiv admin note:
substantial text overlap with arXiv:1209.515
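How a Large Deviation Principle can size a system against rare overload can be shown with a deliberately simplified model (i.i.d. Bernoulli per-slot demand, NOT the paper's epidemic-inspired workload model): the probability that average demand over n slots exceeds a level a > p decays like exp(-n I(a)), so an SLA can choose n and a to make overflow acceptably improbable.

```python
import math
import random

# By Cramer's theorem for Bernoulli(p) demand, the overflow probability
# P(average demand over n slots > a) obeys the Chernoff bound
#   P(overflow) <= exp(-n * I(a)),
# with rate function I(a) = a*log(a/p) + (1-a)*log((1-a)/(1-p)).

def rate_function(a, p):
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def overflow_prob(n, a, p, trials=100000, seed=0):
    """Monte Carlo estimate of P(average demand over n slots > a)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        demand = sum(1 for _ in range(n) if rng.random() < p)
        if demand > a * n:
            hits += 1
    return hits / trials

p, a, n = 0.3, 0.5, 40
emp = overflow_prob(n, a, p)
bound = math.exp(-n * rate_function(a, p))
print(emp, bound)  # empirical overflow probability vs. Chernoff bound
```

The empirical probability sits below the exponential bound; in the paper's setting the same logic, applied to the epidemic-inspired workload model, characterizes the buzz/flash-crowd events that drive provisioning policies.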