Markovian Testing Equivalence and Exponentially Timed Internal Actions
In the theory of testing for Markovian processes developed so far,
exponentially timed internal actions are not admitted within processes. When
present, these actions cannot be abstracted away, because their execution takes
a nonzero amount of time and hence can be observed. On the other hand, they
must be carefully taken into account, in order not to equate processes that are
distinguishable from a timing viewpoint. In this paper, we recast the
definition of Markovian testing equivalence in the framework of a Markovian
process calculus including exponentially timed internal actions. Then, we show
that the resulting behavioral equivalence is a congruence, has a sound and
complete axiomatization, has a modal logic characterization, and can be decided
in polynomial time.
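The timing argument above can be made concrete with a small simulation (a sketch only, not the paper's process calculus; the function names are illustrative). A process that performs an exponentially timed internal action with rate 2 before an observable action with rate 1 has mean completion time 1/2 + 1 = 1.5, while the same process without the internal action has mean 1, so a tester that measures elapsed time distinguishes them:

```python
import random

def completion_time_with_internal(rate_tau, rate_a, rng):
    # Exponentially timed internal action (rate rate_tau) fires first,
    # then the observable action (rate rate_a): the delays add up.
    return rng.expovariate(rate_tau) + rng.expovariate(rate_a)

def completion_time_without_internal(rate_a, rng):
    # Only the observable action is performed.
    return rng.expovariate(rate_a)

rng = random.Random(42)
n = 100_000
with_tau = sum(completion_time_with_internal(2.0, 1.0, rng) for _ in range(n)) / n
without_tau = sum(completion_time_without_internal(1.0, rng) for _ in range(n)) / n
# Mean completion times differ (about 1/2 + 1 = 1.5 vs. 1.0), so the
# internal action is observable through timing and cannot be abstracted away.
print(with_tau, without_tau)
```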
The Spectra of Lamplighter Groups and Cayley Machines
We calculate the spectra and spectral measures associated to random walks on
restricted wreath products of finite groups with the infinite cyclic group, by
calculating the Kesten-von Neumann-Serre spectral measures for the random walks
on Schreier graphs of certain groups generated by automata. This generalises
the work of Grigorchuk and Zuk on the lamplighter group. In the process we
characterise when the usual spectral measure for a group generated by automata
coincides with the Kesten-von Neumann-Serre spectral measure.
Comment: 36 pages, improved exposition, main results slightly strengthened
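The paper's spectral measures are computed analytically via the Kesten-von Neumann-Serre construction; as a purely generic numerical illustration of the object being studied, one can compute the spectrum of the simple-random-walk operator on a finite graph. A cycle is used here only because its walk spectrum has the closed form cos(2*pi*k/n), which lets the computation be checked:

```python
import numpy as np

def walk_spectrum(adjacency):
    # Spectrum of the simple-random-walk operator M = D^{-1} A.
    degrees = adjacency.sum(axis=1)
    markov = adjacency / degrees[:, None]
    return np.sort(np.linalg.eigvals(markov).real)

# Cycle graph on n vertices: the walk operator has eigenvalues
# cos(2*pi*k/n), a finite stand-in for the Schreier-graph approximations
# used for automaton groups (the Kesten-von Neumann-Serre computation
# itself is analytic, not numerical).
n = 8
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = 1.0

spec = walk_spectrum(adj)
expected = np.sort(np.cos(2 * np.pi * np.arange(n) / n))
print(np.allclose(spec, expected))  # eigenvalues match the closed form
```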
CHARDA: Causal Hybrid Automata Recovery via Dynamic Analysis
We propose and evaluate a new technique for learning hybrid automata
automatically by observing the runtime behavior of a dynamical system. Working
from a sequence of continuous state values and predicates about the
environment, CHARDA recovers the distinct dynamic modes, learns a model for
each mode from a given set of templates, and postulates causal guard conditions
which trigger transitions between modes. Our main contribution is the use of
information-theoretic measures (1)~as a cost function for data segmentation and
model selection to penalize over-fitting and (2)~to determine the likely causes
of each transition. CHARDA is easily extended with different classes of model
templates, fitting methods, or predicates. In our experiments on a complex
videogame character, CHARDA successfully discovers a reasonable
over-approximation of the character's true behaviors. Our results also compare
favorably against recent work in automatically learning probabilistic timed
automata in an aircraft domain: CHARDA exactly learns the modes of these
simpler automata.
Comment: 7 pages, 2 figures. Accepted for IJCAI 201
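The segmentation-with-penalty idea behind contribution (1) can be sketched in miniature. The following is a generic MDL-style dynamic program over breakpoints, not CHARDA's actual cost function, templates, or fitting methods: each segment is fit by a constant model, and a fixed per-segment penalty plays the role of the information-theoretic complexity term that discourages over-fitting.

```python
import numpy as np

def segment(signal, penalty):
    # Dynamic programming over breakpoints: cost of a segment is its
    # squared error under a constant model plus a fixed penalty, so extra
    # segments must pay for themselves by reducing fit error.
    n = len(signal)
    prefix = np.concatenate([[0.0], np.cumsum(signal)])
    prefix2 = np.concatenate([[0.0], np.cumsum(np.square(signal))])

    def sse(i, j):  # squared error of a constant fit on signal[i:j]
        s, s2, m = prefix[j] - prefix[i], prefix2[j] - prefix2[i], j - i
        return s2 - s * s / m

    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    argmin = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + sse(i, j) + penalty
            if c < best[j]:
                best[j], argmin[j] = c, i
    cuts, j = [], n
    while j > 0:
        cuts.append(j)
        j = argmin[j]
    return sorted(cuts)  # right endpoints of the recovered segments

# Two clear "modes": the segmentation should cut exactly at the switch.
data = np.array([0.0] * 10 + [5.0] * 10)
print(segment(data, penalty=1.0))  # -> [10, 20]
```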
Application of a Bayesian Inference Method to Reconstruct Short-Range Atmospheric Dispersion Events
In the event of an accidental or intentional release of chemical or biological (CB) agents into the atmosphere, first responders and decision makers need to rapidly locate and characterize the source of dispersion events using limited information from sensor networks. In this study the stochastic event reconstruction tool (SERT) is applied to a subset of the Fusing Sensor Information from Observing Networks (FUSION) Field Trial 2007 (FFT 07) database. The inference in SERT is based on Bayesian inference with Markov chain Monte Carlo (MCMC) sampling. SERT adopts a probability model that takes into account both positive and zero-reading sensors. In addition to the location and strength of the dispersion event, empirical parameters in the forward model are also estimated to establish a data-driven plume model. Results demonstrate the effectiveness of the Bayesian inference approach to characterize the source of a short-range atmospheric release with uncertainty quantification.
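The Bayesian source-reconstruction loop can be sketched with a toy forward model and a random-walk Metropolis sampler. Everything here is illustrative: the isotropic-decay `forward` function stands in for SERT's data-driven plume model, the Gaussian sensor-noise likelihood and bounded uniform priors are assumptions, and `metropolis` is a plain textbook sampler rather than SERT's actual MCMC implementation.

```python
import math
import random

def forward(src_x, src_y, strength, sx, sy):
    # Toy isotropic-decay model standing in for a plume model:
    # predicted concentration at sensor position (sx, sy).
    d2 = (sx - src_x) ** 2 + (sy - src_y) ** 2 + 1.0
    return strength / d2

def log_posterior(theta, sensors, sigma=0.5):
    x, y, q = theta
    if not (0 <= x <= 10 and 0 <= y <= 10 and 0 < q <= 100):
        return -math.inf  # flat prior over a bounded box
    ll = 0.0
    for sx, sy, obs in sensors:
        pred = forward(x, y, q, sx, sy)
        ll += -0.5 * ((obs - pred) / sigma) ** 2  # Gaussian sensor noise
    return ll

def metropolis(sensors, steps=20000, rng=None):
    # Random-walk Metropolis over (source x, source y, strength).
    rng = rng or random.Random(0)
    theta = (5.0, 5.0, 10.0)
    lp = log_posterior(theta, sensors)
    samples = []
    for _ in range(steps):
        prop = tuple(t + rng.gauss(0, 0.3) for t in theta)
        lp_prop = log_posterior(prop, sensors)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples[steps // 2:]  # discard burn-in

# Synthetic event at (3, 7) with strength 20, seen by a 5x5 sensor grid.
true_theta = (3.0, 7.0, 20.0)
sensors = [(i * 2.5, j * 2.5, forward(*true_theta, i * 2.5, j * 2.5))
           for i in range(5) for j in range(5)]
post = metropolis(sensors)
est = [sum(s[k] for s in post) / len(post) for k in range(3)]
print(est)  # posterior mean should land near the true (3, 7, 20)
```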
Bayesian Structural Inference for Hidden Processes
We introduce a Bayesian approach to discovering patterns in structurally
complex processes. The proposed method of Bayesian Structural Inference (BSI)
relies on a set of candidate unifilar HMM (uHMM) topologies for inference of
process structure from a data series. We employ a recently developed exact
enumeration of topological epsilon-machines. (A sequel then removes the
topological restriction.) This subset of the uHMM topologies has the added
benefit that inferred models are guaranteed to be epsilon-machines,
irrespective of estimated transition probabilities. Properties of
epsilon-machines and uHMMs allow for the derivation of analytic expressions for
estimating transition probabilities, inferring start states, and comparing the
posterior probability of candidate model topologies, despite process internal
structure being only indirectly present in data. We demonstrate BSI's
effectiveness in estimating a process's randomness, as reflected by the Shannon
entropy rate, and its structure, as quantified by the statistical complexity.
We also compare using the posterior distribution over candidate models and the
single, maximum a posteriori model for point estimation and show that the
former more accurately reflects uncertainty in estimated values. We apply BSI
to in-class examples of finite- and infinite-order Markov processes, as well
as to an out-of-class, infinite-state hidden process.
Comment: 20 pages, 11 figures, 1 table; supplementary materials, 15 pages, 11
figures, 6 tables; http://csc.ucdavis.edu/~cmg/compmech/pubs/bsihp.ht
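Comparing the posterior probability of candidate model topologies can be illustrated in a far simpler model class than uHMMs: Bayesian comparison of Markov-chain orders on a binary series, where a conjugate Dirichlet(1, 1) prior on each context's next-symbol distribution gives a closed-form marginal likelihood. This mirrors the structure of BSI's model comparison but is not the paper's method.

```python
import math
from collections import Counter

def log_evidence(seq, order):
    # Marginal likelihood of a binary order-k Markov chain under a
    # Dirichlet(1, 1) prior per context: a product of Beta functions,
    # log B(n0 + 1, n1 + 1) per context (and log B(1, 1) = 0).
    counts = Counter()
    for i in range(order, len(seq)):
        counts[(seq[i - order:i], seq[i])] += 1
    contexts = {ctx for (ctx, _) in counts}
    lp = 0.0
    for ctx in contexts:
        n0, n1 = counts[(ctx, "0")], counts[(ctx, "1")]
        lp += (math.lgamma(n0 + 1) + math.lgamma(n1 + 1)
               - math.lgamma(n0 + n1 + 2))
    return lp

# Period-2 process "0101...": an order-1 topology should beat order-0,
# since each context ("0" or "1") predicts its successor deterministically.
seq = "01" * 200
print(log_evidence(seq, 0), log_evidence(seq, 1))
```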
Faster quantum mixing for slowly evolving sequences of Markov chains
Markov chain methods are remarkably successful in computational physics,
machine learning, and combinatorial optimization. The cost of such methods
often reduces to the mixing time, i.e., the time required to reach the steady
state of the Markov chain, which scales as $\delta^{-1}$, the inverse of the
spectral gap. It has long been conjectured that quantum computers offer nearly
generic quadratic improvements for mixing problems. However, except in special
cases, quantum algorithms achieve a run-time of $\widetilde{O}(\sqrt{\delta^{-1}} \sqrt{N})$, which introduces a costly dependence on the Markov chain size $N$,
not present in the classical case. Here, we re-address the problem of mixing of
Markov chains when these form a slowly evolving sequence. This setting is akin
to the simulated annealing setting and is commonly encountered in physics,
materials science, and machine learning. We provide a quantum memory-efficient
algorithm with a run-time of $\widetilde{O}(\sqrt{\delta^{-1}} \sqrt[4]{N})$,
neglecting logarithmic terms, which is an important improvement for large state
spaces. Moreover, our algorithms output quantum encodings of distributions,
which has advantages over classical outputs. Finally, we discuss the run-time
bounds of mixing algorithms and show that, under certain assumptions, our
algorithms are optimal.
Comment: 20 pages, 2 figures
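The classical quantities in the abstract, the spectral gap $\delta$ and the $O(\delta^{-1})$ mixing time, can be checked numerically on a small reversible chain. This sketch is purely classical and illustrative (the quantum algorithms themselves are not reproduced here); a lazy random walk on a cycle is chosen because its gap has a closed form.

```python
import numpy as np

def spectral_gap(P):
    # Gap delta = 1 - |lambda_2| of the transition matrix; the classical
    # mixing time scales as 1/delta, while the quantum approaches above
    # target roughly sqrt(1/delta) (at the cost of a dependence on N).
    eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - eig[1]

def mixing_steps(P, eps=1e-3):
    # Steps until total-variation distance to the stationary
    # distribution drops below eps, starting from state 0.
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P.T)  # stationary dist = leading left eigvec
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    dist = np.zeros(n)
    dist[0] = 1.0
    steps = 0
    while 0.5 * np.abs(dist - pi).sum() > eps:
        dist = dist @ P
        steps += 1
    return steps

# Lazy random walk on a 16-cycle: a slow chain with a small spectral gap.
n = 16
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.25

delta = spectral_gap(P)  # closed form: 0.5 - 0.5 * cos(2*pi/16)
print(delta, mixing_steps(P), 1.0 / delta)
```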