SOS rule formats for convex and abstract probabilistic bisimulations
Probabilistic transition system specifications (PTSSs) in the format provide structural operational semantics for
Segala-type systems that exhibit both probabilistic and nondeterministic
behavior, and guarantee that bisimilarity is a congruence for all operators
defined in such a format. Starting from this
format, we obtain restricted formats that guarantee that three coarser
bisimulation equivalences are congruences. We focus on (i) Segala's variant of
bisimulation that considers combined transitions, which we call here "convex
bisimulation"; (ii) the bisimulation equivalence resulting from considering
Park & Milner's bisimulation on the usual stripped probabilistic transition
system (translated into a labelled transition system), which we call here
"probability obliterated bisimulation"; and (iii) a "probability abstracted
bisimulation", which, like bisimulation, preserves the structure of the
distributions but instead, it ignores the probability values. In addition, we
compare these bisimulation equivalences and provide a logic characterization
for each of them.Comment: In Proceedings EXPRESS/SOS 2015, arXiv:1508.0634
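The combined transitions underlying convex bisimulation let a scheduler mix the target distributions of several equally labelled transitions. As a small illustration (our own sketch, not code from the paper; the function name and the restriction to two source distributions are our assumptions), the following checks whether a target distribution is a convex combination of two given transition distributions:

```python
def is_combined_of(d1, d2, target, eps=1e-9):
    """Check whether target = lam*d1 + (1-lam)*d2 for some lam in [0, 1].

    Distributions are dicts mapping states to probabilities.
    """
    states = set(d1) | set(d2) | set(target)
    lam = None
    for s in states:
        a, b = d1.get(s, 0.0), d2.get(s, 0.0)
        if abs(a - b) > eps:
            # Solve for the mixing coefficient on a coordinate where d1, d2 differ.
            lam = (target.get(s, 0.0) - b) / (a - b)
            break
    if lam is None:
        # d1 == d2: target must equal that common distribution.
        return all(abs(d1.get(s, 0.0) - target.get(s, 0.0)) <= eps for s in states)
    if not (-eps <= lam <= 1 + eps):
        return False
    # Verify the candidate coefficient on every coordinate.
    return all(abs(lam * d1.get(s, 0.0) + (1 - lam) * d2.get(s, 0.0)
                   - target.get(s, 0.0)) <= eps for s in states)
```

For more than two transitions the same membership test becomes a small linear-programming feasibility problem; the two-distribution case keeps the idea visible without an LP solver.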
A Definition Scheme for Quantitative Bisimulation
FuTSs, state-to-function transition systems, are generalizations of labeled
transition systems and of familiar quantitative semantical models such as
continuous-time Markov chains, interactive Markov chains, and Markov automata.
A general scheme for the definition of a notion of strong bisimulation
associated with a FuTS is proposed. It is shown that this notion of
bisimulation for a FuTS coincides with the coalgebraic notion of behavioral
equivalence associated to the functor on Set given by the type of the FuTS. For
a series of concrete quantitative semantical models the notion of bisimulation
as reported in the literature is proven to coincide with the notion of
quantitative bisimulation obtained from the scheme. The comparison includes
models with orthogonal behaviour, like interactive Markov chains, and with
multiple levels of behavior, like Markov automata. As a consequence of the
general result relating FuTS bisimulation and behavioral equivalence we obtain,
in a systematic way, a coalgebraic underpinning of all quantitative
bisimulations discussed.

Comment: In Proceedings QAPL 2015, arXiv:1509.0816
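For FuTSs whose transition function assigns real values (e.g. CTMC rates), strong bisimulation relates states that accumulate equal values into each equivalence class. A minimal partition-refinement sketch under that reading (the encoding as a rate dictionary and the example chain are our assumptions, not the paper's):

```python
def bisimulation_partition(states, rate):
    """Coarsest partition such that related states have equal total
    rate into every block. `rate` maps (source, target) to a float."""
    part = [set(states)]
    changed = True
    while changed:
        changed = False
        new_part = []
        for block in part:
            groups = {}
            for s in block:
                # Signature: total outgoing rate into each current block.
                sig = tuple(round(sum(rate.get((s, t), 0.0) for t in b), 9)
                            for b in part)
                groups.setdefault(sig, set()).add(s)
            new_part.extend(groups.values())
            if len(groups) > 1:
                changed = True
        part = new_part
    return part
```

In the three-state example below, states 1 and 2 both move to state 3 at rate 2.0, while state 3 splits rate 0.5/0.5 back to them, so refinement stabilizes with 1 and 2 identified.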
Probabilistic Opacity for Markov Decision Processes
Opacity is a generic security property that has been defined on
(non-probabilistic) transition systems and later on Markov chains with labels. For a
secret predicate, given as a subset of runs, and a function describing the view
of an external observer, the value of interest for opacity is a measure of the
set of runs disclosing the secret. We extend this definition to the richer
framework of Markov decision processes, where nondeterministic choice is
combined with probabilistic transitions, and we study related decidability
problems with partial or complete observation hypotheses for the schedulers. We
prove that all questions are decidable with complete observation and
ω-regular secrets. With partial observation, we prove that all
quantitative questions are undecidable, but the question whether a system is
almost surely non-opaque becomes decidable for a restricted class of
ω-regular secrets, as well as for all ω-regular secrets under
finite-memory schedulers.
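The "measure of the set of runs disclosing the secret" can be made concrete on a small finite-horizon example. The sketch below (names, encoding, and the restriction to a fixed horizon are our assumptions; the paper works with general measurable sets of infinite runs) counts a run as disclosing when every run with the same observation satisfies the secret predicate:

```python
def disclosure(init, trans, obs, secret, horizon):
    """Probability mass of horizon-length runs whose observation
    reveals the secret: every run sharing that observation is secret.

    init: dict state -> initial probability
    trans: dict state -> dict of successor -> probability
    obs: dict state -> observation symbol
    secret: predicate on runs (tuples of states)
    """
    runs = [((s,), p) for s, p in init.items() if p > 0]
    for _ in range(horizon):
        runs = [(r + (t,), p * q)
                for r, p in runs
                for t, q in trans.get(r[-1], {}).items() if q > 0]
    classes = {}
    for r, p in runs:
        classes.setdefault(tuple(obs[s] for s in r), []).append((r, p))
    return sum(p for cls in classes.values()
               if all(secret(r) for r, _ in cls)
               for _, p in cls)
```

If the two branches below carried the same observation, the disclosure would drop to 0: the observer could no longer tell the secret runs apart.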
Mutation testing from probabilistic finite state machines
Mutation testing traditionally involves mutating a program to produce a set of mutants and using these mutants either to estimate the effectiveness of a test suite or to drive test generation. Recently, however, this approach has been applied to specifications such as those written as finite state machines. This paper extends mutation testing to finite state machine models in which transitions have associated probabilities. The paper describes several ways of mutating a probabilistic finite state machine (PFSM) and shows how test sequences that distinguish between a PFSM and its mutants can be generated. Testing then involves applying each test sequence multiple times, observing the resultant output sequences and using results from statistical sampling theory to compare the observed frequency of each output sequence with that expected.
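The distinguishing step can be illustrated by computing, for a candidate test sequence, the exact output-sequence distribution of a PFSM and of a mutant; a sequence is useful when the two distributions differ enough for statistical sampling to separate them. A hypothetical sketch (the encoding of transitions as `(output, next_state, probability)` triples and the example machines are our own):

```python
def output_dist(trans, start, inputs):
    """Exact distribution over output sequences for a given input sequence.

    trans maps (state, input) to a list of (output, next_state, prob).
    """
    cur = {((), start): 1.0}  # (outputs so far, current state) -> probability
    for a in inputs:
        nxt = {}
        for (outs, s), p in cur.items():
            for out, t, q in trans[(s, a)]:
                key = (outs + (out,), t)
                nxt[key] = nxt.get(key, 0.0) + p * q
        cur = nxt
    dist = {}
    for (outs, _), p in cur.items():  # marginalize out the final state
        dist[outs] = dist.get(outs, 0.0) + p
    return dist
```

Here a mutant that swaps the 0.8/0.2 probabilities produces output "00" with probability 0.04 instead of 0.64 on the test sequence "aa", a gap that repeated applications of the test can detect statistically.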
p-automata: acceptors for Markov Chains
We present p-automata, which accept an entire Markov chain as input. Acceptance is determined by solving a sequence of stochastic weak and weak games. The set of languages of Markov chains obtained in this way is closed under Boolean operations. Language emptiness and containment are equi-solvable, and languages themselves are closed under bisimulation. A Markov chain (respectively, PCTL formula) determines a p-automaton whose language is the bisimulation equivalence class of that Markov chain (respectively, the set of models of that formula). We define a simulation game between p-automata, decidable in EXPTIME. Simulation under-approximates language containment, whose decidability status is presently unknown.
Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy
A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case where the "semi-linear" languages all collapse into the regular languages, using analytic tools adapted from the classical setting we show there is no collapse in the probabilistic hierarchy: more distributions become definable at each level. We also address related issues such as closure under probabilistic conditioning.
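The gap between levels can already be seen on unary strings. A probabilistic regular grammar S → 1S (with probability p) | ε yields a geometric length distribution, while the context-free S → SS (with probability p) | 1 yields Catalan-weighted lengths, computable by a convolution recurrence. A small sketch under these assumptions (the two grammars are our own illustrative choices, not examples from the paper):

```python
def regular_unary(p, n_max):
    """S -> 1 S with prob p, S -> eps with prob 1-p: geometric lengths."""
    return {n: (1 - p) * p ** n for n in range(n_max + 1)}

def cfg_unary(p, n_max):
    """S -> S S with prob p, S -> 1 with prob 1-p.

    q[n] = probability of deriving the string 1^n; the recurrence is a
    convolution over the split point of the two subtrees.
    """
    q = {1: 1 - p}
    for n in range(2, n_max + 1):
        q[n] = p * sum(q[k] * q[n - k] for k in range(1, n))
    return q
```

For p = 1/2 the context-free weights are q[n] = C(n-1) p^(n-1) (1-p)^n with C the Catalan numbers; for p > 1/2 the total mass is strictly below 1 (the derivation may fail to terminate), one of the analytic subtleties that separates the probabilistic levels.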
Computing the Least Fixed Point of Positive Polynomial Systems
We consider equation systems of the form X_1 = f_1(X_1, ..., X_n), ..., X_n =
f_n(X_1, ..., X_n) where f_1, ..., f_n are polynomials with positive real
coefficients. In vector form we denote such an equation system by X = f(X) and
call f a system of positive polynomials, SPP for short. Equation systems of this
kind appear naturally in the analysis of stochastic models like stochastic
context-free grammars (with numerous applications to natural language
processing and computational biology), probabilistic programs with procedures,
web-surfing models with back buttons, and branching processes. The least
nonnegative solution mu f of an SPP equation X = f(X) is of central interest
for these models. Etessami and Yannakakis have suggested a particular version
of Newton's method to approximate mu f.
We extend a result of Etessami and Yannakakis and show that Newton's method
starting at 0 always converges to mu f. We obtain lower bounds on the
convergence speed of the method. For so-called strongly connected SPPs we prove
the existence of a threshold k_f such that for every i >= 0 the (k_f+i)-th
iteration of Newton's method has at least i valid bits of mu f. The proof
yields an explicit bound for k_f depending only on syntactic parameters of f.
We further show that for arbitrary SPP equations Newton's method still
converges linearly: there are k_f>=0 and alpha_f>0 such that for every i>=0 the
(k_f+alpha_f i)-th iteration of Newton's method has at least i valid bits of mu
f. The proof yields an explicit bound for alpha_f; the bound is exponential in
the number of equations, but we also show that it is essentially optimal.
Constructing a bound for k_f is still an open problem. Finally, we also provide
a geometric interpretation of Newton's method for SPPs.

Comment: This is a technical report that goes along with an article to appear
in SIAM Journal on Computing
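For a single equation the iteration is easy to reproduce. Below is a minimal sketch (the example system is ours, not one from the report) of Newton's method started at 0 for the scalar SPP X = 0.25 X^2 + 0.25 X + 0.5, whose fixed points are 1 and 2; the iterates increase monotonically toward the least fixed point mu f = 1:

```python
def newton_spp(f, df, iterations):
    """Newton's method for X = f(X), started at 0.

    Each step solves the linearization of g(X) = f(X) - X at the
    current iterate: x' = x + (f(x) - x) / (1 - f'(x)).
    """
    x = 0.0
    for _ in range(iterations):
        x += (f(x) - x) / (1.0 - df(x))
    return x

f = lambda x: 0.25 * x * x + 0.25 * x + 0.5   # positive coefficients: an SPP
df = lambda x: 0.5 * x + 0.25                 # derivative of f
# iterates: 0, 2/3, 14/15, 0.996..., converging to the least fixed point 1,
# never overshooting toward the other fixed point 2
```

Since all coefficients are positive, f is monotone on the nonnegative reals and the iterates stay below mu f, matching the convergence-from-0 result the abstract extends.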