Linear Distances between Markov Chains
We introduce a general class of distances (metrics) between Markov chains,
which are based on linear behaviour. This class encompasses distances given
topologically (such as the total variation distance or trace distance) as well
as by temporal logics or automata. We investigate which of the distances can be
approximated by observing the systems, i.e. by black-box testing or simulation,
and we provide both negative and positive results.
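As a toy illustration of one such linear distance, the total variation distance between the k-step state distributions of two finite Markov chains can be computed exactly from their transition matrices, and estimated by black-box simulation as the abstract suggests. The sketch below is illustrative only; all function names are made up for this example.

```python
import random

def step_distribution(P, init, k):
    """Exact state distribution after k steps of the chain with
    transition matrix P, started from the distribution init."""
    n = len(P)
    dist = list(init)
    for _ in range(k):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

def tv_distance(p, q):
    """Total variation distance: half the L1 distance between distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def simulate_state(P, init, k, rng):
    """One black-box run: sample the state reached after k steps."""
    state = rng.choices(range(len(P)), weights=init)[0]
    for _ in range(k):
        state = rng.choices(range(len(P)), weights=P[state])[0]
    return state

# Two 2-state chains, both started in state 0.
P1 = [[0.9, 0.1], [0.1, 0.9]]
P2 = [[0.5, 0.5], [0.5, 0.5]]
init = [1.0, 0.0]

exact = tv_distance(step_distribution(P1, init, 1),
                    step_distribution(P2, init, 1))
print(round(exact, 6))  # 0.4: the chains are distinguished after one step

# Monte Carlo estimate of the same quantity from sampled runs only.
rng = random.Random(0)
runs = 10_000
hits1 = sum(simulate_state(P1, init, 1, rng) == 0 for _ in range(runs))
hits2 = sum(simulate_state(P2, init, 1, rng) == 0 for _ in range(runs))
print(abs(hits1 - hits2) / runs)  # close to 0.4
```

The simulation-based estimate converges to the exact distance only for distances that are observable from runs, which is exactly the distinction the abstract investigates.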
On Probabilistic Applicative Bisimulation and Call-by-Value λ-Calculi (Long Version)
Probabilistic applicative bisimulation is a recently introduced coinductive
methodology for program equivalence in a probabilistic, higher-order, setting.
In this paper, the technique is applied to a typed, call-by-value,
lambda-calculus. Surprisingly, the obtained relation coincides with context
equivalence, contrary to what happens when call-by-name evaluation is
considered. Even more surprisingly, full-abstraction only holds in a symmetric
setting.
Unprovability of the Logical Characterization of Bisimulation
We quickly review labelled Markov processes (LMP) and provide a
counterexample showing that in general measurable spaces, event bisimilarity
and state bisimilarity differ in LMP. This shows that the logic in Desharnais
[*] does not characterize state bisimulation in non-analytic measurable spaces.
Furthermore we show that, under current foundations of Mathematics, such
logical characterization is unprovable for spaces that are projections of a
coanalytic set. Underlying this construction there is a proof that stationary
Markov processes over general measurable spaces do not have semi-pullbacks.
([*] J. Desharnais, Labelled Markov Processes. PhD thesis, School of Computer Science, McGill University, Montréal, 1999.)
Distribution-based bisimulation for labelled Markov processes
In this paper we propose a (sub)distribution-based bisimulation for labelled
Markov processes and compare it with earlier definitions of state and event
bisimulation, which both only compare states. In contrast to those state-based
bisimulations, our distribution bisimulation is weaker, but corresponds more
closely to linear properties. We construct a logic and a metric to describe our
distribution bisimulation and discuss linearity, continuity and compositional
properties. (Accepted by FORMATS 201)
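A minimal sketch of the lifting at the heart of such distribution-based comparisons, under the simplifying assumption that an equivalence on states is given as a partition (the function name and data layout are illustrative): two (sub)distributions are related exactly when they assign the same mass to every equivalence class.

```python
def related(partition, mu, nu, tol=1e-9):
    """Lift a state partition to (sub)distributions: mu and nu are
    related iff they put equal mass on every equivalence class.
    Missing states are treated as carrying zero mass, so the check
    also works for subdistributions."""
    for block in partition:
        mass_mu = sum(mu.get(s, 0.0) for s in block)
        mass_nu = sum(nu.get(s, 0.0) for s in block)
        if abs(mass_mu - mass_nu) > tol:
            return False
    return True

# States 0 and 1 are equivalent; mu and nu differ state-by-state
# but agree class-by-class, so they are related.
partition = [{0, 1}, {2}]
mu = {0: 0.3, 1: 0.2, 2: 0.5}
nu = {0: 0.1, 1: 0.4, 2: 0.5}
print(related(partition, mu, nu))  # True
```

This illustrates why a distribution-based notion can be strictly weaker than a state-based one: mu and nu above are indistinguishable at the level of classes even though no state-to-state matching exists.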
Segregating Event Streams and Noise with a Markov Renewal Process Model
DS and MP are supported by EPSRC Leadership Fellowship EP/G007144/1