Limit Synchronization in Markov Decision Processes
Markov decision processes (MDP) are finite-state systems with both strategic
and probabilistic choices. After fixing a strategy, an MDP produces a sequence
of probability distributions over states. The sequence is eventually
synchronizing if the probability mass accumulates in a single state, possibly
in the limit. Precisely, for 0 <= p <= 1 the sequence is p-synchronizing if a
probability distribution in the sequence assigns probability at least p to some
state, and we distinguish three synchronization modes: (i) sure winning if
there exists a strategy that produces a 1-synchronizing sequence; (ii)
almost-sure winning if there exists a strategy that produces a sequence that
is, for all epsilon > 0, a (1-epsilon)-synchronizing sequence; (iii) limit-sure
winning if for all epsilon > 0, there exists a strategy that produces a
(1-epsilon)-synchronizing sequence.
We consider the problem of deciding whether an MDP is sure, almost-sure,
or limit-sure winning, and we establish the decidability and optimal complexity
for all modes, as well as the memory requirements for winning strategies. Our
main contributions are as follows: (a) for each winning mode we present
characterizations that give a PSPACE complexity for the decision problems, and
we establish matching PSPACE lower bounds; (b) we show that for sure winning
strategies, exponential memory is sufficient and may be necessary, and that in
general infinite memory is necessary for almost-sure winning, and unbounded
memory is necessary for limit-sure winning; (c) along with our results, we
establish new complexity results for alternating finite automata over a
one-letter alphabet.
Synchronization and Control in Intrinsic and Designed Computation: An Information-Theoretic Analysis of Competing Models of Stochastic Computation
We adapt tools from information theory to analyze how an observer comes to
synchronize with the hidden states of a finitary, stationary stochastic
process. We show that synchronization is determined by both the process's
internal organization and by an observer's model of it. We analyze these
components using the convergence of state-block and block-state entropies,
comparing them to the previously known convergence properties of the Shannon
block entropy. Along the way, we introduce a hierarchy of information
quantifiers as derivatives and integrals of these entropies, which parallels a
similar hierarchy introduced for block entropy. We also draw out the duality
between synchronization properties and a process's controllability. The tools
lead to a new classification of a process's alternative representations in
terms of minimality, synchronizability, and unifilarity.
Comment: 25 pages, 13 figures, 1 table
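The convergence behaviour of the Shannon block entropy can be estimated directly from sampled data. The sketch below is an illustration, not the paper's method: it samples the Golden Mean process (a standard unifilar example, chosen here as an assumption) and prints the discrete derivative H(L) - H(L-1), which converges to the entropy rate.

```python
import math
import random
from collections import Counter

def block_entropy(seq, L):
    """Shannon entropy (in bits) of length-L blocks, estimated from seq."""
    counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Sample the Golden Mean process: a 0 is never followed by another 0,
# and from state A the symbols 0 and 1 are equally likely.
random.seed(0)
state, seq = 'A', []
for _ in range(100_000):
    if state == 'A':
        emit_one = random.random() < 0.5
        seq.append(1 if emit_one else 0)
        state = 'A' if emit_one else 'B'
    else:               # state B is forced: emit 1 and return to A
        seq.append(1)
        state = 'A'

# The discrete derivative H(L) - H(L-1) converges to the entropy rate
# (2/3 of a bit for this process).
for L in range(1, 6):
    print(L, round(block_entropy(seq, L) - block_entropy(seq, L - 1), 3))
```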
High-level Counterexamples for Probabilistic Automata
Providing compact and understandable counterexamples for violated system
properties is an essential task in model checking. Existing work on
counterexamples for probabilistic systems has so far computed either a large set of
system runs or a subset of the system's states, both of which are of limited
use in manual debugging. Many probabilistic systems are described in a guarded
command language like the one used by the popular model checker PRISM. In this
paper we describe how to identify a smallest possible subset of the commands
which together make the system erroneous. We additionally show how
the selected commands can be further simplified to obtain a well-understandable
counterexample.
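The command-selection idea can be illustrated abstractly. The sketch below assumes a model-checking oracle `violates` (a hypothetical callable, not PRISM's API) and shrinks a command set by iterative deletion. Note the hedge: this greedy loop only guarantees an inclusion-minimal subset, whereas the paper targets a smallest one, so it illustrates the setting rather than the paper's algorithm.

```python
def minimal_violating_subset(commands, violates):
    """Shrink `commands` to an inclusion-minimal subset that still makes
    the induced sub-system violate the property, where `violates` is an
    assumed model-checking oracle on command subsets."""
    subset = list(commands)
    changed = True
    while changed:
        changed = False
        for cmd in list(subset):
            trial = [c for c in subset if c != cmd]
            if violates(trial):      # still erroneous without cmd: drop it
                subset = trial
                changed = True
    return subset

# Toy oracle: the "system" is erroneous whenever commands a and c are both kept.
cmds = ['a', 'b', 'c', 'd']
print(minimal_violating_subset(cmds, lambda s: 'a' in s and 'c' in s))
```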
A Compositional Semantics for Stochastic Reo Connectors
In this paper we present a compositional semantics for the channel-based
coordination language Reo which enables the analysis of quality of service
(QoS) properties of service compositions. For this purpose, we annotate Reo
channels with stochastic delay rates and explicitly model data-arrival rates at
the boundary of a connector, to capture its interaction with the services that
comprise its environment. We propose Stochastic Reo automata as an extension of
Reo automata, in order to compositionally derive a QoS-aware semantics for Reo.
We further present a translation of Stochastic Reo automata to Continuous-Time
Markov Chains (CTMCs). This translation enables us to use third-party CTMC
verification tools to do an end-to-end performance analysis of service
compositions.
Comment: In Proceedings FOCLASA 2010, arXiv:1007.499
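The end-to-end analysis rests on standard CTMC machinery. As a sketch, the snippet below builds a small hypothetical rate matrix (the states and rates are invented, not derived from an actual Reo connector) and computes a transient distribution by uniformization, the kind of computation a third-party CTMC tool would perform.

```python
import numpy as np

# Hypothetical 3-state CTMC: 0 = idle, 1 = data pending, 2 = delivered
# (rates invented for illustration).
Q = np.array([
    [-2.0,  2.0,  0.0],   # arrival rate 2
    [ 0.0, -3.0,  3.0],   # processing rate 3
    [ 1.0,  0.0, -1.0],   # reset rate 1
])

def transient(Q, x0, t, eps=1e-9):
    """Transient distribution x0 * exp(Q t), computed by uniformization."""
    lam = max(-Q.diagonal())           # uniformization rate
    P = np.eye(len(Q)) + Q / lam       # DTMC of the uniformized chain
    term = np.array(x0, dtype=float)   # current Poisson-weighted term x0 P^k
    acc = np.zeros_like(term)
    weight = np.exp(-lam * t)          # Poisson(lam*t) probability of k jumps
    k, total = 0, weight
    while total < 1 - eps:
        acc += weight * term
        term = term @ P
        k += 1
        weight *= lam * t / k
        total += weight
    return acc + weight * term

x = transient(Q, [1.0, 0.0, 0.0], t=10.0)
print(np.round(x, 3))  # close to the stationary distribution for large t
```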
Markov two-components processes
We propose Markov two-components processes (M2CP) as a probabilistic model of
asynchronous systems based on the trace semantics for concurrency. Considering
an asynchronous system distributed over two sites, we introduce concepts and
tools to manipulate random trajectories in an asynchronous framework: stopping
times, an Asynchronous Strong Markov property, recurrent and transient states
and irreducible components of asynchronous probabilistic processes. The
asynchrony assumption implies that there is no global totally ordered clock
ruling the system. Instead, time appears as partially ordered and random. We
construct and characterize M2CP through a finite family of transition matrices.
M2CP have a local independence property that guarantees that local components
are independent in the probabilistic sense, conditionally on their
synchronization constraints. A synchronization product of two Markov chains is
introduced, as a natural example of M2CP.
Comment: 34 pages
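The fully independent special case of combining two local chains can be sketched with a plain Kronecker product of transition matrices. This is an assumption for illustration only: the paper's synchronization product additionally handles shared synchronization constraints, which the sketch below does not model.

```python
import numpy as np

# Two local Markov chains, one per site (matrices invented for illustration).
A = np.array([[0.9, 0.1],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# Kronecker product: the two components evolve independently, and the pair
# (i, j) moves to (i', j') with probability A[i, i'] * B[j, j'].
P = np.kron(A, B)
print(P.shape)        # 4 x 4 transition matrix over pairs of local states
print(P.sum(axis=1))  # each row is still a probability distribution
```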
Synchronizing Objectives for Markov Decision Processes
We introduce synchronizing objectives for Markov decision processes (MDP).
Intuitively, a synchronizing objective requires that eventually, at every step
there is a state which concentrates almost all the probability mass. In
particular, it implies that the probabilistic system behaves in the long run
like a deterministic system: eventually, the current state of the MDP can be
identified with almost certainty.
We study the problem of deciding the existence of a strategy to enforce a
synchronizing objective in MDPs. We show that the problem is decidable for
general strategies, as well as for blind strategies where the player cannot
observe the current state of the MDP. We also show that pure strategies are
sufficient, but memory may be necessary.
Comment: In Proceedings iWIGP 2011, arXiv:1102.374
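The role of the strategy can be made concrete with a toy MDP (the actions and matrices are invented for illustration). A pure, memoryless, blind strategy fixes one action for every step without observing the current state; the induced distribution sequence then shows whether the probability mass concentrates.

```python
import numpy as np

# A 2-state MDP with two actions, given as one transition matrix per action.
mdp = {
    'stay': np.array([[1.0, 0.0],
                      [0.0, 1.0]]),
    'go':   np.array([[0.0, 1.0],
                      [0.0, 1.0]]),
}

def distribution_sequence(strategy, x0, steps):
    """Distributions produced by a pure memoryless blind strategy: the
    same action is applied at every step, regardless of the state."""
    x = np.array(x0, dtype=float)
    out = [x]
    for _ in range(steps):
        x = x @ mdp[strategy]
        out.append(x)
    return out

# Under 'go' all mass concentrates in state 1 after one step, so the
# synchronizing objective is met; under 'stay', from a split initial
# distribution, it never is.
x0 = [0.5, 0.5]
print(distribution_sequence('go', x0, 3)[-1])
print(distribution_sequence('stay', x0, 3)[-1])
```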