Symblicit algorithms for optimal strategy synthesis in monotonic Markov decision processes
When treating Markov decision processes (MDPs) with large state spaces, using
explicit representations quickly becomes infeasible. Recently, Wimmer et al. have
proposed a so-called symblicit algorithm for the synthesis of optimal
strategies in MDPs, in the quantitative setting of expected mean-payoff. This
algorithm, based on the strategy iteration algorithm of Howard and Veinott,
efficiently combines symbolic and explicit data structures, and uses binary
decision diagrams as symbolic representation. The aim of this paper is to show
that the new data structure of pseudo-antichains (an extension of antichains)
provides another interesting alternative, especially for the class of monotonic
MDPs. We design efficient pseudo-antichain based symblicit algorithms (with
open source implementations) for two quantitative settings: the expected
mean-payoff and the stochastic shortest path. For two practical applications
coming from automated planning and LTL synthesis, we report promising
experimental results w.r.t. both the run time and the memory consumption.
Comment: In Proceedings SYNT 2014, arXiv:1407.493
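The paper's pseudo-antichains extend the antichain data structure, which represents a downward-closed family of sets by storing only its subset-maximal elements. As an illustrative sketch (not the paper's implementation; class and method names are invented), a minimal antichain over finite sets could look like this:

```python
# Minimal sketch of an antichain over finite sets: only subset-maximal
# elements are stored, so the structure compactly represents the
# downward closure of its elements. Illustrative only; the paper's
# pseudo-antichains generalize this idea.
class Antichain:
    def __init__(self):
        self.elements = []  # pairwise incomparable frozensets

    def insert(self, s):
        s = frozenset(s)
        # Discard s if a stored element already dominates it.
        if any(s <= e for e in self.elements):
            return False
        # Remove stored elements dominated by s, then add s.
        self.elements = [e for e in self.elements if not e <= s]
        self.elements.append(s)
        return True

    def contains(self, s):
        # Membership test in the represented downward closure.
        s = frozenset(s)
        return any(s <= e for e in self.elements)

ac = Antichain()
ac.insert({1})
ac.insert({1, 2})  # subsumes {1}, which is dropped
ac.insert({3})
print([set(e) for e in ac.elements])
```

The compactness comes from subsumption: inserting {1, 2} evicts {1}, so the stored elements stay pairwise incomparable.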
Chaining Test Cases for Reactive System Testing (extended version)
Testing of synchronous reactive systems is challenging because long input
sequences are often needed to drive them into a state at which a desired
feature can be tested. This is particularly problematic in on-target testing,
where a system is tested in its real-life application environment and the time
required for resetting is high. This paper presents an approach to discovering
a test case chain---a single software execution that covers a group of test
goals and minimises overall test execution time. Our technique targets the
scenario in which test goals for the requirements are given as safety
properties. We give conditions for the existence and minimality of a single
test case chain and minimise the number of test chains if a single test chain
is infeasible. We report experimental results with a prototype tool for C code
generated from Simulink models and compare it to state-of-the-art test suite
generators.
Comment: extended version of paper published at ICTSS'1
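To make the chaining idea concrete: given costs for driving the system from the state reached by one test goal to the next, a chain is an ordering of the goals that minimizes total execution time. The sketch below is a toy, with invented costs and brute-force search; the paper derives such transitions via model checking on safety properties rather than from a hand-written cost table.

```python
import itertools

# Toy illustration of a test case chain: costs[(x, y)] is the (invented)
# time to drive the system from goal x's state to goal y's state.
costs = {
    ("init", "A"): 2, ("init", "B"): 5, ("init", "C"): 4,
    ("A", "B"): 1, ("A", "C"): 6, ("B", "A"): 3,
    ("B", "C"): 2, ("C", "A"): 2, ("C", "B"): 7,
}

def chain_cost(order):
    # Total execution time of one chain covering all goals in `order`.
    total, prev = 0, "init"
    for g in order:
        total += costs[(prev, g)]
        prev = g
    return total

# Exhaustive search over orderings (fine for a handful of goals; the
# general minimization problem is TSP-like).
best = min(itertools.permutations(["A", "B", "C"]), key=chain_cost)
print(best, chain_cost(best))
```

Here the chain init -> A -> B -> C (cost 5) beats every other ordering, illustrating why a single well-ordered execution can be much cheaper than resetting before each goal.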
Symbolic Algorithms for Qualitative Analysis of Markov Decision Processes with B\"uchi Objectives
We consider Markov decision processes (MDPs) with \omega-regular
specifications given as parity objectives. We consider the problem of computing
the set of almost-sure winning states from where the objective can be ensured
with probability 1. The algorithms for the computation of the almost-sure
winning set for parity objectives iteratively use the solutions for the
almost-sure winning set for B\"uchi objectives (a special case of parity
objectives). Our contributions are as follows: First, we present the first
subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs
with B\"uchi objectives; our algorithm takes O(n \sqrt{m}) symbolic steps as
compared to the previously known algorithm that takes O(n^2) symbolic steps,
where n is the number of states and m is the number of edges of the MDP. In
practice MDPs have constant out-degree, and then our symbolic algorithm takes
O(n \sqrt{n}) symbolic steps, as compared to the O(n^2) symbolic steps of the
previously known algorithm. Second, we present a new algorithm, the win-lose
algorithm, with the following two properties: (a) the algorithm iteratively
computes subsets of the almost-sure winning set and its complement, as compared
to all previous algorithms that discover the almost-sure winning set upon
termination; and (b) it requires O(n \sqrt{K}) symbolic steps, where K is the
maximal number of edges of strongly connected components (scc's) of the MDP.
The win-lose algorithm requires symbolic computation of scc's. Third, we
improve the algorithm for symbolic scc computation; the previously known
algorithm takes linear symbolic steps, and our new algorithm improves the
constants associated with the linear number of steps. In the worst case the
previously known algorithm takes 5n symbolic steps, whereas our new algorithm
takes 4n symbolic steps.
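The algorithms above operate symbolically on BDD-represented sets. As an explicit-state illustration of the core building block (almost-sure Büchi winning iterates such computations), the following sketch computes the almost-sure reachability set of an MDP by the classic prune-and-reach fixpoint. The function and the toy MDP are invented for illustration; the paper's contribution is doing this with symbolic set operations.

```python
# Explicit-state sketch of the prune-and-reach fixpoint for almost-sure
# reachability in an MDP: alternately compute positive reachability of the
# target and prune actions that may leave the surviving state set, until
# nothing is removed.
def almost_sure_reach(states, actions, succ, target):
    S = set(states)
    allowed = {s: set(actions[s]) for s in states}
    while True:
        # Positive reachability of the target inside S via allowed actions.
        reach = {t for t in target if t in S}
        changed = True
        while changed:
            changed = False
            for s in S - reach:
                if any(succ[(s, a)] & reach and succ[(s, a)] <= S
                       for a in allowed[s]):
                    reach.add(s)
                    changed = True
        removed = S - reach
        if not removed:
            return S
        S = reach
        # Prune actions that may leave the surviving set; states left with
        # no actions fall out of `reach` on the next iteration.
        for s in S:
            allowed[s] = {a for a in allowed[s] if succ[(s, a)] <= S}

# Toy MDP (invented): `trap` cannot reach `t`, and action "a" from s0 may
# fall into the trap, so only action "b" supports reaching t with
# probability 1.
states = ["s0", "s1", "t", "trap"]
actions = {"s0": ["a", "b"], "s1": ["c"], "t": ["d"], "trap": ["e"]}
succ = {("s0", "a"): {"s1", "trap"}, ("s0", "b"): {"s1"},
        ("s1", "c"): {"t"}, ("t", "d"): {"t"}, ("trap", "e"): {"trap"}}
print(almost_sure_reach(states, actions, succ, {"t"}))
```

A symbolic version replaces the per-state loops with image/preimage operations on BDDs; the symbolic-step counts quoted in the abstract count exactly those set operations.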
Counterexample Generation in Probabilistic Model Checking
Providing evidence for the refutation of a property is an essential, if not the most important, feature of model checking. This paper considers algorithms for counterexample generation for probabilistic CTL formulae in discrete-time Markov chains. Finding the strongest evidence (i.e., the most probable path) violating a (bounded) until-formula is shown to be reducible to a single-source (hop-constrained) shortest path problem. Counterexamples of smallest size that deviate most from the required probability bound can be obtained by applying (small amendments to) k-shortest (hop-constrained) paths algorithms. These results can be extended to Markov chains with rewards, to LTL model checking, and are useful for Markov decision processes. Experimental results show that typically the size of a counterexample is excessive. To obtain much more compact representations, we present a simple algorithm to generate (minimal) regular expressions that can act as counterexamples. The feasibility of our approach is illustrated by means of two communication protocols: leader election in an anonymous ring network and the Crowds protocol.
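The reduction mentioned in the abstract rests on a standard trick: the most probable path maximizes a product of transition probabilities, which equals minimizing the sum of their negative logarithms, so an ordinary shortest-path algorithm applies. A hedged sketch (the chain and all names are invented for illustration):

```python
import heapq
import math

# P[s] = list of (successor, transition probability); an invented
# discrete-time Markov chain for illustration.
P = {
    "s0": [("s1", 0.6), ("s2", 0.4)],
    "s1": [("goal", 0.5), ("s2", 0.5)],
    "s2": [("goal", 0.1), ("s2", 0.9)],
}

def strongest_evidence(P, start, goal):
    # Dijkstra with edge weights -log p: the shortest path under these
    # weights is the most probable path from start to goal.
    heap = [(0.0, start, [start])]
    best = {}
    while heap:
        d, s, path = heapq.heappop(heap)
        if s == goal:
            return path, math.exp(-d)
        if s in best and best[s] <= d:
            continue
        best[s] = d
        for t, p in P.get(s, []):
            heapq.heappush(heap, (d - math.log(p), t, path + [t]))
    return None, 0.0

path, prob = strongest_evidence(P, "s0", "goal")
print(path, round(prob, 3))
```

Here s0 -> s1 -> goal carries probability 0.6 * 0.5 = 0.3, beating both routes through s2, which matches what the shortest path under -log weights returns. Hop-constrained and k-shortest variants refine the same idea.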