Recursive Probabilistic Models: efficient analysis and implementation
This thesis examines Recursive Markov Chains (RMCs), their natural extensions, and their connections to other models. RMCs can model in a natural way probabilistic procedural
programs and other systems that involve recursion and probability. An RMC
is a set of ordinary finite-state Markov Chains that are allowed to call each other recursively,
and it describes an ordinary Markov Chain with a potentially infinite, but countable, state space.
RMCs generalize, in a precise sense, several well-studied probabilistic models
from other domains, such as natural language processing (Stochastic Context-Free Grammars),
population dynamics (Multi-Type Branching Processes), and queueing theory
(Quasi-Birth-Death processes, QBDs). In addition, RMCs can be extended to a
controlled version, called Recursive Markov Decision Processes (RMDPs), and to a
game version, referred to as Recursive (Simple) Stochastic Games (RSSGs). For analyzing
RMCs, RMDPs, and RSSGs, we devised highly optimized numerical algorithms and
implemented them in a tool called PReMo (Probabilistic Recursive Models analyzer).
PReMo allows computation of the termination probability and expected termination
time of RMCs and QBDs, and of a restricted subset of RMDPs and RSSGs. The input
models are described by the user in simple, purpose-designed input languages. Furthermore,
in order to analyze the worst- and best-case expected running time of probabilistic
recursive programs, we study RMDPs and RSSGs with positive rewards
assigned to each of their transitions, and we provide new upper and lower complexity
bounds for their analysis. We also establish new connections between our models
and models studied in queueing theory. Specifically, we show that (discrete-time)
QBDs can be described as a special subclass of RMCs, and that Tree-like QBDs, which are a
generalization of QBDs, are equivalent to RMCs in a precise sense. We also prove that,
for a given QBD, we can compute (in the unit-cost RAM model) an approximation of
its termination probabilities within i bits of precision in time polynomial in the size of
the QBD and linear in i. In particular, we show that this can be done using a decomposed
Newton's method.
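The termination probabilities of an RMC form the least solution of a system of monotone polynomial equations, which Newton's method approximates from below. A minimal one-variable sketch (a hypothetical toy program; PReMo's decomposed variant works on whole systems of such equations):

```python
# Toy illustration: a hypothetical recursive program returns with probability p
# and otherwise calls itself twice.  Its termination probability is the least
# non-negative root of f(x) = (1 - p) * x**2 + p - x.

def termination_probability(p: float, iters: int = 50) -> float:
    f  = lambda x: (1 - p) * x * x + p - x    # fixed-point equation, as f(x) = 0
    df = lambda x: 2 * (1 - p) * x - 1        # derivative f'(x)
    x = 0.0                                   # start below the least fixed point
    for _ in range(iters):
        x -= f(x) / df(x)                     # Newton step, monotone from below
    return x

print(termination_probability(0.3))  # least fixed point is p/(1-p) = 3/7
```

Starting at 0 the iterates increase monotonically to the least fixed point; for p ≥ 1/2 the least fixed point is 1, i.e. the program terminates almost surely.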
Verification problems for timed and probabilistic extensions of Petri Nets
In the first part of the thesis, we prove the decidability (and PSPACE-completeness) of
the universal safety property for a timed extension of Petri Nets, called Timed Petri Nets.
Every token has a real-valued clock (a.k.a. age), and transition firing is constrained by
inequalities on the clock values, with integer bounds (both strict and non-strict). A
newly created token can either inherit the age of an input token of the transition or
have its age reset to zero.
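As a concrete reading of the firing rule, the following sketch (with hypothetical names; the thesis defines the model formally) checks whether a transition is enabled, given per-arc interval constraints with integer bounds and strict or non-strict comparisons on token ages:

```python
# Hypothetical encoding of a single Timed Petri Net transition: each input arc
# carries an interval constraint (lo, hi, lo_strict, hi_strict) on the age of
# the token it consumes, with integer bounds and strict/non-strict endpoints.

def satisfies(age, lo, hi, lo_strict=False, hi_strict=False):
    below = age > lo if lo_strict else age >= lo
    above = age < hi if hi_strict else age <= hi
    return below and above

def enabled(token_ages, arc_constraints):
    """token_ages[i]: ages of the tokens in input place i;
    arc_constraints[i]: the age constraint on the arc from place i."""
    return all(any(satisfies(a, *c) for a in ages)
               for ages, c in zip(token_ages, arc_constraints))

# One token of age 1.5 in place 0, one of age 3.0 in place 1; the transition
# needs a token with age in [1, 2) from place 0 and one in (2, 5] from place 1.
print(enabled([[1.5], [3.0]], [(1, 2, False, True), (2, 5, True, False)]))  # True
```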
In the second part of the thesis, we turn to systems with controlled behaviour that
are probabilistic extensions of VASS and One-Counter Automata. Firstly, we consider
infinite state Markov Decision Processes (MDPs) that are induced by probabilistic
extensions of VASS, called VASS-MDPs. We show that most of the qualitative problems
for general VASS-MDPs are undecidable, and consider a monotone subclass in which
only the controller can change the counter values, called 1-VASS-MDPs. In particular,
we show that limit-sure control state reachability for 1-VASS-MDPs is decidable, i.e.,
checking whether one can reach a set of control states with probability arbitrarily close
to 1. Unlike for finite state MDPs, the control state reachability property may hold limit
surely (i.e., for every ε > 0 there is a strategy achieving the objective with probability
≥ 1 − ε), but not almost surely (i.e., with probability 1).
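The gap between limit-sure and almost-sure reachability can be illustrated by a hypothetical pumping scenario (not a construction from the thesis): the controller pumps the counter to a bound N of its choosing, and the probabilistic phase then fails only if N fair coin flips all come up tails. The strategy "pump to N" succeeds with probability 1 - 2^-N, so the supremum over strategies is 1 even though no single strategy attains it. A small Monte Carlo check:

```python
import random

# Each step of the probabilistic phase flips a fair coin: heads reaches the
# target control state, tails decrements the counter; the run fails when the
# counter hits zero before any head occurs.

def simulate(N: int, rng: random.Random) -> bool:
    counter = N                       # controller pumps the counter to N
    while counter > 0:                # probabilistic phase
        if rng.random() < 0.5:        # heads: target reached
            return True
        counter -= 1                  # tails: one unit of counter lost
    return False                      # counter exhausted before a head

rng = random.Random(0)
trials = 100_000
rate = sum(simulate(3, rng) for _ in range(trials)) / trials
print(rate)  # close to 1 - 2**-3 = 0.875; pumping higher pushes this toward 1
```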
Secondly, we consider infinite state MDPs that are induced by probabilistic extensions of
One-Counter Automata, called One-Counter Markov Decision Processes (OC-MDPs).
We show that the almost-sure {1, 2, 3}-Parity problem for OC-MDPs is at least as hard
as the limit-sure selective termination problem for OC-MDPs, in which the aim is
to reach a particular set of control states with counter value zero with probability
arbitrarily close to 1.
Model reduction techniques for probabilistic verification of Markov chains
Probabilistic model checking is a quantitative verification technique that aims to verify the correctness of probabilistic systems. Nevertheless, it suffers from the so-called state space explosion problem. In this thesis, we propose two new model reduction techniques to improve the efficiency and scalability of verifying probabilistic systems, focusing on discrete-time Markov chains (DTMCs). In particular, our emphasis is on verifying quantitative properties that bound the time or cost of an execution. We also focus on methods that avoid the explicit construction of the full state space.
We first present a finite-horizon variant of probabilistic bisimulation for DTMCs, which preserves a bounded fragment of PCTL. We also propose another model reduction technique that reduces what we call linear inductive DTMCs, a class of models whose state space grows linearly with respect to a parameter.
All the techniques presented in this thesis were implemented in the PRISM model checker. We demonstrate the effectiveness of our work by applying it to a selection of existing benchmark probabilistic models, showing that both of our new approaches can provide significant reductions in model size and, in some cases, outperform the existing implementations of probabilistic verification in PRISM.
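For intuition, here is a sketch of classical (unbounded) probabilistic bisimulation minimisation by partition refinement; the thesis's finite-horizon variant refines only a bounded number of times, but the mechanics are similar. The DTMC below is a made-up example, not one of the benchmark models:

```python
# Partition refinement for probabilistic bisimulation on a DTMC: two states
# are bisimilar if they carry the same label and send equal probability mass
# into every bisimulation class.

def bisimulation_classes(P, labels):
    """P[s] is a dict {successor: probability}; labels[s] is the label of s."""
    n = len(P)
    # initial partition: group states by label (blocks are integer ids)
    ids = {lab: i for i, lab in enumerate(sorted(set(labels)))}
    block = [ids[labels[s]] for s in range(n)]
    while True:
        # signature: current block plus probability mass sent to each block
        sig = []
        for s in range(n):
            mass = {}
            for t, p in P[s].items():
                mass[block[t]] = mass.get(block[t], 0.0) + p
            sig.append((block[s], tuple(sorted(mass.items()))))
        ids = {v: i for i, v in enumerate(sorted(set(sig)))}
        new_block = [ids[sig[s]] for s in range(n)]
        if new_block == block:          # partition stable: done
            return new_block
        block = new_block

# Example: states 1 and 2 carry the same label and behave identically, so
# they end up in the same class and can be merged in the quotient model.
P = [{1: 0.5, 2: 0.5}, {3: 1.0}, {3: 1.0}, {3: 1.0}]
labels = ['a', 'b', 'b', 'c']
classes = bisimulation_classes(P, labels)
print(classes[1] == classes[2])  # True
```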
Quantitative verification of gossip protocols for certificate transparency
Certificate transparency is a promising solution for publicly auditing Internet certificates. However, there is the potential for split-world attacks, in which users are directed to fake versions of a log and may accept fraudulent certificates. To ensure users are seeing the same version of a log, gossip protocols have been designed in which users share and verify log-generated data. This thesis proposes a methodology for evaluating such protocols using probabilistic model checking, a collection of techniques for formally verifying properties of stochastic systems. It describes the approach to modelling and verifying the protocols and analyses several aspects of them, including the success rate of detecting inconsistencies in gossip messages and their efficiency in terms of bandwidth. The thesis also compares different protocol variants and suggests ways to augment the protocol to improve performance, using model checking to verify the claims. To address uncertainty and scalability issues in the models, it shows how to transform models by allowing the probabilities of certain events to lie within a range of values, and how to abstract the models to make verification more efficient. Lastly, by parameterising the models, the thesis shows how to search possible model configurations to find the worst-case behaviour for certain formal properties.
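A heavily simplified, hypothetical stand-in for the parameter search (the thesis performs this with PRISM on full protocol models): if a user who is served the attacker's view contacts an honest peer with probability c in each of k gossip rounds, the split view is detected with probability 1 - (1 - c)^k, and a sweep over configurations locates the worst case:

```python
# Hypothetical abstraction: detection succeeds as soon as one of k gossip
# rounds reaches an honest peer (probability c per round), so the detection
# probability is 1 - (1 - c)**k.  Sweeping a grid of (c, k) configurations
# finds the worst case for this property.

def detection_probability(c: float, k: int) -> float:
    return 1 - (1 - c) ** k

configs = [(c / 10, k) for c in range(1, 10) for k in (1, 2, 4, 8)]
worst = min(configs, key=lambda cfg: detection_probability(*cfg))
print(worst)  # (0.1, 1): lowest coverage and fewest rounds is worst here
```

Since detection probability is monotone in both parameters, the worst case sits at the grid corner; in the thesis's richer models the worst configuration is not obvious, which is why the search is done by model checking.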
Time-bounded reachability in tree-structured QBDs by abstraction
This paper studies quantitative model checking of infinite tree-like (continuous-time) Markov chains. These tree-structured quasi-birth death processes are equivalent to probabilistic pushdown automata and recursive Markov chains and are widely used in the field of performance evaluation. We determine time-bounded reachability probabilities in these processes by applying abstraction; direct methods (i.e., uniformization) result in an exponential blow-up. We contrast abstraction based on Markov decision processes (MDPs) with interval-based abstraction, study various schemes to partition the state space, and empirically show their influence on the accuracy of the obtained reachability probabilities. Results show that grid-like schemes, in contrast to chain- and tree-like ones, yield extremely precise approximations for rather coarse abstractions.
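A minimal sketch of the MDP-based abstraction idea on a finite discrete-time chain (a hypothetical example; the paper treats infinite tree-structured continuous-time chains and time-bounded properties): concrete states are grouped into blocks, and lower/upper bounds on the reachability probability are obtained by letting the concrete state inside each block be chosen adversarially or angelically at every step.

```python
# Lower/upper reachability bounds via MDP-style abstraction of a DTMC:
# within each abstract block, an adversary (for the lower bound) or an angel
# (for the upper bound) picks the concrete state before every step.

def reachability_bounds(P, partition, goal, iters=1000):
    """P[s]: dict {successor: probability}; partition[s]: abstract block of s."""
    blocks = sorted(set(partition))
    members = {b: [s for s in range(len(P)) if partition[s] == b] for b in blocks}
    goal_blocks = {partition[s] for s in goal}
    lo = {b: 1.0 if b in goal_blocks else 0.0 for b in blocks}
    hi = dict(lo)
    for _ in range(iters):                    # value iteration on the abstraction
        for b in blocks:
            if b in goal_blocks:
                continue
            vals_lo = [sum(p * lo[partition[t]] for t, p in P[s].items())
                       for s in members[b]]
            vals_hi = [sum(p * hi[partition[t]] for t, p in P[s].items())
                       for s in members[b]]
            lo[b], hi[b] = min(vals_lo), max(vals_hi)
    return lo, hi

# Five concrete states: 0 initial, 1 and 2 merged into one abstract block,
# 3 the absorbing goal, 4 an absorbing failure state.
P = [{1: 0.5, 2: 0.5}, {3: 0.9, 4: 0.1}, {3: 0.2, 4: 0.8}, {3: 1.0}, {4: 1.0}]
partition = [0, 1, 1, 2, 3]
lo, hi = reachability_bounds(P, partition, goal={3})
print(lo[0], hi[0])  # the exact probability 0.55 lies between the bounds
```

The width of the interval [lo, hi] reflects how much behaviour the partition blurs together, which mirrors the paper's finding that the choice of partitioning scheme drives the precision of the abstraction.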