An Efficient Normalisation Procedure for Linear Temporal Logic and Very Weak Alternating Automata
In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem
stating that every formula of Past LTL (the extension of LTL with past
operators) is equivalent to a formula of the form ⋀ᵢ (GF φᵢ ∨ FG ψᵢ), where the φᵢ
and ψᵢ contain only past operators. Some years later, Chang,
Manna, and Pnueli built on this result to derive a similar normal form for LTL.
Both normalisation procedures have a non-elementary worst-case blow-up, and
follow an involved path from formulas to counter-free automata to star-free
regular expressions and back to formulas. We improve on both points. We present
a direct and purely syntactic normalisation procedure for LTL yielding a normal
form, comparable to the one by Chang, Manna, and Pnueli, that has only a single
exponential blow-up. As an application, we derive a simple algorithm to
translate LTL into deterministic Rabin automata. The algorithm normalises the
formula, translates it into a special very weak alternating automaton, and
applies a simple determinisation procedure, valid only for these special
automata.Comment: This is the extended version of the referenced conference paper and
contains an appendix with additional materia
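To illustrate the shape of this normal form, here is a small example of our own (not taken from the abstract): the pure-future formula F a, "eventually a", can be written as GF(O a), where O is the past operator "once". Once a has occurred, O a remains true forever, so F a ≡ GF(O a) ≡ FG(O a), and GF(O a) already matches the normal form with a single conjunct whose subformulas are purely past.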
On Correctness, Precision, and Performance in Quantitative Verification: QComp 2020 Competition Report
Quantitative verification tools compute probabilities, expected rewards, or steady-state values for formal models of stochastic and timed systems. Exact results often cannot be obtained efficiently, so most tools use floating-point arithmetic in iterative algorithms that approximate the quantity of interest. Correctness is thus defined by the desired precision and determines performance. In this paper, we report on the experimental evaluation of these trade-offs performed in QComp 2020: the second friendly competition of tools for the analysis of quantitative formal models. We survey the precision guarantees - ranging from exact rational results to statistical confidence statements - offered by the nine participating tools. They gave rise to a performance evaluation using five tracks with varying correctness criteria, of which we present the results
Refining the Undecidability Border of Weak Bisimilarity
Weak bisimilarity is one of the most studied behavioural equivalences. This equivalence is undecidable for pushdown processes (PDA), process algebras (PA), and multiset automata (MSA, also known as parallel pushdown processes, PPDA). Its decidability is an open question for basic process algebras (BPA) and basic parallel processes (BPP). We move the undecidability border towards these classes by showing that the equivalence remains undecidable for weakly extended versions of BPA and BPP. In fact, we show that the weak bisimulation equivalence problem is undecidable even for normed subclasses of BPA and BPP extended with a finite constraint system
Stopping Criteria for Value Iteration on Stochastic Games with Quantitative Objectives.
A classic solution technique for Markov decision processes (MDP) and stochastic games (SG) is value iteration (VI). Due to its good practical performance, this approximative approach is typically preferred over exact techniques, even though no practical bounds on the imprecision of the result could be given until recently. As a consequence, even the most used model checkers could return arbitrarily wrong results. Over the past decade, different works derived stopping criteria, indicating when the precision reaches the desired level, for various settings, in particular MDP with reachability, total reward, and mean payoff, and SG with reachability. In this paper, we provide the first stopping criteria for VI on SG with total reward and mean payoff, yielding the first anytime algorithms in these settings. To this end, we provide the solution in two flavours: first through a reduction to the MDP case and second directly on SG. The former is simpler and automatically utilizes any advances on MDP. The latter allows for more local computations, heading towards better practical efficiency. Our solution unifies the previously mentioned approaches for MDP and SG and their underlying ideas. To achieve this, we isolate objective-specific subroutines as well as identify objective-independent concepts. These structural concepts, while surprisingly simple, form the very essence of the unified solution
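As a rough illustration of the anytime idea in the simplest of the settings listed above, MDP with reachability, the following Python sketch iterates a lower and an upper bound and stops once their gap falls below a chosen precision. The toy MDP, the state names, and the epsilon are our own assumptions, not taken from the paper; in general, sound upper bounds additionally require collapsing end components, which the works surveyed here address.

# Minimal sketch (our illustration, not the paper's algorithm): anytime value
# iteration for maximal reachability in a toy MDP, stopping when the gap
# between a lower and an upper bound on every state drops below epsilon.

MDP = {  # state -> action -> list of (successor, probability)
    "s0": {"a": [("s1", 0.5), ("sink", 0.5)], "b": [("s0", 0.2), ("s1", 0.8)]},
    "s1": {"a": [("goal", 0.7), ("sink", 0.3)]},
    "goal": {"stay": [("goal", 1.0)]},
    "sink": {"stay": [("sink", 1.0)]},
}
TARGET = {"goal"}

def backup(values):
    # One Bellman step: the best action maximises the expected successor value.
    return {s: 1.0 if s in TARGET else
               max(sum(p * values[t] for t, p in dist) for dist in acts.values())
            for s, acts in MDP.items()}

def anytime_vi(epsilon=1e-6):
    lower = {s: 1.0 if s in TARGET else 0.0 for s in MDP}
    upper = {s: 0.0 if s == "sink" else 1.0 for s in MDP}
    while max(upper[s] - lower[s] for s in MDP) > epsilon:
        lower, upper = backup(lower), backup(upper)
    return lower, upper  # the exact values lie between the two bounds

print(anytime_vi())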
Satisfiability Bounds for ω-Regular Properties in Bounded-Parameter Markov Decision Processes.
We consider the problem of computing minimum and maximum probabilities of satisfying an ω-regular property in a bounded-parameter Markov decision process (BMDP). BMDP arise from Markov decision processes (MDP) by allowing for uncertainty on the transition probabilities in the form of intervals where the actual probabilities are unknown. ω-regular languages form a large class of properties, expressible as, e.g., Rabin or parity automata, encompassing rich specifications such as linear temporal logic. In a BMDP the probability to satisfy the property depends on the unknown transition probabilities as well as on the policy. In this paper, we compute the extreme values. This solves the problem specifically suggested by Dutreix and Coogan in CDC 2018, extending their results on interval Markov chains with no adversary. The main idea is to reinterpret their work as analysis of interval MDP and accordingly the BMDP problem as analysis of an ω-regular stochastic game, for which a solution is provided. This method extends smoothly to bounded-parameter stochastic games
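To make the role of the interval uncertainty concrete, here is a small Python sketch of the step in which a single Bellman backup is resolved optimistically or pessimistically over all transition probabilities lying within the given intervals. This inner optimisation is a standard ingredient of interval MDP analysis and only illustrates the idea, not the game-based construction of the paper; all names and numbers are ours.

def interval_backup(values, intervals, maximize=True):
    # values:    successor -> current value estimate
    # intervals: successor -> (lo, hi) bounds on its transition probability,
    #            assumed consistent (sum of lo <= 1 <= sum of hi)
    # Start every successor at its lower bound, then spend the remaining
    # probability mass on the most (or least) valuable successors first.
    order = sorted(intervals, key=lambda t: values[t], reverse=maximize)
    prob = {t: intervals[t][0] for t in intervals}
    budget = 1.0 - sum(prob.values())
    for t in order:
        extra = min(budget, intervals[t][1] - prob[t])
        prob[t] += extra
        budget -= extra
    return sum(prob[t] * values[t] for t in intervals)

# Illustrative numbers only.
values = {"u": 0.9, "v": 0.5, "w": 0.0}
intervals = {"u": (0.1, 0.6), "v": (0.2, 0.7), "w": (0.1, 0.5)}
print(interval_backup(values, intervals, maximize=True))   # best case:  0.69
print(interval_backup(values, intervals, maximize=False))  # worst case: 0.29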
On the Expressive Power of Extended Process Rewrite Systems
We provide a unified view on three extensions of process rewrite systems (PRS) and compare their expressive power and that of PRS. We show that the class of Petri nets is less expressive up to bisimulation than the class of process algebras (PA) extended with a finite-state control unit. Further, we show our main result: the reachability problem for PRS extended with a so-called weak finite-state unit is decidable
Guessing Winning Policies in LTL Synthesis by Semantic Learning.
We provide a learning-based technique for guessing a winning strategy in a parity game originating from an LTL synthesis problem. A cheaply obtained guess can be useful in several applications. Not only can the guessed strategy be applied as a best-effort solution in cases where the game’s huge size prohibits rigorous approaches, but it can also increase the scalability of rigorous LTL synthesis in several ways. Firstly, checking whether a guessed strategy is winning is easier than constructing one. Secondly, even if the guess is wrong in some places, it can be fixed by strategy iteration faster than constructing one from scratch. Thirdly, the guess can be used in on-the-fly approaches to prioritize exploration in the most fruitful directions. In contrast to previous works, we (i) reflect the highly structured logical information in the game’s states, the so-called semantic labelling, coming from recent LTL-to-automata translations, and (ii) learn to reflect it properly by learning from previously solved games, bringing the solving process closer to human-like reasoning
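The point that checking a guess is easier than constructing a winning strategy can be sketched concretely. The Python fragment below is our own illustration (it uses networkx, and the game, priorities, and guesses are invented): it fixes a guessed positional strategy for the even player in a max-parity game and checks whether the opponent can still reach a cycle whose maximal priority is odd; the guess is winning exactly when no such cycle exists.

import networkx as nx

# A tiny max-parity game (invented for illustration): player 0 wins a play
# iff the highest priority seen infinitely often is even.
owner    = {"a": 0, "b": 1, "c": 0, "d": 1}     # which player moves at each vertex
priority = {"a": 1, "b": 2, "c": 0, "d": 1}
edges    = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["c"]}

def is_winning(guess, init):
    # Restrict the game to the guessed positional strategy: player-0 vertices
    # keep only their chosen edge, player-1 vertices keep all their edges.
    g = nx.DiGraph()
    for v, succs in edges.items():
        for w in ([guess[v]] if owner[v] == 0 else succs):
            g.add_edge(v, w)
    reachable = {init} | nx.descendants(g, init)
    # The guess fails iff the opponent can reach a cycle whose maximal priority
    # is odd: for each odd priority p, look for a cycle among reachable vertices
    # of priority <= p that touches a vertex of priority exactly p.
    for p in {priority[v] for v in reachable if priority[v] % 2 == 1}:
        sub = g.subgraph(v for v in reachable if priority[v] <= p)
        for scc in nx.strongly_connected_components(sub):
            cyclic = len(scc) > 1 or any(sub.has_edge(v, v) for v in scc)
            if cyclic and any(priority[v] == p for v in scc):
                return False
    return True

print(is_winning({"a": "b", "c": "a"}, "a"))  # True: every reachable loop has even maximum
print(is_winning({"a": "c", "c": "a"}, "a"))  # False: the a-c loop has odd maximum 1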