From LTL and Limit-Deterministic Büchi Automata to Deterministic Parity Automata
Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi automata (LDBA) to DPA, and show that it can be concatenated with a recent efficient translation from LTL to LDBA to yield a double exponential, "Safraless" LTL-to-DPA construction. We also report on an implementation, a comparison with the SPOT library, and performance on several sets of formulas, including instances from the 2016 SyntComp competition.
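As a rough illustration of the task the paper addresses, the following Python sketch uses Spot's bindings to translate an LTL formula into a deterministic parity automaton. The formula is made up, and the sketch assumes a recent Spot release in which 'parity' and 'deterministic' are supported option strings of spot.translate; it shows the generic LTL-to-DPA task, not the paper's LDBA-based construction.

import spot

# Illustrative formula; any LTL objective could be used here.
formula = 'G(request -> F grant) & GF alive'
# 'parity' and 'deterministic' are assumed to be accepted options
# of spot.translate in a sufficiently recent Spot version.
aut = spot.translate(formula, 'parity', 'deterministic')
print('states:', aut.num_states())
print(aut.to_str('hoa'))   # the DPA in the Hanoi Omega-Automata format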
Is there a best Büchi automaton for explicit model checking?
LTL to Büchi automata (BA) translators are traditionally optimized to produce automata with a small number of states or a small number of non-deterministic states. In this paper, we search for properties of Büchi automata that really influence the performance of explicit model checkers. We do that by manual analysis of several automata and by experiments with common LTL-to-BA translators and realistic verification tasks. As a result of these experiences, we gain a better insight into the characteristics of automata that work well with Spin.
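For context, a minimal sketch of the LTL-to-BA step that such experiments start from, using Spot's Python bindings to produce a Büchi automaton and print it as a Spin never claim. The property is illustrative, and this is a generic setup rather than the experimental harness used in the paper.

import spot

prop = 'G(req -> F ack)'                   # hypothetical property
# Spin checks the negation of the desired property, so translate !prop.
aut = spot.translate('!(' + prop + ')', 'BA', 'small')
print('states:', aut.num_states())
print(aut.to_str('spin'))                  # never claim usable by Spin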
The Hanoi Omega-Automata Format
We propose a flexible exchange format for ω-automata, as typically used in formal verification, and implement support for it in a range of established tools. Our aim is to simplify the interaction of tools, helping the research community to build upon other people’s work. A key feature of the format is the use of very generic acceptance conditions, specified by Boolean combinations of acceptance primitives, rather than being limited to common cases such as Büchi, Streett, or Rabin. Such flexibility in the choice of acceptance conditions can be exploited in applications, for example in probabilistic model checking, and furthermore encourages the development of acceptance-agnostic tools for automata manipulations. The format allows acceptance conditions that are either state-based or transition-based, and also supports alternating automata.
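A small sketch of emitting and re-reading HOA text with Spot, one of the tools that implements the format. The formula is arbitrary, and it is assumed that spot.automaton accepts the HOA text directly when given a multi-line string.

import spot

aut = spot.translate('F a', 'BA')   # some automaton to serialize
hoa = aut.to_str('hoa')             # header (HOA:, States:, AP:, Acceptance:, ...) plus body
print(hoa)
aut2 = spot.automaton(hoa)          # parse the HOA text back into an automaton
assert aut2.num_states() == aut.num_states()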
Presentation of the 9th Edition of the Model Checking Contest
The Model Checking Contest (MCC) is an annual competition of software tools for model checking. Tools must process an increasing benchmark gathered from the whole community and may participate in various examinations: state space generation, computation of global properties, computation of some upper bounds in the model, evaluation of reachability formulas, evaluation of CTL formulas, and evaluation of LTL formulas. For each examination and each model instance, participating tools are provided with up to 3600 seconds and 16 gigabytes of memory. Tool answers are then analyzed and compared with the results produced by the other competing tools to detect diverging answers (which are quite rare at this stage of the competition, and lead to penalties). For each examination, gold, silver, and bronze medals are awarded to the three best tools. CPU usage and memory consumption are reported, which is also valuable information for tool developers.
A weakness measure for GR(1) formulae
In spite of the theoretical and algorithmic developments for system synthesis in recent years, little effort has been dedicated to quantifying the quality of the specifications used for synthesis. When dealing with unrealizable specifications, finding the weakest environment assumptions that would ensure realizability is typically a desirable goal; in such a context, the weakness of the assumptions is a major quality parameter. The question of whether one assumption is weaker than another is commonly interpreted using implication or, equivalently, language inclusion. However, this interpretation does not provide any further insight into the weakness of assumptions when implication does not hold. To our knowledge, the only measure that is capable of comparing two formulae in this case is entropy, but even it fails to provide a sufficiently refined notion of weakness in the case of GR(1) formulae, a subset of linear temporal logic formulae which is of particular interest in controller synthesis. In this paper we propose a more refined measure of weakness based on the Hausdorff dimension, a concept that captures the notion of size of the omega-language satisfying a linear temporal logic formula. We identify the conditions under which this measure is guaranteed to distinguish between weaker and stronger GR(1) formulae. We evaluate our proposed weakness measure in the context of computing GR(1) assumption refinements.
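For reference, the GR(1) fragment mentioned above restricts specifications to an implication between environment assumptions and system guarantees of the same shape (initial condition, safety, and recurring fairness goals). The LaTeX below states this standard template in generic textbook notation, not the paper's own symbols.

% Generic GR(1) template: environment assumptions imply system guarantees.
\[
  \varphi \;=\;
  \Bigl(\theta_e \wedge \mathbf{G}\,\rho_e \wedge \bigwedge_{i=1}^{m} \mathbf{G}\mathbf{F}\, J^e_i\Bigr)
  \;\rightarrow\;
  \Bigl(\theta_s \wedge \mathbf{G}\,\rho_s \wedge \bigwedge_{j=1}^{n} \mathbf{G}\mathbf{F}\, J^s_j\Bigr)
\]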
Semantic Labelling and Learning for Parity Game Solving in LTL Synthesis
We propose "semantic labelling" as a novel ingredient for solving games in
the context of LTL synthesis. It exploits recent advances in the automata-based
approach, yielding more information for each state of the generated parity game
than the game graph can capture. We utilize this extra information to improve
standard approaches as follows. (i) Compared to strategy improvement (SI) with
random initial strategy, a more informed initialization often yields a winning
strategy directly without any computation. (ii) This initialization makes SI
also yield smaller solutions. (iii) While Q-learning on the game graph turns
out not too efficient, Q-learning with the semantic information becomes
competitive to SI. Since already the simplest heuristics achieve significant
improvements the experimental results demonstrate the utility of semantic
labelling. This extra information opens the door to more advanced learning
approaches both for initialization and improvement of strategies
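As background for point (iii), the plain tabular Q-learning rule applied on a game graph looks roughly as follows. This is the generic textbook update with made-up hyper-parameters, not the semantic-labelling variant proposed in the paper.

from collections import defaultdict
import random

Q = defaultdict(float)               # Q-values indexed by (state, action)
alpha, gamma, eps = 0.1, 0.95, 0.1   # made-up learning rate, discount, exploration

def q_update(state, action, reward, next_state, next_actions):
    # one-step temporal-difference update towards the best successor value
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose_action(state, actions):
    # epsilon-greedy choice among the moves available in this state
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])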
Component-wise incremental LTL model checking
Efficient symbolic and explicit-state model checking approaches have been developed for the verification of linear-time temporal logic (LTL) properties, and several attempts have been made to combine the advantages of the various algorithms. Model checking LTL properties usually poses two challenges: one must compute the synchronous product of the state space and the automaton model of the desired property, and then look for counterexamples, a task that reduces to finding strongly connected components (SCCs) in the state space of the product. In the case of concurrent systems, where the phenomenon of state space explosion often prevents successful verification, the so-called saturation algorithm has proved its efficiency in state space exploration. This paper proposes a new approach that leverages the saturation algorithm both as an iteration strategy constructing the product directly, and in a new fixed-point computation algorithm that finds strongly connected components on the fly by incrementally processing the components of the model. Complementing the search for SCCs, explicit techniques and component-wise abstractions are used to prove the absence of counterexamples. The resulting on-the-fly, incremental LTL model checking algorithm scales well with the size of the models, as the evaluation on models of the Model Checking Contest suggests.
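To make the SCC subproblem concrete, here is a small, purely explicit Tarjan-style SCC routine in Python. The paper's algorithm is symbolic, incremental, and on-the-fly, so this sketch only illustrates what is being computed, not how the paper computes it.

def tarjan_sccs(graph):
    """graph: dict mapping each node to an iterable of successor nodes."""
    index, low, on_stack = {}, {}, set()
    stack, sccs = [], []

    def visit(v):
        index[v] = low[v] = len(index)   # discovery order doubles as the index
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in list(graph):
        if v not in index:
            visit(v)
    return sccs

# Example: two SCCs, {a, b} and {c}.
print(tarjan_sccs({'a': ['b'], 'b': ['a', 'c'], 'c': []}))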
Experimental Aspects of Synthesis
We discuss the problem of experimentally evaluating linear-time temporal logic (LTL) synthesis tools for reactive systems. We first survey previous such work for the currently publicly available synthesis tools, and then draw conclusions by deriving useful schemes for future evaluations. In particular, we explain why previous tools have incompatible scopes and semantics, and provide a framework that reduces the impact of this problem on future experimental comparisons of such tools. Furthermore, we discuss which difficulties the complex workflows that are beginning to appear in modern synthesis tools pose for experimental evaluations, and answer the question of how convincing evaluations can still be performed in such a setting. (Comment: In Proceedings iWIGP 2011, arXiv:1102.374)