
    One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata

    We present a unified translation of LTL formulas into deterministic Rabin automata, limit-deterministic Büchi automata, and nondeterministic Büchi automata. The translations yield automata of asymptotically optimal size (double or single exponential, respectively). All three translations are derived from one single Master Theorem of purely logical nature. The Master Theorem decomposes the language of a formula into a positive boolean combination of languages that can be translated into ω-automata by elementary means. In particular, Safra's, ranking, and breakpoint constructions used in other translations are not needed.
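
As an illustrative aside (not the paper's construction), the Büchi acceptance condition that these ω-automata translations target can be checked directly for a deterministic automaton on an ultimately periodic "lasso" word u·v^ω; all function and state names below are hypothetical:

```python
def buchi_accepts_lasso(delta, accepting, q0, stem, loop):
    """Check whether a *deterministic* Buechi automaton accepts stem . loop^omega.

    delta: dict mapping (state, letter) -> state
    accepting: set of accepting states
    """
    q = q0
    for a in stem:
        q = delta[(q, a)]
    # Iterate the loop until its entry state repeats; since the state
    # space is finite, this must happen. Record, per iteration, whether
    # an accepting state was visited.
    seen = {}   # loop-entry state -> iteration index
    hits = []   # hits[i]: did iteration i visit an accepting state?
    while q not in seen:
        seen[q] = len(hits)
        visited = q in accepting
        for a in loop:
            q = delta[(q, a)]
            visited = visited or q in accepting
        hits.append(visited)
    # Iterations seen[q] .. end repeat forever; accept iff one of them
    # visits an accepting state (i.e. accepting states occur infinitely often).
    return any(hits[seen[q]:])
```

For example, a two-state automaton that accepts exactly the words containing infinitely many a's (the LTL property GF a) accepts the lasso b·(ab)^ω but rejects b·b^ω.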

    From LTL and Limit-Deterministic Büchi Automata to Deterministic Parity Automata

    Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi automata (LDBA) to DPA, and show that it can be concatenated with a recent efficient translation from LTL to LDBA to yield a double exponential, "Safraless" LTL-to-DPA construction. We also report on an implementation, a comparison with the SPOT library, and performance on several sets of formulas, including instances from the 2016 SyntComp competition.
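
For intuition, a (min-even) parity condition like the one a DPA carries can be evaluated on a lasso word in the same spirit as the Büchi check: accept iff the least priority seen infinitely often along the unique run is even. This is an illustrative sketch with hypothetical names, not the construction from the paper:

```python
def dpa_accepts_lasso(delta, priority, q0, stem, loop):
    """Min-even parity acceptance on stem . loop^omega for a deterministic
    parity automaton. delta: (state, letter) -> state; priority: state -> int."""
    q = q0
    for a in stem:
        q = delta[(q, a)]
    seen = {}    # loop-entry state -> iteration index
    prios = []   # per iteration: least priority among states visited
    while q not in seen:
        seen[q] = len(prios)
        p = priority[q]
        for a in loop:
            q = delta[(q, a)]
            p = min(p, priority[q])
        prios.append(p)
    # The iterations from seen[q] onwards repeat forever.
    return min(prios[seen[q]:]) % 2 == 0
```

With priorities 0 (even) on the "just read a" state and 1 on the other, this automaton again recognises GF a.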

    Certified Reinforcement Learning with Logic Guidance

    This paper proposes the first model-free Reinforcement Learning (RL) framework to synthesise policies for unknown, continuous-state Markov Decision Processes (MDPs), such that a given linear temporal property is satisfied. We convert the given property into a Limit Deterministic Büchi Automaton (LDBA), namely a finite-state machine expressing the property. Exploiting the structure of the LDBA, we shape a synchronous reward function on-the-fly, so that an RL algorithm can synthesise a policy resulting in traces that probabilistically satisfy the linear temporal property. This probability (certificate) is also calculated in parallel with policy learning when the state space of the MDP is finite: as such, the RL algorithm produces a policy that is certified with respect to the property. Under the assumption of a finite state space, theoretical guarantees are provided on the convergence of the RL algorithm to an optimal policy maximising the above probability. We also show that our method produces "best available" control policies when the logical property cannot be satisfied. In the general case of a continuous state space, we propose a neural network architecture for RL, and we empirically show that the algorithm finds satisfying policies, if such policies exist. The performance of the proposed framework is evaluated via a set of numerical examples and benchmarks, where we observe an improvement of one order of magnitude in the number of iterations required for policy synthesis, compared to existing approaches whenever available.
    Comment: This article draws from arXiv:1801.08099, arXiv:1809.0782
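
A minimal sketch of the core idea: tabular Q-learning on the product of a toy MDP with an LDBA, where a reward is issued whenever the automaton component is in an accepting state. All names here are hypothetical and the example is far simpler than the paper's framework (which handles continuous states and neural networks); it only illustrates the on-the-fly reward-shaping pattern:

```python
import random

# -- toy deterministic MDP: states 's0' (start) and 's1' (labelled 'goal')
def mdp_step(s, a):
    return 's1' if (s == 's0' and a == 'right') or s == 's1' else 's0'

def label(s):
    return 'goal' if s == 's1' else 'none'

# -- LDBA for "eventually goal": q0 --goal--> qf, qf accepting and absorbing
def ldba_step(q, lab):
    return 'qf' if q == 'qf' or lab == 'goal' else 'q0'

ACCEPTING = {'qf'}
ACTIONS = ['left', 'right']

def q_learning(episodes=300, horizon=15, alpha=0.5, gamma=0.9, eps=0.2):
    rng = random.Random(0)
    Q = {}
    get = lambda s, q, a: Q.get((s, q, a), 0.0)
    for _ in range(episodes):
        s, q = 's0', 'q0'
        for _ in range(horizon):
            # epsilon-greedy action selection on the product state (s, q)
            a = rng.choice(ACTIONS) if rng.random() < eps else \
                max(ACTIONS, key=lambda act: get(s, q, act))
            s2 = mdp_step(s, a)
            q2 = ldba_step(q, label(s2))
            r = 1.0 if q2 in ACCEPTING else 0.0   # on-the-fly shaped reward
            best = max(get(s2, q2, b) for b in ACTIONS)
            Q[(s, q, a)] = (1 - alpha) * get(s, q, a) + alpha * (r + gamma * best)
            s, q = s2, q2
    return Q
```

After training, the greedy policy at the initial product state prefers the action that drives the run into the accepting component.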

    Complexity of Model Checking MDPs against LTL Specifications

    Given a Markov Decision Process (MDP) M, an LTL formula φ, and a threshold θ ∈ [0,1], the verification question is to determine if there is a scheduler with respect to which the executions of M satisfying φ have probability greater than (or greater than or equal to) θ. When θ = 0, we call it the qualitative verification problem, and when θ ∈ (0,1], we call it the quantitative verification problem. In this paper we study the precise complexity of these problems when the specification is constrained to be in different fragments of LTL.
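
In the standard automata-theoretic route, the quantitative question ultimately reduces to maximal reachability probabilities on a product MDP, which can be computed by value iteration. A self-contained sketch with illustrative names (not tied to any particular tool):

```python
def max_reach_prob(states, actions, P, target, iters=1000, tol=1e-10):
    """Value iteration for the maximal probability of reaching `target`.

    states: iterable of states; actions: dict state -> list of actions
    (for non-target states); P[(s, a)]: dict successor -> probability.
    """
    V = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        change = 0.0
        for s in states:
            if s in target:
                continue   # target states keep value 1
            best = max(sum(p * V[t] for t, p in P[(s, a)].items())
                       for a in actions[s])
            change = max(change, abs(best - V[s]))
            V[s] = best
        if change < tol:
            break
    return V
```

For instance, if from s0 action a reaches the goal with probability 0.5 and action b with probability 0.9, the optimal scheduler's value at s0 is 0.9.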

    Optimal Translation of LTL to Limit Deterministic Automata

    A crucial step in model checking Markov Decision Processes (MDPs) is to translate the LTL specification into automata. Efforts have been made in improving deterministic automata constructions for LTL, but such translations are double exponential in the worst case. For model checking MDPs, though, limit-deterministic automata suffice. Recently it was shown how to translate the fragment LTL\GU to exponential-sized limit-deterministic automata, which speeds up the model checking problem by an exponential factor for that fragment. In this paper we show how to construct limit-deterministic automata for full LTL. This translation is not only efficient for LTL\GU but for a larger fragment, LTL_D, which is provably more expressive. We show experimental results demonstrating that our construction yields smaller automata when compared to state-of-the-art techniques that translate LTL to deterministic and limit-deterministic automata.
    NSF grants CNS 1314485 and CCF 1422798
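
Limit determinism can be stated operationally: nondeterministic choices may occur only before the run reaches the accepting part, so every state reachable from an accepting state must behave deterministically. A small sketch of this check, with hypothetical names and a set-valued transition function:

```python
from collections import deque

def is_limit_deterministic(delta, accepting, alphabet):
    """delta: dict (state, letter) -> set of successor states.

    Returns True iff every state reachable from an accepting state has
    at most one successor per letter, i.e. nondeterminism occurs only
    before the run enters the accepting part.
    """
    frontier, seen = deque(accepting), set(accepting)
    while frontier:
        q = frontier.popleft()
        for a in alphabet:
            succs = delta.get((q, a), set())
            if len(succs) > 1:
                return False   # nondeterminism inside the accepting part
            for t in succs:
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return True
```

A typical limit-deterministic automaton for FG a nondeterministically "guesses" when a holds forever (a choice in the initial part) and then checks it deterministically.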

    Verification of linear-time properties for finite probabilistic systems

    With computers becoming ubiquitous there is an ever-growing necessity to ensure that they are programmed to behave correctly. Formal verification is a discipline within computer science that tackles issues related to the design and analysis of programs with the aim of producing well-behaved systems. One of the core problems in this domain is what is called the model checking problem: given a mathematical model of a computer and a correctness specification, does the model satisfy the specification? In this thesis we explore this question for Markov Decision Processes (MDPs), which are finite-state models involving stochastic and non-deterministic behaviour over discrete time steps. The kind of specifications we focus on are those that describe the correctness of individual executions of the model, called linear time properties. We delve into two different semantics for assigning meaning to the model checking problem: execution-based semantics and distribution-based semantics. In the execution-based semantics we look at specifications described using Linear Temporal Logic (LTL). The model checking problem under this semantics is of two kinds: qualitative and quantitative. In the qualitative version we are interested in finding out if the specification is satisfied with non-zero probability, and in the more general quantitative version we want to know whether the probability of satisfaction is greater than a given quantity. The standard way to do model checking for both cases involves translating the LTL formula into an automaton which is then used to analyze the given MDP. One of the contributions of this thesis is a new translation of LTL to automata that are provably smaller than previously known ones. This translation helps us in reducing the asymptotic complexity of qualitative model checking of MDPs against certain fragments of LTL. We implement this translation in a tool called Büchifier to show its benefits on real examples.
    Our second main contribution is a new automata-based algorithm for quantitative model checking which, along with known translations of LTL to automata, gives us new complexity results for the problem for different fragments of LTL.

    In the distribution-based semantics we view MDPs as producing a sequence of probability distributions over their state space. At each point in time we are interested in the truth of atomic propositions, each of which tells us whether the probability of being in a certain set of states is above or below a given threshold. Linear time properties over these propositions are then used to describe correctness criteria. The model checking problem here happens to be undecidable in general, and therefore we consider restrictions on the problem. First, we consider propositions which are robust: a proposition is said to be robust when the probability of being in its associated set of states is always well separated from its given threshold. For properties described over such propositions we observe that the model checking problem becomes decidable. But checking for robustness is itself an undecidable problem for MDPs. So we focus our attention on a subclass of MDPs called Markov chains, which exhibit stochastic behaviour without non-determinism. For Markov chains we show that checking for robustness and model checking under robustness become tractable, and we provide an analysis of the computational complexity of these problems.
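
The distribution-based view can be sketched concretely for a finite Markov chain: the chain induces the sequence of distributions μ0, μ0·P, μ0·P², …, and a robust proposition keeps Pr[state ∈ A] ε-separated from its threshold θ. The check below inspects only a finite prefix of that sequence, whereas the thesis handles the limit behaviour; all names are illustrative:

```python
def distribution_trace(P, mu0, steps):
    """Successive distributions mu_0, mu_0 P, mu_0 P^2, ... of a finite
    Markov chain; P is a row-stochastic matrix given as a list of lists."""
    n = len(P)
    mu, out = list(mu0), [list(mu0)]
    for _ in range(steps):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
        out.append(list(mu))
    return out

def robust_on_prefix(P, mu0, A, theta, eps, steps):
    """Check, on a finite prefix only, that the probability of being in
    the state set A stays eps-separated from the threshold theta."""
    return all(abs(sum(mu[i] for i in A) - theta) >= eps
               for mu in distribution_trace(P, mu0, steps))
```

For the two-state chain with all transition probabilities 1/2 started in state 0, Pr[state 0] is 1 at time 0 and 1/2 thereafter, so the proposition is robust for θ = 0.2 but not for θ = 0.5.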