
    Width of Non-deterministic Automata

    Get PDF
    We introduce a measure called width, quantifying the amount of nondeterminism in automata. Width generalises the notion of good-for-games (GFG) automata, which correspond to NFAs of width 1, where an accepting run can be built on-the-fly on any accepted input. We describe an incremental determinisation construction on NFAs, which can be more efficient than the full powerset determinisation, depending on the width of the input NFA. This construction can be generalised to infinite words, and is particularly well-suited to coBüchi automata in this context. For coBüchi automata, this procedure can be used to compute either a deterministic automaton or a GFG one, and it is algorithmically more efficient in the latter case. We show this fact by proving that checking whether a coBüchi automaton is determinisable by pruning is NP-complete. On finite or infinite words, we show that computing the width of an automaton is PSPACE-hard.

    1 Introduction

    Determinisation of non-deterministic automata (NFAs) is one of the cornerstone problems of automata theory, with countless applications in verification. There is a very active field of research on optimising or approximating determinisation, or on circumventing it in contexts such as inclusion of NFAs or Church synthesis. Indeed, determinisation is a costly operation: the state-space blow-up is O(2^n) on finite words, O(3^n) for coBüchi automata [16], and 2^{O(n log n)} for Büchi automata [17]. If A and B are NFAs, the classical way of checking the inclusion L(A) ⊆ L(B) is to determinise B, complement it, and test emptiness of the intersection of L(A) with the complement of L(B). To circumvent a full determinisation, the recent algorithm from [3] proved to be very efficient, as it is likely to explore only a part of the powerset construction. Other approaches use simulation games to approximate inclusion at a cheaper cost; see for instance [8]. Another approach consists in replacing determinism by a weaker constraint that suffices in some particular contexts. In this spirit, good-for-games (GFG) automata were introduced in [9] as a way to solve the Church synthesis problem. This problem asks, given a specification L over an alphabet of inputs and outputs, typically given by an LTL formula, whether there is a reactive system (transducer) whose behaviour is included in L. The classical solution computes a deterministic automaton for L and solves a game defined on this automaton. It turns out that replacing determinism by the weaker constraint of being GFG is sufficient in this context. Intuitively, GFG automata are non-deterministic automata in which an accepting run can be built on-the-fly on any accepted input. (This work was supported by the grant PALSE Impulsion.)
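    The classical inclusion check recalled above is easy to sketch concretely. Below is a minimal Python sketch of the powerset (subset) construction and of the test L(A) ⊆ L(B) on finite words, assuming a toy dictionary-based NFA encoding; the names `subset_construction` and `nfa_included_in` and the representation are illustrative choices, not the paper's incremental construction.

```python
from collections import deque

# A finite-word NFA is given as (transitions, initial state, accepting states), where
# transitions maps (state, letter) to a set of successors.  This encoding is an
# illustrative assumption, not taken from the paper.

def subset_construction(trans, init, alphabet):
    """Classical powerset determinisation: DFA states are frozensets of NFA states."""
    start = frozenset([init])
    dfa_trans, todo, dfa_states = {}, deque([start]), {start}
    while todo:
        S = todo.popleft()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in trans.get((q, a), ()))
            dfa_trans[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                todo.append(T)
    return dfa_trans, start, dfa_states

def nfa_included_in(nfa_a, nfa_b, alphabet):
    """Check L(A) <= L(B): determinise B and search the product with A for a
    counterexample, i.e. test emptiness of L(A) intersected with the complement of L(B)."""
    trans_a, init_a, acc_a = nfa_a
    trans_b, init_b, acc_b = nfa_b
    dfa_trans, start_b, _ = subset_construction(trans_b, init_b, alphabet)
    # A counterexample is a reachable pair (q, S) with q accepting in A and
    # S containing no accepting state of B.
    todo, seen = deque([(init_a, start_b)]), {(init_a, start_b)}
    while todo:
        q, S = todo.popleft()
        if q in acc_a and not (S & acc_b):
            return False                      # some word lies in L(A) \ L(B)
        for a in alphabet:
            T = dfa_trans[(S, a)]
            for q2 in trans_a.get((q, a), ()):
                if (q2, T) not in seen:
                    seen.add((q2, T))
                    todo.append((q2, T))
    return True

# Usage sketch: A accepts words ending in 'ab', B accepts words containing an 'a'.
A = ({('p', 'a'): {'p', 'q'}, ('p', 'b'): {'p'}, ('q', 'b'): {'r'}}, 'p', {'r'})
B = ({('x', 'a'): {'x', 'y'}, ('x', 'b'): {'x'},
      ('y', 'a'): {'y'}, ('y', 'b'): {'y'}}, 'x', {'y'})
print(nfa_included_in(A, B, 'ab'))            # True: words ending in 'ab' contain an 'a'
```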

    Computing the Width of Non-deterministic Automata

    Get PDF
    We introduce a measure called width, quantifying the amount of nondeterminism in automata. Width generalises the notion of good-for-games (GFG) automata, which correspond to NFAs of width 1, where an accepting run can be built on-the-fly on any accepted input. We describe an incremental determinisation construction on NFAs, which can be more efficient than the full powerset determinisation, depending on the width of the input NFA. This construction can be generalised to infinite words, and is particularly well-suited to coBüchi automata. For coBüchi automata, this procedure can be used to compute either a deterministic automaton or a GFG one, and it is algorithmically more efficient in the latter case. We show this fact by proving that checking whether a coBüchi automaton is determinisable by pruning is NP-complete. On finite or infinite words, we show that computing the width of an automaton is EXPTIME-complete. This implies EXPTIME-completeness for multipebble simulation games on NFAs.

    Quantitative Automata under Probabilistic Semantics

    Full text link
    Automata with monitor counters, where the transitions do not depend on counter values, and nested weighted automata are two expressive automata-theoretic frameworks for quantitative properties. For a well-studied and wide class of quantitative functions, we establish that automata with monitor counters and nested weighted automata are equivalent. We study such quantitative automata under probabilistic semantics for the first time. We show that several problems that are undecidable for the classical questions of emptiness and universality become decidable under the probabilistic semantics. We present a complete picture of decidability for such automata, and even an almost-complete picture of computational complexity, for the probabilistic questions we consider.

    Turing machines based on unsharp quantum logic

    Full text link
    In this paper, we consider Turing machines based on unsharp quantum logic. For a lattice-ordered quantum multiple-valued (MV) algebra E, we introduce E-valued non-deterministic Turing machines (ENTMs) and E-valued deterministic Turing machines (EDTMs). We discuss the E-valued recursively enumerable languages obtained from width-first and depth-first recognition. We find that width-first recognition is, in general, less than or equal to depth-first recognition; equality requires the underlying lattice E to degenerate into an MV algebra. We also study variants of ENTMs. ENTMs with a classical initial state and ENTMs with a classical final state have the same power as ENTMs with quantum initial and final states. In particular, the latter can be simulated by ENTMs with classical transitions under a certain condition. Using these findings, we prove that ENTMs are not equivalent to EDTMs and that ENTMs are more powerful than EDTMs. This is a notable difference from classical Turing machines. Comment: In Proceedings QPL 2011, arXiv:1210.029

    Lazy Probabilistic Model Checking without Determinisation

    Get PDF
    The bottleneck in the quantitative analysis of Markov chains and Markov decision processes against specifications given in LTL, or as some form of nondeterministic Büchi automata, is the determinisation step applied to the automaton under consideration. In this paper, we show that full determinisation can be avoided: subset and breakpoint constructions suffice. We have implemented our approach, in both explicit and symbolic versions, in a prototype tool. Our experiments show that our prototype can compete with mature tools like PRISM. Comment: 38 pages. Updated version introducing the following changes: general improvement of the paper's presentation; extension of the approach to avoid full determinisation; added proofs for this extension; added case studies; updated old case studies to reflect the added extension
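    The breakpoint construction mentioned in this abstract is the classical one behind the O(3^n) bound for coBüchi automata. The sketch below is a minimal, assumed dictionary-based encoding of a Miyano-Hayashi style breakpoint determinisation for nondeterministic coBüchi automata; it illustrates the general technique only and is not the tool's actual implementation.

```python
from collections import deque

# A nondeterministic coBuchi automaton (NCW): transitions (state, letter) -> set of
# successors, one initial state, and a set `safe` of states; a run is accepting iff
# from some point on it stays inside `safe`.  Names and encoding are illustrative.

def breakpoint_construction(trans, init, safe, alphabet):
    """Miyano-Hayashi style breakpoint determinisation of an NCW into a deterministic
    coBuchi automaton whose macro-states are pairs (S, O) with O a subset of S.
    S tracks all reachable states; O tracks the runs that have stayed in `safe`
    since the last breakpoint (the moments where O becomes empty)."""
    start = (frozenset([init]), frozenset([init]) & safe)
    det_trans, todo, seen = {}, deque([start]), {start}
    while todo:
        S, O = todo.popleft()
        for a in alphabet:
            S2 = frozenset(q2 for q in S for q2 in trans.get((q, a), ()))
            if O:
                O2 = frozenset(q2 for q in O for q2 in trans.get((q, a), ())) & safe
            else:                             # breakpoint: restart the tracking set
                O2 = S2 & safe
            det_trans[((S, O), a)] = (S2, O2)
            if (S2, O2) not in seen:
                seen.add((S2, O2))
                todo.append((S2, O2))
    # The result is deterministic coBuchi: a macro-run is accepting iff it visits
    # macro-states with O empty only finitely often.
    bad = {m for m in seen if not m[1]}
    return det_trans, start, bad

# Usage sketch: an NCW over {a, b} accepting the words with finitely many b's.
trans = {('q0', 'a'): {'q0', 'q1'}, ('q0', 'b'): {'q0'}, ('q1', 'a'): {'q1'}}
det, start, bad = breakpoint_construction(trans, 'q0', {'q1'}, 'ab')
```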

    From Finite Automata to Regular Expressions and Back--A Summary on Descriptional Complexity

    Full text link
    The equivalence of finite automata and regular expressions dates back to Kleene's seminal 1956 paper on events in nerve nets and finite automata. In the present paper we tour a fragment of the literature and summarize results on upper and lower bounds for the conversion of finite automata to regular expressions and vice versa. We also briefly recall the known bounds for the removal of spontaneous transitions (epsilon-transitions) from non-epsilon-free nondeterministic devices. Moreover, we report on recent results on average-case descriptional complexity bounds for the conversion of regular expressions to finite automata, and on brand-new developments concerning the state elimination algorithm that converts finite automata to regular expressions. Comment: In Proceedings AFL 2014, arXiv:1405.527
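    The state elimination algorithm mentioned at the end of this abstract admits a compact sketch. The following Python function works on a generalised NFA with regex-labelled edges and removes intermediate states one by one; the representation, the `eliminate_states` name, and the plus-for-union syntax are illustrative assumptions, not taken from the survey.

```python
def eliminate_states(edges, init, final, order):
    """State elimination on a generalised NFA: repeatedly remove an intermediate
    state s, replacing every path p -> s -> q by  R(p,s) R(s,s)* R(s,q)  and adding
    that expression (as a union) to the label of the edge p -> q.  Once only `init`
    and `final` remain, the label of (init, final) denotes the language."""
    edges = dict(edges)                     # (p, q) -> regex string; "" is epsilon

    def put(p, q, r):
        old = edges.get((p, q))
        edges[(p, q)] = r if old is None else f"({old}+{r})"

    for s in order:                         # intermediate states, in elimination order
        loop = edges.get((s, s))
        star = f"({loop})*" if loop else ""
        preds = [p for (p, q) in edges if q == s and p != s]
        succs = [q for (p, q) in edges if p == s and q != s]
        for p in preds:
            for q in succs:
                put(p, q, f"{edges[(p, s)]}{star}{edges[(s, q)]}")
        edges = {(p, q): r for (p, q), r in edges.items() if s not in (p, q)}
    return edges.get((init, final))

# Usage sketch: a two-state DFA for "even number of a's" (state e initial/accepting,
# state o), wrapped with a fresh initial state i and final state f via epsilon edges.
edges = {('i', 'e'): "", ('e', 'f'): "", ('e', 'e'): "b",
         ('e', 'o'): "a", ('o', 'e'): "a", ('o', 'o'): "b"}
print(eliminate_states(edges, 'i', 'f', order=['o', 'e']))   # ((b+a(b)*a))*
```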