42 research outputs found

    On finitely ambiguous Büchi automata

    Unambiguous Büchi automata, i.e. Büchi automata allowing only one accepting run per word, are a useful restriction of Büchi automata that is well-suited for probabilistic model-checking. In this paper we propose a more permissive variant, namely finitely ambiguous Büchi automata, a generalisation where each word has at most k accepting runs, for some fixed k. We adapt existing notions and results concerning finite and bounded ambiguity of finite automata to the setting of ω-languages and present a translation from arbitrary nondeterministic Büchi automata with n states to finitely ambiguous automata with at most 3^n states and at most n accepting runs per word.

    Ambiguity and Communication

    The ambiguity of a nondeterministic finite automaton (NFA) N for input size n is the maximal number of accepting computations of N on an input of size n. For all k, r ∈ ℕ we construct languages L_{r,k} which can be recognized by NFAs of size k · poly(r) and ambiguity O(n^k), but which only admit NFAs of exponential size if ambiguity o(n^k) is required. In particular, a hierarchy for polynomial ambiguity is obtained, solving a long-standing open problem (Ravikumar and Ibarra, 1989; Leung, 1998).
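
    An illustrative sketch (ours, not from the paper) of the quantity being bounded here: the degree of ambiguity on a word is the number of distinct accepting computations, which can be counted by a simple dynamic program over the prefixes of the input. The function and the toy automaton are hypothetical.

```python
def count_accepting_runs(states, delta, start, accepting, word):
    """Count the accepting computations of an NFA on `word`.

    delta maps (state, symbol) to the set of successor states; the degree of
    ambiguity of the NFA on `word` is exactly this count.
    """
    # runs[q] = number of computations reaching state q after the prefix read so far
    runs = {q: (1 if q == start else 0) for q in states}
    for a in word:
        new_runs = {q: 0 for q in states}
        for q in states:
            if runs[q]:
                for r in delta.get((q, a), ()):
                    new_runs[r] += runs[q]
        runs = new_runs
    return sum(runs[q] for q in accepting)

# Hypothetical example: guess the position of the move from state 0 to state 1;
# on the word a^n this NFA has exactly n accepting computations (linear ambiguity).
states = {0, 1}
delta = {(0, 'a'): {0, 1}, (1, 'a'): {1}}
print(count_accepting_runs(states, delta, start=0, accepting={1}, word='aaaa'))  # 4
```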

    Descriptional Complexity of Finite Automata -- Selected Highlights

    The state complexity, respectively, nondeterministic state complexity of a regular language L is the number of states of the minimal deterministic, respectively, of a minimal nondeterministic finite automaton for L. Some of the most studied state complexity questions deal with size comparisons of nondeterministic finite automata of differing degree of ambiguity. More generally, if for a regular language we compare the size of description by a finite automaton and by a more powerful language definition mechanism, such as a context-free grammar, we encounter non-recursive trade-offs. Operational state complexity studies the state complexity of the language resulting from a regularity preserving operation as a function of the complexity of the argument languages. Determining the state complexity of combined operations is generally challenging, and for general combinations of operations that include intersection and marked concatenation it is uncomputable.
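
    As a small illustration of the definition (our own sketch, not from the survey): the reachable part of the powerset construction gives an upper bound on the deterministic state complexity of an NFA's language; the exact value would additionally require minimising the resulting DFA. The dictionary encoding of transitions is an assumption of ours.

```python
from collections import deque

def subset_construction_size(delta, start, alphabet):
    """Number of reachable subset-states of the determinised NFA (an upper
    bound on the deterministic state complexity; minimisation may shrink it)."""
    start_set = frozenset([start])
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            T = frozenset(r for q in S for r in delta.get((q, a), ()))
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return len(seen)

# Classical witness: "the n-th symbol from the end is b" has an (n+1)-state NFA
# but needs 2^n deterministic states.
n = 3
delta = {(0, 'a'): {0}, (0, 'b'): {0, 1}}
for i in range(1, n):
    delta[(i, 'a')] = {i + 1}
    delta[(i, 'b')] = {i + 1}
print(subset_construction_size(delta, 0, 'ab'))  # 8 = 2^3 reachable subsets
```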

    Ambiguity, Nondeterminism and State Complexity of Finite Automata

    The degree of ambiguity counts the number of accepting computations of a nondeterministic finite automaton (NFA) on a given input. Alternatively, the nondeterminism of an NFA can be measured by counting the amount of guessing in a single computation or the number of leaves of the computation tree on a given input. This paper surveys work on the degree of ambiguity and on various nondeterminism measures for finite automata. In particular, we focus on state complexity comparisons between NFAs with quantified ambiguity or nondeterminism.
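
    A toy sketch (ours, not from the survey) contrasting two of these measures on a single input: the degree of ambiguity counts accepting computations, while the leaf-counting measure counts all maximal computations, including those that block before reading the whole word.

```python
def computation_tree_measures(delta, start, accepting, word):
    """Return (degree of ambiguity, number of leaves of the computation tree)."""
    ambiguity = 0   # accepting computations on `word`
    leaves = 0      # maximal computations: blocked early, or having read all of `word`

    def explore(state, i):
        nonlocal ambiguity, leaves
        if i == len(word):
            leaves += 1
            if state in accepting:
                ambiguity += 1
            return
        succs = delta.get((state, word[i]), ())
        if not succs:
            leaves += 1          # the computation is blocked here
            return
        for r in succs:
            explore(r, i + 1)

    explore(start, 0)
    return ambiguity, leaves

# NFA over {a} that guesses a position to move from state 0 to accepting state 1;
# guessing too early blocks, since state 1 has no outgoing transitions.
delta = {(0, 'a'): {0, 1}}
print(computation_tree_measures(delta, 0, {1}, 'aaa'))  # (1, 4)
```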

    Computing the Width of Non-deterministic Automata

    We introduce a measure called width, quantifying the amount of nondeterminism in automata. Width generalises the notion of good-for-games (GFG) automata, which correspond to NFAs of width 1, and where an accepting run can be built on-the-fly on any accepted input. We describe an incremental determinisation construction on NFAs, which can be more efficient than the full powerset determinisation, depending on the width of the input NFA. This construction can be generalised to infinite words, and is particularly well-suited to coBüchi automata. For coBüchi automata, this procedure can be used to compute either a deterministic automaton or a GFG one, and it is algorithmically more efficient in the latter case. We show this fact by proving that checking whether a coBüchi automaton is determinisable by pruning is NP-complete. On finite or infinite words, we show that computing the width of an automaton is EXPTIME-complete. This implies EXPTIME-completeness for multipebble simulation games on NFAs.

    Width of Non-deterministic Automata

    We introduce a measure called width, quantifying the amount of nondeterminism in automata. Width generalises the notion of good-for-games (GFG) automata, which correspond to NFAs of width 1, and where an accepting run can be built on-the-fly on any accepted input. We describe an incremental determinisation construction on NFAs, which can be more efficient than the full powerset determinisation, depending on the width of the input NFA. This construction can be generalised to infinite words, and is particularly well-suited to coBüchi automata in this context. For coBüchi automata, this procedure can be used to compute either a deterministic automaton or a GFG one, and it is algorithmically more efficient in the latter case. We show this fact by proving that checking whether a coBüchi automaton is determinisable by pruning is NP-complete. On finite or infinite words, we show that computing the width of an automaton is PSPACE-hard.

    1 Introduction. Determinisation of non-deterministic automata (NFAs) is one of the cornerstone problems of automata theory, with countless applications in verification. There is a very active field of research for optimizing or approximating determinisation, or circumventing it in contexts like inclusion of NFAs or Church synthesis. Indeed, determinisation is a costly operation, as the state space blow-up is in O(2^n) on finite words, O(3^n) for coBüchi automata [16], and 2^{O(n log(n))} for Büchi automata [17]. If A and B are NFAs, the classical way of checking the inclusion L(A) ⊆ L(B) is to determinise B, complement it, and test emptiness of L(A) ∩ L(B)^c. To circumvent a full determinisation, the recent algorithm from [3] proved to be very efficient, as it is likely to explore only a part of the powerset construction. Other approaches use simulation games to approximate inclusion at a cheaper cost, see for instance [8]. Another approach consists in replacing determinism by a weaker constraint that suffices in some particular context. In this spirit, good-for-games automata (GFG for short) were introduced in [9] as a way to solve the Church synthesis problem. This problem asks, given a specification L, typically given by an LTL formula over an alphabet of inputs and outputs, whether there is a reactive system (transducer) whose behaviour is included in L. The classical solution computes a deterministic automaton for L and solves a game defined on this automaton. It turns out that replacing determinism by the weaker constraint of being GFG is sufficient in this context. Intuitively, GFG automata are non-deterministic automata whose non-determinism can be resolved on-the-fly, based only on the input read so far. (This work was supported by the grant PALSE Impulsion.)
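
    A minimal sketch (ours, not from the paper) of the classical inclusion test recalled above, for NFAs on finite words: determinise B by the powerset construction, complement it, and search the product with A for a state witnessing non-emptiness of L(A) ∩ L(B)^c. The exploration below only builds the reachable part of the powerset on the fly; all identifiers and the tuple encoding of automata are illustrative.

```python
from collections import deque

def nfa_inclusion(A, B, alphabet):
    """Check L(A) ⊆ L(B); A and B are (delta, start, accepting) triples."""
    dA, sA, fA = A
    dB, sB, fB = B
    # Explore the product of A with the determinised (and implicitly complemented) B:
    # a pair (p, S) with p accepting in A and S containing no accepting state of B
    # witnesses a word in L(A) ∩ L(B)^c.
    start = (sA, frozenset([sB]))
    seen = {start}
    queue = deque([start])
    while queue:
        p, S = queue.popleft()
        if p in fA and not (S & fB):
            return False                       # a counterexample word exists
        for a in alphabet:
            T = frozenset(r for q in S for r in dB.get((q, a), ()))
            for p2 in dA.get((p, a), ()):
                if (p2, T) not in seen:
                    seen.add((p2, T))
                    queue.append((p2, T))
    return True

# Toy check over the one-letter alphabet {a}: "even number of a's" ⊆ "all words".
even = ({(0, 'a'): {1}, (1, 'a'): {0}}, 0, {0})
allw = ({(0, 'a'): {0}}, 0, {0})
print(nfa_inclusion(even, allw, 'a'))   # True
print(nfa_inclusion(allw, even, 'a'))   # False ('a' itself is a counterexample)
```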

    From Finite Automata to Regular Expressions and Back--A Summary on Descriptional Complexity

    The equivalence of finite automata and regular expressions dates back to the seminal paper of Kleene on events in nerve nets and finite automata from 1956. In the present paper we tour a fragment of the literature and summarize results on upper and lower bounds on the conversion of finite automata to regular expressions and vice versa. We also briefly recall the known bounds for the removal of spontaneous transitions (epsilon-transitions) on non-epsilon-free nondeterministic devices. Moreover, we report on recent results on the average case descriptional complexity bounds for the conversion of regular expressions to finite automata and brand new developments on the state elimination algorithm that converts finite automata to regular expressions. Comment: In Proceedings AFL 2014, arXiv:1405.527
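
    A compact sketch (ours, not from the paper) of the state elimination algorithm mentioned above: view the automaton as a generalised NFA whose edges carry regular expressions, then remove the original states one at a time, rerouting each in/out pair through a bypass expression. The string-based regex representation and the lack of simplification are simplifying assumptions.

```python
def union(r, s):
    if r is None: return s              # None encodes the empty language
    if s is None: return r
    return f"({r}+{s})"

def concat(r, s):
    if r is None or s is None: return None
    if r == "": return s                # "" encodes the empty word
    if s == "": return r
    return r + s

def star(r):
    if r is None or r == "": return ""
    return f"({r})*"

def nfa_to_regex(states, edges, start, accepting):
    """edges: dict (p, q) -> regex label (at most one label per ordered state pair)."""
    START, ACCEPT = object(), object()  # fresh endpoints, as in the GNFA construction
    labels = dict(edges)
    labels[(START, start)] = ""
    for f in accepting:
        labels[(f, ACCEPT)] = ""
    for x in states:                    # eliminate every original state
        loop = star(labels.pop((x, x), None))
        ins = [(p, r) for (p, q), r in labels.items() if q == x]
        outs = [(q, r) for (p, q), r in labels.items() if p == x]
        for key in [k for k in labels if x in k]:
            del labels[key]
        for p, rin in ins:
            for q, rout in outs:
                bypass = concat(concat(rin, loop), rout)
                labels[(p, q)] = union(labels.get((p, q)), bypass)
    result = labels.get((START, ACCEPT))
    return "∅" if result is None else (result or "ε")

# Example: a*b(a+b)*  (state s loops on a, reads one b, then state m loops on a+b)
states = ["s", "m"]
edges = {("s", "s"): "a", ("s", "m"): "b", ("m", "m"): "(a+b)"}
print(nfa_to_regex(states, edges, "s", {"m"}))   # (a)*b((a+b))*
```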

    Homomorphic Encryption for Finite Automata

    We describe a somewhat homomorphic GSW-like encryption scheme, natively encrypting matrices rather than just single elements. This scheme offers much better performance than existing homomorphic encryption schemes for evaluating encrypted (nondeterministic) finite automata (NFAs). Differently from GSW, we do not know how to reduce the security of this scheme to LWE; instead we reduce it to a stronger assumption that can be thought of as an inhomogeneous variant of the NTRU assumption. This assumption (which we term iNTRU) may be useful and interesting in its own right, and we examine a few of its properties. We also examine methods to encode regular expressions as NFAs, and in particular explore a new optimization problem, motivated by our application to encrypted NFA evaluation. In this problem, we seek to minimize the number of states in an NFA for a given expression, subject to a constraint on the ambiguity of the NFA.
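
    A plaintext sketch (ours, not from the paper) of why matrix-native encryption fits this application: running an NFA on a word is a chain of Boolean vector-matrix products, one per symbol, so an encrypted evaluation can follow the same shape with ciphertext matrices. The code below shows only the cleartext skeleton, not the GSW-like scheme itself.

```python
import numpy as np

def nfa_matrices(n_states, transitions, alphabet):
    """transitions: iterable of (state, symbol, next_state) triples -> {a: M_a}."""
    mats = {a: np.zeros((n_states, n_states), dtype=np.uint8) for a in alphabet}
    for q, a, r in transitions:
        mats[a][q, r] = 1
    return mats

def nfa_accepts(mats, start, accepting, word, n_states):
    v = np.zeros(n_states, dtype=np.uint8)
    v[start] = 1
    for a in word:
        # one Boolean vector-matrix product per symbol; an encrypted evaluation
        # would perform the analogous product on ciphertext matrices
        v = (v @ mats[a] > 0).astype(np.uint8)
    return bool(v[list(accepting)].any())

# Toy NFA for words over {a, b} containing the factor "ab"
trans = [(0, 'a', 0), (0, 'b', 0), (0, 'a', 1), (1, 'b', 2), (2, 'a', 2), (2, 'b', 2)]
mats = nfa_matrices(3, trans, 'ab')
print(nfa_accepts(mats, 0, {2}, 'bbaab', 3))   # True
print(nfa_accepts(mats, 0, {2}, 'ba', 3))      # False
```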