
    Minimizing finite automata is computationally hard

    It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states. The minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they only allow a small amount of nondeterminism. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in a computationally intractable minimization problem.
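
    As a point of contrast with the hardness results, the efficient DFA minimization cited at the start can be sketched with classical partition refinement (Moore's algorithm); the representation, names, and example below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): DFA minimization by partition
# refinement (Moore's algorithm). Assumes a complete DFA whose states are
# all reachable; the representation is hypothetical.

def minimize_dfa(states, alphabet, delta, accepting):
    """delta maps (state, symbol) -> state; returns the number of states
    of the equivalent minimal DFA (= blocks in the final partition)."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]
    changed = True
    while changed:
        changed = False
        refined = []
        for block in partition:
            # Split the block according to which blocks its successors
            # fall into, one index per input symbol.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition)
                         if delta[(s, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            refined.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = refined
    return len(partition)


# Tiny example: states q1 and q2 are equivalent, so 3 states shrink to 2.
delta = {("q0", "0"): "q1", ("q0", "1"): "q2",
         ("q1", "0"): "q1", ("q1", "1"): "q1",
         ("q2", "0"): "q2", ("q2", "1"): "q2"}
print(minimize_dfa({"q0", "q1", "q2"}, ["0", "1"], delta, {"q1", "q2"}))  # 2
```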

    Two-Way Automata Making Choices Only at the Endmarkers

    The question of the state-size cost of simulating two-way nondeterministic automata (2NFAs) by two-way deterministic automata (2DFAs) was raised in 1978 and, despite many attempts, it is still open. Subsequently, the problem was attacked by restricting the power of 2DFAs (e.g., using restricted input head movement) to the degree for which it was already possible to derive some exponential gaps between the weaker model and the standard 2NFAs. Here we use the opposite approach, increasing the power of 2DFAs to the degree for which it is still possible to obtain a subexponential conversion from the stronger model to the standard 2DFAs. In particular, it turns out that a subexponential conversion is possible for two-way automata that make nondeterministic choices only when the input head scans one of the input tape endmarkers. However, there is no restriction on the input head movement. This implies that an exponential gap between 2NFAs and 2DFAs can be obtained only for unrestricted 2NFAs using capabilities beyond the proposed new model. As an additional bonus, conversion into a machine for the complement of the original language is polynomial in this model. The same holds for making such machines self-verifying, halting, or unambiguous. Finally, any superpolynomial lower bound for the simulation of such machines by standard 2DFAs would imply L ≠ NL. In the same way, the alternating version of these machines is related to the classical computational complexity questions L =? NL =? P.
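
    For readers unfamiliar with the model, the sketch below simulates an ordinary 2DFA on an input enclosed in endmarkers; the representation and the acceptance convention (halting in an accepting state, with a step budget cutting off non-halting runs) are simplifying assumptions for illustration, not the paper's definitions.

```python
# Illustrative 2DFA simulator (assumptions: acceptance = halting in an
# accepting state; a step budget rejects non-halting runs).

LEFT, RIGHT = -1, +1

def run_2dfa(delta, start, accepting, word, fuel=10_000):
    """delta maps (state, tape symbol) -> (state, direction); the tape is
    the input word enclosed in the endmarkers '|-' and '-|'."""
    tape = ["|-"] + list(word) + ["-|"]
    state, pos = start, 0
    for _ in range(fuel):
        if state in accepting:
            return True
        if (state, tape[pos]) not in delta:
            return False                      # undefined move: reject
        state, step = delta[(state, tape[pos])]
        pos = min(max(pos + step, 0), len(tape) - 1)
    return False                              # non-halting run: reject


# Example 2DFA: accept words over {a, b} whose first and last letters agree.
delta = {("s", "|-"): ("scan", RIGHT),
         ("scan", "a"): ("ra", RIGHT), ("scan", "b"): ("rb", RIGHT)}
for x in "ab":
    delta[("r" + x, "a")] = ("r" + x, RIGHT)
    delta[("r" + x, "b")] = ("r" + x, RIGHT)
    delta[("r" + x, "-|")] = ("c" + x, LEFT)
    delta[("c" + x, x)] = ("yes", RIGHT)
print(run_2dfa(delta, "s", {"yes"}, "abba"))  # True
print(run_2dfa(delta, "s", {"yes"}, "abb"))   # False
```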

    Translation from Classical Two-Way Automata to Pebble Two-Way Automata

    We study the relation between the standard two-way automata and more powerful devices, namely, two-way finite automata with an additional "pebble" movable along the input tape. As in the case of the classical two-way machines, it is not known whether there exists a polynomial trade-off, in the number of states, between nondeterministic and deterministic pebble two-way automata. However, we show that these two machine models are not independent: if there exists a polynomial trade-off for the classical two-way automata, then there must also exist a polynomial trade-off for the pebble two-way automata. Thus, we have an upward collapse (or a downward separation) from the classical two-way automata to the more powerful pebble automata, still staying within the class of regular languages. The same upward collapse holds for complementation of nondeterministic two-way machines. These results are obtained by showing that each pebble machine can be, by using suitable inputs, simulated by a classical two-way automaton with a linear number of states (and vice versa), despite the existing exponential blow-up between the classical and pebble two-way machines.

    LTLf and LDLf Synthesis under Partial Observability

    In this paper, we study synthesis under partial observability for logical specifications over finite traces expressed in LTLf/LDLf. This form of synthesis can be seen as a generalization of planning under partial observability in nondeterministic domains, which is known to be 2EXPTIME-complete. We start by showing that the usual "belief-state construction" used in planning under partial observability also works for general LTLf/LDLf synthesis, though with a jump in computational complexity from 2EXPTIME to 3EXPTIME. Then we show that the belief-state construction can be avoided in favor of a direct automata construction which exploits projection to hide unobservable propositions. This allows us to prove that the problem remains 2EXPTIME-complete. The new synthesis technique proposed is effective and readily implementable.
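
    The "belief-state construction" the abstract refers to is, at its core, a subset construction over the states the agent considers possible given the observations received so far. The sketch below shows that core idea for a plain nondeterministic transition system; the representation (a transition relation plus an observation function on states) and all names are illustrative assumptions, not the paper's LTLf/LDLf construction.

```python
# Illustrative belief-state update (assumption-level sketch, not the
# paper's construction): the belief is the set of states the agent
# considers possible after the actions taken and observations received.

def belief_update(belief, action, observation, transitions, observe):
    """transitions maps (state, action) -> set of possible successors;
    observe maps a state to the observation it produces."""
    successors = set()
    for state in belief:
        successors |= transitions.get((state, action), set())
    # Keep only successors consistent with what was actually observed.
    return {s for s in successors if observe(s) == observation}


# Tiny example: two hidden outcomes of action 'go' are distinguished
# only once an observation arrives.
transitions = {("s0", "go"): {"s1", "s2"},
               ("s1", "go"): {"s1"},
               ("s2", "go"): {"s2"}}
observe = lambda s: "bright" if s == "s1" else "dark"

belief = {"s0"}
belief = belief_update(belief, "go", "bright", transitions, observe)
print(belief)  # {'s1'} -- the observation resolved the nondeterminism
```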

    Can Nondeterminism Help Complementation?

    Complementation and determinization are two fundamental notions in automata theory. The close relationship between the two has been well observed in the literature. In the case of nondeterministic finite automata on finite words (NFAs), complementation and determinization have the same state complexity, namely Θ(2^n), where n is the number of states. The same similarity between determinization and complementation was found for Büchi automata, where both operations were shown to have 2^Θ(n log n) state complexity. An intriguing question is whether there exists a type of omega-automata whose determinization is considerably harder than its complementation. In this paper, we show that for all common types of omega-automata, the determinization problem has the same state complexity as the corresponding complementation problem at the granularity of 2^Θ(·).
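
    For the NFA case mentioned at the start, the shared Θ(2^n) bound is visible in the textbook route to complementation: determinize via the subset construction, then swap accepting and non-accepting subsets. The sketch below illustrates this; the representation and names are assumptions, not taken from the paper.

```python
# Minimal sketch (hypothetical representation): complementing an NFA on
# finite words by determinizing it with the subset construction and then
# flipping acceptance -- both steps cost at most 2^n states.

from itertools import chain

def determinize(states, alphabet, delta, initial, accepting):
    """delta maps (state, symbol) -> set of states. Returns a complete
    DFA whose states are frozensets of NFA states."""
    start = frozenset(initial)
    dfa_delta, seen, frontier = {}, {start}, [start]
    while frontier:
        subset = frontier.pop()
        for a in alphabet:
            succ = frozenset(chain.from_iterable(
                delta.get((q, a), ()) for q in subset))
            dfa_delta[(subset, a)] = succ
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    dfa_accepting = {S for S in seen if S & set(accepting)}
    return seen, dfa_delta, start, dfa_accepting

def complement(states, alphabet, delta, initial, accepting):
    dfa_states, dfa_delta, start, dfa_acc = determinize(
        states, alphabet, delta, initial, accepting)
    # Complementation on the complete DFA: flip the accepting subsets.
    return dfa_states, dfa_delta, start, dfa_states - dfa_acc


# Example: NFA over {a} accepting words containing at least one 'a'.
nfa_delta = {("p", "a"): {"p", "q"}, ("q", "a"): {"q"}}
_, _, start, acc = complement({"p", "q"}, ["a"], nfa_delta, {"p"}, {"q"})
print(start in acc)  # True: the empty word is in the complement
```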

    MeGARA: Menu-based Game Abstraction and Abstraction Refinement of Markov Automata

    Markov automata combine continuous time, probabilistic transitions, and nondeterminism in a single model. They represent an important and powerful way to model a wide range of complex real-life systems. However, such models tend to be large and difficult to handle, making abstraction and abstraction refinement necessary. In this paper we present an abstraction and abstraction refinement technique for Markov automata, based on the game-based and menu-based abstraction of probabilistic automata. First experiments show that a significant reduction in size is possible using abstraction.
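
    Purely as an illustration of the model (names and representation are assumptions, not from the paper or any tool), a Markov automaton can be stored with its two kinds of transitions kept separate: Markovian transitions carrying rates of exponentially distributed delays, and action-labelled probabilistic transitions among which a scheduler chooses nondeterministically.

```python
# Hypothetical sketch of a Markov automaton as a data structure: timed
# (Markovian) transitions carry rates of exponential delays, while each
# action label offers a probability distribution over successor states,
# and the choice among enabled actions is nondeterministic.

from dataclasses import dataclass, field

@dataclass
class MarkovAutomaton:
    states: set
    initial: str
    # Markovian transitions: state -> list of (rate, successor).
    markovian: dict = field(default_factory=dict)
    # Probabilistic transitions: state -> action -> {successor: probability}.
    probabilistic: dict = field(default_factory=dict)

# Tiny example: from 's0' a scheduler may pick action 'retry' (with a
# 50/50 probabilistic outcome) or wait for a timed transition of rate 2.0.
ma = MarkovAutomaton(
    states={"s0", "ok", "fail"},
    initial="s0",
    markovian={"s0": [(2.0, "ok")]},
    probabilistic={"s0": {"retry": {"ok": 0.5, "fail": 0.5}}},
)
```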

    Multi-Head Finite Automata: Characterizations, Concepts and Open Problems

    Multi-head finite automata were introduced in (Rabin, 1964) and (Rosenberg, 1966). Since that time, a vast literature on computational and descriptional complexity issues concerning multi-head finite automata has been developed, documenting the importance of these devices. Although multi-head finite automata are a simple concept, their computational behavior can already be very complex and leads to undecidable or even non-semi-decidable problems on these devices, such as emptiness, finiteness, universality, and equivalence. These strong negative results trigger the study of subclasses and alternative characterizations of multi-head finite automata for a better understanding of the nature of non-recursive trade-offs and, thus, of the borderline between decidable and undecidable problems. In the present paper, we tour a fragment of this literature.
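
    To make the model concrete, here is a small simulator for a one-way deterministic automaton with k input heads, together with a two-head automaton for { a^n b^n }, a non-regular language that a single head cannot handle; the representation and the example are illustrative assumptions, not taken from the survey.

```python
# Illustrative simulator for a one-way k-head DFA (hypothetical
# representation): delta maps (state, symbols under the heads) to
# (state, head advances), where each advance is 0 or 1.

def run_multihead(delta, start, accepting, k, word, fuel=10_000):
    tape = list(word) + ["-|"]               # right endmarker
    state, heads = start, [0] * k
    for _ in range(fuel):
        if state in accepting:
            return True
        key = (state, tuple(tape[h] for h in heads))
        if key not in delta:
            return False                     # undefined move: reject
        state, moves = delta[key]
        heads = [min(h + m, len(tape) - 1) for h, m in zip(heads, moves)]
    return False


# Example: a 2-head DFA for { a^n b^n : n >= 0 }. Head 1 runs ahead over
# the a's; then both heads advance in lockstep, matching each remaining b
# against one a.
delta = {
    ("skip", ("a", "a")): ("skip", (1, 0)),   # head 1 passes over the a's
    ("skip", ("b", "a")): ("match", (1, 1)),  # head 1 reaches the b's
    ("skip", ("-|", "-|")): ("yes", (0, 0)),  # empty word
    ("match", ("b", "a")): ("match", (1, 1)),
    ("match", ("-|", "b")): ("yes", (0, 0)),  # b's and a's ran out together
}
print(run_multihead(delta, "skip", {"yes"}, 2, "aaabbb"))  # True
print(run_multihead(delta, "skip", {"yes"}, 2, "aabbb"))   # False
```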

    From Finite Automata to Regular Expressions and Back--A Summary on Descriptional Complexity

    The equivalence of finite automata and regular expressions dates back to the seminal paper of Kleene on events in nerve nets and finite automata from 1956. In the present paper we tour a fragment of the literature and summarize results on upper and lower bounds for the conversion of finite automata to regular expressions and vice versa. We also briefly recall the known bounds for the removal of spontaneous transitions (epsilon-transitions) from non-epsilon-free nondeterministic devices. Moreover, we report on recent results on average-case descriptional complexity bounds for the conversion of regular expressions to finite automata, and on new developments on the state elimination algorithm that converts finite automata to regular expressions.
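
    The state elimination algorithm mentioned at the end converts a finite automaton into a regular expression by removing states one at a time and relabelling the surviving transitions with regular expressions. Below is a minimal sketch of that idea, over a hypothetical edge-labelled representation and under the usual assumption that the initial state has no incoming and the final state no outgoing transitions (in general this is arranged by adding fresh initial and final states); it is not any of the specific algorithm variants surveyed in the paper.

```python
# Illustrative sketch of the state elimination algorithm (hypothetical
# representation): edges of a generalized automaton carry regular
# expressions, and intermediate states are removed one by one.

def eliminate_states(edges, initial, final):
    """edges maps (p, q) -> regex string. Assumes a single initial state
    without incoming edges and a single final state without outgoing
    edges; neither is eliminated."""
    def get(p, q):
        return edges.get((p, q))

    def union(r, s):
        return s if r is None else r if s is None else f"({r}|{s})"

    states = {p for p, _ in edges} | {q for _, q in edges}
    for q in list(states - {initial, final}):
        loop = get(q, q)
        star = f"({loop})*" if loop else ""
        preds = [p for (p, t) in edges if t == q and p != q]
        succs = [r for (t, r) in edges if t == q and r != q]
        for p in preds:
            for r in succs:
                # Route every p -> q -> r path through a direct p -> r edge.
                bypass = get(p, q) + star + get(q, r)
                edges[(p, r)] = union(get(p, r), bypass)
        # Drop all edges touching the eliminated state q.
        edges = {e: x for e, x in edges.items() if q not in e}
    return edges.get((initial, final))        # None if the language is empty


# Example: an automaton for ab*c -- i -a-> q, q -b-> q, q -c-> f.
# Eliminating q yields "a(b)*c".
print(eliminate_states({("i", "q"): "a", ("q", "q"): "b", ("q", "f"): "c"},
                       "i", "f"))
```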