
    Minimizing finite automata is computationally hard

    It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states, and the minimization can be done efficiently [6]. On the other hand, unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can also be algorithmically minimized, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they allow only a small amount of nondeterminism. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in a computationally intractable minimization problem.
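
    The efficient DFA minimization cited above is classically done by partition refinement. Below is a minimal sketch of Moore-style refinement, offered only to illustrate why the deterministic case is easy; the identifiers and the encoding of the automaton are illustrative, not the paper's.

```python
# Minimal sketch of DFA minimization by Moore-style partition refinement.
# delta maps (state, symbol) -> state; all names here are illustrative.

def minimize_dfa(states, alphabet, delta, accepting):
    # Coarsest initial partition: accepting vs. non-accepting states.
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    changed = True
    while changed:
        changed = False
        refined = []
        for block in partition:
            # Group states by which blocks their successors land in.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[(s, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            changed |= len(groups) > 1
            refined.extend(groups.values())
        partition = refined
    return partition  # each block is one state of the minimal DFA

# States 1 and 2 are equivalent, so they end up in the same block.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 1}
print(minimize_dfa({0, 1, 2}, ['a'], delta, accepting={1, 2}))
```

    The loop runs in polynomial time, in sharp contrast to the NP-complete minimization problems the paper establishes for the nearly deterministic models.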

    Visualizing Why Nondeterministic Finite-State Automata Reject

    Students find their first course in Formal Languages and Automata Theory challenging. In addition to the development of formal arguments, most students struggle to understand nondeterministic computation models, in part because the course exposes them to nondeterminism for the first time. Often, students find it difficult to understand why a nondeterministic machine accepts or rejects a word. Furthermore, they may feel uncomfortable with there being multiple computations on the same input and with a machine not consuming all of its input. This article describes a visualization tool developed to help students understand nondeterministic behavior. The tool is integrated into FSM, a domain-specific language for the Automata Theory classroom. The strategy is based on the automatic generation of computation graphs given a machine and an input word. Unlike previous visualization tools, the computation graphs generated reflect the structure of the given machine's transition relation and not the structure of the computation tree.

    Comment: Presented at The 2023 Scheme and Functional Programming Workshop (arXiv:cs/0101200)
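
    To make the idea concrete, here is a hedged sketch of how such a computation graph could be generated; this is illustrative Python of my own, not FSM's implementation, and epsilon moves are omitted.

```python
# Hedged sketch of computation-graph generation for an NFA (not FSM's
# actual code; epsilon moves omitted). A node is a configuration
# (state, position in the input); merging repeated configurations is
# what makes this a graph rather than a computation tree.

def computation_graph(delta, start, word):
    root = (start, 0)
    nodes, edges = {root}, set()
    frontier = [root]
    while frontier:
        state, pos = frontier.pop()
        if pos == len(word):
            continue  # input consumed: no outgoing edges
        for nxt in delta.get((state, word[pos]), set()):
            node = (nxt, pos + 1)
            edges.add(((state, pos), node))
            if node not in nodes:
                nodes.add(node)
                frontier.append(node)
    return nodes, edges

# NFA for words ending in 'a' (accepting state q1). On input 'ab', the
# branch guessing that the first 'a' is the last letter dies out: no
# node pairs q1 with the whole input consumed, so the machine rejects.
delta = {('q0', 'a'): {'q0', 'q1'}, ('q0', 'b'): {'q0'}}
nodes, edges = computation_graph(delta, 'q0', 'ab')
print(sorted(nodes))
```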

    Pebbling and Branching Programs Solving the Tree Evaluation Problem

    We study restricted computation models related to the Tree Evaluation Problem (TEP). The TEP was introduced in earlier work as a simple candidate for the (very) long-term goal of separating L and LogDCFL. The input to the problem is a rooted, balanced binary tree of height h, whose internal nodes are labeled with binary functions on [k] = {1,...,k} (each given simply as a list of k^2 elements of [k]), and whose leaves are labeled with elements of [k]. Each node obtains a value in [k] equal to its binary function applied to the values of its children, and the output is the value of the root. The first restricted computation model, called fractional pebbling, is a generalization of the black/white pebbling game on graphs, and arises in a natural way from the search for good upper bounds on the size of nondeterministic branching programs (BPs) solving the TEP: for any fixed h, if the binary tree of height h has fractional pebbling cost at most p, then there are nondeterministic BPs of size O(k^p) solving the height-h TEP. We prove a lower bound on the fractional pebbling cost of d-ary trees that is tight to within an additive constant for each fixed d. The second restricted computation model we study is a semantic restriction on (non)deterministic BPs solving the TEP: thrifty BPs. Deterministic (resp. nondeterministic) thrifty BPs suffice to implement the best known algorithms for the TEP, based on black (resp. fractional) pebbling. In earlier work, for each fixed h a lower bound on the size of deterministic thrifty BPs was proved that is tight for sufficiently large k. We give an alternative proof that achieves the same bound for all k. We show the same bound still holds in a less-restricted model, and also that gradually weaker lower bounds can be obtained for gradually weaker restrictions on the model.

    Comment: Written as one of the requirements for my MSc. 29 pages, 6 figures
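
    For concreteness, here is a tiny worked instance of the TEP as defined above, with an illustrative encoding of my own (the paper specifies an input format, not this code): a height-2 binary tree with k = 2, whose root function is given as a k x k table.

```python
# Tiny worked TEP instance (illustrative encoding, not the paper's input
# format): k = 2, height-2 tree, values drawn from [k] = {1, 2}.

def evaluate(node):
    """A node is an int in [k] (leaf) or (table, left, right) (internal)."""
    if isinstance(node, int):
        return node
    table, left, right = node
    # table[i-1][j-1] encodes the node's function applied to child values i, j.
    return table[evaluate(left) - 1][evaluate(right) - 1]

xor_like = [[1, 2],   # f(1,1)=1, f(1,2)=2
            [2, 1]]   # f(2,1)=2, f(2,2)=1
print(evaluate((xor_like, 1, 2)))  # root value: 2
```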

    Computation in Finitary Stochastic and Quantum Processes

    We introduce stochastic and quantum finite-state transducers as computation-theoretic models of classical stochastic and quantum finitary processes. Formal process languages, representing the distribution over a process's behaviors, are recognized and generated by suitable specializations. We characterize and compare deterministic and nondeterministic versions, summarizing their relative computational power in a hierarchy of finitary process languages. Quantum finite-state transducers and generators are a first step toward a computation-theoretic analysis of individual, repeatedly measured quantum dynamical systems. They are explored via several physical systems, including an iterated beam splitter, an atom in a magnetic field, and atoms in an ion trap, a special case of which implements the Deutsch quantum algorithm. We show that these systems' behaviors, and so their information processing capacity, depend sensitively on the measurement protocol.

    Comment: 25 pages, 16 figures, 1 table; http://cse.ucdavis.edu/~cmg; numerous corrections and updates
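
    As a point of reference for the classical side of this hierarchy, here is a minimal sketch of a stochastic finite-state generator; the encoding is an assumption of mine rather than the paper's formalism. The example machine is the two-state "golden mean" process, which never emits two 1s in a row.

```python
import random

# Minimal sketch of a classical stochastic finite-state generator (the
# encoding is an assumption, not the paper's formalism). Each state has
# a distribution over (probability, output symbol, next state) triples.

def generate(transitions, state, n, rng=random):
    out = []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for p, symbol, nxt in transitions[state]:
            acc += p
            if r < acc:
                out.append(symbol)
                state = nxt
                break
    return ''.join(out)

# The two-state "golden mean" process: a '1' is always followed by '0',
# so the generated language never contains the word '11'.
transitions = {
    'A': [(0.5, '0', 'A'), (0.5, '1', 'B')],
    'B': [(1.0, '0', 'A')],
}
print(generate(transitions, 'A', 20))
```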

    A uniform framework for modelling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalences

    Labeled transition systems are typically used as behavioral models of concurrent processes, with the labeled transitions defining a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution, rather than a single target state, with any pair of source state and transition label. The state reachability distribution is a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS (from Uniform Labeled Transition System), can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. These can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing single-step or multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models.
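
    The following sketch illustrates the ULTraS idea (names and encoding are mine, not the paper's): a single transition map sends a (state, label) pair to a function from target states to reachability values, and choosing booleans versus probabilities as the value domain recovers an LTS-like or a DTMC-like model.

```python
# Hedged sketch of the ULTraS idea (illustrative encoding): a transition
# maps (state, label) to a function assigning each target state a
# reachability value; the domain's minimum means "unreachable".

STATES = ('s0', 's1', 's2')

def reachability(trans, state, label, bottom):
    """Degree of one-step reachability of every target state."""
    dist = trans.get((state, label), {})
    return {t: dist.get(t, bottom) for t in STATES}

# Boolean weights (minimum False): fully nondeterministic, LTS-like model.
lts = {('s0', 'a'): {'s1': True, 's2': True}}
# Probability weights (minimum 0.0): fully probabilistic, DTMC-like model.
dtmc = {('s0', 'a'): {'s1': 0.3, 's2': 0.7}}

print(reachability(lts, 's0', 'a', bottom=False))
print(reachability(dtmc, 's0', 'a', bottom=0.0))
```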

    Finding Optimal Flows Efficiently

    Among the models of quantum computation, the One-way Quantum Computer is one of the most promising proposals for physical realization, and opens new perspectives for parallelization by taking advantage of quantum entanglement. Since a one-way quantum computation is based on quantum measurement, which is a fundamentally nondeterministic evolution, a sufficient condition for global determinism has been introduced as the existence of a causal flow in a graph that underlies the computation. An O(n^3) algorithm had been introduced for finding such a causal flow when the numbers of output and input vertices in the graph are equal; otherwise, no polynomial-time algorithm was known for deciding whether a graph has a causal flow. Our main contribution is an O(n^2) algorithm for finding a causal flow, if any, whatever the numbers of input and output vertices are. This answers the open question stated by Danos and Kashefi and by de Beaudrap. Moreover, we prove that our algorithm produces an optimal flow (a flow of minimal depth). Whereas the existence of a causal flow is a sufficient condition for determinism, it is not a necessary one. A weaker version of the causal flow, called gflow (generalized flow), has been introduced and proved to be a necessary and sufficient condition for a family of deterministic computations. Moreover, the depth of the quantum computation is upper bounded by the depth of the gflow. However, the existence of a polynomial-time algorithm that finds a gflow had been stated as an open question. In this paper we answer this positively with a polynomial-time algorithm that outputs an optimal gflow of a given graph and thus finds an optimal correction strategy for the nondeterministic evolution due to measurements.

    Comment: 10 pages, 3 figures
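
    As a companion to the algorithmic results, the sketch below checks one common presentation of the three defining conditions of a causal flow for a given candidate; it is a verifier, not the paper's O(n^2) search, and the level-function encoding of the partial order, the exact strictness of the conditions, and all names are assumptions of mine, since presentations vary.

```python
# Verifier for one common presentation of the causal-flow conditions
# (not the paper's O(n^2) search). The partial order is encoded, as an
# assumption, by a level function `depth`; f maps non-outputs to
# non-inputs.

def is_causal_flow(adj, inputs, outputs, f, depth):
    for v in adj:
        if v in outputs:
            continue
        w = f[v]
        if w in inputs or w not in adj[v]:   # f(v) non-input, adjacent to v
            return False
        if not depth[v] < depth[w]:          # v precedes f(v)
            return False
        for u in adj[w] - {v}:               # v precedes f(v)'s other neighbours
            if not depth[v] <= depth[u]:
                return False
    return True

# Path graph i - a - b - o with input {i} and output {o}.
adj = {'i': {'a'}, 'a': {'i', 'b'}, 'b': {'a', 'o'}, 'o': {'b'}}
f = {'i': 'a', 'a': 'b', 'b': 'o'}
depth = {'i': 0, 'a': 1, 'b': 2, 'o': 3}
print(is_causal_flow(adj, {'i'}, {'o'}, f, depth))  # True
```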

    Computing with cells: membrane systems - some complexity issues.

    Membrane computing is a branch of natural computing which abstracts computing models from the structure and functioning of the living cell. The main ingredients of membrane systems, called P systems, are (i) the membrane structure, a hierarchical arrangement of membranes which delimit compartments, where (ii) multisets of symbols, called objects, evolve according to (iii) sets of rules which are localised and associated with compartments. By applying the rules in a nondeterministic/deterministic maximally parallel manner, transitions between system configurations are obtained; a sequence of transitions constitutes a computation of the system. Various ways of controlling the transfer of objects from one membrane to another and of applying the rules, as well as possibilities to dissolve, divide or create membranes, have been studied. Membrane systems have a great potential for implementing massively concurrent systems in an efficient way that would allow us to solve currently intractable problems, once future biotechnology provides a practical bio-realization. In this paper we survey some interesting and fundamental complexity issues, such as universality vs. non-universality, determinism vs. nondeterminism, membrane and alphabet size hierarchies, characterizations of context-sensitive languages and other language classes, and various notions of parallelism.
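
    To illustrate the maximally parallel rule application mentioned above, here is a toy sketch for a single compartment; it is illustrative only (real P systems also have the membrane structure, object transfer, and membrane dissolution or division described in the survey). Rules fire repeatedly on the current multiset until none is applicable, with products becoming available only in the next configuration.

```python
import random
from collections import Counter

# Toy sketch of one maximally parallel, nondeterministic evolution step
# in a single compartment (real P systems also have membrane structure
# and object transfer). Rules fire until none is applicable; products
# only become available in the next configuration.

def step(multiset, rules, rng=random):
    """rules: list of (lhs, rhs) pairs of Counters over object names."""
    ms, products = Counter(multiset), Counter()
    while True:
        usable = [(l, r) for l, r in rules
                  if all(ms[obj] >= n for obj, n in l.items())]
        if not usable:
            break  # maximality: nothing else can fire in this step
        lhs, rhs = rng.choice(usable)  # nondeterministic rule choice
        ms -= lhs
        products += rhs
    return ms + products

rules = [(Counter({'a': 1}), Counter({'b': 2})),   # a -> bb
         (Counter({'b': 1}), Counter({'c': 1}))]   # b -> c
print(step(Counter({'a': 2, 'b': 1}), rules))      # Counter({'b': 4, 'c': 1})
```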