
    New results on classical and quantum counter automata

    We show that zero-error one-way quantum one-counter automata are more powerful than their probabilistic counterparts on promise problems. We then obtain a similar separation result between Las Vegas one-way probabilistic one-counter automata and one-way deterministic one-counter automata. We also obtain new results on classical counter automata regarding language recognition. It was conjectured that one-way probabilistic one-blind-counter automata cannot recognize the Kleene closure of the equality language [A. Yakaryilmaz: Superiority of one-way and realtime quantum machines. RAIRO - Theor. Inf. and Applic. 46(4): 615-641 (2012)]. We show that this conjecture is false, and also prove several separation results for blind/non-blind counter automata.
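
    As a point of reference for the counter models discussed above, the following is a minimal sketch, assuming the standard definition of a deterministic one-counter automaton, of how the equality language EQ = { a^n b^n : n >= 1 } can be recognized with a single counter; the function name and encoding are illustrative, not taken from the paper. A blind counter would differ in that its value cannot be inspected during the computation, only tested for zero at the end.

```python
def eq_one_counter(word: str) -> bool:
    """Sketch of a deterministic one-counter check for EQ = { a^n b^n : n >= 1 }.

    The counter is incremented on 'a' and decremented on 'b'; the word is
    accepted iff it has the shape a^+ b^+ and the counter ends at zero.
    """
    counter = 0
    seen_b = False
    for symbol in word:
        if symbol == "a":
            if seen_b:               # an 'a' after a 'b' breaks the a^n b^n shape
                return False
            counter += 1
        elif symbol == "b":
            seen_b = True
            counter -= 1
            if counter < 0:          # a blind counter could not reject early here
                return False
        else:
            return False             # symbol outside the alphabet {a, b}
    return seen_b and counter == 0


if __name__ == "__main__":
    assert eq_one_counter("aaabbb")
    assert not eq_one_counter("aabbb")
    assert not eq_one_counter("abab")
```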

    Unary probabilistic and quantum automata on promise problems

    We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error QFAs are more powerful than PFAs. However, contrary to the case of binary promise problems, the computational power of Las Vegas QFAs and bounded-error PFAs is equivalent to that of deterministic finite automata (DFAs). Lastly, we present a new family of unary promise problems with two parameters such that, when one parameter is fixed, QFAs can be exponentially more succinct than PFAs, and, when the other is fixed, PFAs can be exponentially more succinct than DFAs.
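
    To make the notion of a unary promise problem concrete, here is a small illustrative example (an assumption for exposition, not necessarily the two-parameter family of the paper): the input a^n is promised to have length a multiple of 2^{k-1}, and the task is to decide whether n is in fact a multiple of 2^k. Divisibility promises of this kind are the typical setting in which succinctness gaps between QFAs, PFAs and DFAs are exhibited.

```python
# Minimal sketch of one plausible unary promise problem (illustrative only,
# not the paper's family): the promise is that the input a^n satisfies
# n = j * 2**(k - 1) for some j >= 0, and the question is whether 2**k | n.

def promise_holds(n: int, k: int) -> bool:
    """Check the promise: n is a multiple of 2**(k-1)."""
    return n % (2 ** (k - 1)) == 0

def yes_instance(n: int, k: int) -> bool:
    """Under the promise, classify the input: YES iff 2**k divides n."""
    assert promise_holds(n, k), "input outside the promise"
    return n % (2 ** k) == 0

if __name__ == "__main__":
    k = 3
    for n in (0, 4, 8, 12, 16):          # all multiples of 2**(k-1) = 4
        print(n, yes_instance(n, k))     # YES exactly when 8 divides n
```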

    Implications of quantum automata for contextuality

    We construct zero-error quantum finite automata (QFAs) for promise problems which cannot be solved by bounded-error probabilistic finite automata (PFAs). Our results are summarized as follows:
    - There is a promise problem solvable by an exact two-way QFA in exponential expected time, but not by any bounded-error sublogarithmic space probabilistic Turing machine (PTM).
    - There is a promise problem solvable by an exact two-way QFA in quadratic expected time, but not by any bounded-error o(log log n)-space PTM in polynomial expected time. The same problem can be solved by a one-way Las Vegas (or exact two-way) QFA with quantum head in linear (expected) time.
    - There is a promise problem solvable by a Las Vegas realtime QFA, but not by any bounded-error realtime PFA. The same problem can be solved by an exact two-way QFA in linear expected time, but not by any exact two-way PFA.
    - There is a family of promise problems such that each member can be solved by a two-state exact realtime QFA, but there is no bound on the number of states of realtime bounded-error PFAs solving the members of this family.
    Our results imply that there exist zero-error quantum computational devices with a single qubit of memory that cannot be simulated by any finite-memory classical computational model. This provides a computational perspective on results regarding ontological theories of quantum mechanics [Hardy04, Montina08]. As a consequence we find that classical automata based simulation models [Kleinmann11, Blasiak13] are not sufficiently powerful to simulate quantum contextuality. We conclude by highlighting the interplay between results from automata models and their application to developing a general framework for quantum contextuality.
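
    To illustrate what a single qubit of memory can do, the following sketch simulates a realtime QFA that rotates one qubit by the angle pi/p for every symbol of a unary input; measuring at the end gives acceptance probability cos^2(n*pi/p), which equals 1 exactly when p divides n. This is the textbook rotation automaton, shown only as an illustration of the single-qubit models referred to above, not as a construction from the paper.

```python
import math
import numpy as np

def run_rotation_qfa(n: int, p: int) -> float:
    """Simulate a one-qubit realtime QFA on the unary input a^n.

    Each symbol applies a rotation by pi/p; the automaton accepts if the
    final measurement finds the qubit back in the starting state, so the
    acceptance probability is cos(n*pi/p)**2, i.e. 1 iff p divides n.
    """
    theta = math.pi / p
    rotation = np.array([[math.cos(theta), -math.sin(theta)],
                         [math.sin(theta),  math.cos(theta)]])
    state = np.array([1.0, 0.0])          # start in |0>
    for _ in range(n):
        state = rotation @ state          # one unitary step per input symbol
    return float(state[0] ** 2)           # probability of measuring |0>

if __name__ == "__main__":
    p = 5
    for n in range(11):
        print(n, round(run_rotation_qfa(n, p), 3))   # 1.0 exactly at n = 0, 5, 10
```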

    Quantum Branching Programs and Space-Bounded Nonuniform Quantum Complexity

    In this paper, the space complexity of nonuniform quantum computations is investigated. The model chosen for this is quantum branching programs, which provide a graphic description of sequential quantum algorithms. In the first part of the paper, simulations between quantum branching programs and nonuniform quantum Turing machines are presented which allow lower and upper bound results to be transferred between the two models. In the second part of the paper, different variants of quantum OBDDs are compared with their deterministic and randomized counterparts. In the third part, quantum branching programs are considered where the performed unitary operation may depend on the result of a previous measurement. For this model a simulation of randomized OBDDs and exponential lower bounds are presented.
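
    For readers unfamiliar with the model, an OBDD reads the Boolean variables once, in a fixed order, following a 0- or 1-labelled edge at each node until a constant leaf is reached. The following is a minimal deterministic sketch with a tuple encoding chosen for brevity; the encoding is an assumption, not the paper's formalism, and quantum branching programs replace these deterministic transitions by unitary operations.

```python
# Minimal deterministic OBDD sketch. Each internal node is a tuple
# (variable_index, low_child, high_child); leaves are the booleans
# False/True. Evaluation follows one root-to-leaf path, reading each
# variable at most once in the fixed order x0 < x1 < ...

from typing import Tuple, Union

Node = Union[bool, Tuple[int, "Node", "Node"]]

def evaluate(node: Node, assignment: list) -> bool:
    """Follow the branching program on the given 0/1 assignment."""
    while not isinstance(node, bool):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# OBDD for x0 XOR x1 with variable order x0 < x1.
xor_obdd: Node = (0, (1, False, True), (1, True, False))

if __name__ == "__main__":
    for a in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(a, evaluate(xor_obdd, a))
```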

    On the limits of the communication complexity technique for proving lower bounds on the size of minimal NFA’s

    In contrast to the minimization of deterministic finite automata (DFA’s), the task of constructing a minimal nondeterministic finite automaton (NFA) for a given NFA is PSPACE-complete. Moreover, there are no polynomial approximation algorithms with a constant approximation ratio for estimating the number of states of minimal NFA’s. Since the size of a minimal NFA cannot be estimated efficiently, one should at least ask for mathematical proof methods that help to prove good lower bounds on the size of a minimal NFA for a given regular language. Here we consider the robust and most successful lower bound proof technique, which is based on communication complexity. In this paper it is proved that even a strong generalization of this method fails for some concrete regular languages. “To fail” is meant here in a very strong sense: there is an exponential gap between the size of a minimal NFA and the achievable lower bound for a specific sequence of regular languages. The generalization of the concept of communication protocols is also strong here. It is shown that cutting the input word into 2^{O(n^{1/4})} pieces, for a size n of a minimal nondeterministic finite automaton, and investigating the necessary communication transfer between these pieces as parties of a multiparty protocol does not suffice to get good lower bounds on the size of minimal nondeterministic automata. It seems that for some regular languages one cannot really abstract from the automata model, which cuts the input word into particular symbols of the alphabet and reads them one by one using its input head.
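
    In its simplest one-way form, the communication-based technique referred to above is the fooling set bound: if one can exhibit n pairs (x_i, y_i) such that every x_i y_i belongs to L but, for i != j, at least one of x_i y_j and x_j y_i does not, then every NFA for L needs at least n states. The sketch below checks this condition by brute force; the concrete language L_4 and the helper names are assumptions made for illustration.

```python
from itertools import combinations
from typing import Callable, List, Tuple

def is_fooling_set(pairs: List[Tuple[str, str]],
                   in_language: Callable[[str], bool]) -> bool:
    """Check the standard fooling-set condition for an NFA size lower bound.

    If the condition holds for n pairs, every NFA for the language needs at
    least n states; this is the simplest instance of the communication-style
    lower bounds discussed in the abstract.
    """
    if not all(in_language(x + y) for x, y in pairs):
        return False
    for (x1, y1), (x2, y2) in combinations(pairs, 2):
        if in_language(x1 + y2) and in_language(x2 + y1):
            return False
    return True

if __name__ == "__main__":
    # Illustrative finite language: L_4 = { a^i b^i : 0 <= i <= 4 }.
    def in_L4(w: str) -> bool:
        i = w.count("a")
        return w == "a" * i + "b" * i and i <= 4

    pairs = [("a" * i, "b" * i) for i in range(5)]
    print(is_fooling_set(pairs, in_L4))   # True, so any NFA for L_4 needs >= 5 states
```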

    Two-Way Automata Making Choices Only at the Endmarkers

    The question of the state-size cost for the simulation of two-way nondeterministic automata (2NFAs) by two-way deterministic automata (2DFAs) was raised in 1978 and, despite many attempts, it is still open. Subsequently, the problem was attacked by restricting the power of 2DFAs (e.g., using a restricted input head movement) to the degree for which it was already possible to derive some exponential gaps between the weaker model and the standard 2NFAs. Here we use the opposite approach, increasing the power of 2DFAs to the degree for which it is still possible to obtain a subexponential conversion from the stronger model to the standard 2DFAs. In particular, it turns out that a subexponential conversion is possible for two-way automata that make nondeterministic choices only when the input head scans one of the input tape endmarkers. However, there is no restriction on the input head movement. This implies that an exponential gap between 2NFAs and 2DFAs can be obtained only for unrestricted 2NFAs using capabilities beyond the proposed new model. As an additional bonus, conversion into a machine for the complement of the original language is polynomial in this model. The same holds for making such machines self-verifying, halting, or unambiguous. Finally, any superpolynomial lower bound for the simulation of such machines by standard 2DFAs would imply L ≠ NL. In the same way, the alternating version of these machines is related to the classical computational complexity questions L =? NL =? P.
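
    To fix the basic setting, the following is a tiny simulator for a deterministic two-way automaton working on a tape delimited by endmarkers, the model whose restricted nondeterministic extension is studied above; the transition-table encoding and the toy language are assumptions made purely for illustration.

```python
# Minimal sketch of a deterministic two-way automaton with endmarkers.
# The endmarked tape is written here as '|' + w + '$'; delta maps
# (state, symbol) -> (new_state, move) with move in {-1, 0, +1}.

def run_2dfa(word, delta, start, accept, max_steps=10_000):
    tape = "|" + word + "$"
    state, pos = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape[pos])
        if key not in delta:            # undefined transition: reject
            return False
        state, move = delta[key]
        pos = max(0, min(len(tape) - 1, pos + move))
    return False                        # treat overly long runs as rejecting

if __name__ == "__main__":
    # Toy machine over {a, b}: sweep right checking that every symbol is 'a',
    # and accept upon reaching the right endmarker.
    delta = {
        ("q0", "|"): ("q0", +1),
        ("q0", "a"): ("q0", +1),
        ("q0", "$"): ("acc", 0),
    }
    print(run_2dfa("aaaa", delta, "q0", "acc"))   # True
    print(run_2dfa("aaba", delta, "q0", "acc"))   # False
```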