    Finite state verifiers with constant randomness

    We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers or as public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information where the universal player is allowed a constant number of moves equals $\mathsf{NL}$.
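
    For contrast, the conventional characterization mentioned above is easy to make concrete: a deterministic verifier for the NL-complete s-t reachability problem checks a claimed path as its certificate while remembering only the current vertex, i.e. logarithmically many bits. The following Python sketch illustrates that standard verifier; the encoding and names are our own choices, not code from the paper.

        # Deterministic log-space verification of s-t reachability:
        # the certificate is a claimed path, and the verifier's only
        # working memory is the current vertex (O(log n) bits).
        def verify_reachability(edges, s, t, certificate):
            """edges: set of (u, v) pairs; certificate: list of vertices."""
            if not certificate or certificate[0] != s:
                return False
            current = certificate[0]
            for nxt in certificate[1:]:
                if (current, nxt) not in edges:  # reject on any non-edge step
                    return False
                current = nxt                    # all that is carried forward
            return current == t

        edges = {(0, 1), (1, 2), (2, 3)}
        print(verify_reachability(edges, 0, 3, [0, 1, 2, 3]))  # True
        print(verify_reachability(edges, 0, 3, [0, 2, 3]))     # False: (0, 2) is no edge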

    Finite automata with advice tapes

    We define a model of advised computation by finite automata where the advice is provided on a separate tape. We consider several variants of the model, where the advice is deterministic or randomized, the input tape head is allowed real-time, one-way, or two-way access, and the automaton is classical or quantum. We prove several separation results among these variants, demonstrate an infinite hierarchy of language classes recognized by automata with increasing advice lengths, and establish the relationships between this model and previously studied ways of providing advice to finite automata.
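
    As a concrete illustration of the deterministic variant, a real-time automaton with an advice tape can recognize the non-regular language $\{a^n b^n\}$: the advice for inputs of length $2n$ is the string $a^n b^n$ itself, and the automaton compares the two tapes symbol by symbol. The Python sketch below renders this folklore-style construction; the function names are illustrative, not taken from the paper.

        # A real-time finite automaton with a separate advice tape.
        # The advice may depend only on the input length. With advice
        # a^n b^n for even lengths, lockstep comparison recognizes
        # {a^n b^n}, which no ordinary DFA can.
        def advice(length):
            """Deterministic advice, a function of the input length alone."""
            if length % 2 != 0:
                return None               # odd-length inputs are never members
            half = length // 2
            return "a" * half + "b" * half

        def run_with_advice(word):
            tape = advice(len(word))
            if tape is None:
                return False
            # Both heads move right in lockstep; the finite control only
            # remembers whether a mismatch has occurred.
            return all(w == a for w, a in zip(word, tape))

        print(run_with_advice("aabb"))  # True
        print(run_with_advice("abab"))  # False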

    Inkdots as advice for finite automata

    We examine inkdots placed on the input string as a way of providing advice to finite automata, and establish the relations between this model and the previously studied models of advised finite automata. We show the existence of an infinite hierarchy of classes of languages that can be recognized with the help of increasing numbers of inkdots as advice. The effects of different forms of advice on the succinctness of the advised machines are examined. We also study randomly placed inkdots as advice to probabilistic finite automata, and demonstrate the superiority of this model over its deterministic version. Even very slowly growing amounts of space become a meaningful resource if the underlying advised model is extended with access to secondary memory, whereas it is well known that such small amounts of space are not useful for unadvised one-way Turing machines.
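
    To make the model concrete: an inkdot is a permanent mark that the advisor, who sees only the input length, places on one cell of the input before the computation starts. Already a single dot helps; placed on the cell where the first $b$ of a member of $\{a^n b^n\}$ must appear, it lets a finite automaton recognize that non-regular language. The sketch below is our own illustrative rendering of this idea, not code from the paper.

        # One inkdot as advice to a finite automaton. For inputs of even
        # length 2n the advisor dots cell n, where the first 'b' of
        # a^n b^n must appear. The automaton checks: only 'a' strictly
        # before the dot, only 'b' from the dotted cell onward.
        def dot_position(length):
            """Advice: index of the single inkdot, from the length alone."""
            return length // 2 if length % 2 == 0 else None

        def run_with_inkdot(word):
            dot = dot_position(len(word))
            if dot is None:
                return False
            for i, symbol in enumerate(word):
                expected = "a" if i < dot else "b"  # the dot flips the check
                if symbol != expected:
                    return False
            return True

        print(run_with_inkdot("aaabbb"))  # True
        print(run_with_inkdot("aabbbb"))  # False: 'b' before the dotted cell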

    The RAM equivalent of P vs. RP

    One of the fundamental open questions in computational complexity is whether the class of problems solvable by use of stochasticity under the Random Polynomial time (RP) model is larger than the class of those solvable in deterministic polynomial time (P). However, this question is only open for Turing Machines, not for Random Access Machines (RAMs). Simon (1981) was able to show that for a sufficiently equipped Random Access Machine, the ability to switch states nondeterministically does not entail any computational advantage. However, in the same paper, Simon describes a different (and arguably more natural) scenario for stochasticity under the RAM model. According to Simon's proposal, instead of receiving a new random bit at each execution step, the RAM program is able to execute the pseudofunction $\textit{RAND}(y)$, which returns a uniformly distributed random integer in the range $[0, y)$. Whether the ability to draw a random integer in this fashion is more powerful than the ability to draw a random bit remained an open question for the last 30 years. In this paper, we close Simon's open problem by fully characterising the class of languages recognisable in polynomial time by each of the RAMs regarding which the question was posed. We show that for some of these, stochasticity entails no advantage, but, more interestingly, we show that for others it does.
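
    The relation between the two primitives is easy to state in code: given only fair coin flips, $\textit{RAND}(y)$ can be simulated exactly by rejection sampling, at a cost of about $\log_2 y$ bits per trial and an unbounded worst-case number of trials. On RAMs whose registers can hold very large values of $y$, this per-call cost is plausibly where the difficulty lies. The following sketch shows the generic reduction under these assumptions; it is not a construction from the paper.

        import random

        def coin():
            """One fair random bit: the weaker primitive."""
            return random.randint(0, 1)

        # Simulating RAND(y) from coin flips by rejection sampling:
        # draw enough bits to cover [0, y), retry on overshoot. Each
        # trial succeeds with probability > 1/2, so the expected number
        # of trials is below 2, but the worst case is unbounded.
        def rand_uniform(y):
            """Exact uniform sample from [0, y) using only coin()."""
            bits = max(1, (y - 1).bit_length())
            while True:
                value = 0
                for _ in range(bits):
                    value = (value << 1) | coin()
                if value < y:             # accept; otherwise resample
                    return value

        counts = [0] * 5
        for _ in range(10000):
            counts[rand_uniform(5)] += 1
        print(counts)                     # roughly uniform over 0..4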

    Unbounded-error quantum computation with small space bounds

    We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $s$ satisfying $s(n) = o(\log \log n)$. For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n) = o(\log n)$. We also give a characterization of the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity.
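
    For a feel of the unbounded-error regime, consider a measure-once real-time QFA over a unary alphabet: each input symbol rotates a single qubit by a fixed angle $\theta$, and the final measurement accepts $a^n$ with probability $\cos^2(n\theta)$, which a cutpoint of $1/2$ separates with arbitrarily small yet nonzero margin. The simulation below is a generic textbook-style sketch, not the QTM model defined in the paper.

        import math

        # Measure-once real-time QFA on the unary alphabet {a}: every 'a'
        # rotates one qubit by THETA, and a final measurement accepts
        # (outcome |0>) with probability cos^2(n * THETA). Since THETA is
        # an irrational multiple of pi, the gap around the cutpoint 1/2
        # can get arbitrarily small: unbounded-error recognition.
        THETA = math.sqrt(2)

        def acceptance_probability(n):
            """Probability that the QFA accepts the input a^n."""
            amplitude = math.cos(n * THETA)   # amplitude on the start state
            return amplitude * amplitude

        for n in range(6):
            p = acceptance_probability(n)
            verdict = "accept" if p > 0.5 else "reject"
            print(f"a^{n}: Pr[accept] = {p:.4f} -> {verdict}")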