
    A Sharp Separation of Sublogarithmic Space Complexity Classes

    We present very sharp separation results for Turing machine sublogarithmic space complexity classes, which are of the form: for any arbitrarily slowly growing, recursive, nondecreasing and unbounded function s there is a k in N and a unary language L such that L ∈ SPACE(s(n)+k) ∖ SPACE(s(n-1)). For a binary L the supposition lim s(n) = ∞ is sufficient. The witness languages differ from each language in the lower classes on infinitely many words. We use so-called demon (Turing) machines, where the tape limit is given automatically, without any construction. The results hold for deterministic and nondeterministic demon machines, and also for alternating demon machines with a constant number of alternations as well as with an unbounded number of alternations. The sharpness of the results is ensured by using a very sensitive measure of the space complexity of Turing computations, defined as the amount of tape required by the simulation of the computation in question on a fixed universal machine. As a proof tool we use a succinct diagonalization method.
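
    In standard notation, the separation claimed above for unary witness languages can be restated as follows (a paraphrase of this abstract, not a quotation from the paper):

    \[
    \forall s\ \text{recursive, nondecreasing, unbounded}\ \
    \exists k \in \mathbb{N}\ \ \exists L \subseteq \{1\}^{*}:\quad
    L \in \mathrm{SPACE}\bigl(s(n)+k\bigr) \setminus \mathrm{SPACE}\bigl(s(n-1)\bigr),
    \]

    and, as stated in the abstract, for binary witness languages the supposition \lim_{n\to\infty} s(n) = \infty is already sufficient.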

    Inkdots as advice for finite automata

    We examine inkdots placed on the input string as a way of providing advice to finite automata, and establish the relations between this model and the previously studied models of advised finite automata. The existence of an infinite hierarchy of classes of languages that can be recognized with the help of increasing numbers of inkdots as advice is shown. The effects of different forms of advice on the succinctness of the advised machines are examined. We also study randomly placed inkdots as advice to probabilistic finite automata, and demonstrate the superiority of this model over its deterministic version. Even very slowly growing amounts of space can become a resource of meaningful use if the underlying advised model is extended with access to secondary memory, while it is famously known that such small amounts of space are not useful for unadvised one-way Turing machines. Comment: 14 pages.

    Some properties of one-pebble Turing machines with sublogarithmic space

    This paper investigates some aspects of the accepting powers of deterministic, nondeterministic, and alternating one-pebble Turing machines with space bounds between log log n and log n. We first investigate the relationship between the accepting powers of two-way deterministic one-counter automata and deterministic (or nondeterministic) one-pebble Turing machines, and show that they are incomparable. We then investigate the relationship between nondeterminism and alternation, and show that there exists a language accepted by a strongly log log n space-bounded alternating one-pebble Turing machine but not accepted by any weakly o(log n) space-bounded nondeterministic one-pebble Turing machine. Finally, we investigate a space hierarchy and show that, for any one-pebble (fully) space-constructible function L(n) ≤ log n and any function L′(n) = o(L(n)), there exists a language accepted by a strongly L(n) space-bounded deterministic one-pebble Turing machine but not accepted by any weakly L′(n) space-bounded nondeterministic one-pebble Turing machine.
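
    Written out symbolically, the hierarchy at the end of this abstract reads roughly as follows; the class names are informal shorthand introduced here for strongly and weakly space-bounded deterministic and nondeterministic one-pebble Turing machines, not notation from the paper:

    \[
    L(n) \le \log n\ \text{one-pebble fully space constructible},\quad L'(n) = o(L(n))
    \ \Longrightarrow\
    \text{strong-}\mathrm{DSPACE}^{\mathrm{peb}}(L(n)) \setminus \text{weak-}\mathrm{NSPACE}^{\mathrm{peb}}(L'(n)) \neq \emptyset.
    \]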

    Unbounded-error quantum computation with small space bounds

    We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded-error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound s satisfying s(n) = o(log log n). For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for s(n) = o(log n). We also give a characterization for the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity. Comment: A preliminary version of this paper appeared in the Proceedings of the Fourth International Computer Science Symposium in Russia, pages 356--367.
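
    In symbols, the first two results can be summarized as below; PrSPACE(s) and QSPACE(s) are used here as informal shorthand (not the paper's notation) for the classes of languages recognized with unbounded error by probabilistic and quantum Turing machines within space s, and the prefix 1 marks the one-way variants:

    \[
    s(n) = o(\log\log n)\ \Longrightarrow\ \mathrm{PrSPACE}(s) \subsetneq \mathrm{QSPACE}(s),
    \qquad
    s(n) = o(\log n)\ \Longrightarrow\ \mathrm{1PrSPACE}(s) \subsetneq \mathrm{1QSPACE}(s).
    \]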

    Space hierarchy theorem revised

    We show that, for an arbitrary function h(n) and each recursive function ℓ(n) that are separated by a nondeterministically fully space-constructible function g(n), in the sense that h(n) ∈ Ω(g(n)) but ℓ(n) ∉ Ω(g(n)), there exists a unary language L in NSPACE(h(n)) that is not contained in NSPACE(ℓ(n)). The same holds for the deterministic case. The main contribution to the well-known Space Hierarchy Theorem is that (i) the language L separating the two space classes is unary (tally), (ii) the hierarchy is independent of whether h(n) or ℓ(n) are in Ω(log n) or in o(log n), (iii) the functions h(n) and ℓ(n) themselves need not be space constructible nor monotone increasing, and (iv) the hierarchy is established both for strong and weak space complexity classes. This allows us to present unary languages in such complexity classes as, for example, NSPACE(log log n · log* n) ∖ NSPACE(log log n), using a plain diagonalization.
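
    Compactly, and paraphrasing the abstract rather than quoting the paper, the nondeterministic version of the result reads:

    \[
    \begin{gathered}
    g\ \text{nondeterministically fully space constructible},\quad
    h(n) \in \Omega(g(n)),\quad \ell(n) \notin \Omega(g(n)),\quad \ell\ \text{recursive} \\
    \Longrightarrow\ \exists\, L \subseteq \{1\}^{*}:\ L \in \mathrm{NSPACE}(h(n)) \setminus \mathrm{NSPACE}(\ell(n)).
    \end{gathered}
    \]

    The unary example NSPACE(log log n · log* n) ∖ NSPACE(log log n) mentioned in the abstract corresponds to the instance h(n) = g(n) = log log n · log* n and ℓ(n) = log log n (assuming, as the example suggests, that this g is nondeterministically fully space constructible).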

    Sublinearly space bounded iterative arrays

    Iterative arrays (IAs) are a parallel computational model with sequential processing of the input. They are one-dimensional arrays of interacting identical deterministic finite automata. In this note, realtime IAs with sublinear space bounds are used to accept formal languages. The existence of a proper hierarchy of space complexity classes between logarithmic and linear space bounds is proved. Furthermore, an optimal space lower bound for non-regular language recognition is shown. Keywords: iterative arrays, cellular automata, space bounded computations, decidability questions, formal languages, theory of computation.

    One-Tape Turing Machine Variants and Language Recognition

    We present two restricted versions of one-tape Turing machines. Both characterize the class of context-free languages. In the first version, proposed by Hibbard in 1967 and called limited automata, each tape cell can be rewritten only in the first d visits, for a fixed constant d ≥ 2. Furthermore, for d = 2 deterministic limited automata are equivalent to deterministic pushdown automata, namely they characterize deterministic context-free languages. Further restricting the possible operations, we consider strongly limited automata. These models still characterize context-free languages. However, the deterministic version is less powerful than the deterministic version of limited automata. In fact, there exist deterministic context-free languages that are not accepted by any deterministic strongly limited automaton. Comment: 20 pages. This article will appear in the Complexity Theory Column of the September 2015 issue of SIGACT News.
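
    The characterizations listed in this abstract can be collected as follows, writing L(·) for the class of accepted languages; the shorthand model names inside L(·) are introduced here for readability and are not the paper's notation:

    \[
    \begin{gathered}
    \mathcal{L}(d\text{-limited}) = \mathrm{CFL}\ \text{for every fixed } d \ge 2,\qquad
    \mathcal{L}(\mathrm{det}\ 2\text{-limited}) = \mathrm{DCFL},\\
    \mathcal{L}(\text{strongly limited}) = \mathrm{CFL},\qquad
    \mathrm{DCFL} \not\subseteq \mathcal{L}(\mathrm{det}\ \text{strongly limited}).
    \end{gathered}
    \]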