
    Hierarchies of Turing machines with restricted tape alphabet size

    It is shown that for any real constants $b > a \geq 0$, multitape Turing machines operating in space $L_1(n) = [bn^r]$ can accept more sets than those operating in space $L_2(n) = [an^r]$, provided the number of work tapes and the tape alphabet size are held fixed. It is also shown that Turing machines with $k+1$ work tapes are more powerful than those with $k$ work tapes if the tape alphabet size and the amount of work space are held constant.
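
    Schematically, writing $\mathrm{SPACE}_{k,s}(f)$ for the class of sets accepted by multitape Turing machines with $k$ work tapes, tape alphabet size $s$, and work space bounded by $f(n)$ (this notation is ours, introduced only to restate the abstract), the two hierarchies read:

```latex
% Schematic restatement of the two separations; SPACE_{k,s} is our
% illustrative notation, not the paper's.
\[
  \mathrm{SPACE}_{k,s}\bigl([an^r]\bigr) \subsetneq \mathrm{SPACE}_{k,s}\bigl([bn^r]\bigr)
  \qquad \text{for all reals } b > a \geq 0,
\]
\[
  \mathrm{SPACE}_{k,s}(f) \subsetneq \mathrm{SPACE}_{k+1,s}(f)
  \qquad \text{with alphabet size } s \text{ and space bound } f \text{ held fixed.}
\]
```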

    Inkdots as advice for finite automata

    We examine inkdots placed on the input string as a way of providing advice to finite automata, and establish the relations between this model and the previously studied models of advised finite automata. We show the existence of an infinite hierarchy of classes of languages that can be recognized with the help of increasing numbers of inkdots as advice, and examine the effects of different forms of advice on the succinctness of the advised machines. We also study randomly placed inkdots as advice to probabilistic finite automata, and demonstrate the superiority of this model over its deterministic version. Even very slowly growing amounts of space can become a meaningfully useful resource if the underlying advised model is extended with access to secondary memory, although it is famously known that such small amounts of space are not useful for unadvised one-way Turing machines.
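
    As a toy illustration of the flavor of this advice model, the sketch below simulates a one-way DFA that, given a single inkdot placed at the end of the first half of every even-length input, recognizes the nonregular language $\{a^n b^n\}$. The language, the placement rule, and all identifiers are our own choices for illustration, not constructions from the paper.

```python
# Hypothetical illustration: a one-way DFA that sees, at each step, the
# current input symbol plus a flag telling it whether the current cell
# carries an inkdot. With one inkdot placed (by advice depending only on
# the input length) on the last cell of the first half, the machine
# recognizes the nonregular language {a^n b^n : n >= 1}.

def place_inkdot(length: int) -> set[int]:
    """Advice: a function of the input length alone, never of its content."""
    return {length // 2 - 1} if length % 2 == 0 and length > 0 else set()

def advised_dfa(w: str) -> bool:
    dots = place_inkdot(len(w))
    state = "A"                       # A: reading a's, B: reading b's
    for i, ch in enumerate(w):
        if state == "A":
            if ch != "a":
                return False          # a non-'a' at or before the inkdot
            if i in dots:             # the inkdot must sit on the last 'a'
                state = "B"
        else:                         # state B: only b's may follow
            if ch != "b":
                return False
    return state == "B"

assert advised_dfa("ab") and advised_dfa("aabb")
assert not advised_dfa("aab") and not advised_dfa("abab")
```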

    Multi-Head Finite Automata: Characterizations, Concepts and Open Problems

    Multi-head finite automata were introduced in (Rabin, 1964) and (Rosenberg, 1966). Since then, a vast literature on the computational and descriptional complexity of multi-head finite automata has developed, documenting the importance of these devices. Although multi-head finite automata are a simple concept, their computational behavior can already be very complex, leading to undecidable or even non-semi-decidable decision problems such as emptiness, finiteness, universality, and equivalence. These strong negative results trigger the study of subclasses and alternative characterizations of multi-head finite automata, for a better understanding of the nature of non-recursive trade-offs and, thus, of the borderline between decidable and undecidable problems. In the present paper, we tour a fragment of this literature.
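
    To see concretely how a second head adds power, the following sketch (our own illustration of a folklore construction, not code from the paper) simulates a one-way DFA with two input heads accepting the nonregular language $\{a^n b^n\}$: one head parks on the first b, then both heads advance in lockstep.

```python
# Hypothetical sketch: a one-way DFA with two input heads recognizing
# {a^n b^n : n >= 1}, which no one-head DFA can do.
# Phase 1: head 2 advances alone to the first 'b'.
# Phase 2: both heads advance in lockstep; accept iff head 1 reaches the
#          first 'b' exactly when head 2 falls off the right end.

def two_head_dfa(w: str) -> bool:
    h1, h2 = 0, 0
    # Phase 1: park head 2 on the first 'b'; the prefix must be all a's.
    while h2 < len(w) and w[h2] == "a":
        h2 += 1
    first_b = h2
    if first_b == 0 or first_b == len(w):   # no 'a' prefix, or no 'b' at all
        return False
    # Phase 2: move both heads together.
    while h2 < len(w):
        if w[h2] != "b":                    # an 'a' after the first 'b'
            return False
        h1 += 1
        h2 += 1
    return h1 == first_b                    # heads hit the boundary together

assert two_head_dfa("aabb") and not two_head_dfa("aabbb")
```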

    One-Tape Turing Machine Variants and Language Recognition

    We present two restricted versions of one-tape Turing machines. Both characterize the class of context-free languages. In the first version, proposed by Hibbard in 1967 and called limited automata, each tape cell can be rewritten only during the first $d$ visits, for a fixed constant $d \geq 2$. Furthermore, for $d = 2$, deterministic limited automata are equivalent to deterministic pushdown automata, namely they characterize the deterministic context-free languages. Further restricting the possible operations, we consider strongly limited automata. These models still characterize the context-free languages. However, their deterministic version is less powerful than the deterministic version of limited automata: in fact, there exist deterministic context-free languages that are not accepted by any deterministic strongly limited automaton.
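
    As a concrete illustration, the sketch below (ours, not code from the paper) simulates the textbook $d = 2$ strategy for the Dyck language of balanced parentheses: each closing parenthesis is marked on its first visit and the matching opening parenthesis on its second, so no cell is ever rewritten after its second visit.

```python
# Hypothetical simulation of a Hibbard-style 2-limited automaton for the
# Dyck language of balanced parentheses. Each tape cell is rewritten at
# most once, and only on its first or second visit (the d = 2 bound).

def two_limited_dyck(w: str) -> bool:
    tape = list(w)
    visits = [0] * len(tape)

    def move_to(i: int) -> None:            # count a visit to cell i
        visits[i] += 1

    def rewrite(i: int) -> None:            # rewrite respects the d = 2 bound
        assert visits[i] <= 2, "rewrite after second visit: not 2-limited"
        tape[i] = "X"

    i = 0
    while i < len(tape):
        move_to(i)
        if tape[i] == ")":                  # first visit to this cell: mark it
            rewrite(i)
            j = i - 1                       # walk left to nearest unmarked '('
            while j >= 0 and tape[j] == "X":
                move_to(j)
                j -= 1
            if j < 0:
                return False                # unmatched ')'
            move_to(j)                      # second visit to that '(': mark it
            rewrite(j)
            i = j                           # resume the rightward scan
        i += 1
    return all(c == "X" for c in tape)      # final sweep: no unmatched '('

assert two_limited_dyck("(()())") and not two_limited_dyck("(()")
```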

    Hierarchies of Inefficient Kernelizability

    The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam (STOC 2008) allows us to exclude the existence of polynomial kernels for a range of problems under reasonable complexity-theoretic assumptions. However, some issues are not addressed by this framework, including the existence of Turing kernels such as the "kernelization" of Leaf Out Branching(k) into a disjunction over n instances of size poly(k). Observing that Turing kernels are preserved by polynomial parametric transformations (PPTs), we define a kernelization hardness hierarchy, akin to the M- and W-hierarchies of ordinary parameterized complexity, via the PPT-closure of problems that seem likely to be fundamentally hard for efficient Turing kernelization. We find that several previously considered problems are complete for our fundamental hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log n.
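
    To make the distinction concrete, here is a schematic of our own (all types and function names are hypothetical) contrasting a classical many-one kernel, which outputs a single equivalent instance of size poly(k), with a disjunctive Turing kernel of the Leaf Out Branching(k) kind, which queries a solver on up to n small instances and takes the OR of the answers.

```python
# Schematic illustration (ours, not from the paper) of the structural
# difference between a classical kernel and a disjunctive Turing kernel.

from typing import Callable

Instance = tuple[str, int]                  # (encoded input x, parameter k)

def many_one_kernel(inst: Instance,
                    shrink: Callable[[Instance], Instance]) -> Instance:
    """Classical kernelization: ONE equivalent instance of size poly(k)."""
    return shrink(inst)

def turing_kernel(inst: Instance,
                  split: Callable[[Instance], list[Instance]],
                  solve_small: Callable[[Instance], bool]) -> bool:
    """Disjunctive Turing kernel: the solver is queried on up to n
    instances, EACH of size poly(k), and the answers are OR-ed; no single
    poly(k)-size equivalent instance is ever produced."""
    return any(solve_small(small) for small in split(inst))
```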

    On the possible Computational Power of the Human Mind

    The aim of this paper is to address the question: can an artificial neural network (ANN) model be used as a possible characterization of the power of the human mind? We discuss what the relationship between such a model and its natural counterpart might be. A possible characterization of the different power capabilities of the mind is suggested in terms of the information it contains (its computational complexity) or can achieve. This characterization takes advantage of recent results on natural neural networks (NNN) and on the computational power of arbitrary artificial neural networks (ANN). The possible acceptance of neural networks as the model of the human mind's operation makes the aforementioned results quite relevant.

    An in-between "implicit" and "explicit" complexity: Automata

    Implicit Computational Complexity makes two aspects implicit: it manipulates programming languages rather than models of computation, and it internalizes the bounds rather than using external measures. We survey how automata theory has contributed to complexity with a machine-dependent model with implicit bounds.