
    Program schemes with deep pushdown storage.

    Inspired by recent work of Meduna on deep pushdown automata, we consider the computational power of a class of basic program schemes based around assignments, while-loops and non-deterministic guessing, but with access to a deep pushdown stack which, apart from having the usual push and pop instructions, also has deep-push instructions that allow elements to be pushed to locations deep within the stack. We syntactically define sub-classes of this class by restricting the occurrences of pops, pushes and deep-pushes, and capture the complexity classes NP and PSPACE. Furthermore, we show that all problems accepted by program schemes of the full class are in EXPTIME.
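
    The deep pushdown store itself is easy to picture. Below is a minimal Python sketch of such a store (the class and method names are ours, not the paper's, and the exact deep-push semantics, such as insert versus overwrite and how depth is counted, may differ from the formal definition): it supports the usual push and pop at the top, plus a deep-push that places an element a given number of positions below the top.

        class DeepStack:
            """Stack with ordinary push/pop plus a deep-push operation (illustrative only)."""

            def __init__(self):
                self._items = []                 # top of the stack is the end of the list

            def push(self, x):
                self._items.append(x)

            def pop(self):
                return self._items.pop()

            def deep_push(self, depth, x):
                # depth == 0 behaves like an ordinary push;
                # larger depths insert the element further down the stack
                if depth > len(self._items):
                    raise IndexError("deep-push below the bottom of the stack")
                self._items.insert(len(self._items) - depth, x)

        s = DeepStack()
        s.push('a'); s.push('b'); s.push('c')    # stack, bottom to top: a b c
        s.deep_push(2, 'X')                      # insert two positions below the top
        assert s._items == ['a', 'X', 'b', 'c']
        assert s.pop() == 'c'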

    Multi-Head Finite Automata: Characterizations, Concepts and Open Problems

    Multi-head finite automata were introduced in (Rabin, 1964) and (Rosenberg, 1966). Since that time, a vast literature on computational and descriptional complexity issues on multi-head finite automata, documenting the importance of these devices, has been developed. Although multi-head finite automata are a simple concept, their computational behavior can already be very complex and leads to undecidable or even non-semi-decidable problems on these devices, such as emptiness, finiteness, universality, and equivalence. These strong negative results trigger the study of subclasses and alternative characterizations of multi-head finite automata for a better understanding of the nature of non-recursive trade-offs and, thus, the borderline between decidable and undecidable problems. In the present paper, we tour a fragment of this literature.
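
    As a concrete illustration of the extra power that even two one-way heads provide, the following Python sketch (our own example, not taken from the survey) simulates a deterministic one-way two-head automaton for the non-regular language { a^n b^n : n >= 0 }: the first head skips the block of a's, then both heads advance in lockstep, one over the b's and the other over the a's.

        def two_head_accepts(w):
            """One-way two-head automaton for { a^n b^n : n >= 0 } (both heads only move right)."""
            n = len(w)
            h1 = h2 = 0

            # Phase 1: head 1 skips the leading block of a's.
            while h1 < n and w[h1] == 'a':
                h1 += 1
            boundary = h1            # first position that is not an 'a'

            # Phase 2: head 1 reads b's while head 2 reads a's, in lockstep.
            while h1 < n and w[h1] == 'b' and h2 < boundary and w[h2] == 'a':
                h1 += 1
                h2 += 1

            # Accept iff head 1 consumed the whole word and head 2 consumed exactly the a-block.
            return h1 == n and h2 == boundary

        assert two_head_accepts('aaabbb')
        assert two_head_accepts('')
        assert not two_head_accepts('aabbb')
        assert not two_head_accepts('abab')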

    One-Tape Turing Machine Variants and Language Recognition

    We present two restricted versions of one-tape Turing machines. Both characterize the class of context-free languages. In the first version, proposed by Hibbard in 1967 and called limited automata, each tape cell can be rewritten only in the first d visits, for a fixed constant d ≥ 2. Furthermore, for d = 2, deterministic limited automata are equivalent to deterministic pushdown automata, namely they characterize deterministic context-free languages. Further restricting the possible operations, we consider strongly limited automata. These models still characterize context-free languages. However, the deterministic version is less powerful than the deterministic version of limited automata. In fact, there exist deterministic context-free languages that are not accepted by any deterministic strongly limited automaton. Comment: 20 pages. This article will appear in the Complexity Theory Column of the September 2015 issue of SIGACT News.
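
    The defining restriction of a d-limited automaton is a pure tape discipline: a cell may be overwritten only during its first d visits and is frozen afterwards. The following minimal Python sketch (our own illustration; it enforces the restriction but is not a full simulator of the machines discussed in the article) makes that discipline explicit.

        class LimitedTape:
            """Tape of a d-limited automaton: a cell may be rewritten only during
            its first d visits; after that it is read-only."""

            def __init__(self, word, d=2):
                self.cells = list(word)
                self.visits = [0] * len(word)
                self.d = d

            def visit(self, i, new_symbol=None):
                """Move the head onto cell i; optionally rewrite it."""
                self.visits[i] += 1
                if new_symbol is not None:
                    if self.visits[i] > self.d:
                        raise RuntimeError(f"cell {i} is frozen after its first {self.d} visits")
                    self.cells[i] = new_symbol
                return self.cells[i]

        tape = LimitedTape("(())", d=2)
        tape.visit(1)                    # first visit: just read
        tape.visit(1, new_symbol='X')    # second visit: rewriting is still allowed
        # tape.visit(1, new_symbol='Y')  # a rewrite on the third visit would raise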

    An in-between "implicit" and "explicit" complexity: Automata

    Implicit Computational Complexity makes two aspects implicit, by manipulating programming languages rather than models of computation, and by internalizing the bounds rather than using external measures. We survey how automata theory contributed to complexity with a machine-dependent model with implicit bounds.

    Number Sequence Prediction Problems for Evaluating Computational Powers of Neural Networks

    Inspired by number series tests to measure human intelligence, we suggest number sequence prediction tasks to assess neural network models' computational powers for solving algorithmic problems. We define the complexity and difficulty of a number sequence prediction task with the structure of the smallest automaton that can generate the sequence. We suggest two types of number sequence prediction problems: the number-level and the digit-level problems. The number-level problems format sequences as 2-dimensional grids of digits, and the digit-level problems provide a single digit input per time step. The complexity of a number-level sequence prediction can be defined with the depth of an equivalent combinatorial logic, and the complexity of a digit-level sequence prediction can be defined with an equivalent state automaton for the generation rule. Experiments with number-level sequences suggest that CNN models are capable of learning the compound operations of sequence generation rules, but the depths of the compound operations are limited. For the digit-level problems, simple GRU and LSTM models can solve some problems with the complexity of finite state automata. Memory-augmented models such as Stack-RNN, Attention, and Neural Turing Machines can solve the reverse-order task, which has the complexity of a simple pushdown automaton. However, none of the above can solve general Fibonacci, arithmetic or geometric sequence generation problems, which represent the complexity of queue automata or Turing machines. The results show that our number sequence prediction problems effectively evaluate machine learning models' computational capabilities. Comment: Accepted to the 2019 AAAI Conference on Artificial Intelligence.
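
    To make the task formats more concrete, here is a small Python sketch (ours, not the authors' released code; the separator and serialization conventions are assumptions) that builds a digit-level next-symbol prediction pair from an arithmetic progression: the model is fed one symbol per time step and must predict the following symbol, including digits of terms it has not yet seen.

        def arithmetic_sequence(start, step, length):
            """First `length` terms of an arithmetic progression."""
            return [start + i * step for i in range(length)]

        def digit_level_task(terms, sep=','):
            """Serialize a number sequence into a digit stream and build the
            next-symbol prediction pair: at each time step the model receives
            one symbol and must predict the following one."""
            stream = sep.join(str(t) for t in terms)
            inputs = stream[:-1]     # symbol fed at each time step
            targets = stream[1:]     # symbol to be predicted at each time step
            return inputs, targets

        x, y = digit_level_task(arithmetic_sequence(start=3, step=7, length=5))
        print(x)    # 3,10,17,24,3
        print(y)    # ,10,17,24,31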

    Input-Driven Tissue P Automata

    We introduce several variants of input-driven tissue P automata where the rules to be applied depend only on the input symbol. Both strings and multisets are considered as input objects; the strings are either read from an input tape or defined by the sequence of symbols taken in, and the multisets are given in an input cell at the beginning of a computation, enclosed in a vesicle. Additional symbols generated during a computation are stored in this vesicle, too. An input is accepted when the vesicle reaches a final cell and is empty. The computational power of some variants of input-driven tissue P automata is illustrated by examples and compared with the power of the input-driven variants of other automata, such as register machines and counter automata.
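
    The acceptance condition has a simple operational reading, which the following Python sketch (our own rough illustration, not the formal model of the paper) captures for string inputs: each input symbol selects a rule, independent of the current cell, that consumes and produces objects in the vesicle and moves the vesicle to a target cell; the input is accepted when it has been read completely and the vesicle sits, empty, in a final cell.

        from collections import Counter

        def input_driven_tp_run(word, rules, start_cell, final_cells):
            """Toy run of an input-driven tissue P automaton over a string.
            rules maps each input symbol to (consume, produce, target_cell)."""
            cell, vesicle = start_cell, Counter()
            for symbol in word:
                if symbol not in rules:
                    return False
                consume, produce, target = rules[symbol]
                if any(vesicle[o] < n for o, n in consume.items()):
                    return False                 # rule not applicable
                vesicle -= Counter(consume)      # remove objects from the vesicle
                vesicle += Counter(produce)      # add freshly generated objects
                cell = target                    # vesicle moves to the target cell
            return cell in final_cells and sum(vesicle.values()) == 0

        # Example rules: reading 'a' adds an object o to the vesicle (cell 1);
        # reading 'b' removes one o and moves the vesicle to the final cell 2.
        rules = {
            'a': ({}, {'o': 1}, 1),
            'b': ({'o': 1}, {}, 2),
        }
        assert input_driven_tp_run('aaabbb', rules, start_cell=1, final_cells={2})
        assert not input_driven_tp_run('aab', rules, start_cell=1, final_cells={2})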