
    One-Tape Turing Machine Variants and Language Recognition

    We present two restricted versions of one-tape Turing machines. Both characterize the class of context-free languages. In the first version, proposed by Hibbard in 1967 and called limited automata, each tape cell can be rewritten only during the first $d$ visits, for a fixed constant $d \geq 2$. Furthermore, for $d = 2$, deterministic limited automata are equivalent to deterministic pushdown automata, namely they characterize deterministic context-free languages. Further restricting the possible operations, we consider strongly limited automata. These models still characterize context-free languages. However, their deterministic version is less powerful than the deterministic version of limited automata. In fact, there exist deterministic context-free languages that are not accepted by any deterministic strongly limited automaton. Comment: 20 pages. This article will appear in the Complexity Theory Column of the September 2015 issue of SIGACT News.
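    To make the rewriting restriction concrete, here is a minimal sketch (my illustration, not code from the paper) of a simulator that enforces the $d$-limited rule: a cell may only be rewritten during one of its first $d$ visits, and later visits are read-only.

```python
# Sketch: enforce the d-limited rewriting rule of Hibbard's limited automata.
# A rewrite attempted after a cell's d-th visit is reported as a violation.

class LimitedAutomaton:
    def __init__(self, delta, start, accept, d=2, blank="_"):
        # delta: (state, symbol) -> (new_state, new_symbol, move), move in {-1, +1}
        self.delta, self.start, self.accept = delta, start, accept
        self.d, self.blank = d, blank

    def run(self, word, max_steps=10_000):
        tape = dict(enumerate(word))
        visits = {}                              # per-cell visit counter
        state, pos = self.start, 0
        for _ in range(max_steps):
            if state in self.accept:
                return True
            sym = tape.get(pos, self.blank)
            visits[pos] = visits.get(pos, 0) + 1
            if (state, sym) not in self.delta:
                return False                     # no move: reject
            state, new_sym, move = self.delta[(state, sym)]
            if new_sym != sym and visits[pos] > self.d:
                raise ValueError(f"cell {pos} rewritten on visit {visits[pos]} > d={self.d}")
            tape[pos] = new_sym
            pos += move
        return False

# Toy deterministic machine: scan right over a's and b's, accept at the blank.
delta = {("q0", "a"): ("q0", "a", +1),
         ("q0", "b"): ("q0", "b", +1),
         ("q0", "_"): ("qf", "_", +1)}
m = LimitedAutomaton(delta, "q0", {"qf"}, d=2)
print(m.run("aabb"))   # True
```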

    An Experiment in Ping-Pong Protocol Verification by Nondeterministic Pushdown Automata

    An experiment is described that confirms the security of a well-studied class of cryptographic protocols (Dolev-Yao intruder model) can be verified by two-way nondeterministic pushdown automata (2NPDA). A nondeterministic pushdown program checks whether the intersection of a regular language (the protocol to verify) and a given Dyck language containing all canceling words is empty. If it is not, an intruder can reveal secret messages sent between trusted users. The verification is guaranteed to terminate in cubic time at most on a 2NPDA-simulator. The interpretive approach used in this experiment simplifies the verification, by separating the nondeterministic pushdown logic and program control, and makes it more predictable. We describe the interpretive approach and the known transformational solutions, and show they share interesting features. Also noteworthy is how abstract results from automata theory can solve practical problems by programming language means.Comment: In Proceedings MARS/VPT 2018, arXiv:1803.0866
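    The core check described above, whether a regular language and a Dyck language of canceling words intersect, can be sketched as a Dyck-reachability fixpoint over the protocol automaton. The sketch below is my own naive version of this classic cubic-time saturation, not the paper's program; the automaton, alphabet, and cancellation map are illustrative assumptions.

```python
# Sketch: is the intersection of a regular language (given as an NFA) and a
# Dyck language of canceling words nonempty?  We saturate the relation C of
# state pairs (p, q) linked by a word that cancels to the empty word.

def dyck_intersection_nonempty(states, transitions, initials, finals, cancels):
    # transitions: set of (p, symbol, q); cancels: opening symbol -> closing symbol
    C = {(p, p) for p in states}                  # the empty word cancels
    changed = True
    while changed:
        changed, new = False, set()
        # concatenate two canceling segments
        for (p, q) in C:
            for (q2, r) in C:
                if q == q2 and (p, r) not in C:
                    new.add((p, r))
        # wrap a canceling segment in a matching open/close pair
        for (p, a, p2) in transitions:
            close = cancels.get(a)
            if close is None:
                continue
            for (seg_from, seg_to) in C:
                if seg_from != p2:
                    continue
                for (q2, b, q) in transitions:
                    if q2 == seg_to and b == close and (p, q) not in C:
                        new.add((p, q))
        if new:
            C |= new
            changed = True
    return any(p in initials and q in finals for (p, q) in C)

# Toy "protocol" language: e encrypts, d decrypts; "ed" cancels.
states = {0, 1, 2}
transitions = {(0, "e", 1), (1, "e", 1), (1, "d", 2), (2, "d", 2)}
print(dyck_intersection_nonempty(states, transitions, {0}, {2}, {"e": "d"}))  # True
```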

    Proofs of proximity for context-free languages and read-once branching programs

    Proofs of proximity are probabilistic proof systems in which the verifier queries only a sub-linear number of input bits, and soundness only means that, with high probability, the input is close to an accepting input. In their minimal form, called Merlin-Arthur proofs of proximity (MAP), the verifier receives, in addition to query access to the input, free access to an explicitly given short (sub-linear) proof. A more general notion is that of an interactive proof of proximity (IPP), in which the verifier is allowed to interact with an all-powerful, yet untrusted, prover. MAPs and IPPs may be thought of as the NP and IP analogues of property testing, respectively.
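    As a toy illustration of the proximity-style soundness notion (query access plus "close to an accepting input"), the sketch below tests closeness to a single fixed accepting string. This is plain property testing, not the paper's MAP or IPP constructions; a MAP verifier would additionally read a short proof, and an IPP verifier would interact with a prover.

```python
# Toy proximity tester: read O(1/eps) random positions of the input and compare
# them to a fixed accepting string `target`.  If the input is eps-far from
# `target` in Hamming distance, it is rejected with probability >= 1 - (1-eps)**k.

import math
import random

def proximity_test(query, n, target, eps=0.1):
    """query(i) returns the i-th input bit; only O(1/eps) positions are read."""
    k = math.ceil(2 / eps)
    for _ in range(k):
        i = random.randrange(n)
        if query(i) != target[i]:
            return False          # found a disagreement: reject
    return True                   # input is likely eps-close to target

target = "0110" * 250                       # n = 1000
far_input = "1" * 1000                      # disagrees with target on half the positions
print(proximity_test(lambda i: target[i], 1000, target))     # True
print(proximity_test(lambda i: far_input[i], 1000, target))  # almost surely False
```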

    Distributing Labels on Infinite Trees

    Sturmian words are infinite binary words with many equivalent definitions: they have minimal factor complexity among all aperiodic sequences; they are balanced sequences (the labels 0 and 1 are distributed as evenly as possible); and they can be constructed using a mechanical definition. All these properties make them good candidates for being extremal points in scheduling problems over two processors. In this paper, we consider the problem of generalizing Sturmian words to trees, that is, of distributing the labels 0 and 1 as evenly as possible over infinite trees. We show that (strongly) balanced trees exist and can also be constructed using a mechanical process, as long as the tree is irrational. Such trees also have minimal factor complexity. They therefore bring the hope that the extremal scheduling properties of Sturmian words can be extended to such trees, at least partially. We illustrate such possible extensions with one example. Comment: 30 pages, uses pgf/tikz
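    For reference, the mechanical definition mentioned above is easy to state and compute: for an irrational slope $0 < \alpha < 1$ and intercept $\rho$, the lower mechanical word is $s(n) = \lfloor (n+1)\alpha + \rho \rfloor - \lfloor n\alpha + \rho \rfloor$. The short sketch below (my example, not the paper's tree construction) generates such a word; the choice of slope is an illustrative assumption.

```python
# Sketch: generate a Sturmian word from its mechanical definition.
#   s(n) = floor((n + 1) * alpha + rho) - floor(n * alpha + rho)
# For irrational alpha, this is a balanced, aperiodic binary sequence.

import math

def mechanical_word(alpha, rho=0.0, length=20):
    return [math.floor((n + 1) * alpha + rho) - math.floor(n * alpha + rho)
            for n in range(length)]

# Slope 2 - golden ratio; this slope yields the Fibonacci word (up to indexing).
alpha = 2 - (1 + math.sqrt(5)) / 2
print("".join(map(str, mechanical_word(alpha, length=20))))
# -> 00100101001001010010
```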

    Transformers Learn Shortcuts to Automata

    Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine. However, Transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. This raises the question: what solutions are learned by these shallow and non-recurrent models? We find that a low-depth Transformer can represent the computations of any finite-state automaton (thus, any bounded-memory algorithm), by hierarchically reparameterizing its recurrent dynamics. Our theoretical results characterize shortcut solutions, whereby a Transformer with $o(T)$ layers can exactly replicate the computation of an automaton on an input sequence of length $T$. We find that polynomial-sized $O(\log T)$-depth solutions always exist; furthermore, $O(1)$-depth simulators are surprisingly common, and can be understood using tools from Krohn-Rhodes theory and circuit complexity. Empirically, we perform synthetic experiments by training Transformers to simulate a wide variety of automata, and show that shortcut solutions can be learned via standard training. We further investigate the brittleness of these solutions and propose potential mitigations.
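    A minimal sketch of the observation behind the $O(\log T)$-depth shortcuts, in my own illustration rather than the paper's construction: each input symbol acts on the state set as a function, these functions compose associatively, and so the state after every prefix can be computed by a divide-and-conquer prefix scan of depth $O(\log T)$ instead of $T$ sequential steps.

```python
# Sketch: simulate a finite-state automaton with a log-depth prefix scan over
# composed transition maps, and check it against the sequential simulation.

def compose(f, g):
    """Apply f first, then g (both tuples mapping state i -> next state)."""
    return tuple(g[f[s]] for s in range(len(f)))

def prefix_scan(maps):
    """All prefix compositions of maps, computed with O(log T) recursion depth."""
    if len(maps) == 1:
        return [maps[0]]
    mid = len(maps) // 2
    left, right = prefix_scan(maps[:mid]), prefix_scan(maps[mid:])
    carry = left[-1]                       # composition of the whole first half
    return left + [compose(carry, r) for r in right]

# Parity automaton over {0, 1}: states {0, 1}; the symbol 1 flips the state.
transition = {"0": (0, 1), "1": (1, 0)}    # symbol -> (next from state 0, from state 1)
word = "1101001"
states_parallel = [f[0] for f in prefix_scan([transition[c] for c in word])]

# Sequential reference simulation.
state, states_sequential = 0, []
for c in word:
    state = transition[c][state]
    states_sequential.append(state)
print(states_parallel == states_sequential)   # True
```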