
    Unbounded-error quantum computation with small space bounds

    We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded-error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $s$ satisfying $s(n) = o(\log \log n)$. For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n) = o(\log n)$. We also give a characterization of the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity.
    Comment: A preliminary version of this paper appeared in the Proceedings of the Fourth International Computer Science Symposium in Russia, pages 356--367, 200
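
    The unbounded-error acceptance criterion used above can be made concrete with a small simulation. The sketch below is a hypothetical 2-state real-time QFA with a restricted (end-of-input) measurement; the rotation angle, alphabet, and accepting set are illustrative assumptions, not a construction from the paper. It computes the acceptance probability of an input and applies the cutpoint-1/2 rule that defines unbounded-error recognition.

        import numpy as np

        def rotation(theta):
            # Real unitary acting on a 2-dimensional state space.
            return np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])

        # Hypothetical single-letter QFA: one unitary per input symbol (assumed values).
        unitaries = {"a": rotation(np.pi * np.sqrt(2))}
        accepting = [0]                     # indices of accepting basis states
        start = np.array([1.0, 0.0])        # initial state |q0>

        def acceptance_probability(word):
            state = start
            for symbol in word:
                state = unitaries[symbol] @ state
            # Restricted measurement: project onto accepting states at the end only.
            return float(sum(abs(state[i]) ** 2 for i in accepting))

        def accepts_unbounded_error(word, cutpoint=0.5):
            # Unbounded-error recognition: accept iff the probability exceeds
            # the cutpoint; no gap between accept and reject probabilities is required.
            return acceptance_probability(word) > cutpoint

        if __name__ == "__main__":
            for n in range(6):
                w = "a" * n
                print(n, round(acceptance_probability(w), 4), accepts_unbounded_error(w))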

    One-Tape Turing Machine Variants and Language Recognition

    We present two restricted versions of one-tape Turing machines. Both characterize the class of context-free languages. In the first version, proposed by Hibbard in 1967 and called limited automata, each tape cell can be rewritten only during the first $d$ visits, for a fixed constant $d \geq 2$. Furthermore, for $d = 2$, deterministic limited automata are equivalent to deterministic pushdown automata, namely they characterize deterministic context-free languages. Further restricting the possible operations, we consider strongly limited automata. These models still characterize context-free languages. However, the deterministic version is less powerful than the deterministic version of limited automata. In fact, there exist deterministic context-free languages that are not accepted by any deterministic strongly limited automaton.
    Comment: 20 pages. This article will appear in the Complexity Theory Column of the September 2015 issue of SIGACT News
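
    A minimal way to see the rewrite restriction is to simulate a one-tape machine while counting visits per cell and flagging any rewrite attempted after the d-th visit. The sketch below is an assumption-level simulator for exposition only; the transition-table format and the toy d = 2 machine are hypothetical and are not Hibbard's construction.

        # Illustrative simulator for the d-limited restriction: a cell may be
        # rewritten only during the first d visits; afterwards it is read-only.

        def run_limited_automaton(delta, input_string, d=2, blank="_", max_steps=10_000):
            tape = dict(enumerate(input_string))
            visits = {}                      # cell index -> number of visits so far
            state, head = "q0", 0
            for _ in range(max_steps):
                visits[head] = visits.get(head, 0) + 1
                symbol = tape.get(head, blank)
                if (state, symbol) not in delta:
                    return state == "q_accept"
                new_state, write, move = delta[(state, symbol)]
                if write != symbol and visits[head] > d:
                    raise ValueError(f"rewrite of cell {head} on visit {visits[head]} > d={d}")
                tape[head] = write
                state, head = new_state, head + (1 if move == "R" else -1)
            raise RuntimeError("step limit exceeded")

        # Toy machine: mark every 'a' as 'X' on the first (and only) visit,
        # then accept at the first blank. Purely illustrative.
        delta = {
            ("q0", "a"): ("q0", "X", "R"),
            ("q0", "_"): ("q_accept", "_", "R"),
        }
        print(run_limited_automaton(delta, "aaaa"))   # True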

    Inkdots as advice for finite automata

    We examine inkdots placed on the input string as a way of providing advice to finite automata, and establish the relations between this model and the previously studied models of advised finite automata. We show the existence of an infinite hierarchy of classes of languages that can be recognized with the help of increasing numbers of inkdots as advice. The effects of different forms of advice on the succinctness of the advised machines are examined. We also study randomly placed inkdots as advice to probabilistic finite automata, and demonstrate the superiority of this model over its deterministic version. Even very slowly growing amounts of space can become a meaningful resource if the underlying advised model is extended with access to secondary memory, whereas it is well known that such small amounts of space are not useful for unadvised one-way Turing machines.
    Comment: 14 pages
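
    A small illustration of the flavor of inkdot advice (an example chosen for exposition, not taken from the paper): a single inkdot, placed by the advice function at the midpoint of each even-length input, lets a finite automaton recognize the nonregular language a^n b^n. As required of advice, the inkdot placement depends only on the input length, not on its contents.

        # Illustrative example: one inkdot as length-dependent advice lets a
        # finite automaton recognize the nonregular language { a^n b^n }.

        def advice(length):
            # Inkdot positions for inputs of this length (advice sees only the length).
            return {length // 2 - 1} if length > 0 and length % 2 == 0 else set()

        def advised_dfa_accepts(word):
            dots = advice(len(word))
            if not word:
                return True                  # a^0 b^0
            if not dots:
                return False                 # odd length cannot be a^n b^n
            state = "A"                      # reading a's, inkdot not yet seen
            for i, ch in enumerate(word):
                if state == "A":
                    if ch != "a":
                        return False
                    if i in dots:
                        state = "B"          # the last 'a' carries the inkdot
                elif state == "B":
                    if ch != "b":
                        return False
            return state == "B"

        for w in ["aabb", "aaabbb", "aabbb", "abab", ""]:
            print(w or "(empty)", advised_dfa_accepts(w))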

    Finite state verifiers with constant randomness

    We give a new characterization of $\mathsf{NL}$ as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite-state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers or public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to $O(\log n)$-space bounded games of incomplete information, where the universal player is allowed a constant number of moves, equals $\mathsf{NL}$.
    Comment: 17 pages. An improved version
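
    A schematic restatement of the main characterization may help fix the quantifier structure. The error bound $\varepsilon$ and the exact completeness/soundness conditions below are a plausible formalization of the abstract's wording ("verified with small error"), not a quotation of the paper's definition.

        \[
          \mathsf{NL} \;=\; \left\{\, L \;\middle|\;
            \begin{array}{l}
              \text{there is a finite-state verifier } V \text{ using } O(1) \text{ random bits,}\\
              \text{running in polynomial time, and a constant } \varepsilon < \tfrac12 \text{ such that}\\
              x \in L \;\Rightarrow\; \exists\, c:\ \Pr[V(x,c) \text{ accepts}] \ge 1-\varepsilon,\\
              x \notin L \;\Rightarrow\; \forall\, c:\ \Pr[V(x,c) \text{ accepts}] \le \varepsilon
            \end{array}
          \right\}
        \]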

    On the possible Computational Power of the Human Mind

    The aim of this paper is to address the question: can an artificial neural network (ANN) model be used as a possible characterization of the power of the human mind? We discuss what the relationship between such a model and its natural counterpart might be. A possible characterization of the different power capabilities of the mind is suggested in terms of the information it contains or can achieve, measured by its computational complexity. This characterization takes advantage of recent results on natural neural networks (NNN) and on the computational power of arbitrary artificial neural networks (ANN). The possible acceptance of neural networks as a model of the human mind's operation makes these results quite relevant.
    Comment: Complexity, Science and Society Conference, 2005, University of Liverpool, UK. 23 pages

    Quantum computation with devices whose contents are never read

    In classical computation, a "write-only memory" (WOM) is little more than an oxymoron, and the addition of a WOM to a (deterministic or probabilistic) classical computer brings no advantage. We prove that quantum computers augmented with a WOM can solve problems that neither a classical computer with a WOM nor a quantum computer without one can solve, when all other resource bounds are equal. We focus on real-time quantum finite automata, and examine the increase in their power effected by the addition of WOMs with different access modes and capacities. Some problems that are unsolvable by two-way probabilistic Turing machines using sublogarithmic amounts of read/write memory are shown to be solvable by these enhanced automata.
    Comment: 32 pages; a preliminary version of this work was presented at the 9th International Conference on Unconventional Computation (UC 2010)
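
    The classical half of the claim can be illustrated with a toy simulation (an assumption-level sketch, not a construction from the paper): because a write-only tape never feeds back into the transition function, deleting it cannot change which inputs a deterministic machine accepts. The machine and its WOM output alphabet below are hypothetical.

        # Toy illustration: a deterministic finite automaton augmented with a
        # write-only tape. The tape receives output but is never read, so the
        # accept/reject decision is identical with or without it.

        def run_dfa_with_wom(delta, start, accepting, word, use_wom=True):
            state, wom_tape = start, []
            for symbol in word:
                state, wom_symbol = delta[(state, symbol)]
                if use_wom:
                    wom_tape.append(wom_symbol)   # written, never read
            return state in accepting, wom_tape

        # Hypothetical machine: parity of 1s, writing a running mark to the WOM.
        delta = {
            ("even", "0"): ("even", "e"), ("even", "1"): ("odd", "o"),
            ("odd", "0"):  ("odd", "o"),  ("odd", "1"):  ("even", "e"),
        }
        for w in ["1101", "1100"]:
            with_wom, tape = run_dfa_with_wom(delta, "even", {"even"}, w, use_wom=True)
            without_wom, _ = run_dfa_with_wom(delta, "even", {"even"}, w, use_wom=False)
            assert with_wom == without_wom        # the WOM never changes the outcome
            print(w, with_wom, "".join(tape))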

    Towards a more natural and intelligent interface with embodied conversation agent

    Conversational agents, also known as chatterbots, are computer programs designed to converse like a human as far as their intelligence allows. In many ways, they are the embodiment of Turing's vision. The ability of computers to converse with human users in natural language would arguably increase their usefulness. Recent advances in Natural Language Processing (NLP) and Artificial Intelligence (AI) in general have moved this field closer to realizing the vision of a more humanoid interactive system. This paper presents and discusses the use of an embodied conversational agent (ECA) for imitation games. It also presents the technical design of our ECA and its performance. In the interactive media industry, it can also be observed that ECAs are becoming popular.