
    Turing machines based on unsharp quantum logic

    In this paper, we consider Turing machines based on unsharp quantum logic. For a lattice-ordered quantum multiple-valued (MV) algebra E, we introduce E-valued non-deterministic Turing machines (ENTMs) and E-valued deterministic Turing machines (EDTMs). We distinguish the E-valued recursively enumerable languages obtained from width-first recognition from those obtained from depth-first recognition, and find that width-first recognition is in general no more powerful than depth-first recognition; the two coincide only when the underlying truth-value lattice E degenerates into an MV algebra. We also study variants of ENTMs: ENTMs with a classical initial state and ENTMs with a classical final state have the same power as ENTMs with quantum initial and final states, and the latter can be simulated by ENTMs with classical transitions under a certain condition. Using these results, we prove that ENTMs and EDTMs are not equivalent: ENTMs are strictly more powerful than EDTMs. This is a notable difference from classical Turing machines.
    Comment: In Proceedings QPL 2011, arXiv:1210.029
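    To make the recognition semantics concrete, the following is a toy sketch (ours, not the paper's construction) of a lattice-valued automaton over the MV algebra ([0, 1], min, max). Under depth-first recognition, the acceptance degree of an input is the join (max), over all runs, of the meet (min) of the truth values along the run; per the abstract's result, over a non-distributive lattice E the width-first value can fall strictly below this. All names here are illustrative.

    # Toy lattice-valued NFA over the MV algebra ([0,1], min, max).
    # delta[(state, symbol)] maps each successor state to the truth
    # value of that transition.
    delta = {
        ("q0", "a"): {"q0": 0.9, "q1": 0.4},
        ("q1", "a"): {"q1": 0.7},
    }
    initial = {"q0": 1.0}   # truth value of being an initial state
    final = {"q1": 1.0}     # truth value of being a final state

    def accept_depth_first(word):
        """Join (max) over all runs of the meet (min) along each run."""
        best = 0.0
        def run(state, i, acc):
            nonlocal best
            if i == len(word):
                best = max(best, min(acc, final.get(state, 0.0)))
                return
            for nxt, v in delta.get((state, word[i]), {}).items():
                run(nxt, i + 1, min(acc, v))
        for s, v in initial.items():
            run(s, 0, v)
        return best

    print(accept_depth_first("aa"))   # 0.4, e.g. via the run q0 -> q0 -> q1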

    Bird's-eye view on Noise-Based Logic

    Noise-based logic is a practically deterministic logic scheme, inspired by the randomness of neural spikes, that uses a system of uncorrelated stochastic processes and their superpositions to represent logic states. We briefly discuss several questions: (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation with a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum number generators are superior to classical (thermal) noise-based generators is nothing but a myth.
    Comment: paper in press
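    As a minimal numerical sketch of the core representation idea (ours, not the paper's construction; the Gaussian carriers, threshold, and names are assumptions), independent noise processes serve as reference carriers for the logic values, a logic state is a superposition of carriers, and decoding correlates the signal against each reference:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000  # samples per clock period (illustrative)

    # One independent (hence nearly uncorrelated) reference noise
    # carrier per logic value.
    ref = {0: rng.standard_normal(N), 1: rng.standard_normal(N)}

    def encode(bits):
        """Superpose the carriers of all asserted logic values."""
        return sum(ref[b] for b in bits)

    def decode(signal):
        """A value is present if its correlator output is large."""
        return {b for b in ref if np.dot(signal, ref[b]) / N > 0.5}

    print(decode(encode({1})))      # {1}
    print(decode(encode({0, 1})))   # {0, 1}: both carriers superposed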

    A probabilistic model of computing with words

    Computing in the traditional sense involves inputs that are strings of numbers and symbols rather than words, where a word means a probability distribution over the input alphabet and thus differs from the words of classical formal languages and automata theory. In this paper our goal is to deal with probabilistic finite automata (PFAs), probabilistic Turing machines (PTMs), and probabilistic context-free grammars (PCFGs) whose inputs are strings of words (probability distributions). Specifically, (i) we verify that PFAs computing strings of words can be implemented by calculating strings of symbols (Theorem 1); (ii) we elaborate on PTMs with input strings of words, and demonstrate by Example 2 that PTMs computing strings of words may not be directly performed by computing strings of symbols alone, i.e., Theorem 1 may not hold for PTMs; (iii) we study PCFGs and, in particular, probabilistic regular grammars (PRGs) with input strings of words, and prove that Theorem 1 does hold for PCFGs and PRGs (Theorem 2); a characterization of PRGs in terms of PFAs, and the equivalence between PCFGs and their Chomsky and Greibach normal forms in the sense that the inputs are strings of words, are also presented. Finally, the main results obtained are summarized, and a number of related issues for further study are raised.
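    The PFA case of Theorem 1 can be pictured with a small sketch (ours; the matrices and names are illustrative assumptions, not the paper's code): since a word is a probability distribution over the alphabet, its effective transition matrix is the convex mixture of the symbol matrices, so running the PFA on a string of words reduces to the same matrix products a symbol-level computation realizes.

    import numpy as np

    # A two-state PFA over the alphabet {a, b}; each symbol has a
    # row-stochastic transition matrix.
    M = {
        "a": np.array([[0.8, 0.2], [0.1, 0.9]]),
        "b": np.array([[0.3, 0.7], [0.6, 0.4]]),
    }
    init = np.array([1.0, 0.0])    # initial state distribution
    final = np.array([0.0, 1.0])   # accepting-state indicator

    def matrix_of_word(dist):
        """A word (distribution over symbols) acts as the convex
        mixture of the symbol matrices."""
        return sum(p * M[s] for s, p in dist.items())

    def accept_prob(words):
        v = init
        for w in words:
            v = v @ matrix_of_word(w)
        return float(v @ final)

    # A string of two words: an even mix of a and b, then pure a.
    print(accept_prob([{"a": 0.5, "b": 0.5}, {"a": 1.0}]))   # 0.515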