    Making Classical Ground State Spin Computing Fault-Tolerant

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at non-zero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits, we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques from fault-tolerant classical computing, and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.
    Comment: 24 pages, 1 figure
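    As a toy illustration of why non-zero temperature spoils the primitive model (this is our own sketch, not the paper's construction), one can model each spin "gate" as having a fixed energy penalty for violating its constraint, giving a Boltzmann-distributed per-gate error rate that vanishes as the temperature drops:

    ```python
    import math

    def gate_error_probability(delta_e: float, temperature: float) -> float:
        """Boltzmann probability that a single two-level spin 'gate' sits in
        its excited (erroneous) state rather than its ground state.

        delta_e     -- energy penalty for violating the gate constraint
        temperature -- in units where k_B = 1
        """
        if temperature <= 0:
            return 0.0  # at T = 0 only the ground state (the correct history) survives
        return 1.0 / (1.0 + math.exp(delta_e / temperature))

    # Per-gate errors grow with temperature; fault-tolerance techniques are
    # needed to keep the whole computation reliable below some critical T.
    p_cold = gate_error_probability(1.0, 0.1)   # far below delta_e: tiny error rate
    p_hot = gate_error_probability(1.0, 10.0)   # far above delta_e: nearly fifty-fifty
    ```

    Without error correction, a circuit of depth D succeeds with probability roughly (1 - p)^D, which decays exponentially, hence the need for the fault-tolerance construction the abstract describes.
    
    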

    Non-causal computation

    Computation models such as circuits describe sequences of computation steps that are carried out one after the other. In other words, algorithm design is traditionally subject to the restriction imposed by a fixed causal order. We address a novel computing paradigm beyond quantum computing, replacing this assumption by mere logical consistency: We study non-causal circuits, where a fixed time structure within a gate is locally assumed whilst the global causal structure between the gates is dropped. We present examples of logically consistent non-causal circuits outperforming all causal ones; they imply that suppressing loops entirely is more restrictive than just avoiding the contradictions they can give rise to. That fact is already known for correlations as well as for communication, and we here extend it to computation.
    Comment: 6 pages, 4 figures

    Lower Bounds for RAMs and Quantifier Elimination

    We are considering RAMs $N_n$, with wordlength $n=2^d$, whose arithmetic instructions are the arithmetic operations multiplication and addition modulo $2^n$, the unary function $\min\lbrace 2^{x}, 2^{n}-1\rbrace$, the binary functions $\lfloor x/y\rfloor$ (with $\lfloor x/0\rfloor = 0$), $\max(x,y)$, $\min(x,y)$, and the boolean vector operations $\wedge,\vee,\neg$ defined on $0,1$ sequences of length $n$. It also has the other standard RAM instructions. The size of the memory is restricted only by the address space, that is, it is $2^n$ words. The RAMs have a finite instruction set; each instruction is encoded by a fixed natural number independently of $n$. Therefore a program $P$ can run on each machine $N_n$, if $n=2^d$ is sufficiently large. We show that there exist an $\epsilon>0$ and a program $P$ satisfying the following two conditions. (i) For all sufficiently large $n=2^d$, if $P$ running on $N_n$ gets an input consisting of two words $a$ and $b$, then, in constant time, it gives a $0,1$ output $P_n(a,b)$. (ii) Suppose that $Q$ is a program such that for each sufficiently large $n=2^d$, if $Q$, running on $N_n$, gets a word $a$ of length $n$ as an input, then it decides whether there exists a word $b$ of length $n$ such that $P_n(a,b)=0$. Then, for infinitely many positive integers $d$, there exists a word $a$ of length $n=2^d$, such that the running time of $Q$ on $N_n$ at input $a$ is at least $\epsilon (\log d)^{\frac{1}{2}} (\log \log d)^{-1}$.
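    To make the machine model concrete, the word operations of $N_n$ listed above can be sketched as follows (a minimal illustration; the operation names are ours, not the paper's):

    ```python
    def make_ops(n: int):
        """Word operations of the RAM N_n on n-bit words (values 0 .. 2^n - 1)."""
        mask = (1 << n) - 1  # 2^n - 1, the largest representable word
        return {
            "add": lambda x, y: (x + y) & mask,            # addition mod 2^n
            "mul": lambda x, y: (x * y) & mask,            # multiplication mod 2^n
            "exp2": lambda x: mask if x >= n else 1 << x,  # min(2^x, 2^n - 1)
            "div": lambda x, y: x // y if y else 0,        # floor(x / y), with x/0 = 0
            "max": max,
            "min": min,
            "and": lambda x, y: x & y,                     # boolean vector operations
            "or": lambda x, y: x | y,                      # on 0,1 sequences of length n
            "not": lambda x: ~x & mask,
        }

    ops = make_ops(8)  # n = 8 = 2^3, i.e. d = 3
    ```

    Note the saturating behavior of `exp2` and the convention that division by zero yields zero; both are exactly as specified in the abstract.
    
    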

    Empirical Encounters with Computational Irreducibility and Unpredictability

    There are several forms of irreducibility in computing systems, ranging from undecidability to intractability to nonlinearity. This paper is an exploration of the conceptual issues that have arisen in the course of investigating speed-up and slowdown phenomena in small Turing machines. We present the results of a test that may spur experimental approaches to the notion of computational irreducibility. The test involves a systematic attempt to outrun the computation of a large number of small Turing machines (all 3 and 4 state, 2 symbol) by means of integer sequence prediction using a specialized function finder program. This massive experiment prompts an investigation into rates of convergence of decision procedures and the decidability of sets, in addition to a discussion of the (un)predictability of deterministic computing systems in practice. We think this investigation constitutes a novel approach to the discussion of an epistemological question in the context of a computer simulation, and thus represents an interesting exploration at the boundary between philosophical concerns and computational experiments.
    Comment: 18 pages, 4 figures
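    The kind of small 2-symbol machine the experiment enumerates can be simulated in a few lines. The sketch below (our own illustration, not the authors' code) runs the well-known 2-state, 2-symbol busy beaver, the machine of that size whose behavior is hardest to "outrun" with a shortcut prediction:

    ```python
    def run_tm(table, max_steps=10_000):
        """Simulate a 2-symbol Turing machine on an initially blank tape.

        table maps (state, symbol) -> (write, move, next_state), where move
        is +1 (right) or -1 (left) and next_state 'H' halts the machine.
        Returns (steps, ones_on_tape), or None if no halt within max_steps.
        """
        tape, pos, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < max_steps:
            sym = tape.get(pos, 0)                   # blank cells read as 0
            write, move, state = table[(state, sym)]
            tape[pos] = write
            pos += move
            steps += 1
        return (steps, sum(tape.values())) if state == 'H' else None

    # 2-state, 2-symbol busy beaver: halts after 6 steps leaving 4 ones.
    bb2 = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
    }
    result = run_tm(bb2)
    ```

    "Outrunning" such a machine means predicting its output sequence faster than this step-by-step simulation; computational irreducibility is the conjecture that, for some machines, no such shortcut exists.
    
    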