30 research outputs found

    Quantum Random Self-Modifiable Computation

    Full text link
    Among the fundamental questions in computer science, at least two have a deep impact on mathematics. What can computation compute? How many steps does a computation require to solve an instance of the 3-SAT problem? Our work addresses the first question by introducing a new model called the ex-machine. The ex-machine executes Turing machine instructions and two special types of instructions. Quantum random instructions are physically realizable with a quantum random number generator. Meta instructions can add new states and add new instructions to the ex-machine. A countable set of ex-machines is constructed, each with a finite number of states and instructions; each ex-machine can compute a Turing incomputable language, whenever the quantum randomness measurements behave like unbiased Bernoulli trials. In 1936, Alan Turing posed the halting problem for Turing machines and proved that it is unsolvable. Consider an enumeration E_a(i) = (M_i, T_i) of all Turing machines M_i and initial tapes T_i. Does there exist an ex-machine X that has at least one evolutionary path X --> X_1 --> X_2 --> ... --> X_m, so that at the mth stage ex-machine X_m can correctly determine for 0 <= i <= m whether M_i's execution on tape T_i eventually halts? We demonstrate an ex-machine Q(x) that has one such evolutionary path. The existence of this evolutionary path suggests that David Hilbert was not misguided to propose in 1900 that mathematicians search for finite processes to help construct mathematical proofs. Our refinement is that we cannot use a fixed computer program that behaves according to a fixed set of mechanical rules. We must pursue methods that exploit randomness and self-modification so that the complexity of the program can increase as it computes. Comment: 50 pages, 3 figures
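    The construction is easiest to see in miniature. Below is a minimal sketch of an ex-machine-style interpreter, assuming a simplified instruction format invented here for illustration (the paper's actual instruction syntax differs): ordinary Turing instructions, a random instruction whose bit is drawn from Python's secrets module as a stand-in for a quantum random number generator, and a meta instruction that appends a fresh state and a fresh instruction to the running program.

    import secrets

    class ExMachine:
        def __init__(self, instructions, max_state):
            self.prog = dict(instructions)  # (state, symbol) -> action
            self.max_state = max_state
            self.tape = {}                  # sparse tape, default symbol 0

        def step(self, state, head):
            action = self.prog.get((state, self.tape.get(head, 0)))
            if action is None:
                return None                              # no instruction: halt
            if action[0] == "turing":                    # write, jump, move
                _, q, w, move = action
                self.tape[head] = w
                return q, head + move
            if action[0] == "random":                    # write a random bit
                _, q, move = action
                self.tape[head] = secrets.randbits(1)
                return q, head + move
            if action[0] == "meta":                      # self-modify
                _, move = action
                self.max_state += 1                      # add a fresh state q
                q = self.max_state
                self.prog[(q, 0)] = ("turing", q, 1, 0)  # and a fresh instruction
                return q, head + move

        def run(self, state=0, head=0, max_steps=64):
            for _ in range(max_steps):
                nxt = self.step(state, head)
                if nxt is None:
                    break
                state, head = nxt
            return state, dict(self.tape)

    # Demo: state 0 rewrites its cell with random bits; once the bit is 1,
    # the meta instruction fires, creating state 1 and jumping into it.
    m = ExMachine({(0, 0): ("random", 0, 0),
                   (0, 1): ("meta", +1)}, max_state=0)
    print(m.run())

    Because meta instructions enlarge self.prog while the machine runs, the program's size is not fixed in advance, which is the property the abstract's incomputability results lean on.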

    Collatz Computation Sequence for Sufficient Large Integers is Random

    Get PDF
    The main results in the paper are as follows: (1) We randomly select an extremely large integer and verify whether it can return to 1. The largest one verified has a length of 6000000 bits, overwhelmingly larger than the instances verified to date (e.g., 128 bits), and its Collatz computation sequence consists of 28911397 symbols 'I' and 'O', computed on an ordinary laptop. (2) We propose a dedicated algorithm that computes 3x+1 for extremely large integers at the million-bit scale by replacing multiplication with bit addition, and further with only logical condition judgements. (3) We discover that the ratio of the count of 'O' to the count of 'I' in the computation sequence goes to 1 asymptotically as the starting integer grows. (4) We further discover that once the length of the starting integer is sufficiently large, e.g., 500000 bits, the corresponding computation sequence (in which 'I' is replaced with 1 and 'O' with 0) exhibits sufficient randomness as a bit sequence. We first obtain the computation sequences of randomly selected integers of L-bit length, where L is 500000, 1000000, 2000000, 3000000, 4000000, 5000000, or 6000000, by our proposed algorithm for extremely large integers. We evaluate the randomness of all computation sequences with both NIST SP 800-22 and GM/T 0005-2021. All sequences pass the tests, and the larger the starting integer, the better the result. (5) We thus propose a random bit sequence generator that uses only logical judgement (e.g., logic gates) and fewer than 100 lines of ANSI C. The throughput of the generator is about 625.693 bits/s on an ordinary laptop with an Intel Core i7 CPU (1.8GHz)
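    The shift-and-add idea behind result (2) can be sketched directly; Python's built-in big integers stand in for the paper's dedicated million-bit algorithm, and the symbol convention (assumed here: 'I' marks an odd, 3x+1 step and 'O' an even, halving step) is our reading of the abstract.

    import secrets

    def collatz_sequence(x):
        """Return the 'I'/'O' computation sequence of x down to 1."""
        out = []
        while x != 1:
            if x & 1:                  # odd: 3x + 1 == (x << 1) + x + 1,
                x = (x << 1) + x + 1   # so multiplication becomes shift-and-add
                out.append("I")
            else:                      # even: halve with a right shift
                x >>= 1
                out.append("O")
        return "".join(out)

    # Demo on a modest 64-bit random start; the paper scales this to
    # starting integers millions of bits long.
    x0 = secrets.randbits(64) | (1 << 63) | 1   # force full length, odd
    seq = collatz_sequence(x0)
    # The abstract reports the O/I ratio tending to 1 for very large starts.
    print(len(seq), seq.count("O") / seq.count("I"))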

    Capabilities and Limitations of Infinite-Time Computation

    Get PDF
    The relatively new field of infinitary computability strives to characterize the capabilities and limitations of infinite-time computation; that is, computations of potentially transfinite length. Throughout our work, we focus on the prototypical model of infinitary computation: Hamkins and Lewis' infinite-time Turing machine (ITTM), which generalizes the classical Turing machine model in a natural way. This dissertation adopts a novel approach to this study: whereas most of the literature, starting with Hamkins and Lewis' debut of the ITTM model, pursues set-theoretic questions using a set-theoretic approach, we employ arguments that are truly computational in character. Indeed, we fully utilize analogues of classical results from finitary computability, such as the s-m-n Theorem and existence of universal machines, and for the most part, judiciously restrict our attention to the classical setting of computations over the natural numbers. In Chapter 2 of this dissertation, we state, and derive, as necessary, the aforementioned analogues of the classical results, as well as some useful constructs for ITTM programming. With this due paid, the subsequent work in Chapters 3 and 4 requires little in the way of programming, and that programming which is required in Chapter 5 is dramatically streamlined. In Chapter 3, we formulate two analogues of one of Rado's busy beaver functions from classical computability, and show, in analogy with Rado's results, that they grow faster than a wide class of infinite-time computable functions. Chapter 4 is tasked with developing a system of ordinal notations via a natural approach involving infinite-time computation, as well as an associated fast-growing hierarchy of functions over the natural numbers. We then demonstrate that the busy beaver functions from Chapter 3 grow faster than the functions which appear in a significant portion of this hierarchy. Finally, we debut, in Chapter 5, two enhancements of the ITTM model which can self-modify certain aspects of their underlying software and hardware mid-computation, and show the somewhat surprising fact that, under some reasonable assumptions, these new models of infinitary computation compute precisely the same functions as the original ITTM model
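    For orientation, the distinctive feature of the ITTM model is its behavior at limit stages. The following restates the standard Hamkins-Lewis limit rule, a known definition rather than a result of this dissertation: at a limit ordinal the head returns to the first cell, the machine enters a distinguished limit state, and each tape cell takes the lim sup of its earlier contents.

    \[
      T_\lambda(i) \;=\; \limsup_{\alpha < \lambda} T_\alpha(i) \;=\;
      \begin{cases}
        1 & \text{if } T_\alpha(i) = 1 \text{ for cofinally many } \alpha < \lambda,\\
        0 & \text{otherwise,}
      \end{cases}
    \]

    where $T_\alpha(i)$ denotes the content of cell $i$ at stage $\alpha$. Successor stages proceed exactly as in a classical Turing machine.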

    Automated Deduction – CADE 28

    Get PDF
    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions

    LURK: Lambda, the Ultimate Recursive Knowledge

    Get PDF
    We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. Our work describes the design and theory behind Lurk and details how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs
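    Content addressing is the one mechanism here that is easy to illustrate outside a SNARK. The sketch below hashes s-expression trees Merkle-style, so structurally equal programs share one address; it shows the general idea only and is not Lurk's actual scheme (Lurk commits to data with SNARK-friendly hashes rather than SHA-256).

    import hashlib

    def addr(expr):
        """Content address (hex digest) of a nested-tuple s-expression."""
        if isinstance(expr, tuple):              # interior node: hash children
            child = b"".join(bytes.fromhex(addr(e)) for e in expr)
            return hashlib.sha256(b"node:" + child).hexdigest()
        return hashlib.sha256(b"atom:" + repr(expr).encode()).hexdigest()

    # Structurally identical programs share one address, so a proof about a
    # program can be stated against its content address alone.
    p1 = ("lambda", ("x",), ("+", "x", 1))
    p2 = ("lambda", ("x",), ("+", "x", 1))
    assert addr(p1) == addr(p2)
    print(addr(p1)[:16])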

    Submicron Systems Architecture: Semiannual Technical Report

    Get PDF
    No abstract available