
    Natural Halting Probabilities, Partial Randomness, and Zeta Functions

    We introduce the zeta number, natural halting probability and natural complexity of a Turing machine, and we relate them to Chaitin's Omega number, halting probability, and program-size complexity. A classification of Turing machines according to their zeta numbers is proposed: divergent, convergent and tuatara. We prove the existence of universal convergent and tuatara machines. Various results on (algorithmic) randomness and partial randomness are proved. For example, we show that the zeta number of a universal tuatara machine is c.e. and random. A new type of partial randomness, asymptotic randomness, is introduced. Finally, we show that in contrast to classical (algorithmic) randomness--which cannot be naturally characterised in terms of plain complexity--asymptotic randomness admits such a characterisation.
    Comment: Accepted for publication in Information and Computation.
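    The abstract does not reproduce the paper's definitions, so the following Python sketch only illustrates the general shape of such quantities: a time-bounded lower approximation of a zeta-style halting sum for a toy machine. The toy machine, the weight n^{-2}, and every name below are assumptions made for illustration, not the paper's actual definitions.

```python
# Illustrative sketch only: the paper's zeta number attaches its own
# weight to each halting input; the weight n^{-2} used here is an
# assumed stand-in chosen so that the sum converges.

def toy_machine(n: int, fuel: int) -> bool:
    """A toy stand-in for a Turing machine: 'halts' on input n iff the
    Collatz orbit of n reaches 1 within `fuel` steps."""
    steps = 0
    while n != 1 and steps < fuel:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n == 1

def zeta_lower_bound(max_input: int, fuel: int) -> float:
    """Time-bounded lower approximation of a zeta-style halting sum:
    add the weight of every input observed to halt within `fuel` steps.
    Raising `max_input` and `fuel` only ever increases the estimate, so
    sums of this kind are approximable from below (c.e.), as the
    abstract notes for the zeta number of a universal tuatara machine."""
    return sum(1.0 / (n * n)
               for n in range(2, max_input + 1)
               if toy_machine(n, fuel))

print(zeta_lower_bound(10_000, 1_000))
```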

    Von Neumann Normalisation of a Quantum Random Number Generator

    In this paper we study von Neumann un-biasing normalisation for ideal and real quantum random number generators, operating on finite strings or infinite bit sequences. In the ideal cases one can obtain the desired un-biasing. This relies critically on the independence of the source, a notion we rigorously define for our model. In real cases, affected by imperfections in measurement and hardware, one cannot achieve true un-biasing, but, if the bias "drifts sufficiently slowly", the result can be arbitrarily close to unbiased. For infinite sequences, normalisation can either increase or decrease the (algorithmic) randomness of the generated sequences. A successful application of von Neumann normalisation---in fact, of any un-biasing transformation---does exactly what it promises: un-biasing, which is only one among infinitely many symptoms of randomness; it will not produce "true" randomness.
    Comment: 27 pages, 2 figures. Updated to published version.
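    The von Neumann procedure itself is classical and fully specified: read the raw bits in non-overlapping pairs, output 0 for the pair 01, output 1 for 10, and discard 00 and 11. A minimal Python sketch of the finite-string case the paper starts from (variable names are mine):

```python
import random

def von_neumann_unbias(bits):
    """Von Neumann un-biasing: scan non-overlapping pairs, emit 0 for
    the pair (0,1), emit 1 for (1,0), and discard (0,0) and (1,1).
    For an independent, identically biased source the output bits are
    exactly unbiased; as the paper stresses, independence is essential."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# A heavily biased but independent source still normalises well.
random.seed(0)
raw = [1 if random.random() < 0.9 else 0 for _ in range(100_000)]
out = von_neumann_unbias(raw)
print(len(out), sum(out) / len(out))   # far fewer bits, mean near 0.5
```

    Note the cost: a pair survives with probability 2p(1-p), so the more biased the source, the more bits are discarded.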

    Most Programs Stop Quickly or Never Halt

    Since many real-world problems arising in the fields of compiler optimisation, automated software engineering, formal proof systems, and so forth are equivalent to the Halting Problem--the most notorious undecidable problem--there is growing interest, both academic and practical, in understanding the problem better and in providing alternative solutions. Halting computations can be recognised by simply running them; the main difficulty is to detect non-halting programs. Our approach is to have the probability space extend over both space and time and to consider the probability that a random N-bit program has halted by a random time. We postulate an a priori computable probability distribution on all possible runtimes and we prove that, given an integer k>0, we can effectively compute a time bound T such that the probability that an N-bit program will eventually halt, given that it has not halted by T, is smaller than 2^{-k}. We also show that the set of halting programs (which is computably enumerable, but not computable) can be written as a disjoint union of a computable set and a set of effectively vanishing probability. Finally, we show that "long" runtimes are effectively rare. More formally, the set of times at which an N-bit program can stop after time 2^{N+constant} has effectively zero density.
    Comment: Shortened abstract and changed format of references to match Adv. Appl. Math guidelines.
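    The distributional claim is easy to observe experimentally. The following Python sketch is my own construction, not the paper's machine model: it samples random programs over a tiny one-register instruction set, runs each with a step budget, and records halting times. Most sampled programs halt within a handful of steps, while the remainder exhaust the budget as candidate non-halters.

```python
# Empirical sketch (assumed toy model, not the paper's): random programs
# over a one-register instruction set, each run with a step budget.
import random

def run(prog, budget):
    """Return the step at which `prog` halts, or None if it is still
    running when the budget is exhausted (a candidate non-halter)."""
    reg, pc = 0, 0
    for step in range(budget):
        if pc < 0 or pc >= len(prog):
            return step                      # falling off the end halts
        op, arg = prog[pc]
        if op == "INC":
            reg, pc = reg + 1, pc + 1
        elif op == "DEC":
            reg, pc = max(0, reg - 1), pc + 1
        elif op == "JNZ":                    # relative jump if reg != 0
            pc = pc + arg if reg != 0 else pc + 1
        else:                                # "HALT"
            return step
    return None

random.seed(1)
OPS = ["INC", "DEC", "JNZ", "HALT"]
times = [run([(random.choice(OPS), random.randint(-3, 3)) for _ in range(8)],
             budget=1_000)
         for _ in range(5_000)]
halted = sorted(t for t in times if t is not None)
print("halted:", len(halted), "of", len(times))
print("median halting time:", halted[len(halted) // 2])
print("halted after step 100:", sum(t > 100 for t in halted))
```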

    A Non-Probabilistic Model of Relativised Predictability in Physics

    Little effort has been devoted to studying generalised notions or models of (un)predictability, yet it is an important concept throughout physics and plays a central role in quantum information theory, where key results rely on the supposed inherent unpredictability of measurement outcomes. In this paper we continue the programme started in [1] of developing a general, non-probabilistic model of (un)predictability in physics. We present a more refined model that is capable of studying different degrees of "relativised" unpredictability. This model is based on the ability of an agent, acting via uniform, effective means, to predict correctly and reproducibly the outcome of an experiment using finite information extracted from the environment. We use this model to study further the degree of unpredictability certified by different quantum phenomena, showing that quantum complementarity guarantees a form of relativised unpredictability that is weaker than that guaranteed by Kochen-Specker-type value indefiniteness. We further exemplify the difference between certification by complementarity and by value indefiniteness by showing that, unlike value indefiniteness, complementarity is compatible with the production of computable sequences of bits.
    Comment: 10 pages.
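    To make the shape of such a model concrete, here is a schematic Python sketch; the interfaces, the toy experiment and all names are my assumptions, chosen only to mirror the description above (an effective predictor consuming finitely much extracted information, required to be correct whenever it commits).

```python
from typing import Callable, Optional

Extractor = Callable[[int], str]             # finite info available before trial n
Predictor = Callable[[str], Optional[int]]   # predicted bit, or None to abstain

def never_wrong(outcomes, extractor: Extractor, predictor: Predictor) -> bool:
    """Correctness in the spirit of the model: the predictor may abstain,
    but whenever it commits to a prediction it must be right.
    Unpredictability of a sequence then means no effective predictor is
    never wrong while still committing infinitely often."""
    for n, bit in enumerate(outcomes):
        guess = predictor(extractor(n))
        if guess is not None and guess != bit:
            return False
    return True

# A computable sequence is predictable in this sense: pass the trial
# index as the extracted information and recompute the sequence.
outcomes = [n % 2 for n in range(1_000)]     # computable toy sequence
print(never_wrong(outcomes, extractor=str, predictor=lambda s: int(s) % 2))
```

    In this sense a computable output sequence is predictable, which is why the paper's final result, that complementarity is compatible with computable sequences, marks it as a weaker certificate than value indefiniteness.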