
    Algorithmic randomness and stochastic selection function

    We show algorithmic randomness versions of two classical theorems on subsequences of normal numbers. One is the Kamae-Weiss theorem (Kamae 1973), which characterizes the selection functions that preserve normality. The other is the Steinhaus (1922) theorem, which characterizes the normality of a number in terms of its subsequences. In van Lambalgen (1987), an algorithmic analogue of the Kamae-Weiss theorem was conjectured in terms of algorithmic randomness and complexity. In this paper we consider two types of algorithmically random sequences: Martin-Löf random sequences and sequences with maximal complexity rate. We then prove algorithmic randomness versions of the classical results above.
    Comment: submitted to CCR2012 special issue. arXiv admin note: text overlap with arXiv:1106.315
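    To make the notion concrete, here is a minimal Python sketch of a prefix-based selection function in the Kamae-Weiss sense: the rule inspects only the prefix read so far to decide whether to select the next bit. The concrete rule select_after_one is a hypothetical example for illustration, not one from the paper.

```python
def select_subsequence(bits, rule):
    """Return the subsequence of `bits` picked by `rule`.

    `rule(prefix)` returns True iff the next bit should be selected,
    where `prefix` is the tuple of bits read so far.
    """
    selected = []
    prefix = []
    for b in bits:
        if rule(tuple(prefix)):
            selected.append(b)
        prefix.append(b)
    return selected

def select_after_one(prefix):
    # Hypothetical rule: select a bit exactly when the previous bit was 1.
    return len(prefix) > 0 and prefix[-1] == 1

if __name__ == "__main__":
    x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
    print(select_subsequence(x, select_after_one))  # -> [1, 0, 0, 1, 1]
```

    The theorems in question ask which such rules are guaranteed to map every normal (respectively, random) sequence to a normal (random) subsequence; the sketch only shows the mechanics of selection itself.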

    Algorithmic Randomness as Foundation of Inductive Reasoning and Artificial Intelligence

    This article is a brief personal account of the past, present, and future of algorithmic randomness, emphasizing its role in inductive inference and artificial intelligence. It is written for a general audience interested in science and philosophy. Intuitively, randomness is a lack of order or predictability. If randomness is the opposite of determinism, then algorithmic randomness is the opposite of computability. Among many other things, these concepts have been used to quantify Ockham's razor, solve the induction problem, and define intelligence.
    Comment: 9 LaTeX pages

    Natural Halting Probabilities, Partial Randomness, and Zeta Functions

    We introduce the zeta number, natural halting probability, and natural complexity of a Turing machine and relate them to Chaitin's Omega number, halting probability, and program-size complexity. A classification of Turing machines according to their zeta numbers is proposed: divergent, convergent, and tuatara. We prove the existence of universal convergent and tuatara machines. Various results on (algorithmic) randomness and partial randomness are proved. For example, we show that the zeta number of a universal tuatara machine is c.e. and random. A new type of partial randomness, asymptotic randomness, is introduced. Finally, we show that in contrast to classical (algorithmic) randomness, which cannot be naturally characterised in terms of plain complexity, asymptotic randomness admits such a characterisation.
    Comment: Accepted for publication in Information and Computation
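    For intuition on the halting-probability quantities the abstract relates, the sketch below approximates an Omega-style sum of 2^(-|p|) over the halting programs of a toy prefix-free machine. The machine toy_machine_halts is invented purely for illustration; it says nothing about the zeta numbers or Omega numbers of real universal machines.

```python
from itertools import product

def toy_machine_halts(program):
    # Hypothetical toy machine whose halting programs are 0^k 1
    # (k zeros followed by a single 1). This set is prefix-free,
    # so the weights 2^(-|p|) sum to at most 1.
    return program[-1] == "1" and all(b == "0" for b in program[:-1])

def omega_approximation(max_len):
    """Partial sum of 2^(-|p|) over halting programs up to `max_len`."""
    total = 0.0
    for n in range(1, max_len + 1):
        for p in product("01", repeat=n):
            if toy_machine_halts(p):
                total += 2.0 ** (-n)
    return total

print(omega_approximation(10))  # approaches 1.0 for this toy machine
```

    For a universal machine such a sum is only computably approximable from below, which is what makes Omega-style numbers c.e. and random; the toy machine here is decidable, so its "Omega" is trivially computable.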

    The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy

    The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions: under a given constraint it yields a Gibbs measure giving the probability that a system is in a certain state relative to the other elements of the distribution. Classical entropy-based Maxent, however, collapses all distinct degrees of randomness and pseudo-randomness into one. Here we take the generative mechanism of the systems in the ensemble into consideration, separating objects whose entropy is maximal but which can be generated recursively from those that are actually algorithmically random, thereby offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer program. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this, we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss practical implications of evaluating network randomness. Our analysis provides the insight that the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further analysis of the origin and consequences of these asymmetries, of reprogrammability, and of computation.
    Comment: 30 pages
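    The collapse the authors point to can be illustrated directly: two strings with identical bit frequencies (hence identical empirical Shannon entropy, and indistinguishable to classical Maxent) can differ sharply in algorithmic content. In the sketch below, zlib compression serves only as a crude computable stand-in for Kolmogorov complexity, which is uncomputable; the specific strings are invented for the demonstration.

```python
import math
import random
import zlib

def shannon_entropy(bits):
    # Empirical per-bit entropy computed from the 0/1 frequencies.
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

n = 4096
periodic = [0, 1] * (n // 2)  # recursively generated, highly compressible
random.seed(0)
shuffled = random.sample([0] * (n // 2) + [1] * (n // 2), n)  # same frequencies

for name, s in [("periodic", periodic), ("shuffled", shuffled)]:
    raw = bytes(s)
    print(name,
          f"entropy={shannon_entropy(s):.3f}",
          f"compressed={len(zlib.compress(raw, 9))} bytes")
# Both strings have entropy 1.000 bit per symbol, yet the periodic
# string compresses to a tiny fraction of the shuffled one.
```

    The algorithmic refinement discussed in the paper is, in spirit, the move from the first column of that output to the second: ranking objects by generative complexity rather than by symbol statistics alone.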

    Algorithmic Randomness

    We consider algorithmic randomness in the Cantor space C of infinite binary sequences. An algorithmic randomness concept specifies a set of elements of C, each of which is assigned the property of being random. Various notions from computability theory are used in the definitions of randomness concepts, which are essentially rooted in three intuitive randomness requirements: the initial segments of a random sequence should be effectively incompressible; no random sequence should be an element of an effective measure null set containing sequences with an "exceptional property"; and, considering betting games in which the bits of a sequence are guessed successively, there should be no effective betting strategy that lets a player win an unbounded amount of capital on a random sequence. For the various formalizations of these requirements one uses versions of Kolmogorov complexity, of tests, and of martingales, respectively. Whenever one of these notions is used to define a randomness concept, one may ask for equivalent definitions in terms of the other two notions. This was a long-standing open question for computable randomness, a central concept introduced by Schnorr via martingales.

    In this thesis, we introduce bounded tests, which we use to characterize computable randomness in terms of tests. Our result was obtained independently of the prior test characterization of computable randomness due to Downey, Griffiths, and LaForte, who defined graded tests for their result. Based on bounded tests, we define bounded machines, which give rise to a version of Kolmogorov complexity that we use to prove another characterization of computable randomness. This result, as in analogous situations, allows for the introduction of interesting lowness and triviality properties that are, roughly speaking, "anti-randomness" properties. We define and study lowness for bounded machines and bounded triviality. Using a theorem due to Nies, it can be shown that only the computable sequences are low for bounded machines. We further show some interesting properties of bounded machines and demonstrate that every boundedly trivial sequence is K-trivial. Furthermore, we define lowness for computable machines, a lowness notion in the setting of Schnorr randomness, and prove that a sequence is low for computable machines if and only if it is computably traceable.

    Gács and, independently, Kučera proved a central theorem which states that every sequence is effectively decodable from a suitable Martin-Löf random sequence. We present a somewhat easier proof of this theorem, constructing a sequence with the required property by diagonalizing against appropriate martingales. By a variant of that construction we prove that there exists a computably random sequence that is weak truth-table autoreducible. Further, we show that a sequence is computably enumerable self-reducible if and only if its associated real is computably enumerable.

    Finally, we investigate interrelations between Lebesgue measure and effective measures on C. We prove the following extension of a result due to Book, Lutz, and Wagner: a union of Π⁰₁ classes that is closed under finite variations has Lebesgue measure zero if and only if it contains no Kurtz random real. However, we demonstrate that even a Σ⁰₂ class with Lebesgue measure zero need not be a Kurtz null class. Turning to Almost classes, we show among other things that every Almost class with respect to a bounded reducibility has computable packing dimension zero.
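    As a concrete illustration of the betting-game requirement described above, the following sketch runs a martingale, i.e. a betting strategy satisfying the fairness condition d(w) = (d(w0) + d(w1)) / 2, along a binary sequence. The concrete strategy bet_majority is a hypothetical example, not one from the thesis; randomness notions differ precisely in which class of effective strategies is required to fail.

```python
def run_martingale(bits, bet, capital=1.0):
    """Play strategy `bet` along `bits`; return the capital after each bit.

    `bet(prefix)` returns r in [-1, 1]: stake |r| * capital on the next
    bit being 1 if r > 0, on it being 0 if r < 0. Fair odds return twice
    the stake on a correct guess and forfeit it on a wrong one, so the
    resulting capital function is a martingale.
    """
    history = []
    prefix = []
    for b in bits:
        r = bet(tuple(prefix))
        stake = abs(r) * capital
        guess = 1 if r > 0 else 0
        capital += stake if b == guess else -stake
        prefix.append(b)
        history.append(capital)
    return history

def bet_majority(prefix):
    # Hypothetical strategy: stake half the capital on the majority
    # bit of the prefix seen so far (no bet on the empty prefix).
    if not prefix:
        return 0.0
    return 0.5 if 2 * sum(prefix) >= len(prefix) else -0.5

print(run_martingale([1, 1, 0, 1, 1, 1], bet_majority))
```

    On a sequence that is random in the corresponding sense, no effective strategy of this kind can make the capital history unbounded; on a biased sequence like the one above, even this simple majority rule grows its capital.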