
    The Busy Beaver Competition: a historical survey

    Tibor Radó defined the Busy Beaver Competition in 1962. He used Turing machines to give explicit definitions for functions that are not computable and grow faster than any computable function, and he put forward the problem of computing their values on the numbers 1, 2, 3, ... Increasingly powerful computers have made it possible to compute lower bounds for these values. In 1988, Brady extended the definitions to functions of two variables. We give a historical survey of this work. The successive record holders in the Busy Beaver Competition are displayed, with their discoverers, the dates they were found, and, for some of them, an analysis of their behavior. Comment: 70 pages.
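    For reference, Radó's functions can be stated as follows (a sketch in the now-common notation, which differs slightly from Radó's original):

        % Let T_n be the set of n-state, 2-symbol Turing machines that
        % halt when started on an all-zero tape.
        \[
          \Sigma(n) = \max_{M \in T_n} \#\{\text{1s on the tape when } M \text{ halts}\},
          \qquad
          S(n) = \max_{M \in T_n} \#\{\text{steps taken by } M \text{ before halting}\}.
        \]
        % Both dominate every computable function: for any computable f,
        % \Sigma(n) > f(n) for all sufficiently large n.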

    Busy beavers gone wild

    We show some incompleteness results à la Chaitin using the busy beaver functions. Then, with the help of ordinal logics, we show how to obtain a theory in which the values of the busy beaver functions can be provably established, and we use this to reveal a structure on the provability of the values of these functions.
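    The flavour of these incompleteness results can be recalled as follows (a standard busy-beaver form of Chaitin-style incompleteness; the paper's exact statements may differ):

        % For any consistent, computably axiomatizable theory T whose
        % arithmetic theorems are true, there is a bound N_T such that
        \[
          \text{for every } n > N_T: \quad
          T \nvdash \text{``}\Sigma(n) = m\text{''} \text{ for any } m,
        \]
        % although for each n exactly one such statement is true.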

    Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks

    We show that numerical approximations of Kolmogorov complexity (K) applied to graph adjacency matrices capture some group-theoretic and topological properties of graphs and empirical networks, ranging from metabolic to social networks. That K and the size of the automorphism group of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We show that approximations of K characterise synthetic and natural networks by their generating mechanisms, assigning lower algorithmic randomness to complex network models (Watts-Strogatz and Barabási-Albert networks) and high Kolmogorov complexity to (random) Erdős-Rényi graphs. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks: the traditional lossless-compression approach to Kolmogorov complexity, and a normalised version of the Block Decomposition Method (BDM), based on algorithmic probability theory. Comment: 15 two-column pages, 20 figures. Forthcoming in Physica A: Statistical Mechanics and its Applications.
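    As an illustration of the compression-based approximation mentioned above (a minimal sketch, not the authors' pipeline; zlib stands in for whichever lossless compressor they used):

        import zlib
        import numpy as np

        def compression_K(adjacency: np.ndarray) -> int:
            """Crude upper-bound proxy for Kolmogorov complexity: the length
            in bytes of the zlib-compressed adjacency matrix, packed to bits."""
            bits = np.packbits(adjacency.astype(np.uint8).flatten())
            return len(zlib.compress(bits.tobytes(), 9))

        rng = np.random.default_rng(0)
        n = 64
        upper = np.triu((rng.random((n, n)) < 0.5).astype(np.uint8), 1)
        random_adj = upper + upper.T          # symmetric Erdos-Renyi-style graph
        ring = np.roll(np.eye(n, dtype=np.uint8), 1, axis=1)
        ring_adj = np.maximum(ring, ring.T)   # ring lattice: highly regular

        print(compression_K(random_adj))      # larger: near-incompressible
        print(compression_K(ring_adj))        # smaller: simple generating rule

    A random graph's adjacency matrix should compress noticeably worse than an equally sized regular lattice, mirroring the paper's ranking of Erdős-Rényi graphs above structured network models.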

    Random semicomputable reals revisited

    The aim of this expository paper is to present a nice series of results obtained in the papers of Chaitin (1976), Solovay (1975), Calude et al. (1998), and Kučera and Slaman (2001). This joint effort led to a full characterization of lower semicomputable random reals, both as those that can be expressed as a "Chaitin Omega" and as those that are maximal for Solovay reducibility. The original proofs were somewhat involved; in this paper, we present these results in an elementary way, requiring only basic knowledge of algorithmic randomness. We also add several simple observations relating lower semicomputable random reals and busy beaver functions. Comment: 15 pages.
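    The two characterizations can be recalled as follows (standard definitions, a sketch; the paper gives the precise statements):

        % Halting probability of a universal prefix-free machine U:
        \[
          \Omega_U \;=\; \sum_{p \,:\, U(p)\downarrow} 2^{-|p|}.
        \]
        % The surveyed characterization: a lower semicomputable real in
        % (0,1) is Martin-Lof random iff it equals \Omega_U for some
        % universal prefix-free U, iff it is maximal (complete) for
        % Solovay reducibility.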

    Algorithmic statistics revisited

    The mission of statistics is to provide adequate statistical hypotheses (models) for observed data. But what is an "adequate" model? To answer this question, one needs the notions of algorithmic information theory. It turns out that for every data string x one can naturally define a "stochasticity profile", a curve that represents a trade-off between the complexity of a model and its adequacy. This curve has four equivalent definitions in terms of (1) randomness deficiency, (2) minimal description length, (3) position in lists of simple strings, and (4) Kolmogorov complexity with decompression time bounded by the busy beaver function. We present a survey of the corresponding definitions and of results relating them to each other.
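    In the first of the four formulations, the curve can be written as follows (standard notation of algorithmic statistics; a sketch, assuming the usual definitions):

        % Randomness deficiency of x in a finite set A containing x:
        \[
          d(x \mid A) \;=\; \log_2 |A| \;-\; K(x \mid A),
        \]
        % and the stochasticity profile of x: the minimal deficiency
        % achievable by a model of bounded complexity,
        \[
          h_x(\alpha) \;=\; \min\{\, d(x \mid A) \;:\; A \ni x,\ K(A) \le \alpha \,\}.
        \]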

    Numerical Evaluation of Algorithmic Complexity for Short Strings: A Glance into the Innermost Structure of Randomness

    We describe an alternative method (to compression) that combines several theoretical and experimental results to numerically approximate the algorithmic (Kolmogorov-Chaitin) complexity of all $\sum_{n=1}^{8} 2^n$ bit strings of up to 8 bits, and of some between 9 and 16 bits long. This is done by exhaustively executing all deterministic 2-symbol Turing machines with up to 4 states, for which the halting times are known thanks to the Busy Beaver problem: 11,019,960,576 machines in all. An output frequency distribution is then computed, from which the algorithmic probability is calculated and the algorithmic complexity evaluated by way of the (Levin-Zvonkin-Chaitin) coding theorem. Comment: 29 pages, 5 figures. Version as accepted by the journal Applied Mathematics and Computation.
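    The coding-theorem step can be sketched as follows (a toy illustration with hypothetical frequencies, not the paper's actual distribution over all 4-state machines):

        import math
        from collections import Counter

        def ctm_complexity(counts: Counter) -> dict:
            """Coding-theorem estimate: K(s) ~= -log2 m(s), where m(s) is
            the fraction of halting machines producing output s. A toy
            stand-in for the paper's exhaustive distribution."""
            total = sum(counts.values())
            return {s: -math.log2(c / total) for s, c in counts.items()}

        # Hypothetical output frequencies for a few short strings:
        counts = Counter({"0": 40000, "1": 40000, "01": 9000,
                          "0101": 600, "0110": 150})
        for s, k in sorted(ctm_complexity(counts).items(), key=lambda kv: kv[1]):
            print(f"{s}: K approx {k:.2f} bits")

    Frequently produced strings receive low complexity estimates and rare ones high estimates, which is exactly the ranking the coding theorem licenses.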
