
    Computing A Glimpse of Randomness

    A Chaitin Omega number is the halting probability of a universal Chaitin (self-delimiting Turing) machine. Every Omega number is both computably enumerable (the limit of a computable, increasing, converging sequence of rationals) and random (its binary expansion is an algorithmic random sequence). In particular, every Omega number is strongly non-computable. The aim of this paper is to describe a procedure, which combines Java programming and mathematical proofs, for computing the exact values of the first 64 bits of a Chaitin Omega: 0000001000000100000110001000011010001111110010111011101000010000. Full description of programs and proofs will be given elsewhere. Comment: 16 pages; Experimental Mathematics (accepted).
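    What makes the first bits attackable at all is the computable enumerability: Omega is the sum of 2^{-|p|} over all halting programs p, so dovetailing all programs and crediting 2^{-|p|} for each one observed to halt yields an increasing sequence of lower bounds converging to Omega, though with no computable rate of convergence. The Python sketch below illustrates only this enumeration scheme against a made-up prefix-free toy machine; `halts_within` is an assumption for illustration, not Chaitin's machine or the paper's actual model.

```python
from fractions import Fraction
from itertools import count

def halts_within(p: str, steps: int) -> bool:
    """Toy stand-in for 'U halts on program p within the given steps'.
    Valid programs form the prefix-free set {0^k 1 : k >= 0}; program
    0^k 1 halts after k*k + 1 steps, unless k is divisible by 3, in
    which case it never halts. Purely illustrative."""
    if not (p.endswith("1") and set(p[:-1]) <= {"0"}):
        return False
    k = len(p) - 1
    return k % 3 != 0 and k * k + 1 <= steps

def omega_lower_bounds(max_len: int = 12):
    """Yield an increasing sequence of lower bounds on the toy Omega by
    dovetailing: at stage t, run every program of length <= t (capped
    at max_len) for t steps and credit 2^{-|p|} for each new halter."""
    halted = set()
    bound = Fraction(0)
    for t in count(1):
        for n in range(1, min(t, max_len) + 1):
            for k in range(2 ** n):
                p = format(k, f"0{n}b")
                if p not in halted and halts_within(p, t):
                    halted.add(p)
                    bound += Fraction(1, 2 ** n)
        yield bound

gen = omega_lower_bounds()
for t in range(1, 31):
    b = next(gen)
    if t in (1, 2, 5, 10, 30):
        print(f"stage {t:2d}: Omega >= {float(b):.6f}")
```

    The catch, which the paper's mathematical proofs address, is that an approximation from below never by itself certifies a single bit of the binary expansion: a program halting later can still change any prefix.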

    On algorithmic equivalence of instruction sequences for computing bit string functions

    Every partial function from bit strings of a given length to bit strings of a possibly different given length can be computed by a finite instruction sequence that contains only instructions to set and get the content of Boolean registers, forward jump instructions, and a termination instruction. We look for an equivalence relation on instruction sequences of this kind that captures to a reasonable degree the intuitive notion that two instruction sequences express the same algorithm. Comment: 27 pages, the preliminaries have textual overlaps with the preliminaries in arXiv:1308.0219 [cs.PL], arXiv:1312.1529 [cs.PL], and arXiv:1312.1812 [cs.PL]; 27 pages, three paragraphs about Milner's algorithmic equivalence hypothesis added to concluding remarks; 26 pages, several minor improvements of the presentation made.
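    To make the setting concrete, here is a minimal interpreter for one hypothetical rendering of such instruction sequences: positive test instructions on input registers, set instructions on output registers, forward jumps `#k`, and the termination instruction `!`. The concrete syntax is invented for illustration and is not the paper's own notation.

```python
def run(program, inputs):
    """Execute a finite instruction sequence over Boolean registers.
    inputs maps input-register index -> bool; output registers default
    to False. Instruction forms (hypothetical syntax):
      +in.r        test input register r; if False, skip next instruction
      out.r.set:b  write constant bit b to output register r
      #k           jump forward over k instructions
      !            terminate"""
    out, i = {}, 0
    while i < len(program):
        ins = program[i]
        if ins == "!":
            return out
        if ins.startswith("#"):
            i += int(ins[1:])
            continue
        if ins.startswith("+in."):
            r = int(ins[4:])
            i += 1 if inputs.get(r, False) else 2
            continue
        if ins.startswith("out.") and ".set:" in ins:
            head, val = ins.split(".set:")
            out[int(head[4:])] = bool(int(val))
            i += 1
            continue
        raise ValueError(f"unknown instruction: {ins}")
    return out

# AND of two input bits: out.1 becomes 1 only if both tests succeed.
AND = ["+in.1", "#2", "!", "+in.2", "#2", "!", "out.1.set:1", "!"]
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", run(AND, {1: a, 2: b}).get(1, False))
```

    Under such a semantics, textual identity is clearly too fine an equivalence: inserting a jump over dead code changes the instruction sequence but arguably not the algorithm it expresses, which is exactly the kind of distinction the sought equivalence relation has to adjudicate.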

    A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences

    Given the widespread use of lossless compression algorithms to approximate algorithmic (Kolmogorov-Chaitin) complexity, and given that such algorithms fall short at characterizing patterns other than statistical ones, performing no better than entropy estimations, here we explore an alternative and complementary approach. We study formal properties of a Levin-inspired measure m calculated from the output distribution of small Turing machines. We introduce and justify finite approximations m_k that have been used in some applications as an alternative to lossless compression algorithms for approximating algorithmic (Kolmogorov-Chaitin) complexity. We provide proofs of the relevant properties of both m and m_k and compare them to Levin's Universal Distribution. We provide error estimations of m_k with respect to m. Finally, we present an application to integer sequences from the Online Encyclopedia of Integer Sequences which suggests that our AP-based measures may characterize non-statistical patterns, and we report interesting correlations with the textual, function, and program description lengths of the said sequences. Comment: As accepted by the journal Complexity (Wiley/Hindawi).
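    A toy version of the construction behind m makes the idea tangible: enumerate a small, exhaustively runnable space of Turing machines, run each from a blank tape under a step budget, and take m_k(s) to be the fraction of halting machines that output s. The sketch below does this for all 2-state, 2-symbol machines; the machine formalism, halting convention, and indexing of m_k are simplifications assumed here, not the paper's exact setup.

```python
from itertools import product

STATES = [0, 1]      # working states; state 2 acts as the halting state
SYMBOLS = [0, 1]
MOVES = [-1, 1]

def run(rules, max_steps):
    """Run one machine from a blank (all-0) tape; return the written
    tape segment as a string if it halts, else None."""
    tape, pos, state = {}, 0, 0
    for _ in range(max_steps):
        write, move, nxt = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state = nxt
        if state == 2:
            lo, hi = min(tape), max(tape)
            return "".join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
    return None

def m_k(max_steps=100):
    """Output distribution over all 2-state, 2-symbol machines:
    m_k(s) = (# machines halting with output s) / (# halting machines)."""
    keys = list(product(STATES, SYMBOLS))               # (state, scanned symbol)
    choices = list(product(SYMBOLS, MOVES, [0, 1, 2]))  # (write, move, next state)
    counts, halting = {}, 0
    for combo in product(choices, repeat=len(keys)):
        out = run(dict(zip(keys, combo)), max_steps)
        if out is not None:
            halting += 1
            counts[out] = counts.get(out, 0) + 1
    return {s: c / halting for s, c in counts.items()}

dist = m_k()
for s, p in sorted(dist.items(), key=lambda kv: (-kv[1], kv[0]))[:5]:
    print(f"m_k({s!r}) = {p:.4f}")
```

    By the coding-theorem route, -log2 m_k(s) then serves as a complexity estimate for s, which is where the advertised alternative to compression-based approximation comes from.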

    Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity

    The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and that the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to finite sets, then application of the ideal principle turns into Kolmogorov's minimal sufficient statistic. In general, we show that data compression is almost always the best strategy, both in hypothesis identification and in prediction. Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory.
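    The abstract's last two claims compress a standard derivation: write Bayes's rule, take negative logarithms, discard the term that does not depend on the hypothesis, and substitute the universal prior. Sketched in LaTeX, with notation assumed here rather than quoted from the paper:

```latex
% Bayes's rule, for hypothesis H and data D:
%   P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
% Taking -\log and dropping -\log P(D), which is constant in H:
\operatorname*{arg\,max}_{H} P(H \mid D)
  = \operatorname*{arg\,min}_{H} \bigl[\, -\log P(D \mid H) - \log P(H) \,\bigr]
% Ideal MDL: take the prior to be the universal probability
% P(H) = \mathbf{m}(H), with -\log \mathbf{m}(H) = K(H) + O(1), giving
  = \operatorname*{arg\,min}_{H} \bigl[\, -\log P(D \mid H) + K(H) \,\bigr]
% i.e., a two-part code length: describe H, then describe D with the help of H.
```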

    Dimension Extractors and Optimal Decompression

    A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kučera-Gács theorem is examined from the perspective of decompression, by showing that every infinite sequence S is Turing reducible to a Martin-Löf random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, which is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors that are developed lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions. Comment: This report was combined with a different conference paper, "Every Sequence is Decompressible from a Random One" (cs.IT/0511074, at http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with the conference paper incorporated as section 5 of this new combined paper. The combined paper was accepted to the journal Theory of Computing Systems, as part of a special issue of invited papers from the second conference on Computability in Europe, 2006.
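    For reference, the quantity the decompression result pins down, under the standard definitions assumed here (K is prefix-free Kolmogorov complexity and S restricted to its first n bits is written with the restriction symbol):

```latex
% Constructive dimension of an infinite binary sequence S:
\dim(S) \;=\; \liminf_{n \to \infty} \frac{K(S \upharpoonright n)}{n}
% The decompression reading of Kucera-Gacs: there is a Martin-Löf
% random R with S \le_T R such that, asymptotically,
\frac{\#\{\text{bits of } R \text{ queried to compute } S \upharpoonright n\}}{n}
  \;\longrightarrow\; \dim(S),
% and no Turing reduction from a random sequence achieves a smaller ratio.
```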