378 research outputs found

    Complexity of complexity and strings with maximal plain and prefix Kolmogorov complexity

    Peter Gacs showed (Gacs 1974) that for every n there exists a bit string x of length n whose plain complexity C(x) has almost maximal conditional complexity relative to x, i.e., C(C(x)|x) > log n - log^(2) n - O(1). (Here log^(2) i = log log i.) Following Elena Kalinina (Kalinina 2011), we provide a simple game-based proof of this result; modifying her argument, we get a better (and tight) bound log n - O(1). We also show the same bound for prefix-free complexity. Robert Solovay showed (Solovay 1975) that infinitely many strings x have maximal plain complexity but not maximal prefix complexity (among the strings of the same length): for some c there exist infinitely many x such that |x| - C(x) < c and |x| + K(|x|) - K(x) > log^(2) |x| - c log^(3) |x|. In fact, the results of Solovay and Gacs are closely related. Using the result above, we provide a short proof of Solovay's result. We also generalize it by showing that for some c and for all n there are strings x of length n with n - C(x) < c and n + K(n) - K(x) > K(K(n)|n) - 3 K(K(K(n)|n)|n) - c. We also prove a close upper bound K(K(n)|n) + O(1). Finally, we provide a direct game proof of Joseph Miller's generalization (Miller 2006) of the same theorem of Solovay: if a co-enumerable set (a set with c.e. complement) contains a string of every length, then it contains infinitely many strings x such that |x| + K(|x|) - K(x) > log^(2) |x| - O(log^(3) |x|). Comment: 13 pages, 1 figure
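    The iterated-logarithm shorthand log^(2), log^(3) used throughout these bounds is easy to make concrete. The sketch below is a toy illustration only (the helper name iter_log is mine, not the paper's):

```python
import math

def iter_log(n: int, k: int) -> float:
    """Apply the base-2 logarithm k times: log^(k) n = log log ... log n."""
    x = float(n)
    for _ in range(k):
        x = math.log2(x)
    return x

# For n = 2**16 = 65536:
print(iter_log(2**16, 1))  # log n     = 16.0
print(iter_log(2**16, 2))  # log^(2) n = 4.0
print(iter_log(2**16, 3))  # log^(3) n = 2.0
```

    Note how slowly log^(3) grows: for every string length that could ever occur in practice it stays below 6, which is why error terms of order log^(3) |x| are considered negligible here.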

    Relating and contrasting plain and prefix Kolmogorov complexity

    In [3] a short proof is given that some strings have maximal plain Kolmogorov complexity but not maximal prefix-free complexity. The proof uses Levin's symmetry of information, Levin's formula relating plain and prefix complexity, and Gacs' theorem that the complexity of the complexity given the string can be high. We argue that the proof technique and results mentioned above are useful for simplifying existing proofs and for solving open questions. We present a short proof of Solovay's result [21] relating plain and prefix complexity: K(x) = C(x) + CC(x) + O(CCC(x)) and C(x) = K(x) - KK(x) + O(KKK(x)) (here CC(x) denotes C(C(x)), etc.). We show that there exists ω such that liminf C(ω_1…ω_n) - C(n) is infinite and liminf K(ω_1…ω_n) - K(n) is finite, i.e., the infinitely often C-trivial reals are not the same as the infinitely often K-trivial reals (answering [1, Question 1]). Solovay showed that for infinitely many x we have |x| - C(x) ≤ O(1) and |x| + K(|x|) - K(x) ≥ log^(2) |x| - O(log^(3) |x|) (here |x| denotes the length of x and log^(2) = log log, etc.). We show that this result holds for prefixes of some 2-random sequences. Finally, we generalize our proof technique and show that no monotone relation exists between expectation-bounded and probability-bounded randomness deficiency (answering [6, Question 1]). Comment: 20 pages, 1 figure
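    The composition shorthand above (CC(x) = C(C(x)), and so on) can be illustrated with a computable stand-in. True Kolmogorov complexity is uncomputable, so the sketch below uses zlib-compressed length as a crude upper-bound proxy, purely to show how the iterated notation is read; the names C_approx and int_to_bytes are mine, not the paper's:

```python
import zlib

def C_approx(x: bytes) -> int:
    """Crude computable stand-in for plain complexity C(x): the zlib-compressed
    length. (Real C is uncomputable; this only illustrates the notation.)"""
    return len(zlib.compress(x))

def int_to_bytes(n: int) -> bytes:
    """Encode an integer as a byte string so it can be fed back into C_approx."""
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

x = b"some example string " * 10
c = C_approx(x)                     # stand-in for C(x)
cc = C_approx(int_to_bytes(c))      # stand-in for CC(x)  = C(C(x))
ccc = C_approx(int_to_bytes(cc))    # stand-in for CCC(x) = C(C(C(x)))
print(c, cc, ccc)
```

    The point of the notation is that each iteration acts on a much shorter object: CC(x) is the complexity of a number of size about |x|, hence at most about log |x|, and CCC(x) is at most about log^(2) |x|.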

    Information Distance Revisited


    Kolmogorov complexity and the Recursion Theorem

    Several classes of DNR functions are characterized in terms of Kolmogorov complexity. In particular, a set of natural numbers A can wtt-compute a DNR function iff there is a nontrivial recursive lower bound on the Kolmogorov complexity of the initial segments of A. Furthermore, A can Turing-compute a DNR function iff there is a nontrivial A-recursive lower bound on the Kolmogorov complexity of the initial segments of A. A is PA-complete, that is, A can compute a {0,1}-valued DNR function, iff A can compute a function F such that F(n) is a string of length n of maximal C-complexity among the strings of length n. A solves the halting problem iff A can compute a function F such that F(n) is a string of length n of maximal H-complexity among the strings of length n. Further characterizations of these classes are given. The existence of a DNR function in a Turing degree is equivalent to the failure of the Recursion Theorem for this degree; thus the results characterize, in terms of Kolmogorov complexity, those Turing degrees that no longer permit the use of the Recursion Theorem. Comment: Full version of a paper presented at STACS 2006, Lecture Notes in Computer Science 3884 (2006), 149--16

    Game interpretation of Kolmogorov complexity

    The Kolmogorov complexity function K can be relativized using any oracle A, and most properties of K remain true for relativized versions. In section 1 we provide an explanation for this observation by giving a game-theoretic interpretation and showing that all "natural" properties are either true for all sufficiently powerful oracles or false for all sufficiently powerful oracles. This result is a simple consequence of Martin's determinacy theorem, but its proof is instructive: it shows how one can prove statements about Kolmogorov complexity by constructing a special game and a winning strategy in this game. This technique is illustrated by several examples (total conditional complexity, bijection complexity, randomness extraction, contrasting plain and prefix complexities).Comment: 11 pages. Presented in 2009 at the conference on randomness in Madison

    Around Kolmogorov complexity: basic notions and results

    Algorithmic information theory studies description complexity and randomness and is now a well-known field of theoretical computer science and mathematical logic. There are several textbooks and monographs devoted to this theory in which one can find detailed expositions of many difficult results as well as historical references. However, a short survey of its basic notions and of the main results relating these notions to each other seems to be missing. This report attempts to fill this gap and covers the basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), Solomonoff universal a priori probability, notions of randomness (Martin-L\"of randomness, Mises--Church randomness), and effective Hausdorff dimension. We prove their basic properties (symmetry of information, the connection between a priori probability and prefix complexity, the criterion of randomness in terms of complexity, and the complexity characterization of effective dimension) and show some applications (the incompressibility method in computational complexity theory, incompleteness theorems). The report is based on the lecture notes of a course given by the author at Uppsala University.
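    The incompressibility method mentioned in this abstract rests on a simple counting fact: there are 2^n binary strings of length n but only 2^n - 1 binary programs shorter than n bits, so for every n at least one string of length n satisfies C(x) ≥ n. A minimal sketch of the count (the function names are mine, for illustration only):

```python
def num_strings(n: int) -> int:
    """Number of binary strings of length exactly n."""
    return 2 ** n

def num_short_descriptions(n: int) -> int:
    """Number of binary programs of length 0, 1, ..., n-1: equals 2**n - 1."""
    return sum(2 ** k for k in range(n))

# Strictly fewer short descriptions than strings, for every length:
for n in range(1, 20):
    assert num_short_descriptions(n) < num_strings(n)
```

    Since each program describes at most one string, some string of every length has no description shorter than itself; such incompressible strings are the raw material of the incompressibility method.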

    Limit complexities revisited [once more]

    The main goal of this article is to put some known results in a common perspective and to simplify their proofs. We start with a simple proof of a result of Vereshchagin saying that limsup_n C(x|n) equals C^{0'}(x). Then we use the same argument to prove similar results for prefix complexity and for a priori probability on the binary tree, to prove Conidis' theorem about limits of effectively open sets, and also to improve the results of Muchnik about limit frequencies. As a by-product, we get a criterion of 2-randomness proved by Miller: a sequence X is 2-random if and only if there exists c such that any prefix x of X is a prefix of some string y such that C(y) ≥ |y| - c. (In the 1960s this property was suggested by Kolmogorov as one of the possible definitions of randomness.) We also get another 2-randomness criterion by Miller and Nies: X is 2-random if and only if C(x) ≥ |x| - c for some c and infinitely many prefixes x of X. This is a modified version of our old paper, which contained a weaker (and cumbersome) version of Conidis' result; there the proof used the low basis theorem (in quite a strange way), and the full version was formulated as a conjecture. This conjecture was later proved by Conidis. Bruno Bauwens (personal communication) noted that the proof can also be obtained by a simple modification of our original argument, and we reproduce Bauwens' argument with his permission. Comment: See http://arxiv.org/abs/0802.2833 for the old paper

    Kolmogorov's Structure Functions and Model Selection

    In 1974 Kolmogorov proposed a non-probabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The ``structure function'' of the given data expresses the relation between the complexity-level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best-fitting model in the class, irrespective of whether the ``true'' model is in the model class considered or not. In this setting, this happens {\em with certainty}, rather than with high probability as in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data. We show that--within the obvious constraints--every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the ``algorithmic minimal sufficient statistic''. Comment: 25 pages LaTeX, 5 figures. In part in Proc 47th IEEE FOCS; this final version (more explanations, cosmetic modifications) to appear in IEEE Trans Inform Theory
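    For reference, the object described informally above is usually written as follows (this is the standard definition from this line of work, not a formula quoted from the abstract; some presentations use plain complexity C(S) in place of prefix complexity K(S)):

```latex
% Structure function of a string x: the "complexity level constraint" is
% \alpha, and the "least log-cardinality" of a containing model is h_x(\alpha).
h_x(\alpha) \;=\; \min_{S} \bigl\{\, \log_2 |S| \;:\; x \in S,\; K(S) \le \alpha \,\bigr\},
% where S ranges over finite sets of binary strings containing x,
% and K(S) is the complexity of (a canonical encoding of) the set S.
```

    Read this way, the abstract's claim is that the whole curve α ↦ h_x(α), not just one point of it, captures the stochastic properties of x.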