Complexity of complexity and strings with maximal plain and prefix Kolmogorov complexity
Peter Gacs showed (Gacs 1974) that for every n there exists a bit string x of
length n whose plain complexity C(x) has almost maximal conditional complexity
relative to x, i.e., C(C(x)|x) > log n - log^(2) n - O(1). (Here log^(2) i =
log log i.) Following Elena Kalinina (Kalinina 2011), we provide a simple
game-based proof of this result; modifying her argument, we get a better (and
tight) bound log n - O(1). We also show the same bound for prefix-free
complexity.
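Plain complexity C itself is uncomputable, but it can be bounded from above by any concrete compressor, which gives a feel for what "maximal complexity" means. A minimal sketch (using zlib merely as a stand-in for an optimal description method, which it is not; `c_upper` is a hypothetical helper name):

```python
import random
import zlib

def c_upper(x: bytes) -> int:
    """Crude, computable UPPER bound on the plain complexity C(x):
    the length in bits of a zlib-compressed copy of x, ignoring the
    O(1) cost of the decompressor. The true C(x) is uncomputable,
    so this only ever bounds it from above."""
    return 8 * len(zlib.compress(x, 9))

# A highly regular string compresses far below its length...
repetitive = b"01" * 500
# ...while a pseudo-random string of the same length does not.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(c_upper(repetitive), "vs", c_upper(noisy))
```

Strings x of length n with C(x) close to n, as studied above, are exactly those that no method whatsoever can compress; the papers' question C(C(x)|x) asks how hard the *value* C(x) is to determine when x itself is given.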
Robert Solovay showed (Solovay 1975) that infinitely many strings x have
maximal plain complexity but not maximal prefix complexity (among the strings
of the same length): for some c there exist infinitely many x such that |x| -
C(x) < c and |x| + K(|x|) - K(x) > log^(2) |x| - c log^(3) |x|. In fact, the
results of Solovay and Gacs are closely related. Using the result above, we
provide a short proof for Solovay's result. We also generalize it by showing
that for some c and for all n there are strings x of length n with n - C (x) <
c and n + K(n) - K(x) > K(K(n)|n) - 3 K(K(K(n)|n)|n) - c. We also prove a close
upper bound K(K(n)|n) + O(1).
Finally, we provide a direct game proof for Joseph Miller's generalization
(Miller 2006) of Solovay's theorem above: if a co-enumerable set (a set with
c.e. complement) contains for every length a string of this length, then it
contains infinitely many strings x such that |x| + K(|x|) - K(x) > log^(2) |x|
- O(log^(3) |x|).Comment: 13 pages, 1 figure
Relating and contrasting plain and prefix Kolmogorov complexity
In [3] a short proof is given that some strings have maximal plain Kolmogorov
complexity but not maximal prefix-free complexity. The proof uses Levin's
symmetry of information, Levin's formula relating plain and prefix complexity
and Gacs' theorem that complexity of complexity given the string can be high.
We argue that the proof technique and results mentioned above are useful to
simplify existing proofs and to solve open questions.
We present a short proof of Solovay's result [21] relating plain and prefix
complexity: K(x) = C(x) + CC(x) + O(CCC(x)) and C(x) = K(x) - KK(x) +
O(KKK(x)) (here CC(x) denotes C(C(x)), etc.).
We show that there exist sequences that are infinitely often C-trivial but not
infinitely often K-trivial, i.e. the infinitely often C-trivial reals are not
the same as the infinitely often K-trivial reals (answering [1, Question 1]).
Solovay showed that for infinitely many x we have |x| - C(x) < c and |x| +
K(|x|) - K(x) > log^(2) |x| - c log^(3) |x| (here |x| denotes the length of x
and log^(2) denotes the iterated logarithm log log, etc.). We show that this
result holds for prefixes of some 2-random sequences.
Finally, we generalize our proof technique and show that no monotone relation
exists between expectation and probability bounded randomness deficiency (i.e.
[6, Question 1]).Comment: 20 pages, 1 figure
Kolmogorov complexity and the Recursion Theorem
Several classes of DNR functions are characterized in terms of Kolmogorov
complexity. In particular, a set of natural numbers A can wtt-compute a DNR
function iff there is a nontrivial recursive lower bound on the Kolmogorov
complexity of the initial segments of A. Furthermore, A can Turing compute a
DNR function iff there is a nontrivial A-recursive lower bound on the
Kolmogorov complexity of the initial segments of A. A is PA-complete, that is,
A can compute a {0,1}-valued DNR function, iff A can compute a function F such
that F(n) is a string of length n and maximal C-complexity among the strings of
length n. A solves the halting problem iff A can compute a function F such that
F(n) is a string of length n and maximal H-complexity among the strings of
length n. Further characterizations for these classes are given. The existence
of a DNR function in a Turing degree is equivalent to the failure of the
Recursion Theorem for this degree; thus the provided results characterize, in
terms of Kolmogorov complexity, those Turing degrees that no longer permit the
use of the Recursion Theorem.Comment: Full version of paper presented at STACS 2006, Lecture Notes in
Computer Science 3884 (2006), 149--16
Game interpretation of Kolmogorov complexity
The Kolmogorov complexity function K can be relativized using any oracle A,
and most properties of K remain true for relativized versions. In section 1 we
provide an explanation for this observation by giving a game-theoretic
interpretation and showing that all "natural" properties are either true for
all sufficiently powerful oracles or false for all sufficiently powerful
oracles. This result is a simple consequence of Martin's determinacy theorem,
but its proof is instructive: it shows how one can prove statements about
Kolmogorov complexity by constructing a special game and a winning strategy in
this game. This technique is illustrated by several examples (total conditional
complexity, bijection complexity, randomness extraction, contrasting plain and
prefix complexities).Comment: 11 pages. Presented in 2009 at the conference on randomness in
Madison
Around Kolmogorov complexity: basic notions and results
Algorithmic information theory studies description complexity and randomness
and is now a well known field of theoretical computer science and mathematical
logic. There are several textbooks and monographs devoted to this theory where
one can find the detailed exposition of many difficult results as well as
historical references. However, it seems that a short survey of its basic
notions and main results relating these notions to each other, is missing.
This report attempts to fill this gap and covers the basic notions of
algorithmic information theory: Kolmogorov complexity (plain, conditional,
prefix), Solomonoff universal a priori probability, notions of randomness
(Martin-L\"of randomness, Mises--Church randomness), effective Hausdorff
dimension. We prove their basic properties (symmetry of information, connection
between a priori probability and prefix complexity, criterion of randomness in
terms of complexity, complexity characterization for effective dimension) and
show some applications (incompressibility method in computational complexity
theory, incompleteness theorems). It is based on the lecture notes of a course
at Uppsala University given by the author.
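The incompressibility method mentioned above rests on a simple counting fact: there are fewer binary descriptions of length below n than strings of length n, so for every n some n-bit string is incompressible. A minimal check of that count (`descriptions_shorter_than` is a name introduced here for illustration):

```python
def descriptions_shorter_than(n: int) -> int:
    """Number of binary programs (descriptions) of length < n:
    2^0 + 2^1 + ... + 2^(n-1), which equals 2^n - 1."""
    return sum(2 ** i for i in range(n))

# There are 2^n strings of length n but only 2^n - 1 shorter
# descriptions, so at least one string x of length n has C(x) >= n:
# an "incompressible" string.
for n in range(1, 20):
    assert descriptions_shorter_than(n) == 2 ** n - 1
    assert descriptions_shorter_than(n) < 2 ** n
print("counting argument verified for n < 20")
```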
Limit complexities revisited [once more]
The main goal of this article is to put some known results in a common
perspective and to simplify their proofs.
We start with a simple proof of a result of Vereshchagin saying that
lim sup_n C(x|n) equals C^{0'}(x), the plain complexity relativized to the
halting problem. Then we use the same argument to prove similar results for
prefix complexity and for a priori probability on the binary tree, to
prove Conidis' theorem about limits of effectively open sets, and also to
improve the results of Muchnik about limit frequencies. As a by-product, we get
a criterion of 2-randomness proved by Miller: a sequence X is 2-random if and
only if there exists c such that any prefix x of X is a prefix of some string
y such that C(y) > |y| - c. (In the 1960s this property was suggested by
Kolmogorov as one of the possible randomness definitions.) We also get
another 2-randomness criterion by Miller and Nies: X is 2-random if and only
if C(x) > |x| - c for some c and infinitely many prefixes x of X.
This is a modified version of our old paper that contained a weaker (and
cumbersome) version of Conidis' result, and the proof used low basis theorem
(in quite a strange way). The full version was formulated there as a
conjecture. This conjecture was later proved by Conidis. Bruno Bauwens
(personal communication) noted that the proof can be obtained also by a simple
modification of our original argument, and we reproduce Bauwens' argument with
his permission.Comment: See http://arxiv.org/abs/0802.2833 for the old paper
Kolmogorov's Structure Functions and Model Selection
In 1974 Kolmogorov proposed a non-probabilistic approach to statistics and
model selection. Let data be finite binary strings and models be finite sets of
binary strings. Consider model classes consisting of models of given maximal
(Kolmogorov) complexity. The ``structure function'' of the given data expresses
the relation between the complexity level constraint on a model class and the
least log-cardinality of a model in the class containing the data. We show that
the structure function determines all stochastic properties of the data: for
every constrained model class it determines the individual best-fitting model
in the class irrespective of whether the ``true'' model is in the model class
considered or not. In this setting, this happens {\em with certainty}, rather
than with high probability as is in the classical case. We precisely quantify
the goodness-of-fit of an individual model with respect to individual data. We
show that--within the obvious constraints--every graph is realized by the
structure function of some data. We determine the (un)computability properties
of the various functions contemplated and of the ``algorithmic minimal
sufficient statistic.''Comment: 25 pages LaTeX, 5 figures. In part in Proc 47th IEEE FOCS; this final
version (more explanations, cosmetic modifications) to appear in IEEE Trans
Inform Theory
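The tradeoff the structure function captures can be illustrated with a deliberately restricted toy model class: "cylinders", sets of all n-bit strings sharing a fixed prefix, where a prefix of length k costs about k bits to describe and leaves 2^(n-k) candidates. This is only a sketch under that restriction; the real structure function ranges over arbitrary finite sets with genuine Kolmogorov complexity, and the function name is hypothetical:

```python
def cylinder_structure_function(x: str, alpha: int) -> int:
    """Least log-cardinality of a 'cylinder' model containing x whose
    description cost (here: prefix length, a crude stand-in for model
    complexity) is at most alpha bits. The cylinder fixing the first
    k bits of x contains 2^(len(x)-k) strings, so its log-size is
    len(x) - k."""
    n = len(x)
    k = min(alpha, n)   # longest prefix the budget alpha can afford
    return n - k

x = "1011001110"
for alpha in (0, 3, 7, len(x)):
    print(alpha, cylinder_structure_function(x, alpha))
# Each extra bit of model complexity halves the model, until at
# alpha = len(x) the model shrinks to the singleton {x}.
```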