
    Kolmogorov complexity in perspective

    We survey the diverse approaches to the notion of information content: from Shannon entropy to Kolmogorov complexity. The main applications of Kolmogorov complexity are presented, namely the mathematical notion of randomness (which goes back to the 1960s with the work of Martin-Löf, Schnorr, Chaitin, and Levin) and classification, a recent idea with a provocative implementation by Vitányi and Cilibrasi.
    Comment: 37 pages
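
    For orientation, the two notions of information content that the survey connects can be stated side by side. The notation below (H for Shannon entropy, C for plain Kolmogorov complexity, U for a fixed universal machine) is standard but chosen here; it is not quoted from the abstract.

        \begin{align*}
        H(X) &= -\sum_{a} \Pr[X = a]\,\log_2 \Pr[X = a] && \text{(Shannon entropy of a random variable $X$)}\\
        C(x) &= \min\{\,|p| : U(p) = x\,\} && \text{(Kolmogorov complexity: length of a shortest program printing $x$)}
        \end{align*}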

    Kolmogorov Complexity in perspective. Part I: Information Theory and Randomness

    We survey diverse approaches to the notion of information: from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part I is dedicated to information theory and the mathematical formalization of randomness based on Kolmogorov complexity. This last application goes back to the 1960s and 1970s with the work of Martin-Löf, Schnorr, Chaitin, and Levin, and has gained new impetus in recent years.
    Comment: 40 pages
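
    The core of this formalization is the Levin-Schnorr characterization, stated here only for orientation (K denotes prefix-free Kolmogorov complexity; the formula is a classical fact, not a quotation from the paper): an infinite binary sequence ω is Martin-Löf random if and only if its prefixes are incompressible up to an additive constant.

        \exists c\ \forall n \quad K(\omega_1 \ldots \omega_n) \;\ge\; n - c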

    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information: from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov's algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with provocative implementations by authors such as Bennett, Vitányi, and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation for randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It rests on another original and attractive idea which is conceptually connected to classification using compression and to Kolmogorov complexity. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, whose fundamental principles and underlying duality we point out. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification, and information systems.
    Comment: 43 pages
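
    As an illustration of classification by compression (not code from the paper), the normalized compression distance of Cilibrasi and Vitányi can be approximated with any off-the-shelf compressor; the sketch below uses Python's zlib as a stand-in for an ideal compressor.

        import zlib

        def c(data: bytes) -> int:
            """Compressed length of data; zlib stands in for an ideal compressor."""
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Crude clustering cue: smaller NCD suggests the two objects share more structure.
        print(ncd(b"abab" * 100, b"abab" * 100))          # small: identical strings
        print(ncd(b"abab" * 100, bytes(range(256)) * 2))  # closer to 1: little shared structure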

    Numerical search for a fundamental theory

    We propose a numerical test of fundamental physics based on the complexity measure of a general set of functions, which is directly related to the Kolmogorov (or algorithmic) complexity studied in mathematics and computer science. The analysis can be carried out for any scientific experiment and might lead to a better understanding of the underlying theory. From a cosmological perspective, the anthropic description of fundamental constants can be explicitly tested by our procedure. We perform a simple numerical search by analyzing two fundamental constants: the weak coupling constant and the Weinberg angle, and find that their values are rather atypical.
    Comment: 6 pages, 3 figures, RevTeX, expansion and clarification, references added
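
    Schematically, and in notation of my own rather than the authors' (their measure is defined over a general set of functions and is not restated here), a measured value v would be flagged as atypical at precision n when its n-bit truncation v_n admits a much shorter algorithmic description than a generic n-bit string:

        C(v_n) \ll n, \qquad \text{whereas a generic $n$-bit string $s$ has } C(s) \approx n.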

    Set theoretical Representations of Integers, I

    We reconsider some classical natural semantics of integers (namely iterators of functions, cardinals of sets, and indexes of equivalence relations) in the perspective of Kolmogorov complexity. To each such semantics one can attach a simple representation of integers that we suitably effectivize in order to develop an associated Kolmogorov theory. Such effectivizations are particular instances of a general notion of "self-enumerated system" that we introduce in this paper. Our main result asserts that, with such effectivizations, Kolmogorov theory allows us to quantitatively distinguish the underlying semantics. We characterize the families obtained by such effectivizations and prove that the associated Kolmogorov complexities constitute a hierarchy which coincides with that of Kolmogorov complexities defined via jump oracles and/or infinite computations. This contrasts with the well-known fact that the usual Kolmogorov complexity does not depend (up to a constant) on the chosen arithmetic representation of integers, be it unary, binary, or any other base. Also, from a conceptual point of view, our result can be seen as a means to measure the degree of abstraction of these diverse semantics.
    Comment: 56 pages
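
    To make the "iterators of functions" semantics concrete (an illustration of the classical idea only, not the paper's effectivization), an integer n can be represented by the operation of iterating a function n times, in the style of Church numerals:

        from typing import Callable, TypeVar

        A = TypeVar("A")

        def as_iterator(n: int) -> Callable[[Callable[[A], A]], Callable[[A], A]]:
            """Represent n by the higher-order function f -> f^n (Church-numeral style)."""
            def iterate(f: Callable[[A], A]) -> Callable[[A], A]:
                def f_n(x: A) -> A:
                    for _ in range(n):
                        x = f(x)
                    return x
                return f_n
            return iterate

        # Recover the integer by iterating the successor function on 0.
        three = as_iterator(3)
        print(three(lambda x: x + 1)(0))  # prints 3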

    Quantum Kolmogorov complexity and quantum correlations in deterministic-control quantum Turing machines

    This work presents a study of Kolmogorov complexity for general quantum states from the perspective of deterministic-control quantum Turing machines (dcq-TM). We extend the dcq-TM model to incorporate mixed state inputs and outputs, and define dcq-computable states as those that can be approximated by a dcq-TM. Moreover, we introduce (conditional) Kolmogorov complexity of quantum states and use it to study three particular aspects of the algorithmic information contained in a quantum state: a comparison of the information in a quantum state with that of its classical representation as an array of real numbers, an exploration of the limits of quantum state copying in the context of algorithmic complexity, and a study of the complexity of correlations in quantum systems, resulting in a correlation-aware definition for algorithmic mutual information that satisfies the symmetry of information property.
    Comment: 31 pages
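
    For reference, the classical (non-quantum) counterparts of the last two notions are standard facts of algorithmic information theory, stated here in plain-complexity notation of my own rather than in the paper's quantum definitions: algorithmic mutual information and the Kolmogorov-Levin symmetry of information theorem.

        I(x:y) := C(x) + C(y) - C(x,y), \qquad
        C(x,y) = C(x) + C(y \mid x) + O(\log C(x,y))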

    Limit complexities revisited

    The main goal of this paper is to put some known results in a common perspective and to simplify their proofs. We start with a simple proof of a result from (Vereshchagin, 2002) saying that $\limsup_n \KS(x|n)$ (here $\KS(x|n)$ is the conditional (plain) Kolmogorov complexity of $x$ when $n$ is known) equals $\KS^{\mathbf{0'}}(x)$, the plain Kolmogorov complexity with a $\mathbf{0'}$-oracle. Then we use the same argument to prove similar results for prefix complexity (and also improve results of (Muchnik, 1987) about limit frequencies), for a priori probability on the binary tree, and for the measure of effectively open sets. As a by-product, we get a criterion of $\mathbf{0'}$ Martin-Löf randomness (also called 2-randomness) proved in (Miller, 2004): a sequence $\omega$ is 2-random if and only if there exists $c$ such that any prefix $x$ of $\omega$ is a prefix of some string $y$ such that $\KS(y) \ge |y| - c$. (In the 1960s this property was suggested in (Kolmogorov, 1968) as one of the possible randomness definitions; its equivalence to 2-randomness was shown in (Miller, 2004) while proving another 2-randomness criterion, see also (Nies et al., 2005): $\omega$ is 2-random if and only if $\KS(x) \ge |x| - c$ for some $c$ and infinitely many prefixes $x$ of $\omega$.) Finally, we show that the low-basis theorem can be used to get alternative proofs for these results and to improve the result about effectively open sets; this stronger version implies the 2-randomness criterion mentioned in the previous sentence.

    Kolmogorov Complexity of Categories

    Kolmogorov complexity theory measures the algorithmic information content of a string, defined as the length of the shortest program that describes the string. We present a programming language that can be used to describe categories, functors, and natural transformations. With this in hand, we define the informational content of these categorical structures as the length of the shortest program that describes such structures. Some basic consequences of our definition are presented, including the fact that equivalent categories have equal Kolmogorov complexity. We also prove various theorems about what can and cannot be described by our programming language.
    Comment: 16 pages
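
    In symbols (notation of my own, used only to restate the abstract's definition and its invariance claim), writing L for the paper's description language and ≃ for equivalence of categories:

        K_{\mathcal{L}}(\mathcal{C}) = \min\{\,|p| : p \text{ is an $\mathcal{L}$-program describing } \mathcal{C}\,\},
        \qquad
        \mathcal{C} \simeq \mathcal{D} \;\Longrightarrow\; K_{\mathcal{L}}(\mathcal{C}) = K_{\mathcal{L}}(\mathcal{D}).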

    The Equivalence of Sampling and Searching

    In a sampling problem, we are given an input x, and asked to sample approximately from a probability distribution D_x. In a search problem, we are given an input x, and asked to find a member of a nonempty set A_x with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity and algorithmic information theory to show that sampling and search problems are essentially equivalent. More precisely, for any sampling problem S, there exists a search problem R_S such that, if C is any "reasonable" complexity class, then R_S is in the search version of C if and only if S is in the sampling version. As one application, we show that SampP=SampBQP if and only if FBPP=FBQP: in other words, classical computers can efficiently sample the output distribution of every quantum circuit, if and only if they can efficiently solve every search problem that quantum computers can solve. A second application is that, assuming a plausible conjecture, there exists a search problem R that can be solved using a simple linear-optics experiment, but that cannot be solved efficiently by a classical computer unless the polynomial hierarchy collapses. That application will be described in a forthcoming paper with Alex Arkhipov on the computational complexity of linear optics.
    Comment: 16 pages
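
    As a schematic rendering of the two problem types defined in the first sentences (an illustration only; the paper's construction of R_S from S is not reproduced here), the interfaces and a toy instance might look like this:

        import random
        from typing import Protocol

        class SamplingProblem(Protocol):
            def sample(self, x: str, rng: random.Random) -> str:
                """Given input x, return an (approximate) sample from the distribution D_x."""
                ...

        class SearchProblem(Protocol):
            def search(self, x: str, rng: random.Random) -> str:
                """Given input x, return, with high probability, some member of the nonempty set A_x."""
                ...

        # Toy instance: D_x is uniform over rearrangements of x; A_x is the singleton containing sorted(x).
        class ShuffleSampler:
            def sample(self, x: str, rng: random.Random) -> str:
                chars = list(x)
                rng.shuffle(chars)
                return "".join(chars)

        class SortSearcher:
            def search(self, x: str, rng: random.Random) -> str:
                return "".join(sorted(x))

        rng = random.Random(0)
        print(ShuffleSampler().sample("sampling", rng))   # a random permutation of 'sampling'
        print(SortSearcher().search("searching", rng))    # 'aceghinrs'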