
    Depth as Randomness Deficiency

    Depth of an object concerns a tradeoff between computation time and the excess of program length over the shortest program length required to obtain the object. It gives an unconditional lower bound on the computation time from a given program in the absence of auxiliary information. Variants known as logical depth and computational depth are expressed in Kolmogorov complexity theory. We derive a quantitative relation between logical depth and computational depth and unify the different depth notions by relating them to A. Kolmogorov and L. Levin's fruitful notion of randomness deficiency. Subsequently, we revisit the computational depth of infinite strings, study the notion of super deep sequences, and relate it to other approaches.
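    This tradeoff can be made precise. A standard formalization of Bennett's logical depth, consistent with the description above (the exact machine model and constants vary between papers), is

    \[
    \mathrm{depth}_b(x) \;=\; \min\bigl\{\, t : U(p) = x \text{ within } t \text{ steps},\ |p| \le K(x) + b \,\bigr\},
    \]

    where U is a fixed universal prefix machine, K(x) is the prefix Kolmogorov complexity of x, and the significance parameter b bounds the allowed excess of program length over the shortest description.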

    Computing and evolving variants of computational depth

    The structure and organization of information in binary strings and (infinite) binary sequences are investigated using two computable measures of complexity related to computational depth.

    First, fundamental properties of recursive computational depth, a refinement of Bennett's original notion of computational depth, are developed, and it is shown that the recursively weakly (respectively, strongly) deep sequences form a proper subclass of the class of weakly (respectively, strongly) deep sequences. It is then shown that every weakly useful sequence is recursively strongly deep, strengthening a theorem of Juedes, Lathrop, and Lutz. It follows from these results that not every strongly deep sequence is weakly useful, thereby answering an open question posed by Juedes.

    Second, compression depth, a feasibly computable depth measurement, is developed based on the Lempel-Ziv compression algorithm. LZ compression depth is further formalized by introducing strongly (compression) deep sequences and showing that analogues of the main properties of computational depth hold for compression depth. Critical to these results, it is shown that a sequence that is not normal must be compressible by the Lempel-Ziv algorithm. This yields a new, simpler proof that the Champernowne sequence is normal.

    Compression depth is also used to measure the organization of genes in genetic algorithms. Using finite-state machines to control the actions of an automaton playing prisoner's dilemma, a genetic algorithm is used to evolve a population of finite-state machines (players) to play prisoner's dilemma against each other. Since the fitness function is based solely on how well a player performs against all other players in the population, any accumulation of compression depth (organization) in the genetic structure of a player can only be attributed to more fit players having a more highly organized genetic structure. It is shown experimentally that this is the case.
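    To illustrate the Lempel-Ziv measure underlying compression depth, the sketch below counts the distinct phrases in an LZ78-style parse of a binary string; the function name and the phrase-count statistic are illustrative assumptions, not code from the dissertation. A string with fewer phrases relative to its length is more LZ-compressible, which is the sense in which a sequence that is not normal must be compressible.

    import random

    def lz78_phrase_count(s: str) -> int:
        # Parse s left to right into phrases, each being the shortest
        # prefix of the remaining input not seen as a phrase before.
        seen = set()
        phrase = ""
        count = 0
        for bit in s:
            phrase += bit
            if phrase not in seen:            # a new phrase ends here
                seen.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)   # count a trailing partial phrase

    random.seed(0)
    periodic = "01" * 500                     # far from normal
    noisy = "".join(random.choice("01") for _ in range(1000))
    print(lz78_phrase_count(periodic))        # few phrases: highly compressible
    print(lz78_phrase_count(noisy))           # many phrases: incompressible-looking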

    Algorithmic Randomness

    We consider algorithmic randomness in the Cantor space C of the infinite binary sequences. An algorithmic randomness concept specifies a set of elements of C, each of which is assigned the property of being random. Miscellaneous notions from computability theory are used in the definitions of randomness concepts that are essentially rooted in the following three intuitive randomness requirements: the initial segments of a random sequence should be effectively incompressible; no random sequence should be an element of an effective measure null set containing sequences with an “exceptional property”; and finally, considering betting games in which the bits of a sequence are guessed successively, there should be no effective betting strategy that helps a player win an unbounded amount of capital on a random sequence. For various formalizations of these requirements one uses versions of Kolmogorov complexity, of tests, and of martingales, respectively. Whenever one of these notions is used in the definition of a randomness concept, one may ask for equivalent definitions in terms of the other two notions. This was a long-standing open question w.r.t. computable randomness, a central concept that had been introduced by Schnorr via martingales.

    In this thesis, we introduce bounded tests, which we use to give a characterization of computable randomness in terms of tests. Our result was obtained independently of the prior test characterization of computable randomness due to Downey, Griffiths, and LaForte, who defined graded tests for their result. Based on bounded tests, we define bounded machines, which give rise to a version of Kolmogorov complexity that we use to prove another characterization of computable randomness. This result, as in analogous situations, allows for the introduction of interesting lowness and triviality properties that are, roughly speaking, “anti-randomness” properties. We define and study the notions of lowness for bounded machines and bounded triviality. Using a theorem due to Nies, it can be shown that only the computable sequences are low for bounded machines. Further, we show some interesting properties of bounded machines, and we demonstrate that every boundedly trivial sequence is K-trivial. Furthermore, we define lowness for computable machines, a lowness notion in the setting of Schnorr randomness, and prove that a sequence is low for computable machines if and only if it is computably traceable.

    Gács and, independently, Kučera proved a central theorem which states that every sequence is effectively decodable from a suitable Martin-Löf random sequence. We present a somewhat easier proof of this theorem, in which we construct a sequence with the required property by diagonalizing against appropriate martingales. By a variant of that construction we prove that there exists a computably random sequence that is weak truth-table autoreducible. Further, we show that a sequence is computably enumerable self-reducible if and only if its associated real is computably enumerable. Finally, we investigate interrelations between the Lebesgue measure and effective measures on C. We prove the following extension of a result due to Book, Lutz, and Wagner: a union of Π⁰₁ classes that is closed under finite variations has Lebesgue measure zero if and only if it contains no Kurtz random real. However, we demonstrate that even a Σ⁰₂ class with Lebesgue measure zero need not be a Kurtz null class. Turning to Almost classes, we show among other things that every Almost class with respect to a bounded reducibility has computable packing dimension zero.
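    The betting-game requirement above can be made concrete with a toy martingale. In the sketch below (the strategy and names are illustrative assumptions; computable randomness quantifies over all computable strategies), a player stakes a fixed fraction of its capital on the guess that the next bit repeats the previous one; the amount wagered is returned doubled on a correct guess and lost otherwise, so the fairness condition d(w) = (d(w0) + d(w1))/2 holds.

    import random

    def run_martingale(bits: str, stake: float = 0.5) -> float:
        # Bet `stake` of the current capital that each bit equals its
        # predecessor; win the wagered amount if right, lose it if wrong.
        capital = 1.0
        prev = "0"
        for b in bits:
            bet = stake * capital
            capital = capital + bet if b == prev else capital - bet
            prev = b
        return capital

    random.seed(1)
    print(run_martingale("0" * 200))    # capital explodes on a regular sequence
    print(run_martingale("".join(random.choice("01") for _ in range(200))))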

    The complexity and distribution of computationally useful problems

    The solutions of certain natural decision problems such as the halting problem and the boolean satisfiability problem contain large amounts of useful information about computation that is highly organized and readily available to efficient computational processes. Such problems are computationally useful. This dissertation investigates the complexity and distribution of these computationally useful problems. The main results of this dissertation are of the following three general types: (1) useful problems contain highly organized information; (2) very useful problems are so highly organized that they are unusually simple and hence rare; (3) useful problems are, as a whole, not rare and thus are not necessarily simple.

    A result of type (1) is proven in Chapter 3. Bennett recently extended algorithmic information theory to include a notion of computational depth that appears to quantify the level of organization in binary strings and sequences. The main result of Chapter 3 states that every weakly useful sequence is strongly deep. (A sequence x is weakly useful if a non-negligible set of recursive problems are decidable within a fixed recursive time bound when given access to x.)

    Results of type (2) are presented in Chapters 4 and 5. These results say that the ≤^P_m-complete problems for E = DTIME(2^linear) and the ≤^{P/poly}_m-complete problems for ESPACE = DSPACE(2^linear) are unusually simple and hence rare. Complete problems are very useful because every problem in E or ESPACE is efficiently decidable when given access to one of these problems.

    Chapter 6 develops a result of type (3). This result says that the weakly ≤^P_m-complete problems for E and ESPACE are not rare and hence are not necessarily simple. Weakly complete problems are useful because every problem in a non-negligible subset of E or ESPACE is efficiently decidable when given access to one of these problems.

    The above results (and others along the way) are obtained through a systematic investigation of the measure-theoretic structure of complexity classes.
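    In standard notation, the exponential classes referred to above are

    \[
    \mathrm{E} = \mathrm{DTIME}(2^{\mathrm{linear}}) = \bigcup_{c \ge 1} \mathrm{DTIME}(2^{cn}),
    \qquad
    \mathrm{ESPACE} = \mathrm{DSPACE}(2^{\mathrm{linear}}) = \bigcup_{c \ge 1} \mathrm{DSPACE}(2^{cn}),
    \]

    where ≤^P_m denotes polynomial-time many-one reducibility and ≤^{P/poly}_m denotes (as an assumption about this dissertation's notation) many-one reducibility computable in polynomial time with polynomial-length advice.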

    Weakly Useful Sequences

    An infinite binary sequence x is defined to be (i) strongly useful if there is a computable time bound within which every decidable sequence is Turing reducible to x; and (ii) weakly useful if there is a computable time bound within which all the sequences in a non-measure 0 subset of the set of decidable sequences are Turing reducible to x. Juedes, Lathrop, and Lutz (1994) proved that every weakly useful sequence is strongly deep in the sense of Bennett (1988) and asked whether there are sequences that are weakly useful but not strongly useful. The present paper answers this question affirmatively. The proof is a direct construction that combines the martingale diagonalization technique of Lutz (1994) with a new technique, namely, the construction of a sequence that is “computably deep” with respect to an arbitrary, given uniform reducibility. The abundance of such computably deep sequences is also proven and used to show that every weakly useful sequence is computably deep with respect to every uniform reducibility.
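    In symbols, writing REC for the set of decidable sequences and μ(· | REC) for the measure on REC used in this line of work (a sketch consistent with the definitions above, not the paper's exact notation):

    \[
    \begin{aligned}
    x \text{ is strongly useful} &\iff (\exists\, \text{computable } t)\ (\forall y \in \mathrm{REC})\ \ y \le_{\mathrm{T}}^{t} x,\\
    x \text{ is weakly useful} &\iff (\exists\, \text{computable } t)\ \ \mu\bigl(\{\, y \in \mathrm{REC} : y \le_{\mathrm{T}}^{t} x \,\}\bigm|\ \mathrm{REC}\bigr) \ne 0,
    \end{aligned}
    \]

    where y ≤ᵗ_T x means that y is Turing reducible to x with the reduction's running time bounded by t.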
