
    Lower bounds on the redundancy in computations from random oracles via betting strategies with restricted wagers

    The Kučera–Gács theorem is a landmark result in algorithmic randomness asserting that every real is computable from a Martin-Löf random real. If the computation of the first n bits of a sequence requires n + h(n) bits of the random oracle, then h is the redundancy of the computation. Kučera implicitly achieved redundancy n log n, while Gács used a more elaborate coding procedure which achieves redundancy √n log n. A similar bound is implicit in the later proof by Merkle and Mihailović. In this paper we obtain optimal strict lower bounds on the redundancy in computations from Martin-Löf random oracles. We show that any nondecreasing computable function g such that ∑_n 2^(−g(n)) = ∞ is not a general upper bound on the redundancy in computations from Martin-Löf random oracles. In fact, there exists a real X such that the redundancy g of any computation of X from a Martin-Löf random oracle satisfies ∑_n 2^(−g(n)) < ∞. Moreover, the class of such reals is comeager and includes a Δ⁰₂ real as well as all weakly 2-generic reals. On the other hand, it has recently been shown that any real is computable from a Martin-Löf random oracle with redundancy g, provided that g is a computable nondecreasing function such that ∑_n 2^(−g(n)) < ∞. Hence our lower bound is optimal, and excludes many slow-growing functions such as log n from bounding the redundancy in computations from random oracles for a large class of reals. Our results are obtained as an application of a theory of effective betting strategies with restricted wagers which we develop
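
    The dichotomy between ∑_n 2^(−g(n)) = ∞ and ∑_n 2^(−g(n)) < ∞ can be checked numerically for concrete choices of g; a minimal sketch (function names here are illustrative, not from the paper):

```python
import math

# Partial sums of sum_n 2^(-g(n)) for two candidate redundancy functions.
def partial_sum(g, terms):
    return sum(2.0 ** (-g(n)) for n in range(1, terms + 1))

# g(n) = log2(n): 2^(-g(n)) = 1/n, the harmonic series, so the sum
# diverges and log n fails to be a general redundancy bound.
log_n = lambda n: math.log2(n)

# g(n) = 2*log2(n): 2^(-g(n)) = 1/n^2, which converges (to pi^2/6),
# so by the matching upper bound this redundancy is achievable.
two_log_n = lambda n: 2 * math.log2(n)

print(partial_sum(log_n, 10**5))      # keeps growing as terms increases
print(partial_sum(two_log_n, 10**5))  # close to pi^2/6 ~ 1.6449
```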

    Optimal redundancy in computations from random oracles

    It is a classic result in algorithmic information theory that every infinite binary sequence is computable from an infinite binary sequence which is random in the sense of Martin-Löf. Proved independently by Kučera [Kuč85] and Gács [Gác86], this result answered a question by Charles Bennett and has seen numerous applications in the last 30 years. The optimal redundancy in such a coding process has, however, remained unknown. If the computation of the first n bits of a sequence requires n + g(n) bits of the random oracle, then g is the redundancy of the computation. Kučera implicitly achieved redundancy n log n, while Gács used a more elaborate block-coding procedure which achieved redundancy √n log n. Merkle and Mihailović [MM04] provided a different presentation of Gács' approach, without improving his redundancy bound. In this paper we devise a new coding method that achieves optimal logarithmic redundancy. For any computable nondecreasing function g such that ∑_i 2^(−g(i)) is bounded, we show that there is a coding process that codes any given infinite binary sequence into a Martin-Löf random infinite binary sequence with redundancy g. This redundancy bound is exponentially smaller than the previous bound of √n log n and is known to be the best possible by recent work [BLPT16], where it was shown that if ∑_i 2^(−g(i)) diverges then there exists an infinite binary sequence X which cannot be computed from any Martin-Löf random infinite binary sequence with redundancy g. It follows that redundancy ε · log n in computation from a random oracle is possible for every infinite binary sequence if and only if ε > 1
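
    The ε > 1 threshold reflects the fact that ∑_i 2^(−ε·log2 i) is the p-series ∑_i i^(−ε), which converges exactly when ε > 1; a quick numerical illustration (names are ours, not the paper's):

```python
# The series sum_i 2^(-eps*log2(i)) equals the p-series sum_i i^(-eps),
# which converges if and only if eps > 1.
def p_series_partial(eps, terms):
    return sum(i ** (-eps) for i in range(1, terms + 1))

# eps = 0.9: partial sums grow without bound, so g(n) = 0.9*log n cannot
# bound the redundancy for every sequence.
print(p_series_partial(0.9, 10**4), p_series_partial(0.9, 10**6))

# eps = 1.1: partial sums stay bounded (below zeta(1.1) ~ 10.58), so
# redundancy 1.1*log n is achievable for every sequence.
print(p_series_partial(1.1, 10**4), p_series_partial(1.1, 10**6))
```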

    Restricted Coding and Betting

    One of the fundamental themes in the study of computability theory is oracle computation, i.e. the coding of one infinite binary sequence into another. A coding process where the prefixes of the coded sequence are coded such that the length difference of the coded and the coding prefix is bounded by a constant is known as cl-reducibility. This reducibility has received considerable attention over the last two decades due to its interesting degree structure and because it exhibits strong connections with algorithmic randomness. In the first part of this dissertation, we study a slightly relaxed version of cl-reducibility where the length difference is required to be bounded by some specific nondecreasing computable function h. We show that in this relaxed model some of the classical results about cl-reducibility still hold when the function h grows slowly, at certain particular rates. Examples are the Yu–Ding theorem, which states that there is a pair of left-c.e. sequences that cannot be coded simultaneously by any left-c.e. sequence, as well as the Barmpalias–Lewis theorem, which states that there is a left-c.e. sequence which cannot be coded by any random left-c.e. sequence. When the bounding function h grows too fast, neither result holds any longer. Betting strategies, which can be formulated equivalently in terms of martingales, are one of the main tools in the area of algorithmic randomness. A betting strategy is usually determined by two factors: the guessed outcome at every stage and the wager on it. In the second part of this dissertation we study betting strategies where one of these factors is restricted. First we study single-sided strategies, where the guessed outcome is either always 0 or always 1. For computable strategies we show that single-sided strategies and usual strategies have the same power for winning, whereas this no longer holds for strongly left-c.e. strategies, which are mixtures of computable strategies, even if we extend the class of single-sided strategies to the more general class of decidably-sided strategies. Finally, we study the case where the wagers are forced to have a certain granularity, i.e. must be multiples of some not necessarily constant betting unit. For usual strategies, wins can always be assumed to have the two following properties: (a) ‘win with arbitrarily small initial capital’ and (b) ‘win by saving’. In a setting of variable granularity, where the betting unit shrinks over stages, we study how the shrinking rates interact with these two properties. We show that if the granularity shrinks quickly, at certain particular rates, then both properties are preserved for such granular strategies. For slower rates of shrinking, we show that neither property is preserved completely; however, a weaker version of property (a) still holds. In order to investigate property (b) in this case, we consider more restricted strategies where, in addition, the wager is bounded from above
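
    As a toy illustration of the restricted-wager model (a hedged sketch under our own simplifying assumptions, not any of the dissertation's actual constructions), here is a single-sided strategy whose wagers must be multiples of a fixed betting unit:

```python
def run_single_sided(bits, unit=0.25, fraction=0.5, capital=1.0):
    """Single-sided strategy: always guess outcome 1.

    Each wager is a target fraction of the current capital, rounded down
    to a multiple of `unit` (granular wagers). Payoffs are at fair odds:
    capital increases by the wager on a correct guess, decreases by it
    on an incorrect one, and can never go negative."""
    for b in bits:
        wager = min(capital, (fraction * capital // unit) * unit)
        capital += wager if b == 1 else -wager
    return capital

# On an all-ones sequence the strategy's capital grows without bound;
# on an alternating sequence it merely oscillates near its start.
print(run_single_sided([1] * 20))
print(run_single_sided([1, 0] * 10))
```

All quantities here are exact binary fractions, so the floor division introduces no floating-point surprises; the `unit` parameter is the fixed-granularity special case of the shrinking betting units studied in the abstract.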

    Compression of data streams down to their information content

    According to the theory of Kolmogorov complexity, every finite binary string is compressible to a shortest code, its information content, from which it is effectively recoverable. We investigate the extent to which this holds for infinite binary sequences (streams). We devise a new coding method that uniformly codes every stream X into an algorithmically random stream Y, in such a way that the first n bits of X are recoverable from the first I(X ↾ n) bits of Y, where I is any partial computable information content measure that is defined on all prefixes of X, and where X ↾ n is the initial segment of X of length n. As a consequence, if g is any computable upper bound on the initial segment prefix-free complexity of X, then X is computable from an algorithmically random Y with oracle-use at most g. Alternatively (making no use of such a computable bound g), one can achieve oracle-use bounded above by K(X ↾ n) + log n. This provides a strong analogue of Shannon's source coding theorem for algorithmic information theory
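
    One concrete way to instantiate a computable information content measure, up to an additive constant, is the code length of any lossless compressor, since a self-delimiting compressed code upper-bounds prefix-free complexity up to a constant. A crude sketch using zlib as a stand-in (the measure I in the paper is abstract; this is only an upper-bound illustration):

```python
import zlib

def content_bits(prefix: bytes) -> int:
    """Bits in the zlib code for `prefix`: a computable upper bound,
    up to an additive constant, on its prefix-free complexity K."""
    return 8 * len(zlib.compress(prefix, 9))

# A highly regular 1000-byte prefix compresses far below its raw length,
# so its measured information content is much smaller than 8000 bits.
stream = b"01" * 500
print(8 * len(stream), content_bits(stream))
```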