
    Restricted Coding and Betting

    One of the fundamental themes in the study of computability theory is oracle computation, i.e. the coding of one infinite binary sequence into another. A coding process where the prefixes of the coded sequence are coded such that the length difference between the coded and the coding prefix is bounded by a constant is known as cl-reducibility. This reducibility has received considerable attention over the last two decades due to its interesting degree structure and its strong connections with algorithmic randomness.

    In the first part of this dissertation, we study a slightly relaxed version of cl-reducibility where the length difference is required to be bounded by some specific nondecreasing computable function h. We show that in this relaxed model some of the classical results about cl-reducibility still hold if the function h grows slowly, at certain particular rates. Examples are the Yu-Ding theorem, which states that there is a pair of left-c.e. sequences that cannot be coded simultaneously by any left-c.e. sequence, and the Barmpalias-Lewis theorem, which states that there is a left-c.e. sequence that cannot be coded by any random left-c.e. sequence. If the bounding function h grows too fast, both results no longer hold.

    Betting strategies, which can be formulated equivalently in terms of martingales, are one of the main tools in the area of algorithmic randomness. A betting strategy is determined by two factors: the outcome guessed at every stage and the wager placed on it. In the second part of this dissertation we study betting strategies where one of these factors is restricted. First we study single-sided strategies, where the guessed outcome is either always 0 or always 1. For computable strategies we show that single-sided strategies and usual strategies have the same winning power, whereas this equivalence fails for strongly left-c.e. strategies, which are mixtures of computable strategies, even if we extend the class of single-sided strategies to the more general class of decidably-sided strategies.

    Finally, we study the case where the wagers are forced to have a certain granularity, i.e. must be multiples of some not necessarily constant betting unit. For usual strategies, winning can always be assumed to have the following two properties: (a) 'win with arbitrarily small initial capital' and (b) 'win by saving'. In a setting of variable granularity, where the betting unit shrinks over the stages, we study how the shrinking rates interact with these two properties. We show that if the granularity shrinks fast, at certain particular rates, both properties are preserved for such granular strategies. For slower rates of shrinking, we show that neither property is preserved completely; however, a weaker version of property (a) still holds. In order to investigate property (b) in this case, we consider more restricted strategies where, in addition, the wager is bounded from above.
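
    As an illustration of the kind of restricted strategies studied here, the following is a minimal sketch, not taken from the dissertation, of a single-sided strategy with granular wagers: it always guesses that the next bit is 1, and every wager must be a multiple of a fixed betting unit. The function name and parameters are hypothetical.

        # Minimal sketch: a single-sided strategy (always guesses 1) whose
        # wagers must be multiples of a fixed betting unit.
        def run_single_sided(bits, capital=1.0, unit=0.25):
            for bit in bits:
                # wager half the capital, rounded down to a multiple of the unit
                wager = unit * int((capital / 2) / unit)
                # fair payoff: gain the wager if the guess (always 1) is right,
                # lose it otherwise
                capital += wager if bit == 1 else -wager
            return capital

        print(run_single_sided([1, 1, 0, 1, 1, 1, 0, 1]))  # 3.0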

    Asymmetry of the Kolmogorov complexity of online predicting odd and even bits

    Symmetry of information states that C(x) + C(y|x) = C(x,y) + O(\log C(x)). We show that a similar relation for online Kolmogorov complexity does not hold. Let the even (online Kolmogorov) complexity of an n-bit string x_1x_2\ldots x_n be the length of a shortest program that computes x_2 on input x_1, computes x_4 on input x_1x_2x_3, and so on; odd complexity is defined similarly. We show that for all n there exists an n-bit x such that both the odd and the even complexity are almost as large as the Kolmogorov complexity of the whole string. Moreover, swapping odd and even bits to obtain the sequence x_2x_1x_4x_3\ldots decreases the sum of the odd and even complexity to C(x). Comment: 20 pages, 7 figures.
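
    In the abstract's notation, the contrast can be stated as follows; the O(\log n) error terms are an indicative reading of 'almost as large', stated here as an assumption, not the paper's exact bounds.

        % For every n there is an n-bit x whose online complexities satisfy
        \[
          C_{\mathrm{even}}(x) + C_{\mathrm{odd}}(x) \;\geq\; 2\,C(x) - O(\log n),
        \]
        % while for x' = x_2x_1x_4x_3\ldots, obtained by swapping adjacent bits,
        \[
          C_{\mathrm{even}}(x') + C_{\mathrm{odd}}(x') \;\leq\; C(x) + O(\log n).
        \]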

    Algorithmic Randomness and Complexity


    Randomized and Exchangeable Improvements of Markov's, Chebyshev's and Chernoff's Inequalities

    We present simple randomized and exchangeable improvements of Markov's inequality, as well as of Chebyshev's inequality and Chernoff bounds. Our variants are never worse and typically strictly more powerful than the original inequalities. The proofs are short and elementary, and can easily yield similarly randomized or exchangeable versions of a host of other inequalities that employ Markov's inequality as an intermediate step. We point out some simple statistical applications involving tests that combine dependent e-values. In particular, we uniformly improve the power of universal inference, and obtain tighter betting-based nonparametric confidence intervals. Simulations reveal nontrivial gains in power (and no losses) in a variety of settings.
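
    For intuition, here is a small simulation sketch of the uniformly randomized Markov inequality, assuming the randomized form P(X >= aU) <= E[X]/a for nonnegative X and an independent U ~ Uniform(0,1); the Exp(1) data are chosen purely for illustration.

        import numpy as np

        # Compare the standard Markov event {X >= a} with the randomized
        # event {X >= a*U}: both have probability at most E[X]/a, but the
        # randomized event fires more often, i.e. the test is more powerful.
        rng = np.random.default_rng(0)
        n, a = 1_000_000, 2.0
        x = rng.exponential(scale=1.0, size=n)  # nonnegative X with E[X] = 1
        u = rng.uniform(size=n)                 # independent U ~ Uniform(0, 1)

        print("Markov bound E[X]/a:", 1.0 / a)              # 0.5
        print("P(X >= a)          :", (x >= a).mean())      # ~0.135
        print("P(X >= a*U)        :", (x >= a * u).mean())  # ~0.432, <= 0.5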

    Harnessing The Collective Wisdom: Fusion Learning Using Decision Sequences From Diverse Sources

    Learning from the collective wisdom of crowds enhances the transparency of scientific findings by incorporating diverse perspectives into the decision-making process. Synthesizing such collective wisdom is related to the statistical notion of fusion learning from multiple data sources or studies. However, fusing inferences from diverse sources is challenging, since cross-source heterogeneity and potential data sharing complicate statistical inference. Moreover, studies may rely on disparate designs, employ widely different modeling techniques for their inferences, and prevailing data privacy norms may forbid sharing even summary statistics across the studies for an overall analysis. In this paper, we propose an Integrative Ranking and Thresholding (IRT) framework for fusion learning in multiple testing. IRT operates under the setting where each study makes available a triplet: the vector of binary accept-reject decisions on the tested hypotheses, the study-specific False Discovery Rate (FDR) level, and the hypotheses tested by the study. Under this setting, IRT constructs an aggregated, nonparametric, and discriminatory measure of evidence against each null hypothesis, which facilitates ranking the hypotheses in the order of their likelihood of being rejected. We show that IRT guarantees overall FDR control under arbitrary dependence between the evidence measures, as long as the studies control their respective FDR at the desired levels. Furthermore, IRT synthesizes inferences from diverse studies irrespective of the underlying multiple testing algorithms they employ. While the proofs of our theoretical statements are elementary, IRT is extremely flexible, and a comprehensive numerical study demonstrates that it is a powerful framework for pooling inferences. Comment: 29 pages and 10 figures. Under review at a journal.
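
    To make the input format concrete, the toy sketch below mimics the per-study triplets of decisions, FDR level, and tested hypotheses; the FDR-weighted vote used for ranking is a placeholder of our own, not the paper's actual IRT evidence measure.

        import numpy as np

        # Each study reports a triplet: hypotheses tested, binary accept/reject
        # decisions, and its FDR level. The aggregation below is a placeholder
        # vote, NOT the IRT statistic from the paper.
        studies = [
            {"tested": [0, 1, 2, 3], "reject": [1, 0, 1, 0], "fdr": 0.05},
            {"tested": [1, 2, 4],    "reject": [1, 1, 0],    "fdr": 0.10},
            {"tested": [0, 2, 4],    "reject": [0, 1, 1],    "fdr": 0.05},
        ]

        score = np.zeros(5)  # one evidence score per hypothesis
        for s in studies:
            for h, d in zip(s["tested"], s["reject"]):
                score[h] += d * (1.0 - s["fdr"])  # placeholder weighting

        ranking = np.argsort(-score)  # strongest evidence against the null first
        print("scores :", score)
        print("ranking:", ranking)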