
    Notes on sum-tests and independence tests

    We study statistical sum-tests and independence tests, in particular for computably enumerable semimeasures on a discrete domain. Among other things, we prove that for universal semimeasures every Σ⁰₁-sum-test is bounded, but unbounded Π⁰₁-sum-tests exist, and we study to what extent the latter can be universal. For universal semimeasures, in the unary case of sum-tests we leave open whether universal Π⁰₁-sum-tests exist, whereas in the binary case of independence tests we prove that they do not exist.
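    (For orientation, recalling the standard definition rather than quoting the paper, so the paper's exact conventions may differ: a sum-test for a semimeasure P on a discrete domain is a function f satisfying Σ_x P(x)·2^{f(x)} ≤ 1, and the labels Σ⁰₁ and Π⁰₁ indicate that f is lower, respectively upper, semicomputable.)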

    Random semicomputable reals revisited

    The aim of this expository paper is to present a nice series of results, obtained in the papers of Chaitin (1976), Solovay (1975), Calude et al. (1998), and Kucera and Slaman (2001). This joint effort led to a full characterization of lower semicomputable random reals, both as those that can be expressed as a "Chaitin Omega" and as those that are maximal with respect to Solovay reducibility. The original proofs were somewhat involved; in this paper, we present these results in an elementary way, requiring only basic knowledge of algorithmic randomness. We also add several simple observations relating lower semicomputable random reals and busy beaver functions.
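    (For reference, two standard notions used above, stated from the general literature rather than from the paper itself: Chaitin's Omega is the halting probability Ω_U = Σ_{p : U(p) halts} 2^{-|p|} of a universal prefix-free machine U, and a real α is Solovay reducible to β if, roughly, every good rational approximation to β from below can be effectively converted into a comparably good approximation to α from below.)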

    Algorithmic Complexity Bounds on Future Prediction Errors

    We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume we are at a time t > 1 and have already observed x = x_1...x_t. We bound the future prediction performance on x_{t+1} x_{t+2} ... by a new variant of the algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
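    (The total-deviation bound referred to above is usually stated, for binary alphabets, as Σ_{t≥1} E[(M(1|x_{<t}) − μ(1|x_{<t}))²] ≤ (ln 2 / 2)·K(μ), where K(μ) is the prefix complexity of an index for μ; this is the textbook form and may differ from the paper's conventions in constants or notation.)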

    Sequential Predictions based on Algorithmic Complexity

    This paper studies sequence prediction based on the monotone Kolmogorov complexity Km = -log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's universal prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the important quantities for prediction. We show that for deterministic computable environments, the "posterior" and losses of m converge, but rapid convergence could only be shown on-sequence; the off-sequence convergence can be slow. In probabilistic environments, neither the posterior nor the losses converge, in general.
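    (As a reminder, using the standard definitions rather than the paper's exact wording: Km(x) is the length of a shortest program that makes a universal monotone machine output a string starting with x, m(x) = 2^{-Km(x)}, and prediction uses the conditional m(a|x) = m(xa)/m(x), in analogy with M(a|x) = M(xa)/M(x).)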

    On Martin-Löf convergence of Solomonoff’s mixture

    We study the convergence of Solomonoff’s universal mixture on individual Martin-Löf random sequences. A new result is presented extending the work of Hutter and Muchnik (2004) by showing that there does not exist a universal mixture that converges on all Martin-Löf random sequences.
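    (Here "converges on a sequence" is meant in the usual pointwise sense, namely that the conditional predictions of the mixture approach the true conditional probabilities along that individual sequence; this gloss is supplied for readability and is not a quotation from the paper.)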

    Asymptotics of Discrete MDL for Online Prediction

    Minimum Description Length (MDL) is an important principle for induction and prediction, with strong relations to optimal Bayesian learning. This paper deals with learning non-i.i.d. processes by means of two-part MDL, where the underlying model class is countable. We consider the online learning framework, i.e. observations come in one by one, and the predictor is allowed to update its state of mind after each time step. We identify two ways of predicting by MDL for this setup, namely a static and a dynamic one. (A third variant, hybrid MDL, will turn out to be inferior.) We prove that under the sole assumption that the data is generated by a distribution contained in the model class, the MDL predictions converge to the true values almost surely. This is accomplished by proving finite bounds on the quadratic, the Hellinger, and the Kullback-Leibler loss of the MDL learner, which are however exponentially worse than for Bayesian prediction. We demonstrate that these bounds are sharp, even for model classes containing only Bernoulli distributions. We show how these bounds imply regret bounds for arbitrary loss functions. Our results apply to a wide range of setups, among them sequence prediction, pattern classification, regression, and universal induction in the sense of Algorithmic Information Theory.
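    To make the static two-part MDL predictor concrete, here is a minimal sketch under illustrative assumptions (a finite truncation of a countable Bernoulli model class, an arbitrary prefix code for the models, and a toy data source); it illustrates the idea of "select the model with the shortest two-part code, then predict with it" and is not a reproduction of the paper's setup.

    import math
    import random

    # Illustrative (hypothetical) model class: Bernoulli(theta_k), theta_k = k/10,
    # k = 1..9 -- a finite truncation of a countable class, chosen for the sketch.
    thetas = [k / 10 for k in range(1, 10)]

    def model_codelength(k):
        # Any prefix code over the countable class works; this particular
        # choice (about 2*log2(k+2) bits for the k-th model) is an assumption.
        return 2.0 * math.log2(k + 2)

    def data_codelength(theta, ones, zeros):
        # -log2 likelihood of a binary sequence with the given counts
        # under Bernoulli(theta).
        return -(ones * math.log2(theta) + zeros * math.log2(1.0 - theta))

    def static_mdl_predict(ones, zeros):
        # Static two-part MDL: pick the single model minimizing
        # L(model) + L(data | model), then predict the next bit with it.
        best = min(range(len(thetas)),
                   key=lambda k: model_codelength(k)
                                 + data_codelength(thetas[k], ones, zeros))
        return thetas[best]  # predicted probability that the next bit is 1

    # Toy run: data generated by Bernoulli(0.7), which lies in the model class.
    random.seed(0)
    ones = zeros = 0
    loss = 0.0
    for t in range(200):
        p = static_mdl_predict(ones, zeros)
        bit = 1 if random.random() < 0.7 else 0
        loss += (bit - p) ** 2  # instantaneous quadratic loss
        ones, zeros = ones + bit, zeros + (1 - bit)
    print("cumulative quadratic loss over 200 steps:", round(loss, 2))
    print("final estimate of P(next bit = 1):", static_mdl_predict(ones, zeros))

    The dynamic variant mentioned in the abstract would, roughly, re-run the model selection for each candidate continuation of the data rather than only on the observed prefix; that detail is omitted from this sketch.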