    Bayesian definition of random sequences with respect to conditional probabilities

    We study Martin-Löf random (ML-random) points with respect to computable probability measures on sample and parameter spaces (Bayes models). We consider four variants of conditional random sequences with respect to the conditional distributions: two are defined by ML-randomness on Bayes models, and the other two are defined by blind tests for conditional distributions. We propose a weak criterion for conditional ML-randomness and show that only the variants defined by ML-randomness on Bayes models satisfy it. We show that the four variants of conditional randomness coincide when the conditional probability measure is computable and the posterior distribution converges weakly to almost all parameters. We then compare ML-randomness on Bayes models with randomness for uniformly computable parametric models. It is known that two computable probability measures are orthogonal if and only if their sets of ML-random sequences are disjoint; we extend this result to uniformly computable parametric models. Finally, we present an algorithmic solution to a classical problem in Bayesian statistics: the posterior distributions converge weakly to almost all parameters if and only if they converge weakly to all ML-random parameters.
    Comment: revised version
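
    As loose intuition for the final equivalence (my illustration, not the paper's construction), here is a minimal Python sketch of a Beta-Bernoulli Bayes model: for a typical sample from Bernoulli(theta), the kind of sequence that is intuitively ML-random, the posterior concentrates on theta, i.e. converges weakly to the point mass at the generating parameter. All names and parameters are hypothetical.

    ```python
    import random

    def posterior_mean_and_var(bits, a=1.0, b=1.0):
        """Posterior mean/variance of a Beta(a, b)-Bernoulli model after observing bits."""
        k, n = sum(bits), len(bits)
        a_post, b_post = a + k, b + (n - k)
        mean = a_post / (a_post + b_post)
        var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
        return mean, var

    # A typical (intuitively ML-random) sample from Bernoulli(theta):
    theta = 0.3
    random.seed(0)
    bits = [1 if random.random() < theta else 0 for _ in range(100_000)]

    for n in (100, 1_000, 100_000):
        mean, var = posterior_mean_and_var(bits[:n])
        print(f"n={n:>7}: posterior mean={mean:.4f}, sd={var ** 0.5:.5f}")
    # The posterior mean approaches theta=0.3 and the spread shrinks,
    # i.e. the posterior converges weakly to the point mass at theta.
    ```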

    Universal Coding and Prediction on Martin-Löf Random Points

    We effectivize classical results on universal coding and prediction for stationary ergodic processes over an arbitrary finite alphabet. That is, we lift the well-known almost-sure statements to statements about Martin-Löf random sequences. Most of this work is quite mechanical, but along the way we complete a 2008 result of Ryabko by showing that every universal probability measure in the sense of universal coding induces a universal predictor in the prequential sense. Surprisingly, the effectivization of this implication holds only provided the universal measure does not assign too low conditional probabilities to individual symbols. As an example, we show that the Prediction by Partial Matching (PPM) measure satisfies this requirement. In the almost-sure setting, the requirement is superfluous.
    Comment: 12 pages
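
    Full PPM mixes context orders via escape probabilities; the simplified Python sketch below (my illustration, not the paper's definition) keeps only the ingredient cited above: with Laplace smoothing, every conditional probability after n observations of a context is at least 1/(n + |alphabet|), so, on my reading, probabilities of individual symbols decay at most polynomially rather than becoming too low.

    ```python
    from collections import defaultdict

    class LaplaceContextPredictor:
        """A PPM-flavoured sketch: order-k context counts with Laplace smoothing.

        Real PPM blends context orders via escape probabilities; this stripped-down
        version only exhibits the lower bound on conditional probabilities:
        prob() always returns at least 1 / (context_count + alphabet_size).
        """

        def __init__(self, alphabet, order=3):
            self.alphabet = list(alphabet)
            self.order = order
            self.counts = defaultdict(lambda: defaultdict(int))

        def prob(self, history, symbol):
            ctx = tuple(history[-self.order:])
            c = self.counts[ctx]
            total = sum(c.values())
            return (c[symbol] + 1) / (total + len(self.alphabet))

        def update(self, history, symbol):
            ctx = tuple(history[-self.order:])
            self.counts[ctx][symbol] += 1

    # usage: sequential prediction of a binary sequence
    pred = LaplaceContextPredictor("01", order=2)
    hist, seq = [], "0101010101"
    for s in seq:
        p = pred.prob(hist, s)
        pred.update(hist, s)
        hist.append(s)
    print(f"P(next='{seq[-1]}' | context) grew to {p:.3f}")
    ```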

    Limits of permutation sequences

    A permutation sequence is said to be convergent if the density of occurrences of every fixed permutation in the elements of the sequence converges. We prove that such a convergent sequence has a natural limit object, namely a Lebesgue measurable function $Z:[0,1]^2 \to [0,1]$ with the additional properties that, for every fixed $x \in [0,1]$, the restriction $Z(x,\cdot)$ is a cumulative distribution function and, for every $y \in [0,1]$, the restriction $Z(\cdot,y)$ satisfies a "mass" condition. This limit object is well-behaved: every function in the class of limit objects is the limit of some permutation sequence, and two such functions are limits of the same sequence if and only if they are equal almost everywhere. An ingredient in the proofs is a new model of random permutations, which generalizes previous models and may be of interest in its own right.
    Comment: accepted for publication in the Journal of Combinatorial Theory, Series B. arXiv admin note: text overlap with arXiv:1106.166
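
    To make "density of occurrences" concrete, here is a short Python sketch (an illustration under my reading of the standard definition, not code from the paper): it counts the k-element position subsets of a permutation whose relative order matches a fixed pattern and normalizes by the number of such subsets.

    ```python
    from itertools import combinations
    from math import comb

    def order_type(vals):
        """Relative order of distinct numbers, e.g. (3, 1, 2) -> (2, 0, 1)."""
        ranks = sorted(range(len(vals)), key=lambda i: vals[i])
        out = [0] * len(vals)
        for r, i in enumerate(ranks):
            out[i] = r
        return tuple(out)

    def pattern_density(perm, pattern):
        """Fraction of k-element position subsets of perm realizing `pattern`.

        Brute force over all C(n, k) subsets -- for illustration only.
        """
        perm, k = list(perm), len(pattern)
        target = order_type(pattern)
        hits = sum(1 for idx in combinations(range(len(perm)), k)
                   if order_type([perm[i] for i in idx]) == target)
        return hits / comb(len(perm), k)

    # The identity permutation contains only increasing pairs:
    print(pattern_density(range(10), (0, 1)))   # 1.0
    print(pattern_density(range(10), (1, 0)))   # 0.0 (no inversions)
    ```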