Bayesian definition of random sequences with respect to conditional probabilities
We study Martin-L\"{o}f random (ML-random) points with respect to computable
probability measures on sample and parameter spaces (Bayes models). We consider
four variants of conditional random sequences with respect to the conditional
distributions: two are defined by ML-randomness on Bayes models, and the other
two by blind tests for the conditional distributions. We consider a
weak criterion for conditional ML-randomness and show that only variants of
ML-randomness on Bayes models satisfy the criterion. We show that these four
variants of conditional randomness are identical when the conditional
probability measure is computable and the posterior distribution converges
weakly to almost all parameters. We compare ML-randomness on Bayes models with
randomness for uniformly computable parametric models. It is known that two
computable probability measures are orthogonal if and only if their ML-random
sets are disjoint. We extend this result to uniformly computable parametric
models. Finally, we present an algorithmic solution to a classical problem in
Bayesian statistics: the posterior distributions converge weakly to almost
all parameters if and only if the posterior distributions converge weakly to
all ML-random parameters.
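The two equivalences above can be sketched in standard notation (the symbols $\mathrm{ML}(\cdot)$, $x^n$, and $\delta_\theta$ are assumptions of this sketch, not notation fixed by the abstract):

```latex
% Orthogonality vs. disjointness of ML-random sets, for computable
% probability measures P and Q with ML-random sets ML(P), ML(Q):
\[
  P \perp Q \iff \mathrm{ML}(P) \cap \mathrm{ML}(Q) = \varnothing .
\]
% Posterior consistency: with prior P on the parameter space, data x^n,
% and weak convergence of the posterior to the point mass at \theta,
\[
  P(\cdot \mid x^n) \Rightarrow \delta_\theta \ \text{for $P$-a.e.\ } \theta
  \iff
  P(\cdot \mid x^n) \Rightarrow \delta_\theta \ \text{for every ML-random } \theta .
\]
```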