    Predictions and algorithmic statistics for infinite sequence

    Consider the following prediction problem. Assume that there is a black box that produces bits according to some unknown computable distribution on the binary tree. We know the first $n$ bits $x_1 x_2 \ldots x_n$ and want to know the probability that the next bit equals $1$. Solomonoff suggested using the universal semimeasure $m$ for this task. He proved that for every computable distribution $P$ and for every $b \in \{0,1\}$ the following holds: $$\sum_{n=1}^{\infty}\sum_{x:\, l(x)=n} P(x)\,\bigl(P(b \mid x) - m(b \mid x)\bigr)^{2} < \infty\,.$$ However, Solomonoff's method has a negative aspect: Hutter and Muchnik proved that there exist a universal semimeasure $m$, a computable distribution $P$, and a Martin-Löf random sequence $x_1 x_2\ldots$ such that $P(x_{n+1} \mid x_1\ldots x_n) - m(x_{n+1} \mid x_1\ldots x_n) \nrightarrow 0$ as $n \to \infty$. We suggest a new way of prediction: for every finite string $x$ we predict the next bit according to the best (in a certain sense) distribution for $x$. We prove an analogue of Solomonoff's theorem for our method and show that it does not share the negative aspect of Solomonoff's method.
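
    To make the prediction rule concrete, the following minimal Python sketch replaces the universal semimeasure $m$ with a finite Bayesian mixture over Bernoulli($\theta$) sources, so it illustrates only the mixture idea, not universality; all names and parameters are illustrative, not taken from the paper. It computes the conditional $m(1 \mid x)$ and accumulates the summed squared deviation appearing in Solomonoff's bound.

        import random

        thetas = [0.1, 0.3, 0.5, 0.7, 0.9]          # finite hypothesis class (stand-in for all computable P)
        prior = [1.0 / len(thetas)] * len(thetas)   # uniform prior weights

        def mixture_prob_next_one(bits, thetas, prior):
            """Posterior-weighted probability that the next bit is 1, i.e. m(1 | x)."""
            weights = []
            for theta, w in zip(thetas, prior):
                like = 1.0                          # likelihood of the observed prefix under Bernoulli(theta)
                for b in bits:
                    like *= theta if b == 1 else (1.0 - theta)
                weights.append(w * like)
            z = sum(weights)                        # mixture probability of the prefix, m(x)
            return sum(w * t for w, t in zip(weights, thetas)) / z

        true_theta = 0.7                            # the "black box": an i.i.d. Bernoulli(0.7) source
        bits, sq_dev = [], 0.0
        for _ in range(200):
            m1 = mixture_prob_next_one(bits, thetas, prior)
            sq_dev += (true_theta - m1) ** 2        # one term of sum_n E[(P(1|x) - m(1|x))^2]
            bits.append(1 if random.random() < true_theta else 0)

        print(f"m(1|x) after {len(bits)} bits: {mixture_prob_next_one(bits, thetas, prior):.3f}")
        print(f"accumulated squared deviation: {sq_dev:.3f}")

    Because the true source lies in the (finite) hypothesis class, the accumulated squared deviation stays bounded, mirroring the finite sum in the displayed inequality.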

    On Martin-Löf convergence of Solomonoff’s mixture

    We study the convergence of Solomonoff's universal mixture on individual Martin-Löf random sequences. We present a new result extending the work of Hutter and Muchnik (2004) by showing that there does not exist a universal mixture that converges on all Martin-Löf random sequences.

    Algorithmic Complexity Bounds on Future Prediction Errors

    We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume that we are at a time $t>1$ and have already observed $x=x_1...x_t$. We bound the future prediction performance on $x_{t+1}x_{t+2}...$ by a new variant of the algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition, in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
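
    In the notation of the surrounding abstracts, the quantity being bounded here is roughly the tail of Solomonoff's summed squared deviation, conditioned on the observed prefix (a hedged restatement: the paper's actual loss function and the exact complexity variant on the right-hand side may differ):
    $$\sum_{n \ge t} \mathbf{E}_{\mu}\Bigl[\bigl(\mu(x_{n+1} \mid x_{1:n}) - M(x_{n+1} \mid x_{1:n})\bigr)^{2} \,\Bigm|\, x_{1:t} = x\Bigr],$$
    which, per the abstract, is bounded by a monotone variant of the algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$.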

    On Generalized Computable Universal Priors and their Convergence

    Solomonoff unified Occam's razor and Epicurus' principle of multiple explanations into one elegant, formal, universal theory of inductive inference, which initiated the field of algorithmic information theory. His central result is that the posterior of the universal semimeasure M converges rapidly to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal predictor in case of unknown mu. The first part of the paper investigates the existence and convergence of computable universal (semi)measures for a hierarchy of computability classes: recursive, estimable, enumerable, and approximable. For instance, M is known to be enumerable, but not estimable, and to dominate all enumerable semimeasures. We present proofs for discrete and continuous semimeasures. The second part investigates more closely the types of convergence possibly implied by universality: in difference and in ratio, with probability 1, in mean sum, and for Martin-Löf random sequences. We introduce a generalized concept of randomness for individual sequences and use it to exhibit difficulties regarding these issues. In particular, we show that convergence fails (holds) on generalized-random sequences in gappy (dense) Bernoulli classes.
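
    For orientation, the two modes of posterior convergence mentioned can be stated, in the notation of the other abstracts, roughly as follows (the paper's own definitions may be more general, e.g. for arbitrary finite alphabets):
    $$M(x_{n+1} \mid x_1\ldots x_n) - \mu(x_{n+1} \mid x_1\ldots x_n) \to 0 \quad\text{(in difference)}, \qquad \frac{M(x_{n+1} \mid x_1\ldots x_n)}{\mu(x_{n+1} \mid x_1\ldots x_n)} \to 1 \quad\text{(in ratio)},$$
    where convergence may be required with $\mu$-probability 1, in mean sum, or for every Martin-Löf random sequence.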

    On Universal Prediction and Bayesian Confirmation

    The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available, or fail, particularly in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how, and in which sense, universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: strong total and weak instantaneous bounds; in contrast to most classical continuous prior densities, it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses; it is reparametrization and regrouping invariant; and it avoids the old-evidence and updating problem. It even performs well (actually better) in non-computable environments.

    Universal Convergence of Semimeasures on Individual Random Sequences

    Solomonoff’s central result on induction is that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true sequence generating posterior μ, if the latter is computable. Hence, M is eligible as a universal sequence predictor in case of unknown μ. Despite some nearby results and proofs in the literature, the stronger result of convergence for all (Martin-Löf) random sequences remained open. Such a convergence result would be particularly interesting and natural, since randomness can be defined in terms of M itself. We show that there are universal semimeasures M which do not converge for all random sequences, i.e. we give a partial negative answer to the open problem. We also provide a positive answer for some non-universal semimeasures. We define the incomputable measure D as a mixture over all computable measures and the enumerable semimeasure W as a mixture over all enumerable nearly-measures. We show that W converges to D and D to μ on all random sequences. The Hellinger distance measuring closeness of two distributions plays a central role.
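
    For reference, the squared Hellinger distance between the one-step predictive distributions of two (semi)measures, say W and D conditioned on a prefix $x_1\ldots x_n$, is the standard quantity below (the prefix notation is ours; the paper may use a slightly different normalization):
    $$h_n \;=\; \sum_{a \in \{0,1\}} \Bigl(\sqrt{W(a \mid x_1\ldots x_n)} - \sqrt{D(a \mid x_1\ldots x_n)}\Bigr)^{2}.$$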