56,029 research outputs found

    Topological arguments for Kolmogorov complexity

    We present several applications of simple topological arguments to problems of Kolmogorov complexity. Basically, we use the standard fact from topology that the disk is simply connected; this proves to be enough to construct strings with some nontrivial algorithmic properties.
    Comment: Extended version

    Causal inference using the algorithmic Markov condition

    Inferring the causal structure that links n observables is usually based upon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when only single observations are present. We develop a theory of how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information, and describe the corresponding causal inference rules. We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that also takes into account the complexity of conditional probability densities, making it possible to select among Markov-equivalent causal graphs. This insight provides a theoretical foundation for a heuristic principle proposed in earlier work. We also discuss how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on implicit or explicit assumptions about the underlying distribution.
    Comment: 16 figures
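Since the abstract proposes replacing Kolmogorov complexity with decidable complexity criteria, a minimal sketch of that idea is given below. It uses zlib-compressed length as a computable stand-in for K (our choice of proxy, not the paper's method) and estimates the algorithmic mutual information I(x:y) ≈ K(x) + K(y) − K(x,y); a value near zero is read as approximate algorithmic independence.

```python
import os
import zlib

def C(s: bytes) -> int:
    """Compressed length as a crude, decidable stand-in for the
    Kolmogorov complexity K(s). A real compressor only upper-bounds K."""
    return len(zlib.compress(s, 9))

def mutual_information(x: bytes, y: bytes) -> int:
    """Estimate the algorithmic mutual information
    I(x:y) ~ K(x) + K(y) - K(x,y) via compression.
    Values near zero suggest algorithmic independence."""
    return C(x) + C(y) - C(x + y)

x = os.urandom(4000)       # incompressible random string
y_dep = x                  # maximally dependent copy of x
y_ind = os.urandom(4000)   # independent random string

print(mutual_information(x, y_dep))  # large: roughly C(x)
print(mutual_information(x, y_ind))  # near zero
```

The paper's inference rules are stated in terms of conditional quantities; under the same assumed proxy, the conditional complexity K(y|x) could be approximated as C(x + y) − C(x).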

    Algorithmic Complexity Bounds on Future Prediction Errors

    We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume we are at a time $t>1$ and have already observed $x = x_1 \ldots x_t$. We bound the future prediction performance on $x_{t+1} x_{t+2} \ldots$ by a new variant of the algorithmic complexity of $\mu$ given $x$, plus the complexity of the randomness deficiency of $x$. The new complexity is monotone in its condition, in the sense that it can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
    Comment: 21 pages
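As a toy illustration of complexity-based prediction (our own sketch, not the paper's bound or predictor), one can caricature the universal predictor by scoring each candidate continuation with a computable complexity proxy, here zlib-compressed length, and favoring the continuation that keeps the observed prefix most compressible:

```python
import zlib

def next_symbol(history: bytes, alphabet=(0, 1)) -> int:
    """Toy complexity-based predictor: pick the continuation that
    minimizes the compressed length of the prolonged sequence.
    zlib only crudely upper-bounds Kolmogorov complexity, so ties
    and misses are expected on short or noisy inputs."""
    cost = {s: len(zlib.compress(history + bytes([s]), 9)) for s in alphabet}
    return min(cost, key=cost.get)

# A highly regular history: alternating 0s and 1s.
history = bytes([0, 1] * 500)
print(next_symbol(history))  # expected: 0, continuing the alternating pattern
```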