Universality, optimality, and randomness deficiency
A Martin-Löf test U is universal if it captures all non-Martin-Löf random sequences, and it is optimal if for every ML-test V there is a c ∈ ω such that ∀n (V_{n+c} ⊆ U_n). We study the computational differences between universal and optimal ML-tests as well as the effects that these differences have on both the notion of layerwise computability and the Weihrauch degree of LAY, the function that produces a bound for a given Martin-Löf random sequence's randomness deficiency. We prove several robustness and idempotence results concerning the Weihrauch degree of LAY, and we show that layerwise computability is more restrictive than Weihrauch reducibility to LAY. Along similar lines we also study the principle RD, a variant of LAY outputting the precise randomness deficiency of sequences instead of only an upper bound as LAY does.
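As a reading aid, the standard definitions behind the abstract above can be written out in LaTeX as follows; this is a sketch in our own notation (λ for the uniform measure, d_U for deficiency), not notation fixed by the paper:

\[
(U_n)_{n\in\omega} \text{ is a ML-test if the } U_n \text{ are uniformly effectively open and } \lambda(U_n)\le 2^{-n};
\]
\[
U \text{ is universal if } \bigcap_n U_n \supseteq \{X : X \text{ is not ML-random}\}, \qquad
U \text{ is optimal if } \forall V\,\exists c\,\forall n\,(V_{n+c}\subseteq U_n);
\]
\[
d_U(X) = \sup\{n : X\in U_n\}, \qquad
\mathrm{LAY}(X) = \text{some bound } b \ge d_U(X), \qquad
\mathrm{RD}(X) = d_U(X).
\]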
Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations.
From the philosophical point of view, it tries to formalize how statistics
works and why some statistical models are better than others. After this notion
of a "good model" is introduced, a natural question arises: is it possible that
for some piece of data there is no good model? If so, how often do such bad
("non-stochastic") data appear "in real life"?
Another, more technical motivation comes from algorithmic information theory.
In this theory a notion of complexity of a finite object (= the amount of
information in this object) is introduced; it assigns to every object a
number, called its algorithmic complexity (or Kolmogorov complexity).
Algorithmic statistics provides a more fine-grained classification: for each
finite object a curve is defined that characterizes its behavior. It turns
out that several different definitions give (approximately) the same curve.
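The curve is not named in the abstract; one standard way to define it, and the one studied in algorithmic statistics, is Kolmogorov's structure function, sketched here in LaTeX with our own notation:

\[
h_x(\alpha) \;=\; \min\{\log_2 |S| \;:\; x \in S,\ S \text{ a finite set},\ K(S)\le\alpha\}.
\]

Informally, a model S containing x is "good" when the two-part description length K(S) + \log_2|S| is close to K(x), and x is non-stochastic when no simple S achieves this.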
In this survey we try to provide an exposition of the main results in the
field (including full proofs for the most important ones), as well as some
historical comments. We assume that the reader is familiar with the main
notions of algorithmic information (Kolmogorov complexity) theory.
Algorithmic Complexity Bounds on Future Prediction Errors
We bound the future loss when predicting any (computably) stochastic sequence
online. Solomonoff finitely bounded the total deviation of his universal
predictor from the true distribution μ by the algorithmic complexity of μ.
Here we assume we are at a time t and have already observed x = x_1...x_t. We
bound the future prediction performance on x_{t+1} x_{t+2} ... by a new
variant of the algorithmic complexity of μ given x, plus the complexity of the
randomness deficiency of x. The new complexity is monotone in its condition
in the sense that this complexity can only decrease if the condition is
prolonged. We also briefly discuss potential generalizations to Bayesian model
classes and to classification problems.
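For context, the classical bound referred to above is usually stated via cumulative divergence; the following is a hedged LaTeX sketch in our notation (M the universal predictor, μ the true computable distribution), not necessarily the exact form used in the paper:

\[
\sum_{t=1}^{\infty} \mathbf{E}_{x_{<t}\sim\mu}\!\left[\mathrm{KL}\!\big(\mu(\cdot\mid x_{<t})\,\big\|\,M(\cdot\mid x_{<t})\big)\right]
\;\le\; \ln\frac{1}{w_\mu} \;\le\; K(\mu)\ln 2 + O(1),
\]

which follows from the dominance M(x_{1:n}) ≥ w_μ · μ(x_{1:n}) with prior weight w_μ ≥ 2^{-K(μ)-O(1)}; the bound on the total squared deviation of the predictions is then a consequence.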
Algorithmic randomness and layerwise computability
In this article we present the framework of layerwise computability. We explain the origin of this notion, its main features and properties, and we illustrate it with several concrete examples: decomposition of measures, random closed sets, Brownian motion.
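A minimal sketch of the central definition, assuming a fixed optimal ML-test (U_n) as is customary in this framework (notation ours):

\[
f \text{ is layerwise computable if some oracle machine } M \text{ satisfies } M(X,n)=f(X) \text{ whenever } X\notin U_n,
\]

i.e. f is computable on each layer K_n = 2^ω ∖ U_n, uniformly in the layer index n, even though f need not be computable on the whole space.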
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
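The passage from Bayes's rule to the ideal MDL principle described above can be written out as follows; this is a hedged sketch in our notation, with additive O(1) terms and the precise randomness conditions of the Fundamental Inequality suppressed:

\[
\Pr(H\mid D)=\frac{\Pr(D\mid H)\,\Pr(H)}{\Pr(D)}
\quad\Longrightarrow\quad
\arg\max_H \Pr(H\mid D)=\arg\min_H\big[-\log\Pr(D\mid H)-\log\Pr(H)\big],
\]

and substituting the universal prior \Pr(H)=\mathbf{m}(H)=2^{-K(H)+O(1)} turns the right-hand side into the ideal MDL objective K(H) - \log\Pr(D\mid H), the sum of the model's codelength and the codelength of the data given the model.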
Weihrauch-completeness for layerwise computability
We introduce the notion of being Weihrauch-complete for layerwise computability and provide several natural examples related to complex oscillations, the law of the iterated logarithm and Birkhoff's theorem. We also consider hitting time operators, which share the Weihrauch degree of the former examples but fail to be layerwise computable.
Computable Measure Theory and Algorithmic Randomness
We provide a survey of recent results in computable measure and probability theory, from both the perspectives of computable analysis and algorithmic randomness, and discuss the relations between them.