Kolmogorov Random Graphs and the Incompressibility Method
We investigate topological, combinatorial, statistical, and enumeration
properties of finite graphs with high Kolmogorov complexity (almost all graphs)
using the novel incompressibility method. Example results are: (i) the mean and
variance of the number of (possibly overlapping) ordered labeled subgraphs of a
labeled graph as a function of its randomness deficiency (how far it falls
short of the maximum possible Kolmogorov complexity) and (ii) a new elementary
proof for the number of unlabeled graphs.
Comment: LaTeX, 9 pages
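The incompressibility argument rests on the fact that almost all labeled graphs have near-maximal Kolmogorov complexity, so their adjacency bit strings admit no short description. As a rough, computable stand-in for randomness deficiency, one can compare the length of a graph's adjacency bit string to its losslessly compressed size; the function names below are illustrative, and zlib is only a crude upper-bound proxy for Kolmogorov complexity.

```python
import zlib
import random

def adjacency_bits(n, edges):
    """Upper-triangular adjacency bit string of a labeled graph on n vertices."""
    bits = []
    for i in range(n):
        for j in range(i + 1, n):
            bits.append('1' if (i, j) in edges or (j, i) in edges else '0')
    return ''.join(bits)

def deficiency_proxy(bitstring):
    """Length n(n-1)/2 minus the compressed size in bits: a crude stand-in
    for Kolmogorov randomness deficiency (how compressible the graph is)."""
    compressed_bits = 8 * len(zlib.compress(bitstring.encode()))
    return len(bitstring) - compressed_bits

random.seed(0)
n = 64
# A typical random graph: each edge present with probability 1/2 (incompressible).
random_edges = {(i, j) for i in range(n) for j in range(i + 1, n)
                if random.random() < 0.5}
# A highly regular graph: the complete graph K_n (very compressible).
complete_edges = {(i, j) for i in range(n) for j in range(i + 1, n)}

print(deficiency_proxy(adjacency_bits(n, random_edges)))    # near zero or negative (zlib overhead)
print(deficiency_proxy(adjacency_bits(n, complete_edges)))  # large: strong regularity detected
```

The random graph's bit string is essentially incompressible, mirroring the fact that almost all graphs have randomness deficiency close to zero, while the complete graph's large deficiency reflects its short description.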
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. In essence, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory
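The two-part code described above, minimizing the code length of the hypothesis plus the code length of the data given the hypothesis, can be sketched on a toy problem. The model grid and helper names below are illustrative assumptions: hypotheses are Bernoulli(p) models from a small finite set with a uniform prior, so the hypothesis cost is log2 of the grid size and the data cost is the Shannon code length.

```python
import math

def mdl_score(data, p, num_hypotheses):
    """Two-part MDL code length in bits: -log2(prior of hypothesis)
    plus -log2 P(data | hypothesis)."""
    ones = data.count('1')
    zeros = len(data) - ones
    # Uniform prior over the hypothesis grid: each hypothesis costs log2(N) bits.
    hypothesis_bits = math.log2(num_hypotheses)
    # Shannon code length of the data under Bernoulli(p).
    data_bits = -(ones * math.log2(p) + zeros * math.log2(1 - p))
    return hypothesis_bits + data_bits

data = '1111111011111011'               # 14 ones out of 16 bits
grid = [0.1, 0.3, 0.5, 0.7, 0.9]        # illustrative finite hypothesis class
best = min(grid, key=lambda p: mdl_score(data, p, len(grid)))
print(best)  # → 0.9
```

The hypothesis closest to the empirical frequency wins because it gives the shortest total code; restricting the hypothesis class to finite sets in this way is the setting in which, per the abstract, ideal MDL reduces to Kolmogorov's minimal sufficient statistic.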
Estimating the Algorithmic Complexity of Stock Markets
Randomness and regularities in Finance are usually treated in probabilistic
terms. In this paper, we develop a completely different approach, using a
non-probabilistic framework based on the algorithmic information theory
initially developed by Kolmogorov (1965). We present some elements of this
theory and show why it is particularly relevant to Finance, and potentially to
other sub-fields of Economics as well. We develop a generic method to estimate
the Kolmogorov complexity of numeric series. This approach is based on an
iterative "regularity erasing procedure" that applies lossless compression
algorithms to financial data. Examples are provided with both
simulated and real-world financial time series. The contributions of this
article are twofold. The first one is methodological: we show that some
structural regularities, invisible with classical statistical tests, can be
detected by this algorithmic method. The second one consists in illustrations
on the daily Dow-Jones Index suggesting that beyond several well-known
regularities, hidden structure in this index may remain to be identified.
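The compression-based estimation described above can be sketched as follows. This is a minimal illustration, not the paper's procedure: the quantile-binning step is only a loose stand-in for the regularity-erasing preprocessing, and all function names are assumptions.

```python
import zlib
import random
import math

def compression_ratio(symbols):
    """Compressed size over original size for a symbol string; values well
    below 1 indicate regularity detectable by the compressor."""
    raw = symbols.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

def discretize(series, bins=4):
    """Map a numeric series to a small alphabet by quantile binning (a crude
    stand-in for the paper's regularity-erasing preprocessing)."""
    ranked = sorted(series)
    cuts = [ranked[int(len(ranked) * k / bins)] for k in range(1, bins)]
    return ''.join(str(sum(x > c for c in cuts)) for x in series)

random.seed(1)
# i.i.d. Gaussian "returns": should look nearly incompressible once discretized.
noise = [random.gauss(0, 1) for _ in range(4000)]
# A strongly periodic series: a hidden regularity the compressor detects easily.
periodic = [math.sin(t / 5) for t in range(4000)]

print(compression_ratio(discretize(noise)))     # close to the 2-bit entropy floor
print(compression_ratio(discretize(periodic)))  # much smaller: regularity found
```

A series whose discretized ratio falls clearly below that of matched i.i.d. surrogates is, in this spirit, exhibiting structure that the compressor can exploit even where standard statistical tests see noise.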
A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences
Given the widespread use of lossless compression algorithms to approximate
algorithmic (Kolmogorov-Chaitin) complexity, and that lossless compression
algorithms fall short at characterizing patterns other than statistical ones,
making them no different from entropy estimations, here we explore an alternative and
complementary approach. We study formal properties of a Levin-inspired
measure m calculated from the output distribution of small Turing machines. We
introduce and justify finite approximations m_k that have been used in some
applications as an alternative to lossless compression algorithms for
approximating algorithmic (Kolmogorov-Chaitin) complexity. We provide proofs of
the relevant properties of both m and m_k and compare them to Levin's
Universal Distribution. We provide error estimations of m_k with respect to
m. Finally, we present an application to integer sequences from the Online
Encyclopedia of Integer Sequences which suggests that our AP-based measures may
characterize non-statistical patterns, and we report interesting correlations
with the textual, function, and program description lengths of these sequences.
Comment: As accepted by the journal Complexity (Wiley/Hindawi)
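The core idea of estimating algorithmic probability from an output distribution can be illustrated on a toy scale. The sketch below is an assumption-laden stand-in: instead of small Turing machines, it enumerates every program of a tiny 2-bit instruction language, tallies the outputs, and reads off a complexity estimate as -log2 of the empirical output frequency.

```python
from collections import Counter
from math import log2
from itertools import product

# Illustrative 2-bit instruction set (not the machines used in the paper):
# 00 emits '0', 01 emits '1', 10 repeats the last output bit, 11 halts.
INSTRUCTIONS = ['00', '01', '10', '11']

def run(program):
    """Execute an 8-bit program (four 2-bit instructions), returning its output."""
    out = []
    for i in range(0, len(program), 2):
        op = program[i:i + 2]
        if op == '00':
            out.append('0')
        elif op == '01':
            out.append('1')
        elif op == '10':
            out.append(out[-1] if out else '0')  # repeat last bit ('0' if none)
        else:  # '11' halts
            break
    return ''.join(out)

# Enumerate all 256 four-instruction programs and build the output histogram.
counts = Counter(run(''.join(p)) for p in product(INSTRUCTIONS, repeat=4))
total = sum(counts.values())

def complexity(s):
    """-log2 of the empirical algorithmic probability of s: strings produced
    by many programs are deemed simple."""
    return -log2(counts[s] / total)

print(complexity('000'))  # → 5.0 (repetitive: produced by 8 programs)
print(complexity('010'))  # → 7.0 (irregular: produced by only 2 programs)
```

Even this toy distribution ranks the repetitive string as simpler than the irregular one of the same length, which a compressor operating on three characters could never do; this is the non-statistical sensitivity the abstract attributes to the AP-based measures.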
Average Symmetry and Complexity of Binary Sequences
The concept of complexity as average symmetry is here formalised by
introducing a general expression dependent on the relevant symmetry and a
related discrete set of transformations. This complexity has hybrid features of
both statistical complexities and of those related to algorithmic complexity.
As with the former, random objects are not the most complex, yet they remain
more complex than highly symmetric ones (as with the latter). By applying this
definition to the particular case of rotations of binary sequences, we are able
to find a precise expression for it. In particular, we then analyse the
behaviour of this measure in different well-known automatic sequences, where we
find interesting new properties. A generalisation of the measure to statistical
ensembles is also presented and applied to the case of i.i.d. random sequences
and to the equilibrium configurations of the one-dimensional Ising model. In
both cases, we find that the complexity is continuous and differentiable as a
function of the relevant parameters and agrees with the intuitive requirements
we were looking for.
Comment: 9 pages, 5 figures
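For the rotation case, the qualitative behaviour described above can be sketched numerically. The dispersion-based expression below is an illustrative stand-in for the paper's average-symmetry formula, not its actual definition: it is zero for a constant (fully symmetric) sequence, small for a random one, and large for a strictly periodic one, matching the intermediate ranking the abstract describes.

```python
import random
from statistics import pstdev

def rotation_symmetry(s, r):
    """Fraction of positions at which s agrees with its cyclic rotation by r."""
    n = len(s)
    return sum(s[i] == s[(i + r) % n] for i in range(n)) / n

def symmetry_profile(s):
    """Symmetry under every nontrivial rotation of the sequence."""
    return [rotation_symmetry(s, r) for r in range(1, len(s))]

def complexity(s):
    """Spread of the symmetry profile across rotations (illustrative measure):
    zero for constant sequences, small for random ones, large for periodic ones."""
    return pstdev(symmetry_profile(s))

random.seed(7)
n = 64
constant = '0' * n            # fully symmetric: every rotation matches exactly
periodic = '01' * (n // 2)    # alternating: rotations match fully or not at all
rand_seq = ''.join(random.choice('01') for _ in range(n))

print(complexity(constant))   # → 0.0
print(complexity(periodic))   # about 0.5: maximal spread
print(complexity(rand_seq))   # small but positive: finite-size fluctuations
```

Note that the random sequence sits strictly between the two extremes, which is exactly the hybrid behaviour, neither statistical-complexity-like nor purely algorithmic, that the abstract requires of the measure.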