m-sophistication
The m-sophistication of a finite binary string x is introduced as a
generalization of a parameter appearing in the proof that high "complexity of
complexity" is rare. A probabilistic near-sufficient statistic of x is given
whose length is upper bounded by the m-sophistication of x within small
additive terms. This shows that m-sophistication is lower bounded by coarse
sophistication and upper bounded by sophistication within small additive
terms. It is also shown that m-sophistication and coarse sophistication cannot
be approximated by an upper or lower semicomputable function, not even to
within a very large error.
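For orientation alongside this abstract (not part of the paper's text), one common set-based variant of the sophistication it refers to can be written, for a slack constant c, as

  \mathrm{soph}_c(x) = \min\{\, K(S) : x \in S,\ K(S) + \log_2 |S| \le K(x) + c \,\},

where S ranges over finite sets of binary strings and K denotes (prefix) Kolmogorov complexity; coarse sophistication, roughly, adds the slack c to the quantity being minimized instead of fixing it in advance. The abstract's claim is that m-sophistication lies between these two quantities up to small additive terms.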
Effective complexity of stationary process realizations
The concept of effective complexity of an object as the minimal description
length of its regularities has been initiated by Gell-Mann and Lloyd. The
regularities are modeled by means of ensembles, that is probability
distributions on finite binary strings. In our previous paper we proposed a
definition of effective complexity in precise terms of algorithmic information
theory. Here we investigate the effective complexity of binary strings
generated by stationary, in general not computable, processes. We show that
under conditions that are not too strong, long typical process realizations are
effectively simple. Our results become most transparent in the context of
coarse effective complexity, which is a modification of the original notion of
effective complexity that uses fewer parameters in its definition. A similar
modification of the related concept of sophistication has been suggested by
Antunes and Fortnow.
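As a point of reference (a standard formalization in this line of work, stated here for orientation rather than quoted from the abstract; the exact constraints in the authors' earlier paper are slightly more detailed), the effective complexity of a string x at tolerance \Delta can be written as

  \mathcal{E}_\Delta(x) = \min\{\, K(P) : x \text{ is typical for } P,\ K(P) + H(P) \le K(x) + \Delta \,\},

where P ranges over computable ensembles, H(P) is the Shannon entropy of P, and typicality means, roughly, that the code length -\log_2 P(x) does not exceed H(P) by much. In words: the shortest description of an ensemble whose two-part description of x (model plus data given the model) is nearly minimal.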
Algorithmic Statistics
While Kolmogorov complexity is the accepted absolute measure of information
content of an individual finite object, a similarly absolute notion is needed
for the relation between an individual data sample and an individual model
summarizing the information in the data, for example, a finite set (or
probability distribution) where the data sample typically came from. The
statistical theory based on such relations between individual objects can be
called algorithmic statistics, in contrast to classical statistical theory that
deals with relations between probabilistic ensembles. We develop the
algorithmic theory of statistic, sufficient statistic, and minimal sufficient
statistic. This theory is based on two-part codes consisting of the code for
the statistic (the model summarizing the regularity, the meaningful
information, in the data) and the model-to-data code. In contrast to the
situation in probabilistic statistical theory, the algorithmic relation of
(minimal) sufficiency is an absolute relation between the individual model and
the individual data sample. We distinguish implicit and explicit descriptions
of the models. We give characterizations of algorithmic (Kolmogorov) minimal
sufficient statistic for all data samples for both description modes--in the
explicit mode under some constraints. We also strengthen and elaborate earlier
results on the "Kolmogorov structure function" and "absolutely non-stochastic
objects"--those rare objects for which the simplest models that
summarize their relevant information (minimal sufficient statistics) are at
least as complex as the objects themselves. We demonstrate a close relation
between the probabilistic notions and the algorithmic ones.
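The two-part codes mentioned here can be summarized in one pair of inequalities (standard in algorithmic statistics, given for orientation rather than quoted from the abstract): for any finite set S containing x,

  K(x) \le K(S) + \log_2 |S| + O(1),

and S is a sufficient statistic for x (within a constant c) when the reverse inequality K(S) + \log_2 |S| \le K(x) + c also holds; a minimal sufficient statistic is then a sufficient statistic of minimal complexity K(S).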
Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton
In contrast to entropy, which increases monotonically, the "complexity" or
"interestingness" of closed systems seems intuitively to increase at first and
then decrease as equilibrium is approached. For example, our universe lacked
complex structures at the Big Bang and will also lack them after black holes
evaporate and particles are dispersed. This paper makes an initial attempt to
quantify this pattern. As a model system, we use a simple, two-dimensional
cellular automaton that simulates the mixing of two liquids ("coffee" and
"cream"). A plausible complexity measure is then the Kolmogorov complexity of a
coarse-grained approximation of the automaton's state, which we dub the
"apparent complexity." We study this complexity measure, and show analytically
that it never becomes large when the liquid particles are non-interacting. By
contrast, when the particles do interact, we give numerical evidence that the
complexity reaches a maximum comparable to the "coffee cup's" horizontal
dimension. We raise the problem of proving this behavior analytically.
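The experiment described above can be prototyped in a few lines. The sketch below is not the authors' code, only an illustration of the idea under simplifying assumptions: two species mix by random local swaps, each snapshot is coarse-grained by block averaging, and the zlib-compressed size of the coarse-grained grid stands in for the Kolmogorov complexity of the coarse-grained state (the "apparent complexity"); all parameters are illustrative.

```python
# Toy "coffee automaton": mixing dynamics plus an apparent-complexity estimate.
import zlib
import numpy as np

def init_grid(n):
    """Top half 'cream' (1), bottom half 'coffee' (0)."""
    grid = np.zeros((n, n), dtype=np.uint8)
    grid[: n // 2, :] = 1
    return grid

def mix_step(grid, rng, swaps=5000):
    """Swap randomly chosen pairs of orthogonally adjacent cells (toy dynamics,
    not the paper's exact update rule)."""
    n = grid.shape[0]
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for _ in range(swaps):
        i, j = rng.integers(n), rng.integers(n)
        di, dj = moves[rng.integers(4)]
        k, l = (i + di) % n, (j + dj) % n
        grid[i, j], grid[k, l] = grid[k, l], grid[i, j]

def coarse_grain(grid, block=8):
    """Average over block x block cells and quantize into a few grey levels."""
    n = grid.shape[0]
    small = grid.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    return np.digitize(small, bins=[0.25, 0.5, 0.75]).astype(np.uint8)

def apparent_complexity(grid, block=8):
    """Compressed size of the coarse-grained grid: a crude, computable proxy
    for the Kolmogorov complexity of the coarse-grained state."""
    return len(zlib.compress(coarse_grain(grid, block).tobytes(), level=9))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = init_grid(128)
    for t in range(201):
        if t % 20 == 0:
            print(t, apparent_complexity(grid))
        mix_step(grid, rng)
```

With interacting (exclusion-style) dynamics such as these swaps, the estimated apparent complexity typically rises from near zero, peaks while the interface is ragged, and falls again as the grid approaches a uniform grey after coarse-graining.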
Kolmogorov's Last Discovery? (Kolmogorov and Algorithmic Statistics)
The last theme of Kolmogorov's mathematical research was the algorithmic theory
of information, now often called Kolmogorov complexity theory. There are only
two main publications by Kolmogorov (1965 and 1968-1969) on this topic, so
Kolmogorov's ideas that did not appear as proven (and published) theorems can
be reconstructed only partially, based on the work of his students and
collaborators, short abstracts of his talks, and the recollections of people
who were present at those talks.
In this survey we try to reconstruct the development of Kolmogorov's ideas
related to algorithmic statistics (resource-bounded complexity, the structure
function, and stochastic objects).
The Value of Existence Beyond Life: Towards a More Versatile Environmental Ethics
This paper argues that those who subscribe to “Biocentrism”, specifically the Biocentrism argued for by Paul Taylor, ought to adopt “Ontocentrism” instead. Biocentrism, the theory that all and only living things are morally considerable, fails to account for important moral differences between living things. It cannot justify, without ad-hoc additions, the intuition that a man is worth more than a pig, and a pig is worth more than a mouse. It similarly fails to account for the status of larger systems such as ecosystems, and lastly it fails to account for the status of non-biological entities and artificial life. Ontocentrism, the theory that all existing things, broadly construed, are morally considerable, ought to be adopted because it can account for these things without being ad-hoc and arbitrary.
Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations.
From the philosophical point of view, it tries to formalize how statistics
works and why some statistical models are better than others. After this notion
of a "good model" is introduced, a natural question arises: is it possible that
for some piece of data there is no good model? If so, how often do such bad
("non-stochastic") data appear "in real life"?
Another, more technical motivation comes from algorithmic information theory.
In this theory a notion of the complexity of a finite object (= the amount of
information in this object) is introduced; it assigns to every object a
number, called its algorithmic complexity (or Kolmogorov complexity).
Algorithmic statistics provides a more fine-grained classification: for each
finite object some curve is defined that characterizes its behavior. It turns
out that several different definitions give (approximately) the same curve.
In this survey we try to provide an exposition of the main results in the
field (including full proofs for the most important ones), as well as some
historical comments. We assume that the reader is familiar with the main
notions of algorithmic information (Kolmogorov complexity) theory.
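The curve mentioned in this abstract is, in one standard formulation (given here for orientation, not quoted from the survey), the Kolmogorov structure function

  h_x(\alpha) = \min\{\, \log_2 |S| : x \in S,\ K(S) \le \alpha \,\},

where S ranges over finite sets of strings containing x. Equivalent curves can be defined via randomness deficiency or via total two-part description length, which is the sense in which "several different definitions give (approximately) the same curve."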