Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures
We show that real-value approximations of Kolmogorov-Chaitin complexity K(s) obtained via the algorithmic coding theorem, as calculated from the output frequency of a large set of small deterministic Turing machines with up to 5 states (and 2 symbols), are consistent with the number of instructions used by the Turing machines producing s, which in turn is consistent with strict integer-value program-size complexity (based on our knowledge of the smallest machine in terms of the number of instructions used). We also show that neither K(s) nor the number of instructions used manifests any correlation with Bennett's Logical Depth LD(s) beyond what is predicted by the theory (shallow and non-random strings have low complexity under both measures). The agreement between theory and numerical calculation shows that, despite the undecidability of these theoretical measures, the rate of convergence of the approximations is stable enough to support some applications. We announce a Beta version of an Online Algorithmic Complexity Calculator (OACC) implementing these methods.
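The coding-theorem method summarized above can be sketched at toy scale. The following is a minimal illustration, not the authors' implementation: it enumerates all 2-state, 2-symbol Turing machines (the paper uses up to 5 states), tallies the outputs of the halting machines, and converts output frequencies into complexity estimates via K(s) ≈ -log2 m(s). The machine encoding and step bound here are choices made for illustration only.

```python
import itertools
import math
from collections import Counter

def run_tm(transitions, max_steps=100):
    """Simulate a 2-symbol Turing machine started on a blank (all-0) tape.

    transitions maps (state, read_symbol) -> (write, move, next_state);
    state 0 is the halting state, the start state is 1, move is -1 or +1.
    Returns the written tape region as a binary string if the machine
    halts within max_steps, else None.
    """
    tape, pos, state = {}, 0, 1
    lo = hi = 0
    for _ in range(max_steps):
        write, move, nxt = transitions[(state, tape.get(pos, 0))]
        tape[pos] = write
        lo, hi = min(lo, pos), max(hi, pos)
        pos += move
        state = nxt
        if state == 0:
            return "".join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
    return None  # did not halt within the step bound

def ctm_distribution(n_states=2, max_steps=100):
    """Estimate K(s) ~ -log2 m(s) from the halting-output frequencies
    of all (n_states, 2)-Turing machines (the coding-theorem method)."""
    keys = [(s, b) for s in range(1, n_states + 1) for b in (0, 1)]
    options = [(w, m, q) for w in (0, 1) for m in (-1, 1)
               for q in range(n_states + 1)]
    counts = Counter()
    for combo in itertools.product(options, repeat=len(keys)):
        out = run_tm(dict(zip(keys, combo)), max_steps)
        if out is not None:
            counts[out] += 1
    total = sum(counts.values())
    return {s: -math.log2(c / total) for s, c in counts.items()}, counts
```

With 2 states this enumerates 12^4 = 20,736 machines in well under a second; the published 5-state distribution required billions of machines, so this conveys the idea rather than reproducing the experiment.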
A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences
Given the widespread use of lossless compression algorithms to approximate
algorithmic (Kolmogorov-Chaitin) complexity, and that lossless compression
algorithms fall short at characterizing any patterns other than statistical
ones, making them in practice no different from entropy estimators, here we
explore an alternative and
complementary approach. We study formal properties of a Levin-inspired measure
calculated from the output distribution of small Turing machines. We
introduce and justify finite approximations that have been used in some
applications as an alternative to lossless compression algorithms for
approximating algorithmic (Kolmogorov-Chaitin) complexity. We provide proofs of
the relevant properties of both the measure and its finite approximations, and
compare them to Levin's Universal Distribution. We provide error estimations of
the finite approximations with respect to the measure. Finally, we present an
application to integer sequences from the Online
Encyclopedia of Integer Sequences which suggests that our AP-based measures may
characterize non-statistical patterns, and we report interesting correlations
with textual, function and program description lengths of said sequences.
Comment: As accepted by the journal Complexity (Wiley/Hindawi).
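For contrast, the compression-based approach that the abstract argues falls short can be sketched with a general-purpose compressor such as zlib. This is an illustrative sketch, not the paper's method; the short-string example shows why compressed length is uninformative precisely where AP-based measures are meant to help.

```python
import zlib

def compressed_size(s: str) -> int:
    """Bytes needed by zlib at maximum effort: a purely statistical
    stand-in for algorithmic complexity."""
    return len(zlib.compress(s.encode(), 9))

# A long, statistically regular string compresses very well...
periodic = "01" * 500
# ...but on short strings the compressor's fixed overhead dominates,
# so the "complexity" estimate carries almost no signal. That gap is
# what the finite AP-based approximations are designed to fill.
short = "0101"
```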
Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results
We apply methods for estimating the algorithmic complexity of sequences to
behavioural sequences of three landmark studies of animal behavior each of
increasing sophistication, including foraging communication by ants, flight
patterns of fruit flies, and tactical deception and competition strategies in
rodents. In each case, we demonstrate that approximations of Logical Depth and
Kolmogorov-Chaitin complexity capture and validate previously reported results,
in contrast to other measures such as Shannon entropy, compression, or ad hoc
measures.
Our method is practically useful when dealing with short sequences, such as
those often encountered in cognitive-behavioural research. Our analysis
supports and reveals non-random behavior (LD and K complexity) in flies even in
the absence of external stimuli, and confirms the "stochastic" behaviour of
transgenic rats when faced with a competitor they cannot defeat by
counter-prediction. The
method constitutes a formal approach for testing hypotheses about the
mechanisms underlying animal behaviour. Comment: 28 pages, 7 figures and 2
tables.
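The limitation of Shannon entropy on short sequences, which motivates the algorithmic measures used in this paper, can be seen in a small sketch. The sequences below are made-up codings for illustration, not data from the cited studies:

```python
import math
from collections import Counter

def shannon_entropy(seq: str) -> float:
    """Per-symbol Shannon entropy of the empirical symbol distribution."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

# Two short behavioural codings with identical symbol frequencies:
regular  = "ABABABABAB"  # strictly alternating rule
shuffled = "AABBABBABA"  # same counts, no obvious rule
# Both have entropy of exactly 1 bit/symbol, so entropy cannot separate
# them; algorithmic measures (CTM-based K, Logical Depth approximations)
# are sensitive to exactly this kind of structural difference.
```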
Structure emerges faster during cultural transmission in children than in adults
How does children’s limited processing capacity affect cultural transmission of complex information? We show that over the course of iterated reproduction of two-dimensional random dot patterns, transmission accuracy increased to a similar extent in 5- to 8-year-old children and adults, whereas algorithmic complexity decreased faster in children. Thus, children require more structure to render complex inputs learnable. In line with the Less-Is-More hypothesis, we interpret this as evidence that children’s processing limitations affecting working memory capacity and executive control constrain the ability to represent and generate complexity, which, in turn, facilitates emergence of structure. This underscores the importance of investigating the role of children in the transmission of complex cultural traits.
Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability
Previously referred to as 'miraculous' in the scientific literature because
of its powerful properties and its wide application as an optimal solution to
the problem of induction/inference, (approximations to) Algorithmic Probability
(AP) and the associated Universal Distribution are (or should be) of the
greatest importance in science. Here we investigate the emergence, the rates of
emergence and convergence, and the Coding-theorem like behaviour of AP in
Turing-subuniversal models of computation. We investigate empirical
distributions of computing models in the Chomsky hierarchy. We introduce
measures of algorithmic probability and algorithmic complexity based upon
resource-bounded computation, in contrast to previously thoroughly investigated
distributions produced from the output distribution of Turing machines. This
approach allows for numerical approximations to algorithmic
(Kolmogorov-Chaitin) complexity-based estimations at each of the levels of a
computational hierarchy. We demonstrate that all these estimations are
correlated in rank and that they converge both in rank and values as a function
of computational power, despite fundamental differences between computational
models. In the context of natural processes that operate below the Turing
universal level because of finite resources and physical degradation, the
investigation of natural biases stemming from algorithmic rules may shed light
on the distribution of outcomes. We show that up to 60% of the
simplicity/complexity bias in distributions produced even by the weakest of the
computational models can be accounted for by Algorithmic Probability in its
approximation to the Universal Distribution. Comment: 27 pages main text, 39
pages including supplement. Online complexity
calculator: http://complexitycalculator.com
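A resource-bounded, Turing-subuniversal distribution of the kind studied in this paper can be illustrated at the finite-state level of the Chomsky hierarchy. The sketch below is an illustration, not the paper's experimental setup: it enumerates all 2-state binary Mealy transducers, feeds each one every 4-bit input, and converts output frequencies into complexity estimates.

```python
import itertools
import math
from collections import Counter

def run_mealy(trans, bits):
    """Run a binary Mealy machine: trans[(state, bit)] = (out, next)."""
    state, out = 0, []
    for b in bits:
        o, state = trans[(state, b)]
        out.append(str(o))
    return "".join(out)

def fsm_distribution(n_states=2, length=4):
    """Output-frequency distribution over all n-state binary transducers
    fed every input of the given length, converted to complexity
    estimates via -log2 of the empirical output probability."""
    keys = [(s, b) for s in range(n_states) for b in (0, 1)]
    opts = [(o, q) for o in (0, 1) for q in range(n_states)]
    counts = Counter()
    for combo in itertools.product(opts, repeat=len(keys)):
        trans = dict(zip(keys, combo))
        for bits in itertools.product((0, 1), repeat=length):
            counts[run_mealy(trans, bits)] += 1
    total = sum(counts.values())
    return {s: -math.log2(c / total) for s, c in counts.items()}
```

Even at this weakest level, the constant strings 0000 and 1111 come out most probable, i.e. simplest, consistent with the simplicity bias discussed in the abstract.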
On the Complexity and Behaviour of Cryptocurrencies Compared to Other Markets
We show that the behaviour of Bitcoin has interesting similarities to stock
and precious metal markets, such as gold and silver. We report that whilst
Litecoin, the second largest cryptocurrency, closely follows Bitcoin's
behaviour, it does not show all the reported properties of Bitcoin. Agreements
between apparently disparate complexity measures have been found, and it is
shown that statistical, information-theoretic, algorithmic and fractal measures
have different but interesting capabilities of clustering families of markets
by type. The report is particularly interesting because of the range and novel
use of some measures of complexity to characterize price behaviour, because of
the IRS designation of Bitcoin as an investment property and not a currency,
and the announcement of the Canadian government's own electronic currency
MintChip. Comment: 16 pages, 11 figures, 4 tables.
Natural scene statistics mediate the perception of image complexity
Humans are sensitive to complexity and regularity in patterns. The subjective
perception of pattern complexity is correlated to algorithmic
(Kolmogorov-Chaitin) complexity as defined in computer science, but also to the
frequency of naturally occurring patterns. However, the possible mediational
role of natural frequencies in the perception of algorithmic complexity remains
unclear. Here we reanalyze Hsu et al. (2010) through a mediational analysis,
and complement their results in a new experiment. We conclude that human
perception of complexity seems partly shaped by natural scene statistics,
thereby establishing a link between the perception of complexity and the effect
of natural scene statistics.
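A mediation analysis of the kind used in this reanalysis can be sketched with ordinary least squares. The data below are synthetic, generated purely to illustrate the logic; the variable names and effect sizes are assumptions, not values from Hsu et al. (2010):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic illustration: algorithmic complexity affects perceived
# complexity partly through natural-scene frequency (the mediator).
complexity = rng.normal(size=n)
frequency = -0.8 * complexity + rng.normal(scale=0.5, size=n)
perceived = 0.3 * complexity - 0.5 * frequency + rng.normal(scale=0.5, size=n)

def slope(y, *xs):
    """OLS coefficient of the first regressor, with an intercept."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

total = slope(perceived, complexity)             # total effect (c path)
direct = slope(perceived, complexity, frequency)  # direct effect (c' path)
# Mediation shows up as |direct| < |total|: part of the effect of
# complexity on perception is carried by natural-scene frequency.
```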
Optimal Uncertainty Quantification
We propose a rigorous framework for Uncertainty Quantification (UQ) in which
the UQ objectives and the assumptions/information set are brought to the forefront.
This framework, which we call Optimal Uncertainty Quantification (OUQ), is based
on the observation that, given a set of assumptions and information about the problem,
there exist optimal bounds on uncertainties: these are obtained as extreme
values of well-defined optimization problems corresponding to extremizing probabilities
of failure, or of deviations, subject to the constraints imposed by the scenarios
compatible with the assumptions and information. In particular, this framework
does not implicitly impose inappropriate assumptions, nor does it repudiate relevant
information.
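The extremal problems described in this paragraph take, in the OUQ literature, the following general form, where A denotes the set of scenarios (f, mu) compatible with the assumptions and information and a is a failure threshold:

```latex
% Optimal (least) upper bound on the probability that the response
% f(X) exceeds the failure threshold a, over all admissible scenarios:
\mathcal{U}(\mathcal{A}) = \sup_{(f,\mu)\in\mathcal{A}} \mu\left[f(X) \ge a\right]
% The optimal lower bound is the corresponding infimum over \mathcal{A};
% both are well-defined optimization problems once \mathcal{A} is fixed.
```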
Although OUQ optimization problems are extremely large, we show that under
general conditions, they have finite-dimensional reductions. As an application,
we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid
type. Surprisingly, contrary to the classical sensitivity analysis paradigm, these results
show that uncertainties in input parameters do not necessarily propagate to
output uncertainties.
In addition, a general algorithmic framework is developed for OUQ and is tested
on the Caltech surrogate model for hypervelocity impact, suggesting the feasibility
of the framework for important complex systems.