
    Kolmogorov complexity and probability measures

    Classes of strings (respectively, infinite sequences) with a specific flow of Kolmogorov complexity are introduced: lower bounds on Kolmogorov complexity are prescribed for strings (respectively, initial segments of infinite sequences) of specified lengths. The dependence of the probabilities of these classes on the lower bounds is the main theme of the paper. Conditions are found under which the probabilities of the classes of strings are close to one; similarly, conditions are derived under which the probabilities of the classes of sequences equal one. It is shown that there are lower bounds on Kolmogorov complexity for which the studied classes of strings have probability close to one and the classes of sequences have probability one, both with respect to almost all probability measures used in practice. A variant of the theorem on infinite oscillations is also derived.
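
    The flavour of such results can be illustrated by a standard counting argument (a textbook fact, stated here for orientation, not the paper's theorem): fewer than $2^{n-c}$ programs are shorter than $n-c$ bits, so at most $2^{n-c}$ strings of length $n$ can have Kolmogorov complexity below $n-c$, and hence, for the uniform measure $\lambda$ on $\{0,1\}^n$,

        \[ \lambda\{\, x \in \{0,1\}^n : K(x) \ge n - c \,\} \;\ge\; 1 - 2^{-c}, \]

    so the class of strings meeting the lower bound $n-c$ has probability close to one already for modest $c$.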

    Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks

    We show that numerical approximations of Kolmogorov complexity (K) applied to graph adjacency matrices capture some group-theoretic and topological properties of graphs and of empirical networks ranging from metabolic to social networks. That K and the size of the automorphism group of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We show that approximations of K characterise synthetic and natural networks by their generating mechanisms, assigning lower algorithmic randomness to complex network models (Watts-Strogatz and Barabási-Albert networks) and high Kolmogorov complexity to (random) Erdős-Rényi graphs. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks: the traditional lossless-compression approach to Kolmogorov complexity, and a normalised version of the Block Decomposition Method (BDM), a measure based on algorithmic probability theory. Comment: 15 two-column pages, 20 figures. Forthcoming in Physica A: Statistical Mechanics and its Applications.
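
    The traditional lossless-compression approach mentioned above is straightforward to sketch. The following is a minimal illustration using zlib, not the authors' BDM pipeline; the function names and the example graph are invented for this sketch.

        # Sketch: upper-bound proxy for the Kolmogorov complexity of a graph via
        # lossless compression of its adjacency matrix (illustrative only, not
        # the paper's Block Decomposition Method).
        import zlib

        def adjacency_bits(adj):
            """Flatten a 0/1 adjacency matrix (list of lists) into a bit string."""
            return "".join(str(b) for row in adj for b in row)

        def compressed_size(adj, level=9):
            """Bytes of zlib output: regular graphs compress well (low-K proxy),
            Erdős-Rényi-like graphs compress poorly (high-K proxy)."""
            return len(zlib.compress(adjacency_bits(adj).encode(), level))

        # A complete graph is highly regular and should compress very well.
        n = 64
        complete = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
        print(compressed_size(complete))  # far below the 4096 raw characters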

    Kolmogorov complexity

    The term "complexity" has different meanings in different contexts. Computational complexity measures how much time or space is needed to perform some computational task. On the other hand, the complexity of description (called also Kolmogorov complexity) is the minimal number of information bits needed to define (describe) a given object. It may well happen that a short description requires a lot of time and space to follow it and actually construct the described object. However, when speaking about Kolmogorov complexity, we usually ignore this problem and count only the description bits. As it was common to him, Kolmogorov published, in 1965, a short note that started a new line of research. Aside from the formal definition of complexity, he has also suggested to use this notion in the foundations of probability theory

    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as "miraculous" in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the Coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy, and introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each level of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between computational models. In the context of natural processes that operate below the Turing universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution. Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
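
    For context, the Universal Distribution and the Coding theorem referred to above are, in their classical (unbounded) form: for a prefix-free universal machine $U$,

        \[ \mathfrak{m}(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|}, \qquad K(x) \;=\; -\log_2 \mathfrak{m}(x) + O(1), \]

    and the paper's question is how much of this relation survives when $U$ is replaced by resource-bounded, Turing-subuniversal models.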

    Algorithmic Identification of Probabilities

    The problem is to identify a probability associated with a set of natural numbers, given an infinite data sequence of elements from the set. If the given sequence is drawn i.i.d. and the probability mass function involved (the target) belongs to a computably enumerable (c.e.) or co-computably enumerable (co-c.e.) set of computable probability mass functions, then there is an algorithm to almost surely identify the target in the limit. The technical tool is the strong law of large numbers. If the set is finite and the elements of the sequence are dependent, while the sequence is typical in the sense of Martin-Löf for at least one measure belonging to a c.e. or co-c.e. set of computable measures, then there is an algorithm to identify in the limit a computable measure for which the sequence is typical (there may be more than one such measure). The technical tool here is the theory of Kolmogorov complexity. We give the algorithms and consider the associated predictions. Comment: 19 pages, LaTeX. Corrected errors and rewrote the entire paper. arXiv admin note: text overlap with arXiv:1208.500
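
    As a toy illustration of the i.i.d. case (a sketch resting only on the strong law of large numbers, with a finite rather than c.e. hypothesis class, and not the paper's construction), one can shrink a tolerance around the empirical frequencies and output the first surviving hypothesis:

        # Toy sketch: identification in the limit of an i.i.d. source from a
        # finite list of candidate distributions (hypothetical helper, not the
        # paper's algorithm).
        from collections import Counter

        def identify(stream, hypotheses):
            """After each observation, yield the first hypothesis whose
            probabilities all lie within a shrinking tolerance of the
            empirical frequencies. If the target is among `hypotheses`
            (all distinct), the guesses converge to it almost surely:
            empirical fluctuations decay faster than the n**-0.25
            tolerance, which eventually retains the target and excludes
            every other candidate."""
            counts = Counter()
            for n, x in enumerate(stream, start=1):
                counts[x] += 1
                tol = n ** -0.25
                guess = None
                for h in hypotheses:
                    if all(abs(counts[o] / n - p) <= tol for o, p in h.items()):
                        guess = h
                        break
                yield guess

        # Example: distinguishing a fair from a biased coin.
        # identify(coin_flips, [{"H": 0.5, "T": 0.5}, {"H": 0.9, "T": 0.1}])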

    Measuring sets in infinite groups

    We are now witnessing the rapid growth of a new part of group theory which has become known as "statistical group theory". A typical result in this area says something like "a random element (or tuple of elements) of a group G has property P with probability p". The validity of such a statement, of course, depends heavily on how one defines probability on groups or, equivalently, how one measures sets in a group (in particular, in a free group). We hope that the new approaches to defining probabilities on groups outlined in this paper create, among other things, an appropriate framework for studying the "average-case" complexity of algorithms on groups. Comment: 22 pages
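
    As background (one common convention in this literature, not necessarily the definition adopted in the paper), sets in a finitely generated group $G$ are often measured by asymptotic density with respect to balls $B_n$ of radius $n$ in the Cayley graph for a fixed generating set:

        \[ \rho(S) \;=\; \limsup_{n\to\infty} \frac{|S \cap B_n|}{|B_n|}, \]

    with $S$ called generic when $\rho(S) = 1$ and negligible when $\rho(S) = 0$.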

    Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results

    We apply methods for estimating the algorithmic complexity of sequences to behavioural sequences from three landmark studies of animal behaviour of increasing sophistication: foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition strategies in rodents. In each case, we demonstrate that approximations of Logical Depth and Kolmogorov-Chaitin complexity capture and validate previously reported results, in contrast to other measures such as Shannon entropy, compression, or ad hoc measures. Our method is practically useful when dealing with short sequences, such as those often encountered in cognitive-behavioural research. Our analysis supports and reveals non-random behaviour (by both Logical Depth and K complexity) in flies even in the absence of external stimuli, and confirms the "stochastic" behaviour of transgenic rats when faced with a competitor that they cannot defeat by counter-prediction. The method constitutes a formal approach for testing hypotheses about the mechanisms underlying animal behaviour. Comment: 28 pages, 7 figures and 2 tables
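
    The contrast with Shannon entropy drawn above can be seen in a toy experiment (an illustration, not the paper's analysis): entropy over symbol frequencies cannot separate a perfectly regular sequence from a shuffled one with identical symbol counts, whereas order-sensitive algorithmic measures can.

        # Toy contrast: Shannon entropy is blind to order, so a regular and a
        # shuffled sequence with the same symbol counts score identically
        # (illustration only, not the paper's pipeline).
        import math
        import random
        from collections import Counter

        def shannon_entropy(s):
            """Entropy in bits per symbol of the empirical distribution."""
            n = len(s)
            return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

        regular = "01" * 16                    # algorithmically simple
        symbols = list(regular)
        random.shuffle(symbols)                # same counts, order destroyed
        shuffled = "".join(symbols)
        print(shannon_entropy(regular), shannon_entropy(shuffled))  # 1.0 1.0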