2,554 research outputs found

    Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results

    We apply methods for estimating the algorithmic complexity of sequences to behavioural sequences from three landmark studies of animal behaviour, each of increasing sophistication: foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition strategies in rodents. In each case we demonstrate that approximations of Logical Depth and Kolmogorov-Chaitin complexity capture and validate previously reported results, in contrast to other measures such as Shannon entropy, lossless compression or ad hoc statistical measures. Our method is practically useful when dealing with short sequences, such as those often encountered in cognitive-behavioural research. Our analysis supports and reveals non-random behaviour (by LD and K complexity) in flies even in the absence of external stimuli, and confirms the "stochastic" behaviour of transgenic rats when faced with a competitor they cannot defeat by counter-prediction. The method constitutes a formal approach for testing hypotheses about the mechanisms underlying animal behaviour.
    Comment: 28 pages, 7 figures and 2 tables
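    As a point of reference for the comparison above, the sketch below computes two purely statistical proxies, per-symbol Shannon entropy and zlib-compressed length, for two short invented behavioural sequences. It does not reproduce the paper's approximations of Kolmogorov-Chaitin complexity or Logical Depth (those rely on the Coding Theorem and Block Decomposition methods); it only illustrates the kind of measures being contrasted.

import math
import zlib
from collections import Counter

def shannon_entropy(seq):
    """Per-symbol Shannon entropy of a sequence, in bits."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_length(seq):
    """Length in bytes of the zlib-compressed sequence (a crude complexity proxy)."""
    return len(zlib.compress(seq.encode("ascii"), 9))

# Invented short behavioural sequences; symbols stand for discretised actions.
sequences = {
    "repetitive": "LRLRLRLRLRLRLRLR",
    "irregular":  "LRRLLRLLLRRLRLRR",
}

for name, s in sequences.items():
    # Note: both sequences have the same symbol frequencies, so single-symbol
    # Shannon entropy cannot separate them, and for strings this short the
    # compressor's fixed overhead dominates its output length.
    print(f"{name:10s}  H = {shannon_entropy(s):.3f} bits/symbol, "
          f"zlib = {compressed_length(s)} bytes")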

    Algorithmic Complexity for Short Binary Strings Applied to Psychology: A Primer

    Since human randomness production has been studied and widely used to assess executive functions (especially inhibition), many measures have been suggested to assess the degree to which a sequence is random-like. However, each of them focuses on one feature of randomness, forcing authors to use multiple measures. Here we describe and advocate for the use of the accepted universal measure of randomness based on algorithmic complexity, by means of a previously presented technique based on the definition of algorithmic probability. A re-analysis of the classical Radio Zenith data in the light of the proposed measure and methodology is provided as a case study of an application.
    Comment: To appear in Behavior Research Methods
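    As a rough illustration of the coding-theorem idea behind the advocated measure, the sketch below estimates complexity as -log2 of a string's output frequency over a large enumeration of small Turing machines. The frequency table here is a made-up placeholder, not the published CTM data that ships with the authors' tools.

import math

# Hypothetical output frequencies D(s); placeholder values for illustration only.
toy_output_frequency = {
    "0000": 2.5e-3,
    "0101": 1.2e-3,
    "0110": 0.9e-3,
}

def ctm_estimate(s):
    """Coding-theorem estimate K(s) ~ -log2 D(s), in bits."""
    return -math.log2(toy_output_frequency[s])

for s, freq in toy_output_frequency.items():
    print(f"{s}: D(s) = {freq:.1e}, K(s) ~ {ctm_estimate(s):.2f} bits")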

    Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures

    We show that real-valued approximations of Kolmogorov-Chaitin complexity K(s) using the algorithmic coding theorem, as calculated from the output frequency of a large set of small deterministic Turing machines with up to 5 states (and 2 symbols), are consistent with the number of instructions used by the Turing machines producing s, which in turn is consistent with strict integer-valued program-size complexity (based on our knowledge of the smallest machine in terms of the number of instructions used). We also show that neither K(s) nor the number of instructions used manifests any correlation with Bennett's Logical Depth LD(s), other than what is predicted by the theory (shallow and non-random strings have low complexity under both measures). The agreement between the theory and the numerical calculations shows that, despite the undecidability of these theoretical measures, the rate of convergence of the approximations is stable enough to devise some applications. We announce a beta version of an Online Algorithmic Complexity Calculator (OACC) implementing these methods.
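    The kind of agreement reported above can be checked with a rank correlation between two complexity estimates computed for the same strings, as in the self-contained sketch below. The numerical values are invented placeholders standing in for CTM-based estimates of K(s) and for instruction counts; they are not data from the paper.

def ranks(values):
    """1-based ranks of the values, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Placeholder estimates for five strings (illustration only).
k_estimates = [12.1, 15.7, 9.8, 20.3, 14.2]   # CTM-style K(s), in bits
instr_counts = [7, 9, 6, 11, 8]               # instructions of smallest machine
print(f"Spearman rho = {spearman(k_estimates, instr_counts):.3f}")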

    Scheduling with Predictions and the Price of Misprediction

    In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known, but is instead predicted, for example by a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predicted service times to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information.
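    The setting can be made concrete with a toy single-server simulation; the paper derives closed-form performance formulae under natural assumptions, which this sketch does not reproduce. Here jobs arrive as a Poisson process, true service times are exponential, the scheduler sees only a noisy multiplicative prediction, and we compare first-come first-served with non-preemptive shortest-predicted-job-first by mean waiting time. All parameter values are arbitrary.

import heapq
import random

random.seed(0)

def simulate(use_predictions, n_jobs=50_000, lam=0.7, mu=1.0, noise=0.3):
    """Mean waiting time under FCFS (False) or shortest-predicted-job-first (True)."""
    # Generate arrival times, true sizes and noisy predicted sizes.
    t, jobs = 0.0, []
    for _ in range(n_jobs):
        t += random.expovariate(lam)
        size = random.expovariate(mu)
        pred = size * random.lognormvariate(0.0, noise)  # multiplicative noise
        jobs.append((t, size, pred))

    clock, total_wait = 0.0, 0.0
    queue, i = [], 0  # heap keyed by predicted size (or by arrival time for FCFS)
    while i < len(jobs) or queue:
        if not queue:
            clock = max(clock, jobs[i][0])  # idle until the next arrival
        # Admit every job that has arrived by the current time.
        while i < len(jobs) and jobs[i][0] <= clock:
            arr, size, pred = jobs[i]
            key = pred if use_predictions else arr
            heapq.heappush(queue, (key, arr, size))
            i += 1
        _, arr, size = heapq.heappop(queue)
        total_wait += clock - arr   # time spent waiting in the queue
        clock += size               # serve the job to completion
    return total_wait / n_jobs

print("FCFS mean wait:", round(simulate(False), 3))
print("SPJF mean wait:", round(simulate(True), 3))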

    The snowball effect of customer slowdown in critical many-server systems

    Customer slowdown describes the phenomenon that a customer's service requirement increases with experienced delay. In healthcare settings, there is substantial empirical evidence for slowdown, particularly when a patient's delay exceeds a certain threshold. For such threshold slowdown situations, we design and analyze a many-server system that leads to a two-dimensional Markov process. Analysis of this system leads to insights into the potentially detrimental effects of slowdown, especially in heavy-traffic conditions. We quantify the consequences of underprovisioning due to neglecting slowdown, demonstrate the presence of a subtle bistable system behavior, and discuss in detail the snowball effect: a delayed customer has an increased service requirement, causing longer delays for other customers, who in turn, due to slowdown, might require longer service times.
    Comment: 23 pages, 8 figures -- version 3 fixes a typo in an equation. In Stochastic Models, 201
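    The snowball effect can be sketched with a toy many-server simulation in which a customer whose wait exceeds a threshold receives a slower service rate. This is an illustrative discrete-event approximation under FCFS, not the paper's two-dimensional Markov analysis, and all parameter values are arbitrary.

import heapq
import random

random.seed(1)

def mean_wait(n_servers=10, lam=9.0, mu_fast=1.0, mu_slow=0.8,
              threshold=0.5, n_customers=100_000):
    """Mean waiting time in a FCFS many-server queue with threshold slowdown."""
    t, arrivals = 0.0, []
    for _ in range(n_customers):
        t += random.expovariate(lam)
        arrivals.append(t)

    free_at = [0.0] * n_servers        # times at which each server next frees up
    heapq.heapify(free_at)
    total_wait = 0.0
    for arr in arrivals:
        start = max(arr, free_at[0])   # take the earliest-available server
        wait = start - arr
        total_wait += wait
        mu = mu_slow if wait > threshold else mu_fast   # threshold slowdown rule
        heapq.heapreplace(free_at, start + random.expovariate(mu))
    return total_wait / n_customers

for lam in (8.0, 9.0, 9.5):
    print(f"arrival rate {lam}: mean wait = {mean_wait(lam=lam):.3f}")

    Pushing the arrival rate toward the number of servers illustrates the snowball mechanism: longer waits trigger slowdown, which inflates service requirements and hence further delays.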

    A decomposition method for global evaluation of Shannon entropy and local estimations of algorithmic complexity

    We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff–Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure may be adapted for use with multi-dimensional objects beyond strings, such as arrays and tensors. To test the measure we demonstrate the power of CTM on low-algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low-algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs for which we know that algorithmic randomness is low. We also release implementations of the methods in most major programming languages—Wolfram Language (Mathematica), Matlab, R, Perl, Python, Pascal, C++, and Haskell—and an online algorithmic complexity calculator.
    Swedish Research Council (Vetenskapsrådet)
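    The block decomposition itself is easy to sketch: split a binary string into fixed-size blocks, look up a CTM value for each distinct block, and sum CTM(block) + log2(multiplicity) over the distinct blocks. The CTM table below is a hypothetical placeholder; real values come from the released implementations and the online calculator mentioned above.

import math
from collections import Counter

# Hypothetical CTM values (in bits) for 4-bit blocks; placeholders only.
TOY_CTM = {
    "0000": 8.0, "1111": 8.0, "0101": 10.5, "1010": 10.5,
    "0011": 11.0, "1100": 11.0, "0110": 11.3, "1001": 11.3,
}

def bdm(string, block_size=4):
    """BDM estimate: sum of CTM values plus log2 multiplicities of the blocks."""
    blocks = [string[i:i + block_size]
              for i in range(0, len(string) - block_size + 1, block_size)]
    counts = Counter(blocks)
    return sum(TOY_CTM[b] + math.log2(n) for b, n in counts.items())

print(bdm("0000000000000000"))  # one block repeated four times: low estimate
print(bdm("0101001111000110"))  # four distinct blocks: higher estimate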
