
    Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results

    We apply methods for estimating the algorithmic complexity of sequences to behavioural sequences from three landmark studies of animal behaviour, each of increasing sophistication: foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition strategies in rodents. In each case, we demonstrate that approximations of Logical Depth and Kolmogorov-Chaitin complexity capture and validate previously reported results, in contrast to other measures such as Shannon entropy, lossless compression, or ad hoc scores. Our method is practically useful when dealing with short sequences, such as those often encountered in cognitive-behavioural research. Our analysis supports and reveals non-random behaviour (by the Logical Depth and Kolmogorov complexity estimates) in flies even in the absence of external stimuli, and confirms the "stochastic" behaviour of transgenic rats when faced with a competitor that they cannot defeat by counter-prediction. The method constitutes a formal approach for testing hypotheses about the mechanisms underlying animal behaviour. Comment: 28 pages, 7 figures and 2 tables
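    A minimal sketch of the general idea, not the paper's method: the paper estimates Kolmogorov-Chaitin complexity and Logical Depth with Coding Theorem / Block Decomposition techniques, whereas the code below uses a general-purpose compressor (zlib) as a crude stand-in, next to a Shannon-entropy baseline, on two hypothetical behavioural sequences coded as turn symbols.

    # Assumptions: symbol-coded behavioural sequences; zlib compressed length as a
    # rough upper-bound proxy for algorithmic complexity (unreliable for very
    # short strings, which is exactly the regime the paper's estimators target).
    import zlib
    from collections import Counter
    from math import log2

    def shannon_entropy(seq: str) -> float:
        """Empirical Shannon entropy in bits per symbol."""
        counts = Counter(seq)
        n = len(seq)
        return -sum(c / n * log2(c / n) for c in counts.values())

    def compressed_size(seq: str) -> int:
        """Length in bytes of the zlib-compressed sequence."""
        return len(zlib.compress(seq.encode(), 9))

    # Hypothetical sequences: a strictly alternating pattern vs. an irregular one.
    regular = "LRLRLRLRLRLRLRLRLRLR"
    irregular = "LRRLLLRLRRRLLRLRLLRR"
    for s in (regular, irregular):
        print(s, round(shannon_entropy(s), 3), compressed_size(s))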

    Sequence alignment, mutual information, and dissimilarity measures for constructing phylogenies

    Existing sequence alignment algorithms use heuristic scoring schemes that cannot be used as objective distance metrics. Therefore one relies on measures like the p- or log-det distances, or makes explicit, and often simplistic, assumptions about sequence evolution. Information theory provides an alternative in the form of mutual information (MI), which is, in principle, an objective and model-independent similarity measure. MI can be estimated by concatenating and zipping sequences, yielding the "normalized compression distance". So far this has produced promising results, but with uncontrolled errors. We describe a simple approach to obtain robust estimates of MI from global pairwise alignments. Using standard alignment algorithms, this gives estimates for animal mitochondrial DNA that are strikingly close to those obtained from the alignment-free methods mentioned above. Our main result uses algorithmic (Kolmogorov) information theory, but we show that similar results can also be obtained from Shannon theory. Because it is not additive, the normalized compression distance is not an optimal metric for phylogenetics, but we propose a simple modification that overcomes this issue. We test several versions of our MI-based distance measures on a large number of randomly chosen quartets and demonstrate that they all perform better than traditional measures like the Kimura or log-det (resp. paralinear) distances. Even a simplified version based on single-letter Shannon entropies, which can easily be incorporated in existing software packages, gave superior results throughout the entire animal kingdom. But we see the main virtue of our approach in a more general way. For example, it can also help to judge the relative merits of different alignment algorithms, by estimating the significance of specific alignments. Comment: 19 pages + 16 pages of supplementary material
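    For reference, a minimal sketch of the "concatenate and zip" estimator the abstract mentions: the normalized compression distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with compressed length C standing in for Kolmogorov complexity. The paper's own estimator works from global pairwise alignments instead; the sequences below are hypothetical stand-ins, not mitochondrial DNA data.

    # Assumptions: bz2 compressed length as the complexity proxy C; DNA given as
    # plain byte strings over {A, C, G, T}.
    import bz2

    def C(s: bytes) -> int:
        """Compressed length as a proxy for Kolmogorov complexity."""
        return len(bz2.compress(s))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance between two sequences."""
        cx, cy, cxy = C(x), C(y), C(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    seq_a = b"ACGTACGTGGCT" * 50
    seq_b = b"ACGTTCGTGGAT" * 50
    print(ncd(seq_a, seq_b))  # closer to 0 for similar sequences, near 1 for unrelated ones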

    Information profiles for DNA pattern discovery

    Finite-context modeling is a powerful tool for compressing, and hence for representing, DNA sequences. We describe an algorithm to detect genomic regularities within a blind discovery strategy. The algorithm uses information profiles built from suitable combinations of finite-context models. We used the genome of the fission yeast Schizosaccharomyces pombe, strain 972 h-, for illustration, unveiling locations of low information content, which are usually associated with DNA regions of potential biological interest. Comment: Full version of DCC 2014 paper "Information profiles for DNA pattern discovery"
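    A minimal sketch of an information profile from a single finite-context (order-k Markov) model: for each position it reports -log2 P(symbol | previous k symbols), with probabilities estimated from counts over the same sequence plus a pseudocount. The paper combines several such models of different orders; this single-model, two-pass version and its parameters (k, alpha) are illustrative assumptions only.

    from collections import defaultdict
    from math import log2

    def information_profile(seq: str, k: int = 4, alpha: float = 1.0):
        """Bits needed to encode each symbol given its k-symbol context."""
        counts = defaultdict(lambda: defaultdict(int))
        for i in range(k, len(seq)):
            counts[seq[i - k:i]][seq[i]] += 1
        alphabet = sorted(set(seq))
        profile = []
        for i in range(k, len(seq)):
            ctx, sym = seq[i - k:i], seq[i]
            total = sum(counts[ctx].values()) + alpha * len(alphabet)
            p = (counts[ctx][sym] + alpha) / total
            profile.append(-log2(p))
        return profile  # low values flag repetitive, low-information regions

    print(information_profile("ACGT" * 30 + "ATTTGCCAGT" * 3)[:10])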