Algorithmic complexity for psychology: A user-friendly implementation of the coding theorem method
Kolmogorov-Chaitin complexity has long been believed to be impossible to
approximate when it comes to short sequences (e.g. of length 5-50). However,
with the newly developed \emph{coding theorem method} the complexity of strings
of length 2-11 can now be numerically estimated. We present the theoretical
basis of algorithmic complexity for short strings (ACSS) and describe an
R-package providing functions based on ACSS that will cover psychologists'
needs and improve upon previous methods in three ways: (1) ACSS is now
available not only for binary strings, but for strings based on up to 9
different symbols, (2) ACSS no longer requires time-consuming computing, and
(3) a new approach based on ACSS gives access to an estimation of the
complexity of strings of any length. Finally, three illustrative examples show
how these tools can be applied to psychology.
Comment: to appear in Behavior Research Methods, 14 pages in journal format;
R package at http://cran.r-project.org/web/packages/acss/index.htm
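The coding theorem method the abstract describes estimates the algorithmic complexity of a short string s as K(s) ≈ −log2(D(s)), where D(s) is the frequency with which s is output by a large enumeration of small Turing machines. A minimal Python sketch of this lookup step follows; the frequency table here is an illustrative toy, not the real ACSS distribution shipped with the R package:

```python
import math

# Toy stand-in for the CTM output-frequency distribution D(s).
# Values are hypothetical; the real ACSS tables come from enumerating
# billions of small Turing machines.
toy_distribution = {
    "0101": 0.020,  # more frequently produced -> lower complexity
    "0110": 0.012,
    "0010": 0.009,
    "0111": 0.007,
}

def acss_estimate(s, dist):
    """Coding-theorem estimate of algorithmic complexity, in bits."""
    return -math.log2(dist[s])

for s in toy_distribution:
    print(f"{s}: {acss_estimate(s, toy_distribution):.2f} bits")
```

Frequently produced strings get low complexity estimates, which is how the method makes Kolmogorov-Chaitin complexity numerically tractable for strings far too short for compression-based approximations.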
Information analysis of DNA sequences
The problem of differentiating the informational content of coding (exon) and non-coding (intron) regions of a DNA sequence is one of the central problems of genomics. Introns are estimated to make up nearly 95% of the DNA, and since they do not appear to participate in the translation into amino acids, they have been termed “junk DNA.” Although it is widely believed that the non-coding regions of genomes play no role in cell growth and evolution, a demonstration that these regions carry useful information would tend to falsify this belief. In this thesis, we consider entropy as a measure of information, modifying the entropy expression to take into account the varying lengths of these sequences. Exons are usually much shorter than introns, so the comparison of entropy values must be normalized. A length-correction strategy was employed using randomly generated nucleotide strings built from an alphabet of the same size as that of the exons in question. The distance between exons and introns is calculated from their probability distributions. We found that the n-tuples in DNA sequences do not follow Zipf’s distribution; a modified power-law distribution derived from Zipf’s distribution was found by trial and error that closely models the codon frequencies. Correlation and divergence tests were performed. Our analysis shows that introns carry nearly as much information as exons, disproving the notion that they carry no information at all. The entropy findings of this thesis are likely to be of use in further study of other challenging problems, such as the analysis of symmetry models of the genetic code.
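The two core steps described above — the Shannon entropy of an n-tuple (e.g. codon) distribution, and the length correction against random strings over the same alphabet — can be sketched as follows. This is a generic illustration of the approach, not the thesis's exact modified entropy expression; the example sequence and trial count are arbitrary:

```python
import math
import random
from collections import Counter

def tuple_entropy(seq, n=3):
    """Shannon entropy (bits) of the non-overlapping n-tuple distribution."""
    tuples = [seq[i:i + n] for i in range(0, len(seq) - n + 1, n)]
    counts = Counter(tuples)
    total = len(tuples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def random_baseline(length, n=3, trials=100, alphabet="ACGT"):
    """Mean entropy of random strings of the same length and alphabet,
    used to normalize comparisons between short exons and long introns."""
    rng = random.Random(0)  # fixed seed for reproducibility
    return sum(
        tuple_entropy("".join(rng.choice(alphabet) for _ in range(length)), n)
        for _ in range(trials)
    ) / trials

seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"  # illustrative only
print(tuple_entropy(seq), random_baseline(len(seq)))
```

Dividing a sequence's entropy by the random baseline for its own length gives a length-normalized score, so exons and introns of very different sizes can be compared on equal footing.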
High throughput sequencing analysis of RNA libraries reveals the influences of initial library and PCR methods on SELEX efficiency.
The systematic evolution of ligands by exponential enrichment (SELEX) technique is a powerful and effective aptamer-selection procedure. However, modifications to the process can dramatically improve selection efficiency and aptamer performance. For example, droplet digital PCR (ddPCR) has recently been incorporated into SELEX protocols to reduce the propagation of byproducts and to avoid the selection bias that results from differences in the PCR efficiency of sequences within the random library. However, a detailed, parallel comparison of conventional solution PCR versus the ddPCR modification in the RNA aptamer-selection process is needed to understand their effects on overall SELEX performance. In the present study, we took advantage of high-throughput sequencing technology and bioinformatics analysis coupled with SELEX (HT-SELEX) to thoroughly investigate the effects of the initial library and the PCR method on RNA aptamer identification. Our analysis revealed that distinct "biased sequences" and nucleotide compositions existed in the initial, unselected libraries purchased from two different manufacturers, and that the fate of these "biased sequences" during selection was target-dependent. Our comparison of solution PCR- and ddPCR-driven HT-SELEX demonstrated that the PCR method affected not only the nucleotide composition of the enriched sequences, but also the overall SELEX efficiency and aptamer efficacy.
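One bioinformatics step the abstract mentions — comparing nucleotide composition between sequence pools — reduces to counting base frequencies across reads. A minimal sketch follows; the pool names and six-base reads are hypothetical placeholders, not data from the study:

```python
from collections import Counter

def base_composition(reads):
    """Fraction of each RNA base (A/C/G/U) across all reads in a pool."""
    counts = Counter()
    for read in reads:
        counts.update(read)
    total = sum(counts.values())
    return {base: counts[base] / total for base in "ACGU"}

# Hypothetical pools: an unselected library vs. a G-enriched round.
initial_pool = ["AUGGCA", "CCGUAA", "GGAUCC"]
enriched_pool = ["GGGAUG", "GGGCAA", "GGGUAC"]
print(base_composition(initial_pool))
print(base_composition(enriched_pool))
```

Tracking how these fractions drift across selection rounds is one simple way to detect the PCR-induced composition bias the study reports.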
Big data and the SP theory of intelligence
This article is about how the "SP theory of intelligence" and its realisation
in the "SP machine" may, with advantage, be applied to the management and
analysis of big data. The SP system -- introduced in the article and fully
described elsewhere -- may help to overcome the problem of variety in big data:
it has potential as "a universal framework for the representation and
processing of diverse kinds of knowledge" (UFK), helping to reduce the
diversity of formalisms and formats for knowledge and the different ways in
which they are processed. It has strengths in the unsupervised learning or
discovery of structure in data, in pattern recognition, in the parsing and
production of natural language, in several kinds of reasoning, and more. It
lends itself to the analysis of streaming data, helping to overcome the problem
of velocity in big data. Central in the workings of the system is lossless
compression of information: making big data smaller and reducing problems of
storage and management. There is potential for substantial economies in the
transmission of data, for big cuts in the use of energy in computing, for
faster processing, and for smaller and lighter computers. The system provides a
handle on the problem of veracity in big data, with potential to assist in the
management of errors and uncertainties in data. It lends itself to the
visualisation of knowledge structures and inferential processes. A
high-parallel, open-source version of the SP machine would provide a means for
researchers everywhere to explore what can be done with the system and to
create new versions of it.
Comment: Accepted for publication in IEEE Access
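The article describes lossless compression only abstractly. As a generic illustration (using zlib, not the SP machine's own compression scheme), a round-trip shows the property the SP system relies on: redundant "big data" shrinks substantially while remaining exactly recoverable:

```python
import zlib

data = b"pattern " * 1000            # highly redundant input
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data  # lossless: exact recovery
print(len(data), "->", len(compressed))     # original vs. compressed bytes
```

The assertion is the point: unlike lossy schemes, nothing is discarded, which is why compression can double as a basis for storage savings and for structure discovery.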
Mental evolution: a review of Daniel Dennett’s From Bacteria to Bach and Back
From Bacteria to Bach and Back is an ambitious book that attempts to integrate a theory about the evolution of the human mind with another theory about the evolution of human culture. It is advertised as a defense of memes, but conceptualizes memes more liberally than has been done before. It is also advertised as a defense of the proposal that natural selection operates on culture, but conceptualizes natural selection as a process in which nearly all interesting parameters are free to vary. This liberal conception of key concepts creates space for philosophical innovation, but occasionally makes the empirical content of the theory difficult to pin down. Nevertheless, the book is full of scientific insight, wit, and humor. It will undoubtedly become a source of both controversy and inspiration for those interested in naturalistic theories of human culture.