Training-free Measures Based on Algorithmic Probability Identify High Nucleosome Occupancy in DNA Sequences
We introduce and study a set of training-free methods, of an
information-theoretic and algorithmic-complexity nature, applied to DNA
sequences to assess their capability to identify nucleosomal
binding sites. We test our measures on well-studied genomic sequences of
different sizes drawn from different sources. The measures reveal the known in
vivo versus in vitro predictive discrepancies and uncover their potential to
pinpoint (high) nucleosome occupancy. We explore different possible signals
within and beyond the nucleosome length and find that complexity indices are
informative of nucleosome occupancy. We compare against the gold standard
(Kaplan model) and find similar and complementary results, with the main
difference that our approach relies on sequence complexity alone. For example,
for high occupancy, complexity-based scores outperform the Kaplan model at
predicting binding, representing a significant advance in predicting the
highest nucleosome occupancy with a training-free approach. Comment: 8 pages main text (4 figures), 12 total with Supplementary (1 figure)
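The window-based scoring described above can be sketched as follows. This toy uses plain k-mer Shannon entropy as a stand-in for the algorithmic-probability measures the abstract refers to, and the 147 bp nucleosome-length window and 10 bp step are illustrative assumptions, not the authors' settings:

```python
from collections import Counter
from math import log2

def block_entropy(seq: str, k: int = 3) -> float:
    """Shannon entropy (bits) of overlapping k-mers in a DNA string."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(kmers)
    n = len(kmers)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def window_scores(seq: str, window: int = 147, step: int = 10, k: int = 3):
    """Score each nucleosome-length window by its k-mer entropy."""
    return [(i, block_entropy(seq[i:i + window], k))
            for i in range(0, len(seq) - window + 1, step)]

# Toy check: a low-complexity repeat vs a mixed sequence.
low = "AT" * 100
mixed = "ACGTTGCAAGCTGATCCGTA" * 10
print(block_entropy(low) < block_entropy(mixed))  # True: the repeat is less complex
```

Windows whose complexity scores stand out would then be the candidates compared against occupancy data.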
A rapid and scalable method for multilocus species delimitation using Bayesian model comparison and rooted triplets
Multilocus sequence data provide far greater power to resolve species limits than the single-locus data typically used for broad surveys of clades. However, current statistical methods based on a multispecies coalescent framework are computationally demanding, because of the number of possible delimitations that must be compared and the time-consuming likelihood calculations involved. New methods are therefore needed to open up the power of multilocus approaches to larger systematic surveys. Here, we present a rapid and scalable method that introduces two innovations. First, the method reduces the complexity of likelihood calculations by decomposing the tree into rooted triplets. The distribution of topologies for a triplet across multiple loci has a uniform trinomial distribution when the three individuals belong to the same species, but a skewed distribution if they belong to separate species, with a form that is specified by the multispecies coalescent. A Bayesian model comparison framework was developed, and the best delimitation is found by comparing the product of posterior probabilities of all triplets. The second innovation is a new dynamic programming algorithm for finding the optimum delimitation from all those compatible with a guide tree by successively analyzing subtrees defined by each node. This algorithm removes the need for the heuristic searches used by current methods, guarantees that the best solution is found, and could potentially be used in other systematic applications. We assessed the performance of the method with simulated, published and newly generated data. Analyses of simulated data demonstrate that the combined method has favourable statistical properties and scalability with increasing sample sizes. Analyses of empirical data from both eukaryotes and prokaryotes demonstrate its potential for delimiting species in real cases.
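The triplet comparison can be illustrated with a minimal Bayes-factor calculation. The uniform prior on the concordance probability p is an assumption made here for illustration; the actual method uses the form specified by the multispecies coalescent:

```python
from math import factorial

def multinom_coef(n1: int, n2: int, n3: int) -> int:
    return factorial(n1 + n2 + n3) // (factorial(n1) * factorial(n2) * factorial(n3))

def bayes_factor(n_concordant: int, n_disc1: int, n_disc2: int, grid: int = 1000) -> float:
    """Bayes factor for 'two species' (skewed trinomial, concordance
    probability p >= 1/3) vs 'one species' (uniform trinomial).
    Counts are rooted-triplet topologies observed across loci."""
    n = n_concordant + n_disc1 + n_disc2
    coef = multinom_coef(n_concordant, n_disc1, n_disc2)
    # One-species model: all three rooted topologies equally likely.
    lik_one = coef * (1.0 / 3.0) ** n
    # Two-species model: integrate over an (assumed) uniform prior p ~ U(1/3, 1),
    # with the two discordant topologies sharing the remaining mass equally.
    total = 0.0
    for i in range(grid):
        p = 1 / 3 + (i + 0.5) * (2 / 3) / grid
        q = (1 - p) / 2
        total += coef * p ** n_concordant * q ** (n_disc1 + n_disc2)
    lik_two = total / grid
    return lik_two / lik_one

# 20 loci strongly favouring one topology -> evidence for two species.
print(bayes_factor(18, 1, 1) > 1)  # True
```

In the full method, such per-triplet comparisons are combined multiplicatively across all triplets, and the dynamic programming step searches the delimitations compatible with the guide tree.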
Frequency Effects on Predictability of Stock Returns
We propose that predictability is a prerequisite for profitability on
financial markets. We look at ways to measure the predictability of price
changes using an information-theoretic approach and apply them to all
historical data available for NYSE 100 stocks. This allows us to determine
whether the sampling frequency of price changes affects their predictability.
We also study relations between the predictability of price changes and the
deviation of the price-formation processes from iid, as well as the stock's
sector. We also briefly comment on the complicated relationship between the
predictability of price changes and the profitability of algorithmic
trading. Comment: 8 pages, 16 figures, submitted for possible publication to
Computational Intelligence for Financial Engineering and Economics 2014
conference
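One common information-theoretic proxy for predictability is an entropy-rate estimate over symbolized price changes, where lower entropy means more predictable changes. The symbolization and block length below are illustrative choices, not the paper's exact measures:

```python
import random
from collections import Counter
from itertools import accumulate
from math import log2

def symbolize(prices):
    """Map price changes to symbols: +1 up, -1 down, 0 flat."""
    return [(p2 > p1) - (p2 < p1) for p1, p2 in zip(prices, prices[1:])]

def entropy_rate(symbols, k: int = 4) -> float:
    """Conditional block entropy H(X_k | X_1..X_{k-1}) as an entropy-rate
    estimate; lower values indicate more predictable change sequences."""
    def block_H(m):
        if m == 0:
            return 0.0
        blocks = Counter(tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1))
        n = sum(blocks.values())
        return -sum(c / n * log2(c / n) for c in blocks.values())
    return block_H(k) - block_H(k - 1)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(5000)]
walk = list(accumulate(noise))                       # iid increments: unpredictable
trend = [1.0 * i + x for i, x in enumerate(noise)]   # strong drift: mostly up-moves
print(entropy_rate(symbolize(trend)) < entropy_rate(symbolize(walk)))
```

Re-sampling the same series at different frequencies before symbolizing gives a simple way to probe the frequency effect the abstract investigates.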
A decision-theoretic approach for segmental classification
This paper is concerned with statistical methods for the segmental
classification of linear sequence data where the task is to segment and
classify the data according to an underlying hidden discrete state sequence.
Such analysis is commonplace in the empirical sciences including genomics,
finance and speech processing. In particular, we are interested in answering
the following question: given data $y_{1:n}$ and a statistical model of
the hidden states $x_{1:n}$, what should we report as the prediction under
the posterior distribution $p(x_{1:n} \mid y_{1:n})$? That is, how should you make a
prediction of the underlying states? We demonstrate that traditional approaches
such as reporting the most probable state sequence or most probable set of
marginal predictions can give undesirable classification artefacts and offer
limited control over the properties of the prediction. We propose a
decision-theoretic approach using a novel class of Markov loss functions and
report the prediction via the principle of minimum expected loss (maximum
expected utility). We demonstrate that the sequence of minimum expected loss under the
Markov loss function can be enumerated exactly using dynamic programming
methods and that it offers flexibility and performance improvements over
existing techniques. The result is generic and applicable to any probabilistic
model on a sequence, such as Hidden Markov models, change point or product
partition models. Comment: Published at http://dx.doi.org/10.1214/13-AOAS657 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
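The artefact from reporting pointwise marginal predictions can be seen with a hand-built toy posterior over three-step state paths (the probabilities are invented for illustration): the per-position argmax sequence can have zero posterior probability, i.e. be an impossible sequence.

```python
# Toy posterior p(x_{1:3} | y) over hidden state paths; values chosen by
# hand for illustration, not produced by any fitted model.
posterior = {
    ("A", "B", "B"): 0.30,
    ("B", "B", "A"): 0.30,
    ("A", "A", "A"): 0.25,
    ("B", "A", "B"): 0.15,
}

def map_path(post):
    """Joint MAP: the single most probable state sequence."""
    return max(post, key=post.get)

def marginal_path(post, T=3, states=("A", "B")):
    """Pointwise decode: argmax of each marginal p(x_t | y)."""
    path = []
    for t in range(T):
        marg = {s: sum(p for x, p in post.items() if x[t] == s) for s in states}
        path.append(max(marg, key=marg.get))
    return tuple(path)

mp = marginal_path(posterior)
print(mp)                      # ('A', 'B', 'A')
print(posterior.get(mp, 0.0))  # 0.0 -- the marginally decoded path is impossible
print(map_path(posterior))     # ('A', 'B', 'B')
```

A minimum-expected-loss decoder under a loss that penalizes both pointwise errors and transitions (as the Markov loss functions in the paper do) avoids reporting such impossible sequences while retaining control over the trade-off.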