
    Frequency Effects on Predictability of Stock Returns

    We propose that predictability is a prerequisite for profitability on financial markets. We look at ways to measure the predictability of price changes using an information-theoretic approach and apply them to all historical data available for NYSE 100 stocks. This allows us to determine whether the frequency at which price changes are sampled affects their predictability. We also investigate the relations between the predictability of price changes and both the deviation of the price formation processes from i.i.d. and the stock's sector. Finally, we briefly comment on the complicated relationship between the predictability of price changes and the profitability of algorithmic trading.
    Comment: 8 pages, 16 figures, submitted for possible publication to the Computational Intelligence for Financial Engineering and Economics 2014 conference
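
    As a rough illustration of the information-theoretic approach described above, the sketch below discretizes price changes into ternary symbols and estimates the Shannon entropy rate with a match-length (Lempel-Ziv-style) estimator; predictability is then measured as the gap between that rate and the maximum attainable entropy. The symbolization, the particular estimator, and the synthetic input are illustrative assumptions, not the paper's exact methodology.

```python
import numpy as np

def symbolize(price_changes):
    """Map each price change to a ternary symbol: 0 = down, 1 = flat, 2 = up."""
    return np.digitize(np.sign(price_changes), bins=[-0.5, 0.5])

def match_length_entropy_rate(symbols):
    """Lempel-Ziv / match-length style entropy-rate estimate in bits per symbol.

    L_i is the length of the shortest substring starting at position i that has
    not occurred earlier in the sequence; the estimate is the reciprocal of the
    average of L_i / log2(i + 1).  Quadratic-time, fine for a short illustration.
    """
    s = "".join(map(str, symbols))
    n = len(s)
    ratios = []
    for i in range(1, n):
        length = 1
        while i + length <= n and s[i:i + length] in s[:i]:
            length += 1
        ratios.append(length / np.log2(i + 1))
    return 1.0 / np.mean(ratios)

def predictability(symbols, alphabet_size=3):
    """Roughly 0 for an i.i.d. uniform sequence, approaching 1 for a fully
    predictable one; estimator bias can push it slightly outside [0, 1]."""
    return 1.0 - match_length_entropy_rate(symbols) / np.log2(alphabet_size)

# Toy i.i.d. ternary stand-in for sampled price-change signs (not real NYSE data).
rng = np.random.default_rng(0)
changes = rng.integers(-1, 2, size=1000)
print(round(predictability(symbolize(changes)), 3))
```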

    Maximum Entropy Production Principle for Stock Returns

    In our previous studies we investigated the structural complexity of time series describing stock returns on the New York and Warsaw stock exchanges, employing two estimators of Shannon's entropy rate based on the Lempel-Ziv and Context Tree Weighting algorithms, which were originally developed for data compression. Such structural complexity of the time series of logarithmic stock returns can be used as a measure of the inherent (model-free) predictability of the underlying price formation processes, testing the Efficient-Market Hypothesis in practice. We have also correlated the estimated predictability with the profitability of standard trading algorithms, and found that these do not use the structure inherent in the stock returns to any significant degree. To put the structural complexity of stock returns to use for prediction, we propose the Maximum Entropy Production Principle as applied to stock returns, and test it on the two mentioned markets, asking whether predictions of stock returns can be enhanced on the basis of their structural complexity and this principle.
    Comment: 14 pages, 5 figures
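
    A crude, self-contained proxy for the compression-based structural-complexity idea described above: compare how well an off-the-shelf Lempel-Ziv-family compressor (DEFLATE via zlib) shrinks a symbolized return series. This is not the paper's LZ or CTW entropy-rate estimator, and absolute values depend on the byte encoding, so only relative comparisons are meaningful.

```python
import zlib
import numpy as np

def compression_complexity(symbols):
    """Ratio of DEFLATE-compressed size to raw size for a byte-encoded symbol
    sequence.  Lower values indicate more exploitable structure (redundancy);
    values near the i.i.d. baseline suggest little model-free predictability."""
    raw = bytes(int(s) for s in symbols)
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(1)
iid = rng.integers(0, 2, size=5000)        # i.i.d. fair coin: no structure

# Sticky two-state chain: long runs of identical symbols, hence low entropy rate.
state, chain = 0, []
for _ in range(5000):
    if rng.random() < 0.05:
        state = 1 - state
    chain.append(state)

print(compression_complexity(iid), compression_complexity(chain))
```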

    Adaptive compression against a countable alphabet

    This paper sheds light on universal coding with respect to classes of memoryless sources over a countable alphabet defined by an envelope function with finite and non-decreasing hazard rate. We prove that the auto-censuring (AC) code introduced by Bontemps (2011) is adaptive with respect to the collection of such classes. The analysis builds on the tight characterization of universal redundancy rate in terms of metric entropy by Haussler and Opper (1997) and on a careful analysis of the performance of the AC-coding algorithm. The latter relies on non-asymptotic bounds for maxima of samples from discrete distributions with finite and non-decreasing hazard rate.
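
    For readers unfamiliar with the class definition above, the snippet below computes the hazard rate h(k) = p(k) / P(X >= k) of a discrete distribution and checks it on a geometric law, whose hazard rate is constant (hence finite and non-decreasing), so it lies in the envelope classes considered here. This only illustrates the defining condition of the source classes, not the AC code itself.

```python
import numpy as np

def discrete_hazard_rate(pmf):
    """Hazard rate h(k) = p(k) / P(X >= k) for a pmf given on {0, 1, ..., K}."""
    tail = np.cumsum(pmf[::-1])[::-1]   # tail[k] = P(X >= k)
    return pmf / tail

# Geometric pmf p(k) = (1 - q) q^k has constant hazard rate 1 - q.
q = 0.7
k = np.arange(60)
geometric_pmf = (1 - q) * q ** k
h = discrete_hazard_rate(geometric_pmf)
print(np.round(h[:5], 3))                        # ~[0.3 0.3 0.3 0.3 0.3]
print(bool(np.all(np.diff(h[:40]) >= -1e-12)))   # non-decreasing on this range
```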

    Universal lossless source coding with the Burrows Wheeler transform

    The Burrows Wheeler transform (BWT, 1994) is a reversible sequence transformation used in a variety of practical lossless source-coding algorithms. In each, the BWT is followed by a lossless source code that attempts to exploit the natural ordering of the BWT coefficients. BWT-based compression schemes are widely touted as low-complexity algorithms giving lossless coding rates better than those of the Ziv-Lempel codes (commonly known as LZ'77 and LZ'78) and almost as good as those achieved by prediction by partial matching (PPM) algorithms. To date, the coding performance claims have been made primarily on the basis of experimental results. This work gives a theoretical evaluation of BWT-based coding. The main results of this theoretical evaluation include: (1) statistical characterizations of the BWT output on both finite strings and sequences of length n → ∞, (2) a variety of very simple new techniques for BWT-based lossless source coding, and (3) proofs of the universality and bounds on the rates of convergence of both new and existing BWT-based codes for finite-memory and stationary ergodic sources. The end result is a theoretical justification and validation of the experimentally derived conclusions: BWT-based lossless source codes achieve universal lossless coding performance that converges to the optimal coding performance more quickly than the rate of convergence observed in Ziv-Lempel style codes and, for some BWT-based codes, within a constant factor of the optimal rate of convergence for finite-memory sources.
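
    The following is a minimal, textbook-style sketch of the Burrows-Wheeler transform and its inverse via sorted cyclic rotations; practical BWT-based coders instead derive the transform from a suffix array and follow it with a lossless code over the transformed sequence, as described above. The sentinel choice and the naive quadratic inversion are illustrative simplifications.

```python
def bwt(text, eos="\x00"):
    """Burrows-Wheeler transform via sorted cyclic rotations (O(n^2 log n) sketch;
    real implementations compute the same output from a suffix array)."""
    s = text + eos                                   # unique, smallest sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

def inverse_bwt(last_column, eos="\x00"):
    """Invert the transform by repeatedly prepending the last column and sorting."""
    table = [""] * len(last_column)
    for _ in range(len(last_column)):
        table = sorted(last_column[i] + table[i] for i in range(len(last_column)))
    original = next(row for row in table if row.endswith(eos))
    return original.rstrip(eos)

text = "abracadabra_abracadabra"
transformed = bwt(text)
assert inverse_bwt(transformed) == text
print(transformed)                                   # clusters of equal symbols
```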

    Universal lossless source coding with the Burrows Wheeler transform

    We here consider a theoretical evaluation of data compression algorithms based on the Burrows Wheeler transform (BWT). The main contributions include a variety of very simple new techniques for BWT-based universal lossless source coding on finite-memory sources and a set of new rate of convergence results for BWT-based source codes. The result is a theoretical validation and quantification of the earlier experimental observation that BWT-based lossless source codes give performance better than that of Ziv-Lempel-style codes and almost as good as that of prediction by partial matching (PPM) algorithms.
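
    As a companion to the BWT sketch above, the following illustrates move-to-front (MTF) coding, a standard second stage in many BWT-based schemes (not necessarily the one evaluated here): because the BWT output is locally homogeneous, MTF turns it into a stream dominated by small integers that a simple entropy or run-length coder handles well.

```python
def mtf_encode(data, alphabet):
    """Move-to-front: each symbol is emitted as its current position in the table,
    then moved to the front, so recently seen symbols get small indices."""
    table = list(alphabet)
    indices = []
    for ch in data:
        i = table.index(ch)
        indices.append(i)
        table.insert(0, table.pop(i))
    return indices

def mtf_decode(indices, alphabet):
    """Inverse of mtf_encode for the same initial alphabet ordering."""
    table = list(alphabet)
    chars = []
    for i in indices:
        ch = table[i]
        chars.append(ch)
        table.insert(0, table.pop(i))
    return "".join(chars)

bwt_like = "rrrddd__aaaaabbbcc"          # toy stand-in for a BWT last column
alphabet = sorted(set(bwt_like))
codes = mtf_encode(bwt_like, alphabet)
assert mtf_decode(codes, alphabet) == bwt_like
print(codes)                             # mostly zeros after each first occurrence
```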

    Distribution of tail symbols in a search tree with Markov sources

    Lempel-Ziv'78 is one of the most popular data compression algorithms on words. Over the last decades its fascinating behavior has been uncovered and many of its beautiful properties are now better understood. Among other results, in 1995, by settling the Ziv conjecture, we proved that for a memoryless source (i.e., when a sequence is generated by a source without memory) the number of LZ'78 phrases satisfies the Central Limit Theorem (CLT). Since then a quest has been under way to extend this result to Markov sources; however, despite several attempts, the problem is still open. In this conference paper we revisit the issue and focus on a much simpler, but not trivial, problem that may lead to the resolution of the LZ'78 dilemma. We consider the associated Digital Search Tree (DST) version of the problem, in which the DST is built over a fixed number of Markov-generated sequences. In this model we count the number of so-called "tail symbols", that is, the symbols that follow the last inserted symbol. Our goal is to analyze this new quantity under the Markovian assumption, since it plays a crucial role in the analysis of the original LZ'78 problem. We establish the mean, the variance, and a central limit theorem for the number of tail symbols. We accomplish this by applying techniques of analytic combinatorics on words, also known as analytic pattern matching.
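
    A minimal sketch of the LZ'78 incremental parsing discussed above: each phrase is the shortest prefix of the remaining input not yet in the dictionary, which is equivalent to inserting one new node into the associated digital search tree. The toy two-state Markov input is illustrative, and the sketch only counts phrases; it does not perform the tail-symbol analysis itself.

```python
import random

def lz78_parse(sequence):
    """LZ'78 incremental parsing: each phrase extends the longest phrase already
    in the dictionary by one fresh symbol.  Returns (dictionary index, new symbol)
    pairs; the number of phrases is the quantity behind the Ziv conjecture / CLT."""
    dictionary = {"": 0}
    phrases = []
    current = ""
    for symbol in sequence:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate
        else:
            phrases.append((dictionary[current], symbol))
            dictionary[candidate] = len(dictionary)
            current = ""
    if current:                               # incomplete final phrase, no fresh symbol
        phrases.append((dictionary[current], ""))
    return phrases

# Toy sticky two-state Markov source over {"0", "1"} as a stand-in input.
random.seed(0)
state, seq = "0", []
for _ in range(500):
    if random.random() > 0.8:                 # switch state with probability 0.2
        state = "1" if state == "0" else "0"
    seq.append(state)

print(len(lz78_parse(seq)), "phrases")
```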
