
    Increment entropy as a measure of complexity for time series

    Entropy has been a common index for quantifying the complexity of time series in a variety of fields. Here, we introduce increment entropy to measure the complexity of time series in which each increment is mapped into a word of two letters, one letter corresponding to the direction and the other to the magnitude. The Shannon entropy of the words is termed increment entropy (IncrEn). Simulations on synthetic data and tests on epileptic EEG signals have demonstrated its ability to detect abrupt changes, whether energetic (e.g., spikes or bursts) or structural. The computation of IncrEn makes no assumptions about the time series, and it is applicable to arbitrary real-world data. Comment: 12 pages, 7 figures, 2 tables
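
    Since the construction is algorithmic (code each increment by its sign and a quantized magnitude, then take the Shannon entropy of short words of these letters), a minimal Python sketch may help. The quantization rule used here (scaling by the standard deviation of the increments, with resolution R) is an assumption of this sketch, not necessarily the paper's exact definition:

        import numpy as np
        from collections import Counter

        def increment_entropy(x, m=2, R=4):
            """Sketch of increment entropy (IncrEn): each increment becomes a
            two-letter word (sign, quantized magnitude); the Shannon entropy
            of words of m such letters, normalized by m, is returned."""
            v = np.diff(np.asarray(x, dtype=float))
            s = np.sign(v).astype(int)                   # direction letter
            sd = v.std() or 1.0                          # guard against zero spread
            q = np.minimum(np.floor(np.abs(v) / sd * R), R).astype(int)  # magnitude letter
            letters = list(zip(s, q))
            words = [tuple(letters[i:i + m]) for i in range(len(letters) - m + 1)]
            counts = Counter(words)
            n = sum(counts.values())
            p = np.array([c / n for c in counts.values()])
            return -np.sum(p * np.log2(p)) / m

        # White noise should score higher than a smooth ramp:
        rng = np.random.default_rng(0)
        print(increment_entropy(rng.normal(size=1000)))    # relatively high
        print(increment_entropy(np.linspace(0, 1, 1000)))  # 0.0: a single repeated word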

    Limit complexities revisited [once more]

    The main goal of this article is to put some known results in a common perspective and to simplify their proofs. We start with a simple proof of a result of Vereshchagin saying that $\limsup_n C(x \mid n)$ equals $C^{0'}(x)$. Then we use the same argument to prove similar results for prefix complexity and a priori probability on the binary tree, to prove Conidis' theorem about limits of effectively open sets, and also to improve Muchnik's results about limit frequencies. As a by-product, we get a criterion of 2-randomness proved by Miller: a sequence $X$ is 2-random if and only if there exists $c$ such that any prefix $x$ of $X$ is a prefix of some string $y$ such that $C(y) \ge |y| - c$. (In the 1960s this property was suggested by Kolmogorov as one of the possible definitions of randomness.) We also get another 2-randomness criterion, by Miller and Nies: $X$ is 2-random if and only if $C(x) \ge |x| - c$ for some $c$ and infinitely many prefixes $x$ of $X$. This is a modified version of our old paper, which contained a weaker (and more cumbersome) version of Conidis' result, whose proof used the low basis theorem (in quite a strange way). The full version was formulated there as a conjecture, later proved by Conidis. Bruno Bauwens (personal communication) noted that the proof can also be obtained by a simple modification of our original argument, and we reproduce Bauwens' argument with his permission. Comment: See http://arxiv.org/abs/0802.2833 for the old paper
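
    For reference, the three results named above can be stated compactly in LaTeX. Notation follows the abstract; writing $x \sqsubseteq X$ for "$x$ is a prefix of $X$" is a convention of this summary:

        % Vereshchagin's theorem:
        \limsup_{n\to\infty} C(x \mid n) = C^{0'}(x)
        % Miller's 2-randomness criterion:
        X \text{ is 2-random} \iff \exists c \;\forall x \sqsubseteq X \;\exists y \sqsupseteq x :\; C(y) \ge |y| - c
        % Miller--Nies criterion:
        X \text{ is 2-random} \iff \exists c \;\exists^{\infty} x \sqsubseteq X :\; C(x) \ge |x| - c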

    A Quantitative Occam's Razor

    This paper derives an objective Bayesian "prior" based on considerations of entropy/information. By this means, it produces a quantitative measure of goodness of fit (the "H-statistic") that balances higher likelihood against the number of fitting parameters employed. The method is intended for phenomenological applications where the underlying theory is uncertain or unknown. For example, it can help decide whether the large-angle anomalies in the CMB data should be taken seriously. I am therefore posting it now, even though it was published before the arXiv existed. Comment: plainTeX, 16 pages, no figures. The most current version is available at http://www.physics.syr.edu/~sorkin/some.papers/ (or wherever my home page may be).
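
    The abstract does not give the H-statistic's formula, so the sketch below is not Sorkin's construction; it only illustrates the generic trade-off such a statistic formalizes (higher likelihood versus more fitting parameters), using an AIC-style penalty on polynomial fits. The data, model family, and penalty form are all assumptions of this illustration:

        import numpy as np

        # Fit polynomials of increasing degree to noisy linear data and
        # penalize the Gaussian log-likelihood by the parameter count.
        rng = np.random.default_rng(1)
        x = np.linspace(-1, 1, 50)
        y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)  # truth: degree 1

        for k in range(6):
            coef = np.polyfit(x, y, k)
            resid = y - np.polyval(coef, x)
            sigma2 = resid.var()
            loglik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)  # max Gaussian log-likelihood
            score = loglik - (k + 1)  # one penalty unit per fitting parameter
            print(f"degree {k}: penalized log-likelihood = {score:.1f}")
        # The score should peak near the true degree: extra parameters raise
        # the likelihood slightly but are outweighed by the penalty.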

    Entropy and perpetual computers

    A definition of entropy via Kolmogorov algorithmic complexity is discussed. As examples, we show how the mean-field theory for the Ising model and the entropy of a perfect gas can be recovered. The connection with computation is pointed out by paraphrasing the laws of thermodynamics for computers. Also discussed is an approach that may be adopted to develop statistical mechanics from the algorithmic point of view. Comment: Based on the Chanchal Majumdar memorial lectures given in Kolkata. 9 pages, 3 EPS figures. For publication in "Physics Teacher"; v2: Sec. 3 split into smaller subsections
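
    Algorithmic entropy itself is uncomputable, but a standard computable stand-in is compressed length. A minimal sketch along those lines (using zlib as the compressor, an assumption of this sketch rather than anything prescribed by the paper) contrasts an ordered and a disordered Ising-like spin configuration:

        import zlib
        import numpy as np

        def compressed_bits(spins):
            """Compressed length in bits of a +/-1 spin configuration,
            used as a computable proxy for its algorithmic complexity."""
            data = bytes((spins > 0).astype(np.uint8))
            return 8 * len(zlib.compress(data, 9))

        rng = np.random.default_rng(2)
        ordered = np.ones(10_000)                      # T = 0 ferromagnet: near-zero complexity
        disordered = rng.choice([-1, 1], size=10_000)  # infinite-T limit: roughly 1 bit per spin
        print(compressed_bits(ordered), compressed_bits(disordered))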

    Analysis of Daily Streamflow Complexity by Kolmogorov Measures and Lyapunov Exponent

    Analysis of daily streamflow variability in space and time is important for water resources planning, development, and management. The natural variability of streamflow is complicated by anthropogenic influences and climate change, which may introduce additional complexity into the observed records. To address this question for daily discharge data recorded during 1989-2016 at twelve gauging stations on the Brazos River in Texas (USA), we use a set of novel quantitative tools: Kolmogorov complexity (KC), with its derived measures, to assess complexity, and Lyapunov time (LT) to assess predictability. We find that all daily discharge series exhibit long memory with an increasing tendency in the downstream direction, while the randomness of the series at individual sites cannot be definitively concluded. All Kolmogorov complexity measures have relatively small values, with the exception of the USGS (United States Geological Survey) 08088610 station at Graford, Texas, which exhibits the highest values of these measures. This finding may be attributed to the elevated effect of human activities at Graford and a proportionally lesser effect at the other stations. In addition, complexity tends to decrease downstream, meaning that larger catchments are generally less influenced by anthropogenic activity. The randomness-corrected Lyapunov time (quantifying predictability) is found to be inversely proportional to the Kolmogorov complexity, which strengthens our conclusion regarding the effect of anthropogenic activities, considering that KC and LT are distinct measures based on rather different techniques
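
    In studies of this kind, the Kolmogorov complexity of a series is typically estimated by binarizing it about its mean (sometimes the median) and normalizing the Lempel-Ziv pattern count; the sketch below follows that recipe, with the mean threshold and the normalization taken as assumptions:

        import numpy as np

        def lempel_ziv_count(bits):
            """Number of distinct phrases in a simple LZ76-style parsing of a 0/1 string."""
            s = "".join(map(str, bits))
            i, c, n = 0, 0, len(s)
            while i < n:
                k = 1
                # extend the current phrase while it already occurs earlier in the string
                while i + k <= n and s[i:i + k] in s[:i + k - 1]:
                    k += 1
                c += 1
                i += k
            return c

        def kolmogorov_complexity(x):
            """Normalized LZ complexity of a series binarized about its mean:
            values near 1 for noise-like series, near 0 for regular ones."""
            b = (np.asarray(x) > np.mean(x)).astype(int)
            n = len(b)
            return lempel_ziv_count(b) * np.log2(n) / n

        rng = np.random.default_rng(3)
        print(kolmogorov_complexity(rng.random(2000)))              # close to 1: white noise
        print(kolmogorov_complexity(np.sin(np.arange(2000) / 20)))  # small: periodic signal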