    Permutation Complexity and Coupling Measures in Hidden Markov Models

    In [Haruna, T. and Nakajima, K., 2011. Physica D 240, 1370-1377], the authors introduced the duality between values (words) and orderings (permutations) as a basis for discussing the relationship between information-theoretic measures for finite-alphabet stationary stochastic processes and their permutation analogues. That duality has been used to give a simple proof of the equality between the entropy rate and the permutation entropy rate for any finite-alphabet stationary stochastic process, and to establish results on the excess entropy and the transfer entropy for finite-alphabet stationary ergodic Markov processes. In this paper, we extend our previous results to hidden Markov models and show the equalities between various information-theoretic complexity and coupling measures and their permutation analogues. In particular, we show the following two results for hidden Markov models with ergodic internal processes: the two permutation analogues of the transfer entropy, the symbolic transfer entropy and the transfer entropy on rank vectors, are both equivalent to the transfer entropy when considered as rates; and the directed information theory can be captured by the permutation entropy approach. Comment: 26 pages
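
    The ordinal-pattern machinery behind the permutation entropy rate can be illustrated with a short, self-contained sketch (not the authors' code; the function name and the tie-breaking-by-position convention are our own choices):

```python
import math
from collections import Counter

def permutation_entropy(x, order=3):
    """Empirical permutation entropy: the Shannon entropy (in bits) of the
    distribution of ordinal patterns of length `order` observed in x.
    Ties are broken by position, since `sorted` is stable."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))
        for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    return -sum((c / total) * math.log2(c / total) for c in patterns.values())
```

    For the classic example sequence (4, 7, 9, 10, 6, 11, 3), the six order-2 patterns split 4 "up" to 2 "down", giving an empirical permutation entropy of about 0.918 bits.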

    Limit theorems for the sample entropy of hidden Markov chains

    The Shannon-McMillan-Breiman theorem asserts that the sample entropy of a stationary and ergodic stochastic process converges almost surely to the entropy rate of that process as the sample size tends to infinity. In this paper, we restrict our attention to the convergence behavior of the sample entropy of hidden Markov chains. Under certain positivity assumptions, we prove a central limit theorem (CLT) with a Berry-Esseen bound for the sample entropy of a hidden Markov chain, and we use this CLT to establish a law of the iterated logarithm (LIL) for the sample entropy. © 2011 IEEE. The 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July-5 August 2011. In Proceedings of ISIT, 2011, p. 3009-301
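
    The quantity the CLT concerns is directly computable: for a hidden Markov chain, the sample entropy -(1/n) log2 P(y_1, ..., y_n) is obtained from the scaled forward recursion, and by Shannon-McMillan-Breiman it converges to the entropy rate. A minimal sketch on a toy two-state chain (all parameter values are our own illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden Markov chain: 2 hidden states, binary output (illustrative values).
A = np.array([[0.9, 0.1], [0.2, 0.8]])    # hidden-state transition matrix
B = np.array([[0.95, 0.05], [0.1, 0.9]])  # emission probabilities per state
pi = np.array([0.5, 0.5])                 # initial state distribution

def simulate(n):
    """Draw an output sequence of length n: emit from the current state,
    then move according to A."""
    s = rng.choice(2, p=pi)
    ys = []
    for _ in range(n):
        ys.append(rng.choice(2, p=B[s]))
        s = rng.choice(2, p=A[s])
    return ys

def sample_entropy(ys):
    """-(1/n) log2 P(y_1..y_n), computed with the scaled forward recursion."""
    alpha = pi * B[:, ys[0]]
    logp = np.log2(alpha.sum())
    alpha /= alpha.sum()
    for y in ys[1:]:
        alpha = (alpha @ A) * B[:, y]
        logp += np.log2(alpha.sum())
        alpha /= alpha.sum()
    return -logp / len(ys)
```

    Running `sample_entropy(simulate(n))` for growing n shows the almost-sure convergence; the CLT in the paper describes how the fluctuations around the limit scale with n.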

    Concavity of Mutual Information Rate for Input-Restricted Finite-State Memoryless Channels at High SNR

    We consider a finite-state memoryless channel with i.i.d. channel state and an input Markov process supported on a mixing finite-type constraint. We discuss the asymptotic behavior of the entropy rate of the output hidden Markov chain and deduce that the mutual information rate of such a channel is concave with respect to the parameters of the input Markov process at high signal-to-noise ratio. In principle, this concavity result enables good numerical approximation of the maximum mutual information rate and the capacity of such a channel. Comment: 26 pages
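
    As a toy instance of this setting, take a binary symmetric channel (an i.i.d.-state memoryless channel) with a Markov input supported on the golden-mean constraint (no two consecutive 1s, a mixing finite-type constraint). The mutual information rate I = H(Y) - H(Y|X) can then be approximated numerically by running the forward recursion of the output hidden Markov chain on a long simulated sequence. The sketch below is our own illustration, not the paper's method, and the crossover value is arbitrary:

```python
import numpy as np
from math import log2

rng = np.random.default_rng(1)
eps = 0.05  # BSC crossover probability (high SNR); illustrative value

def input_chain(p, n):
    """Golden-mean Markov input: from 0, emit a 1 w.p. p; a 1 is always followed by 0."""
    x = np.empty(n, dtype=int)
    s = 0
    for t in range(n):
        x[t] = s
        s = 0 if s == 1 else int(rng.random() < p)
    return x

def h2(q):
    """Binary entropy in bits."""
    return -q * log2(q) - (1 - q) * log2(1 - q)

def mi_rate(p, n=20000):
    """Monte Carlo estimate of the mutual information rate of the BSC
    with golden-mean Markov input parameterised by p."""
    x = input_chain(p, n)
    y = x ^ (rng.random(n) < eps).astype(int)  # channel output
    # The output is a hidden Markov chain: hidden state = input symbol,
    # emission flips it with probability eps. Scaled forward recursion:
    A = np.array([[1 - p, p], [1.0, 0.0]])
    B = np.array([[1 - eps, eps], [eps, 1 - eps]])
    alpha = np.array([1.0, 0.0]) * B[:, y[0]]  # chain starts in state 0
    logp = log2(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, n):
        alpha = (alpha @ A) * B[:, y[t]]
        logp += log2(alpha.sum())
        alpha /= alpha.sum()
    return -logp / n - h2(eps)  # estimated H(Y) rate minus H(Y|X) = h2(eps)
```

    Evaluating `mi_rate` over a grid of p and taking the maximum is exactly the kind of numerical approximation that the concavity result justifies at high SNR.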

    Clickstream Data Analysis: A Clustering Approach Based on Mixture Hidden Markov Models

    Nowadays, the availability of devices such as laptops and cell phones enables one to browse the web at any time and place. As a consequence, a company needs a website to maintain or increase customer loyalty and to reach potential new customers. Besides acting as a virtual point of sale, the company portal allows it to obtain insights on potential customers through clickstream data: web-generated data that track users' accesses and activities on websites. However, these data are not easy to handle, as they are complex, unstructured and limited by a lack of clear information about user intentions and goals. Clickstream data analysis is a suitable tool for managing the complexity of these datasets, yielding a cleaned and processed sequential dataframe ready for identifying and analysing patterns. Analysing clickstream data is important for companies because it enables them to understand differences in web users' behaviour as they explore websites, how they move from one page to another and what they select, in order to define business strategies targeting specific types of potential customers. To obtain this level of insight, it is pivotal to understand how to exploit the hidden information in clickstream data. This work presents the cleaning and pre-processing procedures needed to obtain a structured sequential dataset from clickstream data, and analyses these sequences through Mixtures of discrete-time Hidden Markov Models (MHMMs), a statistical tool suitable for clickstream data analysis and profile identification that has not been widely used in this context. Specifically, the hidden Markov process accounts for a time-varying latent variable that handles uncertainty and groups observed states together on the basis of unknown similarity; fitting an MHMM entails identifying both the number of mixture components, relating to the subpopulations, and the number of latent states for each latent Markov chain.
However, the application of MHMMs requires the identification of both the number of components and the number of states. Information Criteria (IC) are generally used for model selection in mixture hidden Markov models and, although their performance has been widely studied for mixture models and for hidden Markov models, they have received little attention in the MHMM context. The most widely used criterion is BIC, even though its performance for these models depends on factors such as the number of components and the sequence length. Another class of model selection criteria is the Classification Criteria (CC). These were defined specifically for clustering purposes and rely on an entropy measure that accounts for the separability between groups. These criteria are clearly the best option for our purpose, but their application as model selection tools for MHMMs requires the definition of a suitable entropy measure. In the light of these considerations, this work proposes a classification criterion based on an integrated classification likelihood approach for MHMMs that accounts for the two latent classes in the model: the subpopulations and the hidden states. This criterion is a modified ICL BIC, a classification criterion originally defined in the mixture model context and later used in hidden Markov models. ICL BIC is a suitable score for identifying the number of classes (components or states); thus, to extend it to MHMMs, we defined a joint entropy accounting for both a component-related entropy and a state-related conditional entropy. The thesis presents a Monte Carlo simulation study comparing the performance of the selection criteria; the results point out the limitations of the most commonly used information criteria and demonstrate that the proposed criterion outperforms them in identifying components and states, especially for short sequences, which are quite common in website accesses.
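
The shape of such a criterion can be sketched as follows. For mixture models, ICL BIC adds twice the estimated classification entropy to BIC; the extension described here adds a state-level conditional entropy on top of the component-level one. The function below is a hypothetical illustration of that structure only (the array layouts, names and exact weighting are our assumptions, not the thesis's definition):

```python
import numpy as np

def icl_bic(loglik, n_params, n_obs, post_components, post_states):
    """Hypothetical ICL-BIC-style score for an MHMM (lower is better):
    BIC plus twice a joint entropy combining a component term and a
    state term weighted by component membership.

    post_components: (n_seq, K) posterior P(component k | sequence i)
    post_states: (n_seq, K, T, S) posterior P(state s | component k,
                 sequence i, time t)
    """
    bic = -2.0 * loglik + n_params * np.log(n_obs)
    eps = 1e-12  # guards log(0)
    ent_comp = -np.sum(post_components * np.log(post_components + eps))
    ent_state = -np.sum(
        post_components[:, :, None, None]
        * post_states * np.log(post_states + eps)
    )
    return bic + 2.0 * (ent_comp + ent_state)
```

With perfectly separated posteriors both entropy terms vanish and the score reduces to BIC; diffuse posteriors, i.e. poorly separated clusters or states, are penalised.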
The proposed selection criterion was applied to real clickstream data collected from the website of a Sicilian company operating in the hospitality sector. The data were modelled by an MHMM, identifying clusters related to the browsing behaviour of web users, which provided essential indications for developing new business strategies. This thesis is structured as follows: after an introduction to the main topics in Chapter 1, we present the clickstream data and their cleaning and pre-processing steps in Chapter 2; Chapter 3 illustrates the structure and estimation algorithms of mixture hidden Markov models; Chapter 4 presents a review of model selection criteria and the definition of the proposed ICL BIC for MHMMs; the analysis of the real clickstream data follows in Chapter 5.