
    Entropy rate calculations of algebraic measures

    Let $K = \{0,1,\dots,q-1\}$. We use a special class of translation-invariant measures on $K^{\mathbb{Z}}$, called algebraic measures, to study the entropy rate of hidden Markov processes. Under some irreducibility assumptions on the Markov transition matrix, we derive exact formulas for the entropy rate of a general $q$-state hidden Markov process derived from a Markov source corrupted by a specific noise model. We obtain upper bounds on the error when using an approximation to the formulas, and numerically compute the entropy rates of two- and three-state hidden Markov models.
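
    The exact formulas above come from the algebraic-measure machinery; as a generic point of comparison, the sketch below estimates the same kind of entropy rate by Monte Carlo, using the forward (filtering) recursion and the Shannon-McMillan-Breiman theorem. The transition matrix P, noise level eps, and symmetric substitution noise N are illustrative assumptions, not the paper's specific model.

```python
# Hedged sketch: estimate the entropy rate of a q-state hidden Markov process
# by simulating a Markov source, corrupting it with symmetric substitution
# noise, and averaging per-symbol log-likelihoods from the forward recursion.
# P, eps, and the noise model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
q = 3
P = np.array([[0.8, 0.1, 0.1],        # Markov transition matrix (rows sum to 1)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
eps = 0.05                            # noise level
N = (1 - eps) * np.eye(q) + eps / (q - 1) * (1 - np.eye(q))  # P(y | x)

n = 100_000
x = np.empty(n, dtype=int)            # hidden Markov source
x[0] = 0
for t in range(1, n):
    x[t] = rng.choice(q, p=P[x[t - 1]])
y = np.array([rng.choice(q, p=N[s]) for s in x])   # noisy observations

belief = np.full(q, 1.0 / q)          # filtering distribution P(x_t | y_<t)
loglik = 0.0
for t in range(n):
    joint = belief * N[:, y[t]]       # P(x_t, y_t | y_<t)
    c = joint.sum()                   # P(y_t | y_<t)
    loglik += np.log2(c)
    belief = (joint / c) @ P          # predict the next hidden state
print("entropy rate estimate (bits/symbol):", -loglik / n)
```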

    Permutation Complexity and Coupling Measures in Hidden Markov Models

    In [Haruna, T. and Nakajima, K., 2011. Physica D 240, 1370-1377], the authors introduced the duality between values (words) and orderings (permutations) as a basis for discussing the relationship between information-theoretic measures for finite-alphabet stationary stochastic processes and their permutation analogues. It has been used to give a simple proof of the equality between the entropy rate and the permutation entropy rate for any finite-alphabet stationary stochastic process, and to show some results on the excess entropy and the transfer entropy for finite-alphabet stationary ergodic Markov processes. In this paper, we extend our previous results to hidden Markov models and show the equalities between various information-theoretic complexity and coupling measures and their permutation analogues. In particular, we show the following two results within the realm of hidden Markov models with ergodic internal processes: the two permutation analogues of the transfer entropy, the symbolic transfer entropy and the transfer entropy on rank vectors, are both equivalent to the transfer entropy if they are considered as rates, and directed information theory can be captured by the permutation entropy approach. Comment: 26 pages
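
    A minimal sketch of the word/permutation duality in action, under the assumption that ties in ordinal patterns are broken by position (a stable argsort); the function name perm_entropy_rate and the chain parameters are illustrative, not the paper's. For a binary Markov chain, the plug-in estimate of H*(L)/L should approach the ordinary entropy rate as L grows.

```python
# Hedged sketch: plug-in estimate of the permutation entropy rate H*(L)/L for
# a binary Markov chain, with ties in rank vectors broken by position.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
p01, p10 = 0.2, 0.3                   # illustrative transition probabilities
n = 50_000
x = np.empty(n, dtype=int)
x[0] = 0
for t in range(1, n):
    x[t] = rng.random() < (p01 if x[t - 1] == 0 else 1 - p10)

def perm_entropy_rate(x, L):
    """H*(L)/L in bits from empirical ordinal-pattern frequencies."""
    counts = Counter(
        tuple(np.argsort(x[i:i + L], kind="stable")) for i in range(len(x) - L + 1)
    )
    freqs = np.array(list(counts.values()), dtype=float)
    freqs /= freqs.sum()
    return -(freqs * np.log2(freqs)).sum() / L

def h2(p):
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

pi0 = p10 / (p01 + p10)               # stationary probability of state 0
print("entropy rate:", pi0 * h2(p01) + (1 - pi0) * h2(p10))
for L in (4, 6, 8):
    print(f"H*(L)/L at L={L}:", perm_entropy_rate(x, L))
```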

    Derivatives of Entropy Rate in Special Families of Hidden Markov Chains

    Consider a hidden Markov chain obtained as the observation process of an ordinary Markov chain corrupted by noise. Zuk et al. [13], [14] showed how, in principle, one can explicitly compute the derivatives of the entropy rate at extreme values of the noise. Namely, they showed that the derivatives of standard upper approximations to the entropy rate actually stabilize at an explicit finite time. We generalize this result to a natural class of hidden Markov chains called "Black Holes." We also discuss in depth special cases of binary Markov chains observed in binary symmetric noise, and give an abstract formula for the first derivative in terms of a measure on the simplex due to Blackwell. Comment: The relaxed conditions for the entropy rate and examples are taken out (to be part of another paper). The section about a general principle and an example to determine the domain of analyticity is taken out (to be part of another paper). A section about binary Markov chains corrupted by binary symmetric noise is added.
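
    The stabilization result itself needs the paper's machinery, but the objects involved are computable for small block lengths. The sketch below, with an illustrative binary chain and function names of my own, evaluates the standard upper approximation H(Y_n | Y_1..Y_{n-1}) = H(Y_1..Y_n) - H(Y_1..Y_{n-1}) exactly for a binary Markov chain in binary symmetric noise, and estimates its derivative in the noise parameter by a central finite difference.

```python
# Hedged sketch: exact computation of the standard upper approximation to the
# entropy rate for a binary Markov chain in BSC(eps) noise, plus a numerical
# derivative in eps near zero noise. P and eps are illustrative assumptions.
import itertools
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])                 # illustrative transition matrix
pi = np.array([4 / 7, 3 / 7])              # stationary distribution of P

def block_entropy(n, eps):
    """H(Y_1..Y_n) in bits, by exact enumeration of all 2**n observation words."""
    E = np.array([[1 - eps, eps], [eps, 1 - eps]])   # binary symmetric noise
    H = 0.0
    for y in itertools.product((0, 1), repeat=n):
        alpha = pi * E[:, y[0]]            # joint P(x_1, y_1)
        for t in range(1, n):
            alpha = (alpha @ P) * E[:, y[t]]
        p = alpha.sum()                    # P(y_1..y_n)
        H -= p * np.log2(p)
    return H

def upper_approx(n, eps):
    """H(Y_n | Y_1..Y_{n-1}), the standard upper bound on the entropy rate."""
    return block_entropy(n, eps) - block_entropy(n - 1, eps)

eps, d = 0.01, 1e-4
for n in (3, 5, 7):
    deriv = (upper_approx(n, eps + d) - upper_approx(n, eps - d)) / (2 * d)
    print(f"n={n}: H_n ≈ {upper_approx(n, eps):.6f}, dH_n/d(eps) ≈ {deriv:.4f}")
```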

    Limit theorems for the sample entropy of hidden Markov chains

    The Shannon-McMillan-Breiman theorem asserts that the sample entropy of a stationary and ergodic stochastic process converges almost surely to the entropy rate of the process as the sample size tends to infinity. In this paper, we restrict our attention to the convergence behavior of the sample entropy of hidden Markov chains. Under certain positivity assumptions, we prove a central limit theorem (CLT), with a Berry-Esseen bound, for the sample entropy of a hidden Markov chain, and we use this CLT to establish a law of the iterated logarithm (LIL) for the sample entropy. © 2011 IEEE. The 2011 IEEE International Symposium on Information Theory (ISIT), St. Petersburg, Russia, 31 July-5 August 2011. In Proceedings of ISIT, 2011, p. 3009-301
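
    A minimal empirical sketch of the quantities in play (illustrative HMM, parameters of my own choosing): the sample entropy -(1/n) log2 P(Y_1..Y_n) is computed by the forward recursion over many independent runs; the SMB theorem says its mean settles at the entropy rate, and the CLT says its sqrt(n)-scaled fluctuations are asymptotically Gaussian.

```python
# Hedged sketch: distribution of the sample entropy -(1/n) log2 P(Y_1..Y_n)
# over independent runs of an illustrative binary HMM, computed by the forward
# recursion; under the CLT its sqrt(n)-scaled fluctuations are near-Gaussian.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1], [0.2, 0.8]])        # transition matrix (assumed)
E = np.array([[0.95, 0.05], [0.05, 0.95]])    # emission matrix (assumed)
pi = np.array([2 / 3, 1 / 3])                 # stationary distribution of P

def sample_entropy(n):
    x = rng.choice(2, p=pi)
    belief, ll = pi.copy(), 0.0
    for _ in range(n):
        y = rng.choice(2, p=E[x])
        joint = belief * E[:, y]
        c = joint.sum()                       # P(y_t | y_<t)
        ll += np.log2(c)
        belief = (joint / c) @ P
        x = rng.choice(2, p=P[x])
    return -ll / n

n, reps = 2_000, 200
s = np.array([sample_entropy(n) for _ in range(reps)])
print("mean sample entropy (→ entropy rate):", s.mean())
print("sqrt(n)-scaled std (CLT scale):", np.sqrt(n) * s.std())
```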

    Concavity of Mutual Information Rate for Input-Restricted Finite-State Memoryless Channels at High SNR

    We consider a finite-state memoryless channel with i.i.d. channel state and an input Markov process supported on a mixing finite-type constraint. We discuss the asymptotic behavior of the entropy rate of the output hidden Markov chain and deduce that the mutual information rate of such a channel is concave with respect to the parameters of the input Markov process at high signal-to-noise ratio. In principle, this concavity result enables good numerical approximation of the maximum mutual information rate and capacity of such a channel. Comment: 26 pages
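
    As a rough numerical companion (not the paper's analysis, and ignoring the finite-type input constraint): for a binary symmetric channel the mutual information rate decomposes as I = h(Y) - h2(eps), where h(Y) is the entropy rate of the hidden Markov output, estimated below by the forward recursion. The parameters and the function mi_rate are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo mutual information rate of a BSC(eps) driven by a
# binary Markov input, via I = h(Y) - h(Y|X) with h(Y|X) = h2(eps).
import numpy as np

rng = np.random.default_rng(3)

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mi_rate(p01, p10, eps, n=100_000):
    P = np.array([[1 - p01, p01], [p10, 1 - p10]])   # input Markov chain
    E = np.array([[1 - eps, eps], [eps, 1 - eps]])   # BSC emission matrix
    belief = np.array([p10, p01]) / (p01 + p10)      # stationary distribution
    x, ll = rng.choice(2, p=belief), 0.0
    for _ in range(n):
        y = rng.choice(2, p=E[x])
        joint = belief * E[:, y]
        c = joint.sum()                              # P(y_t | y_<t)
        ll += np.log2(c)
        belief = (joint / c) @ P
        x = rng.choice(2, p=P[x])
    return -ll / n - h2(eps)                         # h(Y) minus h(Y|X)

# At high SNR (small eps) the paper says I is concave in the input parameters.
for p in (0.1, 0.3, 0.5):
    print(f"p01 = p10 = {p}: I ≈ {mi_rate(p, p, eps=0.01):.4f} bits/use")
```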

    Sensor Scheduling for Optimal Observability Using Estimation Entropy

    We consider sensor scheduling as the optimal observability problem for partially observable Markov decision processes (POMDPs). This model fits cases where a Markov process is observed either by a single sensor that must be dynamically adjusted, or by a set of sensors that are selected one at a time so as to maximize the information acquired from the process. As in conventional POMDP problems, the control action is based on all past measurements; here, however, the action does not control the state process, which is autonomous, but instead influences how that process is measured. This POMDP is a controlled version of the hidden Markov process, and we show that its optimal observability problem can be formulated as an average-cost Markov decision process (MDP) scheduling problem. In this problem, a policy is a rule for selecting sensors or adjusting the measuring device based on the measurement history. Given a policy, we can evaluate the estimation entropy for the joint state-measurement process, which inversely measures the observability of the state process under that policy. Taking estimation entropy as the cost of a policy, we show that finding an optimal policy is equivalent to an average-cost MDP scheduling problem in which the cost function is the entropy function over the belief space. This allows the application of the policy iteration algorithm for finding the policy achieving minimum estimation entropy, and thus optimal observability. Comment: 5 pages, submitted to the 2007 IEEE PerCom/PerSeNS conference
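
    A minimal sketch of the building blocks (all models and names are illustrative assumptions, and a greedy one-step rule stands in for the paper's policy iteration on the belief MDP): the hidden Markov filter's belief update under a chosen sensor, and the entropy of the belief as the per-step cost a scheduling policy tries to minimize.

```python
# Hedged sketch: belief update and entropy cost for sensor selection; a greedy
# one-step rule is used for illustration instead of full policy iteration.
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.85, 0.15], [0.10, 0.90]])          # autonomous state process
sensors = [np.array([[0.95, 0.05], [0.50, 0.50]]),  # sensor 0 resolves state 0
           np.array([[0.50, 0.50], [0.05, 0.95]])]  # sensor 1 resolves state 1

def entropy(b):
    b = b[b > 0]
    return -(b * np.log2(b)).sum()

def expected_posterior_entropy(belief, E):
    """Expected entropy of the next belief if the sensor with model E is used."""
    pred = belief @ P                               # predicted state distribution
    total = 0.0
    for y in range(E.shape[1]):
        joint = pred * E[:, y]
        py = joint.sum()
        if py > 0:
            total += py * entropy(joint / py)
    return total

belief, x = np.array([0.5, 0.5]), 0
for t in range(10):
    k = min(range(len(sensors)),
            key=lambda i: expected_posterior_entropy(belief, sensors[i]))
    x = rng.choice(2, p=P[x])                        # state evolves autonomously
    y = rng.choice(2, p=sensors[k][x])               # chosen sensor's reading
    joint = (belief @ P) * sensors[k][:, y]
    belief = joint / joint.sum()
    print(f"t={t}: sensor {k}, belief entropy = {entropy(belief):.3f} bits")
```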

    Classical capacity of a qubit depolarizing channel with memory

    The classical product-state capacity of a noisy quantum channel with memory is investigated. A forgetful noise-memory channel is constructed by Markov switching between two depolarizing channels, which introduces non-Markovian noise correlations between successive channel uses. The computation of the capacity is reduced to an entropy computation for a function of a Markov process. A reformulation in terms of algebraic measures then enables its calculation. The effects of the hidden-Markovian memory on the capacity are explored. An increase in noise correlations is found to increase the capacity.
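
    A minimal sketch of the classical reduction the abstract describes, with illustrative parameters of my own choosing: a two-state Markov chain switches the depolarizing parameter per use, and the induced output statistics form a function of a Markov process whose entropy rate the forward recursion estimates. (Sending a computational-basis qubit through a depolarizing channel with parameter p and measuring in the same basis flips the bit with probability p/2; that derivation, and the memory effect shown, are my illustration, not the paper's computation.)

```python
# Hedged sketch: Markov switching between two depolarizing parameters yields a
# hidden Markov flip process; its entropy rate is estimated by the forward
# recursion. All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
p = np.array([0.1, 0.6])               # depolarizing parameters of the two branches
flip = p / 2                           # induced bit-flip probability per use
E = np.column_stack([1 - flip, flip])  # E[s, y] = P(flip event y | channel s)

def output_entropy_rate(mu, n=100_000):
    M = np.array([[mu, 1 - mu], [1 - mu, mu]])   # channel-switching memory
    s, belief, ll = 0, np.array([0.5, 0.5]), 0.0
    for _ in range(n):
        y = int(rng.random() < flip[s])
        joint = belief * E[:, y]
        c = joint.sum()                          # P(y_t | y_<t)
        ll += np.log2(c)
        belief = (joint / c) @ M
        s = int(rng.random() < M[s, 1])          # advance the switching chain
    return -ll / n

# Stronger memory (larger mu) lowers the output-noise entropy rate, consistent
# in direction with the finding that noise correlations increase the capacity.
for mu in (0.5, 0.7, 0.9):
    print(f"mu={mu}: output entropy rate ≈ {output_entropy_rate(mu):.4f} bits/use")
```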