
    Entropy rate calculations of algebraic measures

    Let $K = \{0, 1, \ldots, q-1\}$. We use a special class of translation-invariant measures on $K^{\mathbb{Z}}$, called algebraic measures, to study the entropy rate of hidden Markov processes. Under some irreducibility assumptions on the Markov transition matrix, we derive exact formulas for the entropy rate of a general $q$-state hidden Markov process derived from a Markov source corrupted by a specific noise model. We obtain upper bounds on the error when using an approximation to the formulas, and we numerically compute the entropy rates of two- and three-state hidden Markov models.
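
    As a rough illustration of the kind of computation described above, the sketch below approximates the entropy rate of a three-state hidden Markov process by brute-force block entropies. The transition matrix and the symmetric noise channel are illustrative placeholders, not the paper's algebraic-measure construction or its specific noise model.

```python
import itertools
import numpy as np

# Illustrative 3-state Markov source (rows sum to 1); not the paper's construction.
T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
q = T.shape[0]

# Placeholder symmetric noise: emit the true state with probability 1 - delta,
# otherwise one of the other q - 1 symbols uniformly at random.
delta = 0.1
E = np.full((q, q), delta / (q - 1))
np.fill_diagonal(E, 1.0 - delta)

# Stationary distribution of the source (left Perron eigenvector of T).
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

def seq_prob(x):
    """P(X_1 .. X_n = x) via the forward algorithm."""
    alpha = pi * E[:, x[0]]
    for sym in x[1:]:
        alpha = (alpha @ T) * E[:, sym]
    return alpha.sum()

def block_entropy(n):
    """H(X_1 .. X_n) in bits, by brute-force enumeration (feasible for small n)."""
    probs = np.array([seq_prob(x) for x in itertools.product(range(q), repeat=n)])
    return -np.sum(probs * np.log2(probs))

H_prev = 0.0
for n in range(1, 8):
    H_n = block_entropy(n)
    print(f"n={n}  H(n)={H_n:.4f}  h_n = H(n) - H(n-1) = {H_n - H_prev:.4f} bits")
    H_prev = H_n
```

    The conditional entropies H(n) - H(n-1) are non-increasing and converge to the entropy rate; the paper's contribution is exact formulas and error bounds for that limit.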

    Taylor series expansions for the entropy rate of Hidden Markov Processes

    Finding the entropy rate of Hidden Markov Processes is an active research topic, of both theoretical and practical importance. A recently used approach is to study the asymptotic behavior of the entropy rate in various regimes. In this paper we generalize and prove a previous conjecture relating the entropy rate to entropies of finite systems. Building on our new theorems, we establish series expansions for the entropy rate in two different regimes. We also study the radius of convergence of the two series expansions.
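
    A minimal numerical companion to the idea of relating the entropy rate to entropies of finite systems: the classical sandwich bounds H(X_n | X_1^{n-1}, S_1) <= h <= H(X_n | X_1^{n-1}) are evaluated below for an illustrative binary hidden Markov model. The parameters are made up, and the paper's series expansions themselves are not reproduced here.

```python
import itertools
import numpy as np

# Illustrative binary HMM: symmetric Markov source (flip probability p) observed
# through a binary symmetric channel with crossover eps.  Placeholder parameters.
p, eps = 0.3, 0.1
T = np.array([[1 - p, p], [p, 1 - p]])          # state transitions
E = np.array([[1 - eps, eps], [eps, 1 - eps]])  # emission probabilities
pi = np.array([0.5, 0.5])                       # stationary distribution

def prob_given_start(x, s1):
    """P(X_1 .. X_n = x | S_1 = s1) via the forward algorithm."""
    alpha = np.zeros(2)
    alpha[s1] = E[s1, x[0]]
    for sym in x[1:]:
        alpha = (alpha @ T) * E[:, sym]
    return alpha.sum()

def entropies(n):
    """Return H(X^n) and H(X^n, S_1) in bits, by enumeration of observation blocks."""
    H_marg, H_joint = 0.0, 0.0
    for x in itertools.product(range(2), repeat=n):
        joint = np.array([pi[s] * prob_given_start(x, s) for s in range(2)])
        H_joint -= np.sum(joint * np.log2(joint))
        px = joint.sum()
        H_marg -= px * np.log2(px)
    return H_marg, H_joint

prev_marg, prev_joint = 0.0, np.sum(-pi * np.log2(pi))  # H(S_1) is the n = 0 baseline
for n in range(1, 10):
    H_marg, H_joint = entropies(n)
    upper = H_marg - prev_marg            # H(X_n | X^{n-1})
    lower = H_joint - prev_joint          # H(X_n | X^{n-1}, S_1)
    print(f"n={n}  lower={lower:.5f} <= h <= upper={upper:.5f} bits")
    prev_marg, prev_joint = H_marg, H_joint
```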

    Spectral Simplicity of Apparent Complexity, Part I: The Nondiagonalizable Metadynamics of Prediction

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question---correlation, predictability, predictive cost, observer synchronization, and the like---induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function's poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II, to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic.
    Comment: 24 pages, 3 figures, 4 tables; current version always at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt1.ht
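
    The Drazin inverse mentioned above can be illustrated numerically on a toy operator. A minimal sketch, assuming an irreducible row-stochastic matrix T (made up here) so that A = I - T is singular with index 1, and using one standard pseudoinverse-based formula; this is not the paper's meromorphic functional calculus, only a check of the object that calculus manipulates.

```python
import numpy as np

# Illustrative irreducible row-stochastic transition matrix (not from the paper).
T = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5]])
A = np.eye(3) - T   # singular: T has eigenvalue 1, so A has eigenvalue 0

# For an irreducible chain the zero eigenvalue of A is simple, so ind(A) = 1 and the
# Drazin inverse coincides with the group inverse.  One standard route uses the
# Moore-Penrose pseudoinverse:  A^D = A^l (A^(2l+1))^+ A^l  with l >= ind(A); here l = 1.
A_D = A @ np.linalg.pinv(np.linalg.matrix_power(A, 3)) @ A

# Verify the defining Drazin properties numerically.
print(np.allclose(A @ A_D, A_D @ A))        # A^D commutes with A
print(np.allclose(A_D @ A @ A_D, A_D))      # A^D A A^D = A^D
print(np.allclose(A @ A @ A_D, A))          # A^(k+1) A^D = A^k with k = 1
```

    All three checks print True when the computed matrix satisfies the Drazin axioms.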

    Sensor Scheduling for Optimal Observability Using Estimation Entropy

    We consider sensor scheduling as the optimal observability problem for partially observable Markov decision processes (POMDPs). This model fits cases where a Markov process is observed either by a single sensor that needs to be dynamically adjusted, or by a set of sensors that are selected one at a time in a way that maximizes the information acquired from the process. As in conventional POMDP problems, the control action in this model is based on all past measurements; here, however, the action does not control the state process, which is autonomous, but instead influences how that process is measured. This POMDP is a controlled version of the hidden Markov process, and we show that its optimal observability problem can be formulated as an average-cost Markov decision process (MDP) scheduling problem. In this problem, a policy is a rule for selecting sensors or adjusting the measuring device based on the measurement history. Given a policy, we can evaluate the estimation entropy of the joint state-measurement process, which inversely measures the observability of the state process under that policy. Taking estimation entropy as the cost of a policy, we show that the problem of finding an optimal policy is equivalent to an average-cost MDP scheduling problem whose cost function is the entropy function over the belief space. This allows the application of the policy iteration algorithm to find the policy achieving minimum estimation entropy, and thus optimal observability.
    Comment: 5 pages, submitted to the 2007 IEEE PerCom/PerSeNS conference
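
    A minimal sketch of the trade-off being optimized: for each candidate sensor (each represented by a different emission matrix, made up here), run the standard HMM filter on simulated data and report the long-run average entropy of the posterior belief over the hidden state. This fixed-policy comparison is only a proxy for the paper's estimation-entropy cost and its average-cost MDP formulation, not an implementation of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Autonomous 2-state Markov process (illustrative parameters).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Two candidate sensors, each a different observation (emission) matrix; made up here.
sensors = {
    "sensor_A": np.array([[0.85, 0.15], [0.25, 0.75]]),
    "sensor_B": np.array([[0.60, 0.40], [0.40, 0.60]]),
}

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def avg_posterior_entropy(E, steps=20000):
    """Simulate the chain, filter with emission matrix E, and average H(belief)."""
    state = 0
    belief = np.array([0.5, 0.5])
    total = 0.0
    for _ in range(steps):
        state = rng.choice(2, p=T[state])     # state evolves autonomously
        obs = rng.choice(2, p=E[state])       # the chosen sensor produces a reading
        belief = (belief @ T) * E[:, obs]     # HMM filter: predict, then update
        belief /= belief.sum()
        total += entropy(belief)
    return total / steps

for name, E in sensors.items():
    print(name, f"average belief entropy: {avg_posterior_entropy(E):.4f} bits")
```

    A lower average belief entropy indicates a more informative sensor; the paper's formulation instead minimizes estimation entropy over all history-dependent policies via policy iteration.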

    The Entropy of a Binary Hidden Markov Process

    The entropy of a binary symmetric Hidden Markov Process is calculated as an expansion in the noise parameter epsilon. We map the problem onto a one-dimensional Ising model in a large field of random signs and calculate the expansion coefficients up to second order in epsilon. Using a conjecture, we extend the calculation to 11th order and discuss the convergence of the resulting series.
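
    A small numerical companion: approximate the entropy rate of a binary symmetric hidden Markov process by a finite-block conditional entropy and sweep small values of the noise parameter, which makes the near-linear small-epsilon behavior behind such expansions visible. The block length and parameters are illustrative, and this finite-n approximation is not the paper's Ising-mapping calculation.

```python
import itertools
import numpy as np

def entropy_rate_approx(p, eps, n=10):
    """H(X_n | X_1 .. X_{n-1}) in bits for a binary symmetric HMM, by enumeration."""
    T = np.array([[1 - p, p], [p, 1 - p]])          # symmetric Markov source
    E = np.array([[1 - eps, eps], [eps, 1 - eps]])  # binary symmetric noise
    pi = np.array([0.5, 0.5])

    def block_entropy(m):
        H = 0.0
        for x in itertools.product(range(2), repeat=m):
            alpha = pi * E[:, x[0]]
            for sym in x[1:]:
                alpha = (alpha @ T) * E[:, sym]
            px = alpha.sum()
            H -= px * np.log2(px)
        return H

    return block_entropy(n) - block_entropy(n - 1)

p = 0.2
for eps in [0.0, 0.01, 0.02, 0.04, 0.08]:
    print(f"eps={eps:.2f}  entropy rate approx = {entropy_rate_approx(p, eps):.5f} bits")
```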

    Spectral Simplicity of Apparent Complexity, Part II: Exact Complexities and Complexity Spectra

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainty, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. An appendix presents spectral decomposition calculations for one example in full detail.
    Comment: 27 pages, 12 figures, 2 tables; most recent version at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt2.ht

    On Hidden Markov Processes with Infinite Excess Entropy

    We investigate stationary hidden Markov processes for which the mutual information between the past and the future is infinite. It is assumed that the number of observable states is finite and the number of hidden states is countably infinite. Under this assumption, we show that the block mutual information of a hidden Markov process is upper bounded by a power law determined by the tail index of the hidden state distribution. Moreover, we exhibit three examples of such processes. The first example, considered previously, is nonergodic, and the mutual information between the blocks is bounded by the logarithm of the block length. The second example is also nonergodic, but the mutual information between the blocks obeys a power law. The third example obeys the power law and is ergodic.
    Comment: 12 pages
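
    For a stationary process, the mutual information between adjacent length-n blocks can be written as I(X_1^n ; X_{n+1}^{2n}) = 2H(n) - H(2n). The sketch below evaluates it for a small finite-state hidden Markov model with made-up parameters; there it saturates at a finite value, in contrast with the unbounded growth of the countable-state constructions studied in the paper.

```python
import itertools
import numpy as np

# Small illustrative binary HMM with finitely many hidden states.
T = np.array([[0.95, 0.05], [0.10, 0.90]])
E = np.array([[0.90, 0.10], [0.20, 0.80]])
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

def block_entropy(m):
    """H(X_1 .. X_m) in bits, by enumerating all 2^m observation blocks."""
    H = 0.0
    for x in itertools.product(range(2), repeat=m):
        alpha = pi * E[:, x[0]]
        for sym in x[1:]:
            alpha = (alpha @ T) * E[:, sym]
        px = alpha.sum()
        H -= px * np.log2(px)
    return H

# Block mutual information I(X_1^n ; X_{n+1}^{2n}) = 2 H(n) - H(2n);
# for a finite-state HMM it saturates at the (finite) excess entropy.
for n in range(1, 7):
    I_n = 2 * block_entropy(n) - block_entropy(2 * n)
    print(f"n={n}  block mutual information = {I_n:.5f} bits")
```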