
    Informational and Causal Architecture of Continuous-time Renewal and Hidden Semi-Markov Processes

    We introduce the minimal maximally predictive models (ε-machines) of processes generated by certain hidden semi-Markov models. Their causal states are either hybrid discrete-continuous or continuous random variables, and causal-state transitions are described by partial differential equations. Closed-form expressions are given for statistical complexities, excess entropies, and differential information anatomy rates. We present a complete analysis of the ε-machines of continuous-time renewal processes and then extend this to processes generated by unifilar hidden semi-Markov models and semi-Markov models. Our information-theoretic analysis leads to new expressions for the entropy rate and the rates of related information measures for these very general continuous-time process classes. Comment: 16 pages, 7 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/ctrp.ht
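    To make the causal-state picture concrete: for a renewal process, the predictive state at time t is the age, the time elapsed since the last event, which is the continuous random variable referred to above. The sketch below simulates a gamma renewal process and estimates the differential entropy of its stationary age distribution; the gamma density, parameter values, and histogram estimator are illustrative assumptions, not the paper's closed-form constructions.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_ages(shape=2.0, scale=1.0, horizon=1e5, n_probes=100_000):
            """Simulate a gamma renewal process; sample the age at random probe times."""
            intervals = rng.gamma(shape, scale, size=int(horizon))  # more than enough draws
            events = np.cumsum(intervals)
            events = events[events < horizon]
            probes = rng.uniform(events[0], horizon, size=n_probes)
            idx = np.searchsorted(events, probes, side="right") - 1
            return probes - events[idx]

        ages = sample_ages()
        # Histogram-based estimate of the age distribution's differential entropy.
        density, edges = np.histogram(ages, bins=200, density=True)
        widths = np.diff(edges)
        mask = density > 0
        h_age = -np.sum(density[mask] * np.log(density[mask]) * widths[mask])
        print(f"estimated differential entropy of the age: {h_age:.3f} nats")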

    Informational and Causal Architecture of Discrete-Time Renewal Processes

    Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate the historical memory capacity required to store those states (statistical complexity), delineate what information is predictable (excess entropy), and decompose the entropy of a single measurement into that shared with the past, future, or both. The causal state equivalence relation defines a new subclass of renewal processes with a finite number of causal states despite having an unbounded interevent count distribution. We use these formulae to analyze the output of the parametrized Simple Nonunifilar Source, generated by a simple two-state hidden Markov model but with an infinite-state ε-machine presentation. All in all, the results lay the groundwork for analyzing processes with infinite statistical complexity and infinite excess entropy. Comment: 18 pages, 9 figures, 1 table; http://csc.ucdavis.edu/~cmg/compmech/pubs/dtrp.ht
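    As a concrete companion to these formulae, the sketch below computes the entropy rate and statistical complexity of a discrete-time renewal process from its interevent distribution, using the standard renewal-theory identities h = H[F]/E[T] and P(age = n) = P(T > n)/E[T]. It assumes distinct ages are distinct causal states, which holds for generic interevent laws but fails for memoryless (geometric) ones; the truncated power-law example is an illustrative choice, not one from the paper.

        import numpy as np

        def renewal_measures(F):
            """F[k] = P(interevent time = k+1); returns (h_mu, C_mu) in bits."""
            F = np.asarray(F, dtype=float)
            F = F / F.sum()
            T = np.arange(1, len(F) + 1)
            mean_T = (T * F).sum()
            H_F = -(F[F > 0] * np.log2(F[F > 0])).sum()
            h_mu = H_F / mean_T                           # bits per symbol
            survival = 1.0 - np.cumsum(F)                 # P(T > n), n = 1..N
            p_age = np.concatenate(([1.0], survival[:-1])) / mean_T  # P(age = n), n = 0..N-1
            p_age = p_age[p_age > 1e-15]
            C_mu = -(p_age * np.log2(p_age)).sum()
            return h_mu, C_mu

        # Truncated power-law interevent distribution (illustrative).
        n = np.arange(1, 2001)
        F = n ** -2.5
        h_mu, C_mu = renewal_measures(F)
        print(f"h_mu ~ {h_mu:.3f} bits/symbol, C_mu ~ {C_mu:.3f} bits")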

    Structure and Randomness of Continuous-Time Discrete-Event Processes

    Loosely speaking, the Shannon entropy rate gauges a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models (memoryful, state-dependent versions of renewal processes). Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and applying new information-theoretic methods to stochastic processes. Comment: 10 pages, 2 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/ctdep.ht
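    For intuition about the model class, the sketch below generates a realization of a toy unifilar hidden semi-Markov model: each hidden state emits a fixed symbol for a random, state-dependent dwell time and then moves to a deterministically chosen successor, which makes the presentation unifilar. The two-state model and its dwell-time laws are assumptions for illustration, not examples from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy two-state model: each state emits one symbol for a random dwell time.
        DWELL = {0: lambda: rng.gamma(2.0, 1.0),    # state 0 dwell ~ Gamma(2, 1)
                 1: lambda: rng.exponential(0.5)}   # state 1 dwell ~ Exp(mean 0.5)
        SYMBOL = {0: "A", 1: "B"}
        NEXT = {0: 1, 1: 0}   # deterministic successor, so the presentation is unifilar

        def generate(horizon=20.0):
            """Yield (symbol, duration) segments up to the time horizon."""
            t, s = 0.0, 0
            while t < horizon:
                dwell = DWELL[s]()
                yield SYMBOL[s], min(dwell, horizon - t)
                t += dwell
                s = NEXT[s]

        for sym, dur in generate(10.0):
            print(f"{sym} for {dur:.2f} time units")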

    Statistical Signatures of Structural Organization: The case of long memory in renewal processes

    Identifying and quantifying memory are often critical steps in developing a mechanistic understanding of stochastic processes. These steps are particularly challenging and necessary when exploring processes that exhibit long-range correlations. The most common signatures employed rely on second-order temporal statistics and lead, for example, to identifying long memory in processes with a power-law autocorrelation function and Hurst exponent greater than 1/2. However, most stochastic processes hide their memory in higher-order temporal correlations. Information measures, specifically divergences in the mutual information between a process' past and future (excess entropy) and in the minimal predictive memory stored in a process' causal states (statistical complexity), provide a different way to identify long memory in processes with higher-order temporal correlations. However, there are no ergodic stationary processes with infinite excess entropy for which information measures have been compared to autocorrelation functions and Hurst exponents. Here, we show that fractal renewal processes, those with interevent distribution tails ∝ t^{-α}, exhibit long memory via a phase transition at α = 1. Excess entropy diverges only there, while statistical complexity diverges there and for all α < 1. When these processes do have a power-law autocorrelation function and Hurst exponent greater than 1/2, they do not have divergent excess entropy. This analysis breaks the intuitive association between these different quantifications of memory. We hope that the methods used here, based on causal states, provide some guide as to how to construct and analyze other long-memory processes. Comment: 13 pages, 2 figures, 3 appendixes; http://csc.ucdavis.edu/~cmg/compmech/pubs/lrmrp.ht
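    A quick numerical illustration of the phase transition, assuming a Pareto interevent law with survival P(T > t) = t^{-α} for t ≥ 1 (an illustrative stand-in for the fractal renewal family): the mean interevent time is finite only for α > 1 and blows up as α → 1, which is where the long-memory behavior sets in.

        def truncated_mean(alpha, t_max):
            """Contribution to E[T] from interevent times T <= t_max (alpha != 1)."""
            # integral from 1 to t_max of t * alpha * t^{-(alpha+1)} dt
            return alpha * (1.0 - t_max ** (1.0 - alpha)) / (alpha - 1.0)

        for alpha in (2.0, 1.5, 1.1, 1.01):
            row = ", ".join(f"{truncated_mean(alpha, tm):8.1f}" for tm in (1e2, 1e4, 1e6))
            print(f"alpha={alpha}: partial means {row}")
        # For alpha well above 1 the partial means settle quickly; as alpha -> 1
        # they keep growing with the cutoff, signaling the divergence at alpha = 1.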

    Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike-interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit, in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed interspike intervals (ISIs) and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike-train generation. Second, information measures can be used as model-selection tools for analyzing spike-train processes. Comment: 20 pages, 6 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
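    The time-resolution analysis can be sketched as follows: bin a spike train at resolution τ into a 0/1 sequence and track a crude plug-in block-entropy rate as τ shrinks. The gamma ISI generator and naive estimator below are illustrative stand-ins for the integrate-and-fire models and estimators used in the paper; the plug-in estimate is biased at small τ.

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        rng = np.random.default_rng(2)

        def binary_train(isis, tau):
            """Bin spike times at resolution tau into a 0/1 sequence."""
            spikes = np.cumsum(isis)
            n_bins = int(spikes[-1] / tau)
            train = np.zeros(n_bins, dtype=np.int64)
            idx = (spikes / tau).astype(int)
            train[idx[idx < n_bins]] = 1
            return train

        def block_entropy_rate(train, L=10):
            """Plug-in estimate H(length-L blocks) / L, in bits per bin."""
            codes = sliding_window_view(train, L) @ (1 << np.arange(L))
            _, counts = np.unique(codes, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum() / L

        isis = rng.gamma(4.0, 0.25, size=20_000)   # gamma ISIs with mean 1.0
        for tau in (0.5, 0.2, 0.1, 0.05):
            h_bin = block_entropy_rate(binary_train(isis, tau))
            print(f"tau={tau}: ~{h_bin / tau:.2f} bits per unit time")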

    The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications

    The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence, via the method of reconstructing attractor geometry from measurement time series, and in the mid-1980s to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure, it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results, from theoretical physics and nonlinear mathematics to diverse applications: from closed-form analysis of Markov and non-Markov stochastic processes, ergodic or nonergodic, and their measures of information and intrinsic computation, to complex materials, deterministic chaos, and intelligence in Maxwellian demons, to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems. Comment: 11 pages, 123 citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht

    Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine. Comment: 13 pages, 8 figures, title changed from original version
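    A classical-side sketch of the memory-cost claim: a classical simulator of a renewal process must store the time since the last event at its operating precision, and the entropy of that discretized age grows without bound as the resolution is refined. The Gamma(2) interevent law below is an illustrative assumption; the quantum construction itself does not fit in a few lines.

        import numpy as np

        def discretized_age_entropy(delta, t_max=60.0, theta=1.0):
            """Entropy (bits) of the stationary age distribution binned at width delta,
            for a Gamma(2, theta) renewal process, whose survival has a closed form."""
            t = np.arange(0.0, t_max, delta)
            survival = np.exp(-t / theta) * (1.0 + t / theta)   # P(T > t)
            p = survival * delta / (2.0 * theta)                # age density = survival / mean
            p = p[p > 0] / p.sum()
            return -(p * np.log2(p)).sum()

        for delta in (1.0, 0.1, 0.01, 0.001):
            print(f"delta={delta}: memory ~ {discretized_age_entropy(delta):.2f} bits")
        # The required classical memory grows roughly as log2(1/delta), without bound.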

    Nearly maximally predictive features and their dimensions

    Scientific explanation often requires inferring maximally predictive features from a given data set. Unfortunately, the collection of minimal maximally predictive features for most stochastic processes is uncountably infinite. In such cases, one compromises and instead seeks nearly maximally predictive features. Here, we derive upper bounds on the rates at which the number and the coding cost of nearly maximally predictive features scale with desired predictive power. The rates are determined by the fractal dimensions of a process' mixed-state distribution. These results, in turn, show how widely used finite-order Markov models can fail as predictors and that mixed-state predictive features can offer a substantial improvement. Funding: United States Army Research Office (W911NF-13-1-0390, W911NF-12-1-0288).
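    The mixed states referred to here are belief distributions over a generator's hidden states, updated symbol by symbol along an observed sequence; the fractal dimensions of the resulting mixed-state distribution set the scaling rates. The sketch below iterates this update for a toy two-state nonunifilar hidden Markov model whose labeled transition matrices are illustrative assumptions, not parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Labeled transition matrices: T[x][i, j] = P(next state j, emit symbol x | state i).
        T = {0: np.array([[0.4, 0.1],
                          [0.0, 0.6]]),
             1: np.array([[0.0, 0.5],
                          [0.4, 0.0]])}

        def mixed_states(n_steps=10_000):
            """Simulate symbols; return the visited belief (mixed) states."""
            eta = np.array([0.5, 0.5])
            visited = []
            for _ in range(n_steps):
                probs = np.array([eta @ T[x] @ np.ones(2) for x in (0, 1)])  # P(x | past)
                x = rng.choice(2, p=probs / probs.sum())
                eta = eta @ T[x]
                eta = eta / eta.sum()      # Bayesian update of the belief state
                visited.append(eta.copy())
            return np.array(visited)

        etas = mixed_states()
        print(len(np.unique(etas.round(6), axis=0)), "distinct mixed states visited")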

    Beyond the Spectral Theorem: Spectrally Decomposing Arbitrary Functions of Nondiagonalizable Operators

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the linear operator techniques that one would then use often simply fail, since the operators cannot be diagonalized. This curse is well known. It also occurs for finite-dimensional linear operators. We circumvent it by developing a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. It extends the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics are relevant, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator. In particular, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus, and we give a general method to construct it. We provide new formulae for constructing projection operators and delineate the relations between projection operators, eigenvectors, and generalized eigenvectors. By way of illustrating its application, we explore several rather distinct examples. Comment: 29 pages, 4 figures, expanded historical citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/bst.ht
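    The Drazin inverse mentioned above can be computed concretely. The sketch below uses the classical rank-based construction A^D = A^k (A^{2k+1})^+ A^k, where k is the index of A and (·)^+ is the Moore-Penrose pseudoinverse; this standard linear-algebra route is an illustration, distinct from the paper's meromorphic-calculus derivation.

        import numpy as np

        def drazin(A, tol=1e-10):
            """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, with k the index of A."""
            n = A.shape[0]
            k, Ak = 0, np.eye(n)   # Ak tracks A^k
            # Index k: smallest k with rank(A^(k+1)) == rank(A^k).
            while np.linalg.matrix_rank(Ak @ A, tol=tol) != np.linalg.matrix_rank(Ak, tol=tol):
                Ak, k = Ak @ A, k + 1
            return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

        # Singular and nondiagonalizable: a nilpotent Jordan block plus an invertible part.
        A = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, 2.0]])
        AD = drazin(A)
        # Defining properties: AD commutes with A, and AD A AD = AD.
        print(np.allclose(A @ AD, AD @ A), np.allclose(AD @ A @ AD, AD))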