    Markovian Testing Equivalence and Exponentially Timed Internal Actions

    In the theory of testing for Markovian processes developed so far, exponentially timed internal actions are not admitted within processes. When present, these actions cannot be abstracted away, because their execution takes a nonzero amount of time and hence can be observed. On the other hand, they must be carefully taken into account, in order not to equate processes that are distinguishable from a timing viewpoint. In this paper, we recast the definition of Markovian testing equivalence in the framework of a Markovian process calculus including exponentially timed internal actions. Then, we show that the resulting behavioral equivalence is a congruence, has a sound and complete axiomatization, has a modal logic characterization, and can be decided in polynomial time.
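
    The timing-observability point above can be illustrated with a small simulation. The following is a minimal Python sketch, not the paper's calculus: the process names and rates are invented. It compares a process that performs a visible action directly against one that first performs an exponentially timed internal action; the internal step shifts the completion-time distribution, so the two processes are distinguishable by an observer that measures time.

        import random

        def exp_delay(rate):
            # Sample the duration of an exponentially timed action.
            return random.expovariate(rate)

        def run_direct(rate_a=2.0):
            # P1: performs the visible action <a, 2.0> and terminates.
            return exp_delay(rate_a)

        def run_with_internal(rate_tau=3.0, rate_a=2.0):
            # P2: first performs the internal action <tau, 3.0>, then <a, 2.0>.
            # The internal step takes a nonzero amount of time, so it cannot
            # be abstracted away: it visibly delays the completion of 'a'.
            return exp_delay(rate_tau) + exp_delay(rate_a)

        n = 100_000
        print(sum(run_direct() for _ in range(n)) / n)         # ~0.50 (= 1/2)
        print(sum(run_with_internal() for _ in range(n)) / n)  # ~0.83 (= 1/3 + 1/2)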

    On the Expressiveness of Markovian Process Calculi with Durational and Durationless Actions

    Several Markovian process calculi have been proposed in the literature, differing from one another in various respects. With regard to action representation, we distinguish between integrated-time Markovian process calculi, in which every action has an exponentially distributed duration associated with it, and orthogonal-time Markovian process calculi, in which action execution is separated from time passing. As in the setting of deterministically timed process calculi, we show that these two options are not irreconcilable by exhibiting three mappings from an integrated-time Markovian process calculus to an orthogonal-time Markovian process calculus that preserve the behavioral equivalence of process terms under different interpretations of action execution: eagerness, laziness, and maximal progress. The mappings are limited to classes of process terms of the integrated-time calculus with restrictions on parallel composition, and they do not exploit the full capability of the orthogonal-time calculus to express nondeterministic choices. This elucidates the only two important differences between the two calculi: their synchronization disciplines and their ways of solving choices.
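
    As a rough intuition for such mappings (an illustrative Python sketch only; the type names below are invented and the paper's formal encodings are more refined), an integrated-time action pairing a name with a rate can be encoded in an orthogonal-time setting as an exponential delay followed by a durationless action:

        from dataclasses import dataclass

        @dataclass
        class Integrated:
            # Integrated-time action: name plus exponentially distributed duration.
            name: str
            rate: float

        @dataclass
        class Delay:
            # Orthogonal-time ingredient: pure time passing at a given rate.
            rate: float

        @dataclass
        class Instant:
            # Orthogonal-time ingredient: a durationless action.
            name: str

        def encode(act):
            # Map <a, lam> to an exponential delay followed by an instantaneous 'a'.
            # Under eagerness the instantaneous action fires as soon as the delay
            # elapses; laziness and maximal progress constrain this differently.
            return [Delay(act.rate), Instant(act.name)]

        print(encode(Integrated("a", 2.0)))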

    The Computational Structure of Spike Trains

    Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
    Comment: Somewhat different format from journal version but same content.
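
    To give a flavor of causal state splitting reconstruction (a heavily simplified, hypothetical Python sketch, not the actual CSSR algorithm, which grows history length adaptively and merges states via statistical hypothesis tests), one can cluster fixed-length histories of a binary spike train by their empirical next-symbol distributions:

        from collections import defaultdict

        def causal_state_sketch(spikes, L=3, tol=0.05):
            # Empirical next-symbol counts for each length-L history.
            counts = defaultdict(lambda: [0, 0])
            for i in range(L, len(spikes)):
                hist = tuple(spikes[i - L:i])
                counts[hist][spikes[i]] += 1
            # Greedily merge histories whose P(spike | history) agree within
            # tol; each merged group is one candidate causal state.
            states = []  # each entry: [P(spike), list of histories]
            for hist, (n0, n1) in counts.items():
                p1 = n1 / (n0 + n1)
                for state in states:
                    if abs(state[0] - p1) < tol:
                        state[1].append(hist)
                        break
                else:
                    states.append([p1, [hist]])
            return states

        spikes = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0] * 200
        for p1, hists in causal_state_sketch(spikes):
            print(f"P(spike)={p1:.2f}  histories={len(hists)}")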

    The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications

    The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence, via the method of reconstructing attractor geometry from measurement time series, and from efforts in the mid-1980s to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure, it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results, from theoretical physics and nonlinear mathematics to diverse applications: from closed-form analysis of Markov and non-Markov stochastic processes (ergodic or nonergodic) and their measures of information and intrinsic computation, to complex materials, deterministic chaos, and intelligence in Maxwellian demons, to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems.
    Comment: 11 pages, 123 citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht

    A Potential Foundation for Emergent Space-Time

    We present a novel derivation of both the Minkowski metric and the Lorentz transformations from the consistent quantification of a causally ordered set of events with respect to an embedded observer. Unlike past derivations, which have relied on assumptions such as the existence of a 4-dimensional manifold, symmetries of space-time, or the constant speed of light, we demonstrate that this now-familiar mathematics can be derived as the unique means to consistently quantify a network of events. This suggests that space-time need not be physical; instead, the mathematics of space and time emerges as the unique way in which an observer can consistently quantify events and their relationships to one another. The result is a potential foundation for emergent space-time.
    Comment: The paper was originally titled "The Physics of Events: A Potential Foundation for Emergent Space-Time". We changed the title (and abstract) to be more direct when the paper was accepted for publication at the Journal of Mathematical Physics. 24 pages, 15 figures.
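
    The flavor of the quantification step can be sketched as follows (illustrative LaTeX, with notation assumed rather than quoted from the paper): an interval between two events is quantified by the pair of its projections onto an embedded observer chain, coordinates are defined by symmetric and antisymmetric combinations of that pair, and the product of the pair is then invariant and reproduces the Minkowski interval.

        % Pair quantification of an interval (illustrative notation):
        % (\Delta p, \Delta q) are the forward and backward projections
        % of the interval onto the observer chain.
        \Delta t = \tfrac{1}{2}(\Delta p + \Delta q), \qquad
        \Delta x = \tfrac{1}{2}(\Delta p - \Delta q)
        % Their product is the invariant Minkowski interval:
        \Delta p \, \Delta q = \Delta t^2 - \Delta x^2 = \Delta s^2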

    Towards Quantifying Complexity with Quantum Mechanics

    While we have intuitive notions of structure and complexity, the formalization of this intuition is non-trivial. The statistical complexity is a popular candidate. It is based on the idea that the complexity of a process can be quantified by the complexity of its simplest mathematical model: the model that requires the least past information for optimal future prediction. Here we review how such models, known as ε-machines, can be further simplified through quantum logic, and explore the resulting consequences for understanding complexity. In particular, we propose a new measure of complexity based on quantum ε-machines. We apply this to a simple system undergoing constant thermalization. The resulting quantum measure of complexity aligns more closely with our intuition of how complexity should behave.
    Comment: 10 pages, 6 figures. Published in the Focus Point on Quantum information and complexity edition of EPJ Plus.
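
    The quantitative intuition can be shown in a made-up toy setting (a NumPy sketch, not the paper's thermalization example): classical statistical complexity is the Shannon entropy of the stationary distribution over causal states, while a quantum model can store non-orthogonal signal states whose von Neumann entropy is strictly smaller. The states and overlap below are invented for illustration.

        import numpy as np

        pi = np.array([0.5, 0.5])           # stationary probs of two causal states
        C_mu = -np.sum(pi * np.log2(pi))    # classical statistical complexity: 1 bit

        # Two non-orthogonal qubit signal states with overlap <s0|s1> = 0.6.
        c = 0.6
        s0 = np.array([1.0, 0.0])
        s1 = np.array([c, np.sqrt(1 - c**2)])
        rho = pi[0] * np.outer(s0, s0) + pi[1] * np.outer(s1, s1)

        # Quantum complexity: von Neumann entropy of the stationary quantum state.
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        C_q = -np.sum(evals * np.log2(evals))

        print(f"C_mu = {C_mu:.3f} bits, C_q = {C_q:.3f} bits")  # C_q ~ 0.722 < C_mu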

    Complexity Through Nonextensivity

    The problem of defining and studying the complexity of a time series has interested people for years. In the context of dynamical systems, Grassberger has suggested that a slow approach of the entropy to its extensive asymptotic limit is a sign of complexity. We investigate this idea further using information-theoretic and statistical-mechanics techniques, and show that these arguments can be made precise and that they generalize many previous approaches to complexity, in particular unifying ideas from the physics literature with ideas from learning and coding theory; there are even connections of this statistical approach to algorithmic or Kolmogorov complexity. Moreover, a set of simple axioms similar to those used by Shannon in his development of information theory allows us to prove that the divergent part of the subextensive component of the entropy is a unique complexity measure. We classify time series by their complexities and demonstrate that beyond the 'logarithmic' complexity classes widely anticipated in the literature there are qualitatively more complex, 'power-law' classes which deserve more attention.
    Comment: summarizes and extends physics/000707
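
    The decomposition at issue can be written compactly (illustrative LaTeX; notation assumed rather than quoted): the entropy of N consecutive observations splits into an extensive part and a subextensive part, and the complexity classes mentioned above correspond to the growth of the subextensive part.

        % Entropy of N consecutive observations (illustrative notation):
        S(N) = s_0 \, N + S_1(N), \qquad \lim_{N \to \infty} S_1(N)/N = 0
        % The divergent part of S_1(N) is the complexity measure; the
        % classes mentioned in the abstract correspond to
        S_1(N) \sim \log N \quad \text{(logarithmic class)}, \qquad
        S_1(N) \sim N^{\alpha}, \ 0 < \alpha < 1 \quad \text{(power-law class)}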