
    Entropy production and Kullback-Leibler divergence between stationary trajectories of discrete systems

    The irreversibility of a stationary time series can be quantified using the Kullback-Leibler divergence (KLD) between the probability of observing the series and the probability of observing the time-reversed series. Moreover, this KLD is a tool to estimate entropy production from stationary trajectories, since it gives a lower bound on the entropy production of the physical process generating the series. In this paper we introduce analytical and numerical techniques to estimate the KLD between time series generated by several stochastic dynamics with a finite number of states. We examine the accuracy of our estimators for a specific example, a discrete flashing ratchet, and investigate how close the KLD is to the entropy production depending on the number of degrees of freedom of the system that are sampled in the trajectories.
    Comment: 14 pages, 7 figures
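    To make the plug-in idea concrete, here is a minimal Python sketch (not the paper's estimator, which also treats partially observed cases) that lower-bounds entropy production from a single stationary trajectory, assuming first-order Markov statistics; a driven three-state ring stands in for the flashing ratchet.

```python
import numpy as np

def kld_rate_markov(series, n_states):
    """Plug-in estimate of the KLD rate between a stationary discrete
    series and its time reversal under first-order Markov statistics:
    D(p(x_t, x_{t+1}) || p(x_{t+1}, x_t)), a lower bound (nats/step)
    on the entropy production of the generating process."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1
    joint = counts / counts.sum()        # empirical pair distribution
    kld = 0.0
    for i in range(n_states):
        for j in range(n_states):
            if joint[i, j] > 0 and joint[j, i] > 0:
                kld += joint[i, j] * np.log(joint[i, j] / joint[j, i])
    return kld

# Toy check on a driven three-state ring: the net probability current
# makes the series irreversible, so the estimate is strictly positive.
rng = np.random.default_rng(0)
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
x, traj = 0, []
for _ in range(100_000):
    traj.append(x)
    x = rng.choice(3, p=P[x])
print(kld_rate_markov(traj, 3))
```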

    Entropy-based parametric estimation of spike train statistics

    We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics rather than membrane potential dynamics. In this context, the spike response is not sought as a deterministic response but as a conditional probability: "reading out the code" consists of inferring such a probability. This probability is computed from empirical raster plots using the framework of thermodynamic formalism in ergodic theory. This gives us a parametric statistical model where the probability has the form of a Gibbs distribution. In this respect, the approach generalizes the seminal and profound work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed, yielding fast convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are given in closed form from the parametric estimation. This paradigm not only allows us to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code such as: "are correlations (or time synchrony, or a given set of spike patterns, ...) significant with respect to rate coding only?" A numerical validation of the method is proposed, and the perspectives regarding spike-train code analysis are also discussed.
    Comment: 37 pages, 8 figures, submitted
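    As a hedged illustration of the Gibbs-distribution viewpoint, the sketch below fits the purely spatial, pairwise (Ising-like) special case in the spirit of Schneidman and collaborators by exact gradient ascent on the log-likelihood. The paper's thermodynamic-formalism machinery handles spatio-temporal Gibbs potentials and model comparison, which this toy omits; all names and parameters here are illustrative.

```python
import itertools
import numpy as np

def fit_pairwise_gibbs(words, n, lr=0.1, steps=2000):
    """Fit a pairwise Gibbs model P(s) ~ exp(h.s + s.J.s) to binary
    spike words by exact gradient ascent on the log-likelihood
    (n is small, so the 2**n partition sum is enumerated).  At the
    maximum, model means and pairwise correlations match the data's."""
    states = np.array(list(itertools.product([0, 1], repeat=n)), float)
    emp_mean = words.mean(axis=0)
    emp_corr = words.T @ words / len(words)
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        energy = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()
        gh = emp_mean - p @ states                        # rate mismatch
        gJ = emp_corr - states.T @ (states * p[:, None])  # correlation mismatch
        np.fill_diagonal(gJ, 0.0)   # diagonal is redundant with h
        h += lr * gh
        J += lr * gJ
    return h, J, p

# Toy usage: 4 "neurons", words drawn i.i.d. just to exercise the fit.
rng = np.random.default_rng(1)
words = (rng.random((5000, 4)) < 0.3).astype(float)
h, J, p = fit_pairwise_gibbs(words, 4)
```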

    Stochastic time-evolution, information geometry and the Cramér-Rao Bound

    We investigate the connection between the time evolution of averages of stochastic quantities and the Fisher information and its induced statistical length. As a consequence of the Cramér-Rao bound, we find that the rate of change of the average of any observable is bounded from above by its variance times the temporal Fisher information. This bound implies a speed limit on the evolution of stochastic observables: changing the average of an observable requires a minimum amount of time, given by the squared change in the average divided by the fluctuations of the observable times the thermodynamic cost of the transformation. In particular, for relaxation dynamics, which do not depend on time explicitly, we show that the Fisher information is a monotonically decreasing function of time and that this minimal required time is determined by the initial preparation of the system. We further show that the monotonicity of the Fisher information can be used to detect hidden variables in the system, and we demonstrate our findings for simple examples of continuous and discrete random processes.
    Comment: 25 pages, 4 figures
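    Both central claims can be checked numerically for a simple master equation. The sketch below uses an arbitrary two-state rate matrix and observable (illustrative choices, not taken from the paper) and verifies the speed limit and the monotone decay of the temporal Fisher information along the relaxation.

```python
import numpy as np

# Numerical check for a relaxing two-state master equation dp/dt = L p:
# the speed limit (d<A>/dt)^2 <= Var(A) * I_F(t), with temporal Fisher
# information I_F(t) = sum_x pdot(x)^2 / p(x), and the monotone decay
# of I_F(t) for time-independent relaxation dynamics.
L = np.array([[-1.0,  0.5],
              [ 1.0, -0.5]])       # columns sum to zero (probability conserved)
A = np.array([0.0, 1.0])           # observable: occupation of state 1
p = np.array([0.9, 0.1])           # initial preparation away from equilibrium
dt, prev_fisher = 1e-3, np.inf
for _ in range(5000):
    pdot = L @ p
    fisher = np.sum(pdot**2 / p)             # temporal Fisher information
    mean = A @ p
    var = ((A - mean)**2) @ p                # Var(A) under p_t
    assert (A @ pdot)**2 <= var * fisher + 1e-12   # Cramer-Rao speed limit
    assert fisher <= prev_fisher + 1e-12           # monotone for relaxation
    prev_fisher = fisher
    p = p + dt * pdot                        # Euler step of the relaxation
print("both bounds hold along the relaxation")
```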

    Comparing Probabilistic Models for Melodic Sequences

    Modelling the real-world complexity of music is a challenge for machine learning. We address the task of modelling melodic sequences from the same music genre. We perform a comparative analysis of two probabilistic models: a Dirichlet Variable Length Markov Model (Dirichlet-VMM) and a Time Convolutional Restricted Boltzmann Machine (TC-RBM). We show that the TC-RBM learns descriptive music features, such as underlying chords and typical melody transitions and dynamics. We assess the models on future prediction and compare their performance to a VMM, which is the current state of the art in melody generation. We show that both models perform significantly better than the VMM, with the Dirichlet-VMM marginally outperforming the TC-RBM. Finally, we evaluate the short-order statistics of the models, using the Kullback-Leibler divergence between test sequences and model samples, and show that our proposed methods match the statistics of the music genre significantly better than the VMM.
    Comment: in Proceedings of the ECML-PKDD 2011. Lecture Notes in Computer Science, vol. 6913, pp. 289-304. Springer (2011)
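    For intuition about the Dirichlet ingredient, here is a minimal stand-in: a fixed-order Markov model over pitches with a symmetric Dirichlet prior, whose posterior predictive is (count + alpha) / (total + alpha * K). The actual Dirichlet-VMM additionally adapts the context length; the class and toy melody below are illustrative assumptions.

```python
from collections import defaultdict

class DirichletMarkov:
    """Fixed-order Markov model over a discrete pitch alphabet with a
    symmetric Dirichlet(alpha) prior on each next-symbol distribution.
    A minimal, hypothetical stand-in for the Dirichlet-VMM, which also
    adapts the context length."""

    def __init__(self, order, alphabet_size, alpha=0.5):
        self.order, self.K, self.alpha = order, alphabet_size, alpha
        self.counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)

    def fit(self, sequences):
        for seq in sequences:
            for t in range(self.order, len(seq)):
                ctx = tuple(seq[t - self.order:t])
                self.counts[ctx][seq[t]] += 1
                self.totals[ctx] += 1

    def predict(self, context, symbol):
        # Posterior predictive under the Dirichlet prior; unseen
        # contexts fall back to the uniform distribution 1/K.
        ctx = tuple(context[-self.order:])
        num = self.counts[ctx][symbol] + self.alpha
        return num / (self.totals[ctx] + self.alpha * self.K)

# Toy usage: a melody encoded as pitch classes 0-11.
model = DirichletMarkov(order=2, alphabet_size=12)
model.fit([[0, 4, 7, 4, 0, 4, 7, 0]])
print(model.predict([0, 4], 7))  # high: (0, 4) -> 7 was seen twice
```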

    Efficient training algorithms for HMMs using incremental estimation

    Typically, parameter estimation for a hidden Markov model (HMM) is performed using an expectation-maximization (EM) algorithm with the maximum-likelihood (ML) criterion. The EM algorithm is an iterative scheme that is well defined and numerically stable, but convergence may require a large number of iterations. For speech recognition systems utilizing large amounts of training material, this results in long training times. This paper presents an incremental estimation approach to speed up the training of HMMs without any loss of recognition performance. The algorithm selects a subset of data from the training set, updates the model parameters based on the subset, and then iterates the process until convergence of the parameters. The advantage of this approach is a substantial increase in the number of iterations of the EM algorithm per training token, which leads to faster training. In order to achieve reliable estimation from a small fraction of the complete data set at each iteration, two training criteria are studied: ML and maximum a posteriori (MAP) estimation. Experimental results show that training with the incremental algorithms is substantially faster than the conventional (batch) method and suffers no loss of recognition performance. Furthermore, the incremental MAP-based training algorithm improves performance over the batch version.
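    A sketch of the incremental training loop, assuming hmmlearn as the EM-trainable HMM implementation (an external dependency chosen here for illustration, not mentioned in the paper): each pass runs a single EM step on a random subset of training tokens, so the parameters are updated far more often per token than in batch EM. The MAP variant with a prior is omitted.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; any EM-trainable HMM works

rng = np.random.default_rng(0)
# Toy "training tokens": 200 symbol sequences over a 4-letter alphabet.
tokens = [rng.integers(0, 4, size=int(rng.integers(20, 40)))
          for _ in range(200)]

# init_params="" keeps our hand-set parameters; n_iter=1 makes each
# fit() call perform exactly one EM step, continuing from the current model.
model = hmm.CategoricalHMM(n_components=3, n_iter=1, init_params="")
model.startprob_ = np.full(3, 1 / 3)
model.transmat_ = np.full((3, 3), 1 / 3)
model.emissionprob_ = rng.dirichlet(np.ones(4), size=3)

for sweep in range(30):
    subset = rng.choice(len(tokens), size=20, replace=False)  # random subset
    X = np.concatenate([tokens[i] for i in subset]).reshape(-1, 1)
    lengths = [len(tokens[i]) for i in subset]
    model.fit(X, lengths)  # one incremental ML (EM) step on the subset only
```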

    Editorial Comment on the Special Issue of "Information in Dynamical Systems and Complex Systems"

    This special issue collects contributions from the participants of the "Information in Dynamical Systems and Complex Systems" workshop, which cover a wide range of important problems and new approaches that lie at the intersection of information theory and dynamical systems. The contributions include theoretical characterization and understanding of the different types of information flow and causality in general stochastic processes, inference and identification of coupling structure and parameters of system dynamics, rigorous coarse-grained modeling of network dynamical systems, and exact statistical testing of fundamental information-theoretic quantities such as the mutual information. The collective efforts reported herein reflect a modern perspective on the intimate connection between dynamical systems and information flow, leading to the promise of better understanding and modeling of natural complex systems and better or optimal design of engineering systems.