
    Increment entropy as a measure of complexity for time series

    Entropy is a common index for quantifying the complexity of time series in a variety of fields. Here, we introduce increment entropy (IncrEn), in which each increment of a time series is mapped to a word of two letters, one letter encoding its direction and the other its magnitude; the Shannon entropy of these words is the increment entropy. Simulations on synthetic data and tests on epileptic EEG signals demonstrate its ability to detect abrupt changes, whether energetic (e.g. spikes or bursts) or structural. The computation of IncrEn makes no assumptions about the time series, so it is applicable to arbitrary real-world data.
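
    To make the construction concrete, here is a minimal sketch of IncrEn, assuming increments are quantised into R magnitude levels scaled by the standard deviation (the paper's exact quantisation rule and parameter choices may differ):

```python
# Hypothetical sketch of increment entropy (IncrEn); the magnitude
# quantisation convention below is an assumption, not the paper's spec.
import numpy as np
from collections import Counter

def increment_entropy(x, m=2, R=4):
    """Shannon entropy of words of m consecutive (sign, magnitude) letters."""
    d = np.diff(np.asarray(x, dtype=float))
    signs = np.sign(d).astype(int)          # direction letter: -1, 0, +1
    sd = d.std()
    if sd > 0:
        # magnitude letter: |increment| quantised into R levels (assumed rule)
        mags = np.minimum((np.abs(d) / sd * R).astype(int), R)
    else:
        mags = np.zeros_like(signs)
    symbols = list(zip(signs, mags))
    words = Counter(tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())
```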

    Adaptive computation of multiscale entropy and its application in EEG signals for monitoring depth of anesthesia during surgery

    Entropy, as an estimate of the complexity of the electroencephalogram (EEG), is an effective parameter for monitoring the depth of anesthesia (DOA) during surgery. Multiscale entropy (MSE) is useful for evaluating the complexity of signals over different time scales; however, the finite length of the processed signal limits how the variation of sample entropy (SE) can be observed across scales. In this study, an adaptive resampling procedure replaces the coarse-graining step of MSE. Analysis of various synthetic signals and of practical EEG signals shows that computing SE from the adaptively resampled signals is feasible and yields results highly similar to the original MSE at small scales. The distribution of the MSE of the EEG over the whole surgery, based on the adaptive resampling process, reveals the detailed variation of SE at small scales and the complexity of the EEG, which could help anesthesiologists evaluate the status of patients.
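
    For reference, a minimal sketch of conventional MSE, including the coarse-graining step that the adaptive resampling procedure replaces (m = 2 and r = 0.15 times the standard deviation are common defaults, not necessarily the paper's settings):

```python
# Sketch of standard multiscale entropy; the paper swaps the coarse-graining
# below for an adaptive resampling step, which is not reproduced here.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(mm):
        # count template pairs within Chebyshev distance r
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        return sum(int(np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r))
                   for i in range(len(t)))

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2):
    """SampEn of coarse-grained series (non-overlapping window means)."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m))
    return out
```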

    Efficiency characterization of a large neuronal network: a causal information approach

    When inhibitory neurons constitute about 40% of the population, they could have an important antinociceptive role, as they would easily regulate the activity level of the other neurons. We consider a simple network of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity, representative of a cortical column or hypercolumn with a large proportion of inhibitory neurons. Each neuron fires following Hodgkin-Huxley-like dynamics and is interconnected randomly with other neurons. The network dynamics is investigated by estimating the Bandt and Pompe probability distribution function associated with the interspike intervals, for different degrees of inter-connectivity across neurons. More specifically, we take into account the fine temporal 'structure' of the complex neuronal signals not just through the probability distribution of the interspike intervals, but through much more subtle measures that account for their causal information: the Shannon permutation entropy, the Fisher permutation information, and the permutation statistical complexity. This allows us to investigate how the information in the system saturates to a finite value as the degree of inter-connectivity across neurons grows, and to infer the emergent dynamical properties of the system.
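
    The Bandt and Pompe procedure mentioned here reduces to estimating the relative frequencies of ordinal patterns; the permutation entropy, Fisher information, and statistical complexity are all computed from that same distribution. A minimal sketch of the (normalised) permutation entropy, with the pattern length D and delay tau as free choices:

```python
# Sketch of Bandt-Pompe permutation entropy; D and tau are free parameters,
# and here it would be applied to the interspike-interval series.
import numpy as np
from collections import Counter
from math import factorial

def permutation_entropy(x, D=4, tau=1, normalise=True):
    """Shannon entropy of the ordinal-pattern distribution of x."""
    x = np.asarray(x, dtype=float)
    patterns = Counter(tuple(np.argsort(x[i:i + D * tau:tau]))
                       for i in range(len(x) - (D - 1) * tau))
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    H = float(-(p * np.log(p)).sum())
    return H / np.log(factorial(D)) if normalise else H
```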

    Mixing Bandt-Pompe and Lempel-Ziv approaches: another way to analyze the complexity of continuous-states sequences

    In this paper, we propose to combine the approach underlying Bandt-Pompe permutation entropy with Lempel-Ziv complexity, to design what we call Lempel-Ziv permutation complexity. The principle consists of two steps: (i) transformation of a continuous-state series, either intrinsically multivariate or obtained by embedding, into a sequence of permutation vectors, whose components are the positions the components of the initial vectors occupy when re-arranged; and (ii) computation of the Lempel-Ziv complexity of this series of 'symbols', drawn from a discrete, finite-size alphabet. On the one hand, the Bandt-Pompe permutation entropy studies the entropy of such a sequence, i.e., the entropy of the patterns in the series (e.g., local increases or decreases). On the other hand, the Lempel-Ziv complexity of a discrete-state sequence studies the temporal organization of the symbols (i.e., the compressibility of the sequence). The Lempel-Ziv permutation complexity thus aims to take advantage of both methods. The potential of this combined approach, a permutation procedure followed by a complexity analysis, is evaluated on simulated data and on real data; in both cases, we compare the individual approaches with the combined one.
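
    A minimal sketch of the two steps, with an exhaustive-search Lempel-Ziv (1976) parser that favours clarity over speed; D, tau and any normalisation are left as free choices:

```python
# Sketch of Lempel-Ziv permutation complexity: ordinal symbolisation
# followed by LZ76 phrase counting on the symbol sequence.
import numpy as np

def permutation_symbols(x, D=3, tau=1):
    """Step (i): map each length-D delayed window to its ordinal pattern."""
    x = np.asarray(x, dtype=float)
    return [tuple(np.argsort(x[i:i + D * tau:tau]))
            for i in range(len(x) - (D - 1) * tau)]

def lz76_phrase_count(seq):
    """Step (ii): number of phrases in an LZ76 parsing of the sequence."""
    s, n = list(seq), len(seq)
    i, c = 0, 0
    while i < n:
        l = 1
        # grow the phrase while it already occurs starting before position i
        # (overlap with the current phrase is allowed, as in LZ76)
        while i + l <= n and any(s[j:j + l] == s[i:i + l] for j in range(i)):
            l += 1
        c += 1
        i += l
    return c

def lz_permutation_complexity(x, D=3, tau=1):
    return lz76_phrase_count(permutation_symbols(x, D, tau))
```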

    Complexity of multi-dimensional spontaneous EEG decreases during propofol induced general anaesthesia

    Emerging neural theories of consciousness suggest a correlation between a specific type of neural dynamical complexity and the level of consciousness: when awake and aware, causal interactions between brain regions are both integrated (all regions are to a certain extent connected) and differentiated (there is inhomogeneity and variety in the interactions). In support of this, recent work by Casali et al. (2013) has shown that Lempel-Ziv complexity correlates strongly with conscious level when computed on the EEG response to transcranial magnetic stimulation. Here we investigate the complexity of spontaneous high-density EEG data during propofol-induced general anaesthesia, using three distinct measures: (i) Lempel-Ziv complexity, which is derived from how compressible the data are; (ii) amplitude coalition entropy, which measures the variability in the constitution of the set of active channels; and (iii) the novel synchrony coalition entropy (SCE), which measures the variability in the constitution of the set of synchronous channels. After simulations on Kuramoto oscillator models demonstrating that these measures capture distinct 'flavours' of complexity, we show that there is a robustly measurable decrease in the complexity of spontaneous EEG during general anaesthesia.
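
    Amplitude coalition entropy can be sketched roughly as follows: binarise each channel, treat the set of simultaneously active channels at each sample as a single symbol, and take the Shannon entropy of those symbols, normalised against channel-wise shuffled surrogates. The thresholding and normalisation conventions below are assumptions, not necessarily the paper's exact definitions:

```python
# Rough sketch of amplitude coalition entropy for a (channels x samples)
# array; threshold and surrogate normalisation are assumed conventions.
import numpy as np

def shannon(symbols):
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def amplitude_coalition_entropy(data, rng=None):
    data = np.asarray(data, dtype=float)
    # a channel is 'active' when its absolute amplitude exceeds its own mean
    active = np.abs(data) > np.abs(data).mean(axis=1, keepdims=True)
    # encode each time point's coalition of active channels as one integer
    # (binary weighting limits this sketch to < 64 channels)
    weights = 1 << np.arange(active.shape[0], dtype=np.int64)
    H = shannon(weights @ active)
    # normalise by the entropy of a channel-wise time-shuffled surrogate
    rng = rng or np.random.default_rng(0)
    shuffled = np.array([rng.permutation(row) for row in active])
    Hs = shannon(weights @ shuffled)
    return H / Hs if Hs > 0 else 0.0
```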

    Delay Parameter Selection in Permutation Entropy Using Topological Data Analysis

    Permutation entropy (PE) is a powerful tool for quantifying the predictability of a sequence, including measuring the regularity of a time series. Despite its successful application in a variety of scientific domains, PE requires a judicious choice of the delay parameter τ. Another parameter of interest is the motif dimension n, but n is typically selected between 4 and 8, with 5 or 6 giving optimal results for the majority of systems, so in this work we focus solely on choosing the delay parameter. Selecting τ is often accomplished by trial and error guided by the expertise of domain scientists. In this paper, however, we show that persistent homology, the flagship tool of the Topological Data Analysis (TDA) toolset, provides an approach for the automatic selection of τ. We evaluate how successfully our TDA-based approach identifies a suitable τ by comparing our results to a variety of examples in the published literature.
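
    The persistent-homology pipeline itself requires a TDA library and is not reproduced here. For contrast, here is a sketch of a conventional heuristic that such automatic methods aim to replace or validate against: pick tau at the first local minimum of the time-delayed mutual information (the binning is an assumed choice):

```python
# Baseline heuristic for choosing the PE delay tau; this is NOT the paper's
# persistent-homology method, just the standard mutual-information rule.
import numpy as np

def delayed_mutual_information(x, tau, bins=16):
    """Mutual information between x(t) and x(t + tau) via 2-D histogram."""
    x = np.asarray(x, dtype=float)
    joint, _, _ = np.histogram2d(x[:-tau], x[tau:], bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def first_minimum_tau(x, max_tau=50):
    """Return the smallest tau at which the delayed MI has a local minimum."""
    mi = [delayed_mutual_information(x, t) for t in range(1, max_tau + 1)]
    for t in range(1, len(mi) - 1):
        if mi[t] < mi[t - 1] and mi[t] < mi[t + 1]:
            return t + 1           # mi[t] corresponds to tau = t + 1
    return int(np.argmin(mi)) + 1  # fall back to the global minimum
```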

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage, and modification. In particular, the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires multiple observed realizations of these processes in order to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of the processes, which allows observations to be pooled over time. This assumption, however, is a major obstacle to applying these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption can be avoided by estimating transfer entropy from an ensemble of realizations; such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. In this work we therefore combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application; in particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscientific data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
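
    The ensemble idea can be illustrated at toy scale with a plug-in (binned) estimator that pools observations at a fixed time point across trials rather than over time. The paper itself combines the ensemble method with a nearest-neighbour (Kraskov-style) estimator on a GPU; this discrete version, with an assumed quantile binning, shows only the pooling scheme:

```python
# Toy ensemble transfer entropy: one observation per trial at time t,
# pooled across trials instead of over time (quantile binning assumed).
import numpy as np
from collections import Counter

def transfer_entropy_ensemble(X, Y, t, bins=4):
    """Plug-in TE from X to Y at time t; X, Y are (trials x time) arrays."""
    def disc(v):
        edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(v, edges)

    # per-trial observations: target future, target past, source past
    yf, yp, xp = disc(Y[:, t + 1]), disc(Y[:, t]), disc(X[:, t])

    def H(*cols):
        p = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    # TE = H(yf | yp) - H(yf | yp, xp)
    return (H(yf, yp) - H(yp)) - (H(yf, yp, xp) - H(yp, xp))
```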