
    Comparison of discretization strategies for the model-free information-theoretic assessment of short-term physiological interactions

    This work presents a comparison between different approaches for the model-free estimation of information-theoretic measures of the dynamic coupling between short realizations of random processes. The measures considered are the mutual information rate (MIR) between two random processes X and Y and the terms of its decomposition, evidencing either the individual entropy rates of X and Y and their joint entropy rate, or the transfer entropies from X to Y and from Y to X and the instantaneous information shared by X and Y. All measures are estimated through discretization of the random variables forming the processes, performed either via uniform quantization (binning approach) or rank ordering (permutation approach). The binning and permutation approaches are compared on simulations of two coupled non-identical Hénon systems and on three datasets, including short realizations of cardiorespiratory (CR, heart period and respiration flow), cardiovascular (CV, heart period and systolic arterial pressure), and cerebrovascular (CB, mean arterial pressure and cerebral blood flow velocity) time series measured in different physiological conditions, i.e., spontaneous vs paced breathing or supine vs upright positions. Our results show that, with careful selection of the estimation parameters (i.e., the embedding dimension and the number of quantization levels for the binning approach), meaningful patterns of the MIR and of its components can be retrieved in the analyzed systems. On physiological time series, we found that paced breathing at slow breathing rates induces less complex and more coupled CR dynamics, while postural stress leads to an unbalancing of CV interactions with prevalent baroreflex coupling and to less complex pressure dynamics with preserved CB interactions. These results are better highlighted by the permutation approach which, thanks to its more parsimonious representation of the discretized dynamic patterns, allows one to explore interactions with longer memory while limiting the curse of dimensionality.
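The two discretization strategies compared in this abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: uniform quantization assigns each sample to an equal-width amplitude bin, while rank ordering encodes each length-m window by its ordinal pattern; a plug-in entropy of the resulting symbols stands in for the full MIR machinery.

```python
import numpy as np

def bin_discretize(x, n_bins=3):
    """Uniform quantization (binning): map each sample to one of n_bins
    equal-width bins spanning the range of x."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.digitize(x, edges[1:-1])  # symbols 0 .. n_bins-1

def perm_discretize(x, m=3):
    """Rank ordering (permutation): map each length-m window to one
    integer symbol identifying its ordinal pattern."""
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    ranks = np.argsort(np.argsort(windows, axis=1), axis=1)
    return ranks @ (m ** np.arange(m))  # encode rank vector as one symbol

def plugin_entropy(symbols):
    """Plug-in Shannon entropy (bits) of a discrete symbol sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

The parsimony argument made above is visible in the alphabet sizes: length-m binned words can take n_bins**m values, whereas ordinal patterns take at most m! values, which is why the permutation approach tolerates longer memory before the curse of dimensionality bites.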

    Local and global measures of information storage for the assessment of heartbeat-evoked cortical responses

    Objective: Brain–heart interactions involve bidirectional effects produced by bottom-up input at each heartbeat and top-down neural regulatory responses of the brain. While the cortical processing of the heartbeat is usually investigated through the analysis of the Heartbeat Evoked Potential, in this study we propose an alternative approach based on the variability in the predictability of the brain dynamics induced by the heartbeat. Methods: In a group of eighteen subjects in whom simultaneous resting-state recordings of the electroencephalogram (EEG) and electrocardiogram were performed, we analyzed the temporal profile of the local Information Storage (IS) to detect changes in the regularity of EEG signals in time windows associated with different phases of the cardiac cycle. Results: The average values of the local IS were significantly higher in the parieto-occipital areas of the scalp, suggesting an activation of the Default Mode Network, regardless of the cardiac cycle phase. In contrast, the variability of the local IS showed marked differences across the cardiac cycle phases. Conclusion: Our results suggest that cardiac activity influences the predictive information of EEG dynamics differently in the various phases of the cardiac cycle. Significance: The variability of local IS measures can represent a useful index to identify spatio-temporal dynamics within the neurocardiac system, which generally remain overlooked by the more widely employed global measures.
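The local (pointwise) IS underlying this analysis quantifies, at each time step, how much the past of the series reduces the surprise of the current sample. A minimal plug-in sketch for already-discretized data follows; names are illustrative, and the study's continuous EEG signals would require kernel or nearest-neighbor estimators rather than simple counting.

```python
from collections import Counter
import numpy as np

def local_information_storage(s, m=1):
    """Pointwise information storage i_n = log2[ p(x_n | x_{n-m}^{n-1}) / p(x_n) ],
    estimated by plug-in counting on a discrete series s with memory m.
    The average of the local values equals the global IS."""
    s = list(s)
    pasts = [tuple(s[i - m:i]) for i in range(m, len(s))]
    curs = s[m:]
    joint = Counter(zip(pasts, curs))        # counts of (past state, current sample)
    past_count = Counter(pasts)
    cur_count = Counter(curs)
    n = len(curs)
    return np.array([
        np.log2((joint[(p, c)] / past_count[p]) / (cur_count[c] / n))
        for p, c in zip(pasts, curs)
    ])
```

On a perfectly predictable alternating series the local IS is 1 bit at every point, while on a constant series it is 0 everywhere; it is the fluctuation of these pointwise values across cardiac-cycle windows that the study exploits.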

    Feasibility of Ultra-Short-Term Analysis of Heart Rate and Systolic Arterial Pressure Variability at Rest and during Stress via Time-Domain and Entropy-Based Measures

    Heart Rate Variability (HRV) and Blood Pressure Variability (BPV) are widely employed tools for characterizing the complex behavior of cardiovascular dynamics. Usually, HRV and BPV analyses are carried out through short-term (ST) measurements, which exploit recordings of approximately five minutes. Recent research efforts are focused on reducing the time series length, assessing whether and to what extent Ultra-Short-Term (UST) analysis is capable of extracting information about cardiovascular variability from very short recordings. In this work, we compare ST and UST measures computed on electrocardiographic R-R intervals and systolic arterial pressure time series obtained at rest and during both postural and mental stress. Standard time-domain indices are computed, together with entropy-based measures able to assess the regularity and complexity of cardiovascular dynamics, on time series lasting down to 60 samples, employing either a faster linear parametric estimator or a more reliable but time-consuming model-free method based on nearest neighbor estimates. Our results provide evidence that shorter time series down to 120 samples still exhibit an acceptable agreement with the ST reference and can also be exploited to discriminate between stress and rest. Moreover, despite neglecting nonlinearities inherent to short-term cardiovascular dynamics, the faster linear estimator is still capable of detecting differences among the conditions, making it suitable for implementation on wearable devices.
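The standard time-domain indices mentioned here are inexpensive to compute, which is part of the appeal for UST analysis and wearable devices. A minimal sketch follows; the index names follow common HRV conventions and are not necessarily the paper's exact set.

```python
import numpy as np

def time_domain_indices(rr_ms):
    """Standard HRV time-domain indices from an R-R interval series in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    return {
        "meanRR": rr.mean(),                # mean heart period
        "SDNN": rr.std(ddof=1),             # overall variability
        "RMSSD": np.sqrt(np.mean(d ** 2)),  # beat-to-beat (vagally mediated) variability
    }
```

The UST question in the abstract amounts to comparing, e.g., `time_domain_indices(rr[:120])` against the value obtained from the full ~300-beat ST recording and checking the agreement.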

    Comparison of Linear Model-Based and Nonlinear Model-Free Directional Coupling Measures: Analysis of Cardiovascular and Cardiorespiratory Interactions at Rest and During Physiological Stress

    In this work, we present an investigation of the cardiovascular and cardiorespiratory regulatory mechanisms involved in stress responses, using the information-theoretic measure of transfer entropy (TE). Specifically, the aim of the study is to compare different estimation approaches for the evaluation of the information transferred among different physiological systems. The analysis was carried out on the series of heart period, systolic arterial pressure and respiration measured from 61 young healthy subjects, at rest and during orthostatic and mental stress, using both a linear model-based and a nonlinear model-free approach to compute TE. The results reveal mostly significant correlations between the measures of TE estimated with the two approaches, particularly when assessing the influence of respiration on cardiovascular activity during mental stress and the influence of vascular dynamics on cardiac activity during tilt. Therefore, our findings suggest that the simpler linear parametric approach is suitable in conditions predominantly regulated by the sympathetic nervous system or by the withdrawal of the parasympathetic system.
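Under a Gaussian approximation, linear model-based TE reduces to a log-ratio of prediction-error variances between a restricted model (target past only) and a full model (target plus source past). A hedged one-lag sketch follows; the paper's regression models and its nearest-neighbor model-free estimator are more elaborate.

```python
import numpy as np

def linear_te(x, y, p=1):
    """Linear model-based transfer entropy x -> y (nats), Gaussian approximation:
    TE = 0.5 * ln( var(y | y past) / var(y | y past, x past) )."""
    n = len(y)
    Y = y[p:]
    y_past = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    x_past = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])

    def resid_var(design):
        # ordinary least squares residual variance with an intercept
        A = np.column_stack([np.ones(len(Y)), design])
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.mean((Y - A @ beta) ** 2)

    return 0.5 * np.log(resid_var(y_past)
                        / resid_var(np.column_stack([y_past, x_past])))
```

When the source truly drives the target, the full model's residual variance shrinks and TE is large; in the reverse direction it stays near zero, which is the asymmetry the study exploits to assess directional coupling.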

    Bits from Biology for Computational Intelligence

    Computational intelligence is broadly defined as biologically-inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely, or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems
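A basic building block for the analyses surveyed in this article is estimating mutual information from data, e.g. between an environmental stimulus and a neural response. A plug-in sketch for discrete sequences follows; it is illustrative only and is biased for small samples, one of the estimation issues such articles address.

```python
from collections import Counter
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information I(X;Y) in bits between two equal-length
    discrete sequences (e.g. stimulus labels and binned neural responses)."""
    n = len(x)
    cx, cy = Counter(x), Counter(y)
    cxy = Counter(zip(x, y))  # joint occurrence counts
    return sum((c / n) * np.log2((c / n) / ((cx[a] / n) * (cy[b] / n)))
               for (a, b), c in cxy.items())
```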

    Measuring information-transfer delays

    In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics
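The key idea, scanning a source delay u while retaining the conditioning on the target's immediately previous state, can be illustrated with a linear Gaussian stand-in for the model-free TE estimator the paper actually uses; function names and the one-lag target embedding are simplifications for the sketch.

```python
import numpy as np

def te_at_delay(x, y, u, p=1):
    """TE x -> y with explicit source delay u (linear Gaussian sketch):
    the source term is x[t-u], while the conditioning stays on the
    target's embedded state y[t-1..t-p]."""
    n = len(y)
    start = max(p, u)
    Y = y[start:]
    y_past = np.column_stack([y[start - k - 1:n - k - 1] for k in range(p)])
    x_src = x[start - u:n - u]  # delayed source sample x[t - u]

    def resid_var(design):
        A = np.column_stack([np.ones(len(Y)), design])
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.mean((Y - A @ beta) ** 2)

    return 0.5 * np.log(resid_var(y_past)
                        / resid_var(np.column_stack([y_past, x_src])))

def estimate_delay(x, y, max_u=10):
    """Interaction delay = the u that maximizes the delay-aware TE."""
    return 1 + int(np.argmax([te_at_delay(x, y, u) for u in range(1, max_u + 1)]))
```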

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of the processes, which allows observations to be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscientific data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
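The essence of the ensemble method, pooling observations across trials at a fixed time point instead of over time within one trial, can be illustrated with a simple time-resolved entropy on discrete data. This is a toy stand-in: the paper's estimator is a nearest-neighbor TE on continuous data, but the pooling direction is the same.

```python
from collections import Counter
import numpy as np

def ensemble_entropy(trials):
    """Time-resolved plug-in entropy: at each time t, the distribution of x_t
    is estimated by pooling across trials (the ensemble), so no stationarity
    over time is assumed. trials: (n_trials, n_time) array of discrete symbols."""
    trials = np.asarray(trials)
    H = []
    for t in range(trials.shape[1]):
        counts = np.array(list(Counter(trials[:, t]).values()))
        p = counts / counts.sum()
        H.append(float(-(p * np.log2(p)).sum()))
    return np.array(H)
```

A column that is identical across trials yields 0 bits at that time point, while a column split evenly between two symbols yields 1 bit, so non-stationary structure shows up directly in the time profile.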