
    Measuring Information Transfer

    An information-theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time-delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish driving and responding elements and to detect asymmetry in the coupling of subsystems. Comment: 4 pages, 4 figures, RevTeX
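    In the paper's notation, with i_n^{(k)} = (i_n, ..., i_{n-k+1}) denoting a block of k past states, the transfer entropy from a process J to a process I is

        T_{J \to I} = \sum p\left(i_{n+1}, i_n^{(k)}, j_n^{(l)}\right) \log \frac{p\left(i_{n+1} \mid i_n^{(k)}, j_n^{(l)}\right)}{p\left(i_{n+1} \mid i_n^{(k)}\right)}

    i.e. a Kullback-Leibler divergence between the transition probabilities that do and do not condition on the past of J; it vanishes when J has no influence on the transitions of I.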

    Grouping time series by pairwise measures of redundancy

    A novel approach is proposed to group redundant time series within the framework of causality. It assumes (i) that the dynamics of the system can be described using just a small number of characteristic modes, and (ii) that a pairwise measure of redundancy is sufficient to elicit the presence of correlated degrees of freedom. We demonstrate the proposed approach on fMRI data from a resting human brain and on gene expression profiles from a HeLa cell culture. Comment: 4 pages, 8 figures
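    The abstract does not spell out the redundancy measure, so the sketch below uses absolute Pearson correlation as a stand-in pairwise redundancy and groups series by hierarchical clustering; the toy data, the 0.5 cut and the clustering method are illustrative assumptions, not the paper's procedure.

        # Group time series by a pairwise redundancy proxy (|correlation|).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(0)
        # Toy data: six series of length 500; the first three share one mode.
        mode = rng.standard_normal(500)
        X = np.vstack([mode + 0.3 * rng.standard_normal(500) for _ in range(3)]
                      + [rng.standard_normal(500) for _ in range(3)])

        R = np.abs(np.corrcoef(X))          # pairwise redundancy proxy
        D = 1.0 - R                         # high redundancy -> small distance
        np.fill_diagonal(D, 0.0)
        Z = linkage(squareform(D, checks=False), method="average")
        labels = fcluster(Z, t=0.5, criterion="distance")
        print(labels)                       # series sharing the mode get one label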

    On directed information theory and Granger causality graphs

    Directed information theory deals with communication channels with feedback. When applied to networks, a natural extension based on causal conditioning is needed. We show here that measures built from directed information theory can be used to assess Granger causality graphs of stochastic processes. We show that directed information theory includes measures such as the transfer entropy, and that it is the adequate information-theoretic framework for neuroscience applications such as connectivity inference problems. Comment: accepted for publication, Journal of Computational Neuroscience
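    The central quantity here is Massey's directed information, which for length-N sequences replaces the symmetric conditioning of mutual information with causal conditioning on the past of the receiving process:

        I(X^N \to Y^N) = \sum_{n=1}^{N} I(X^n ; Y_n \mid Y^{n-1})

    Unlike I(X^N ; Y^N), this quantity is asymmetric in X and Y, which is what makes it usable for orienting the edges of a Granger causality graph.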

    Climate Dynamics: A Network-Based Approach for the Analysis of Global Precipitation

    Precipitation is one of the most important meteorological variables for characterizing climate dynamics, but its spatial patterns have not yet been fully investigated. Complex network theory, which provides a robust tool for investigating the statistical interdependence of many interacting elements, is used here to analyze the spatial dynamics of annual precipitation over seventy years (1941-2010). The precipitation network is built by associating a node with each geographical region, which carries a temporal distribution of precipitation, and identifying possible links among nodes through the correlation function. The precipitation network reveals significant spatial variability, with barely connected regions, such as Eastern China and Japan, and highly connected regions, such as the African Sahel, Eastern Australia and, to a lesser extent, Northern Europe. The Sahel and Eastern Australia are remarkably dry regions, where low amounts of rainfall are uniformly distributed on continental scales and small-scale extreme events are rare. As a consequence, the precipitation gradient is low, making these regions well connected on a large spatial scale. By contrast, South-East Asia is often reached by extreme events such as monsoons, tropical cyclones and heat waves, which can all confine correlation to short-range scales. Some patterns emerging between mid-latitude and tropical regions suggest a possible impact of the propagation of planetary waves on precipitation at a global scale. Other links can be qualitatively associated with the atmospheric and oceanic circulation. To analyze the sensitivity of the network to the physical closeness of the nodes, short-range connections are broken. The African Sahel, Eastern Australia and Northern Europe again appear as the supernodes of the network, further confirming their long-range connection structure. Almost all North American and Asian nodes vanish, revealing that extreme events can enhance high precipitation gradients, leading to a systematic absence of long-range patterns.
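    As a minimal sketch of the construction described above (toy data; the paper's actual gridding, correlation estimator and link threshold are not given in the abstract):

        # Build a correlation network from per-node precipitation series and
        # test its sensitivity to short-range links.
        import numpy as np

        rng = np.random.default_rng(1)
        n_nodes, n_years = 100, 70                 # e.g. 70 annual values (1941-2010)
        P = rng.gamma(shape=2.0, scale=1.0, size=(n_nodes, n_years))

        C = np.corrcoef(P)                         # node-to-node correlation
        A = (np.abs(C) > 0.5) & ~np.eye(n_nodes, dtype=bool)   # adjacency matrix
        degree = A.sum(axis=1)
        print("supernode candidates:", np.argsort(degree)[-5:])

        # Sensitivity test: break short-range connections, keep long-range ones.
        coords = rng.uniform(0.0, 100.0, size=(n_nodes, 2))    # toy node positions
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        A_long = A & (dist > 20.0)                 # drop links below a distance cutoff
        print("max long-range degree:", A_long.sum(axis=1).max())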

    A quantitative comparison of different methods to detect cardiorespiratory coordination during night-time sleep

    BACKGROUND: The univariate approaches used to analyze heart rate variability have recently been extended by several bivariate approaches to cardiorespiratory coordination. Some approaches are explicitly based on mathematical models that investigate the synchronization between weakly coupled complex systems. Others take a heuristic approach, i.e. they use characteristic features of both time series to develop appropriate bivariate methods. OBJECTIVE: In this study, six different methods used to analyze cardiorespiratory coordination were quantitatively compared with respect to their performance (number of sequences with cardiorespiratory coordination, number of heart beats coordinated with respiration). Five of these approaches have been suggested in the recent literature, whereas one method originates from older studies. RESULTS: The methods were applied to simultaneous recordings of an electrocardiogram and a respiratory trace from 20 healthy subjects during night-time sleep from 0:00 to 6:00. The best temporal resolution and the highest number of coordinated heart beats were obtained with the analysis of 'Phase Recurrences'. Apart from the oldest method, all methods showed similar qualitative results, although the quantities varied between the different approaches. In contrast, the oldest method detected considerably fewer coordinated heart beats, since it used only part of the information available in each recording. CONCLUSIONS: The method of 'Phase Recurrences' should be the method of choice for the detection of cardiorespiratory coordination, since it offers the best temporal resolution and the highest number of coordinated sequences and heart beats. Excluding the oldest method, the results of the heuristic approaches may also be interpreted in terms of the mathematical models.
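    The abstract does not detail the 'Phase Recurrences' algorithm, so the following is a generic phase-based coordination check: extract instantaneous phases with the Hilbert transform and flag windows where an assumed n:m phase difference stays nearly constant. Signal shapes, rates, window length and the 0.95 threshold are all illustrative assumptions.

        # Detect epochs of n:m cardiorespiratory phase coordination (toy signals).
        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(3)
        fs = 4.0                                  # Hz
        t = np.arange(0.0, 300.0, 1.0 / fs)
        resp = np.sin(2 * np.pi * 0.25 * t)       # ~15 breaths/min
        heart = np.sin(2 * np.pi * 1.0 * t + 0.1 * rng.standard_normal(t.size))

        phi_r = np.unwrap(np.angle(hilbert(resp)))
        phi_h = np.unwrap(np.angle(hilbert(heart)))

        n, m = 4, 1                               # test 4:1 coordination
        dphi = m * phi_h - n * phi_r
        window = int(30 * fs)                     # 30-s windows
        # Mean resultant length: close to 1 when the phase difference is stable.
        R = np.array([np.abs(np.exp(1j * dphi[i:i + window]).mean())
                      for i in range(0, dphi.size - window + 1, window)])
        print("coordinated windows:", int((R > 0.95).sum()), "of", R.size)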

    Near infrared hyperspectral imaging for forensic analysis of document forgery

    Hyperspectral imaging in the near infrared range (HSI-NIR) was evaluated as a nondestructive method to detect fraud in documents. Three typical types of forgery were simulated: (a) obliterating text, (b) adding text and (c) the crossing-lines problem. The simulated samples were imaged in the range of 928-2524 nm with spectral and spatial resolutions of 6.3 nm and 10 μm, respectively. After data pre-processing, different chemometric techniques were evaluated for each type of forgery. Principal component analysis (PCA) was performed to elucidate the first two types of adulteration, (a) and (b). Moreover, Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) was used in an attempt to improve the results for the type (a) obliterated-text and type (b) added-text problems. Finally, MCR-ALS and Partial Least Squares Discriminant Analysis (PLS-DA), employed as a variable selection tool, were used to study the type (c) forgeries, i.e. the crossing-lines problem. Type (a) forgeries (obliterated text) were successfully identified in 43% of the samples using both chemometric methods (PCA and MCR-ALS). Type (b) forgeries (added text) were successfully identified in 82% of the samples using both methods (PCA and MCR-ALS). Finally, type (c) forgeries (crossing lines) were successfully identified in 85% of the samples. The results demonstrate the potential of HSI-NIR associated with chemometric tools to support document forgery identification.
    Funding: INCTAA (processes CNPq 573894/2008-6 and FAPESP 2008/57808-1), NUQAAPE, FACEPE, CNPq, CAPES, and the Spanish Ministry of Science and Innovation MICINN (grant DPI2011-28112-C04-02).
    Silva, CS.; Pimentel, MF.; Honorato, RS.; Pasquini, C.; Prats Montalbán, JM.; Ferrer Riquelme, AJ. (2014). Near infrared hyperspectral imaging for forensic analysis of document forgery. Analyst, 139(20), 5176-5184. https://doi.org/10.1039/C4AN00961D
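    As a sketch of the PCA step as commonly applied to hyperspectral cubes in chemometrics (unfold to pixels x bands, mean-center, inspect score images); the cube shape, the random data and the use of scikit-learn are assumptions for illustration:

        # PCA on an unfolded hyperspectral cube; score images can reveal inks
        # with different NIR signatures (e.g. obliterated or added text).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        rows, cols, bands = 64, 64, 256           # toy cube
        cube = rng.random((rows, cols, bands))

        X = cube.reshape(-1, bands)               # one spectrum per pixel
        X = X - X.mean(axis=0)                    # mean-centering
        pca = PCA(n_components=2)
        scores = pca.fit_transform(X).reshape(rows, cols, 2)   # score images
        print(scores.shape, pca.explained_variance_ratio_)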

    An ECVAG trial on assessment of oxidative damage to DNA measured by the comet assay

    The increasing use of single-cell gel electrophoresis (the comet assay) highlights its popularity as a method for detecting DNA damage, including the use of enzymes for the assessment of oxidatively damaged DNA. However, comparison of DNA damage levels between laboratories can be difficult due to differences in assay protocols (e.g. lysis conditions, enzyme treatment, the duration of the alkaline treatment and electrophoresis) and in the end points used for reporting results (e.g. %DNA in tail, arbitrary units, tail moment and tail length). One way to facilitate comparisons is to convert primary comet assay end points to numbers of lesions/10⁶ bp by calibration with ionizing radiation. The aim of this study was to investigate the inter-laboratory variation in the assessment of oxidatively damaged DNA by the comet assay, in terms of oxidized purines converted to strand breaks with formamidopyrimidine DNA glycosylase (FPG). Coded samples with DNA oxidation damage induced by treatment with different concentrations of photosensitizer (Ro 19-8022) plus light, and calibration samples irradiated with ionizing radiation, were distributed to the 10 participating laboratories to measure DNA damage using their own comet assay protocols. Nine of the 10 laboratories reported the same ranking of the level of damage in the coded samples. The variation in the assessment of oxidatively damaged DNA was largely due to differences in protocols. After conversion of the data to lesions/10⁶ bp using laboratory-specific calibration curves, the variation between the laboratories was reduced: the contribution of the photosensitizer concentration to the variation in net FPG-sensitive sites increased from 49% to 73%, whereas the inter-laboratory variation decreased. The participating laboratories were successful in finding a dose-response relationship for oxidatively damaged DNA in the coded samples, but there remains a need to standardize the protocols to enable direct comparisons between laboratories.
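    A sketch of the calibration idea: fit the comet response of irradiated reference samples against dose, express an unknown sample as a Gy-equivalent, then convert with a lesions-per-Gy factor. All numbers below, including the conversion factor, are placeholders, not recommended values.

        # Convert a primary comet end point (%DNA in tail) to lesions/10^6 bp
        # via a laboratory-specific ionizing-radiation calibration curve.
        import numpy as np

        doses_gy = np.array([0.0, 1.0, 2.0, 4.0, 8.0])     # calibration doses
        pct_tail = np.array([3.0, 8.0, 13.5, 24.0, 45.0])  # toy responses

        slope, intercept = np.polyfit(doses_gy, pct_tail, 1)

        LESIONS_PER_GY_PER_1E6_BP = 0.3   # placeholder conversion factor

        def pct_tail_to_lesions(pct):
            dose_equiv = (pct - intercept) / slope         # Gy-equivalent
            return dose_equiv * LESIONS_PER_GY_PER_1E6_BP  # lesions/10^6 bp

        print(pct_tail_to_lesions(18.0))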

    The Effect of Selenium Supplementation in the Prevention of DNA Damage in White Blood Cells of Hemodialyzed Patients: A Pilot Study

    Patients with chronic kidney disease (CKD) have an increased incidence of cancer. It is well known that long periods of hemodialysis (HD) treatment are linked to DNA damage caused by oxidative stress. In this study, we examined the effect of selenium (Se) supplementation in CKD patients on HD on the prevention of oxidative DNA damage in white blood cells. Blood samples were drawn from 42 CKD patients on HD (at the beginning of the study and after 1 and 3 months) and from 30 healthy controls. Twenty-two patients were supplemented with 200 μg Se (as Se-rich yeast) per day and 20 with placebo (baker's yeast) for 3 months. Plasma Se concentration and DNA damage in white blood cells were measured; damage was expressed as the tail moment, covering single-strand breaks (SSB) and, using formamidopyrimidine glycosylase (FPG), oxidative base lesions in DNA. Se concentration in patients was significantly lower than in healthy subjects (P < 0.0001) and increased significantly after 3 months of Se supplementation (P < 0.0001). The tail moment (SSB) in patients before the study was three times higher than in healthy subjects (P < 0.01). After 3 months of Se supplementation, it decreased significantly (P < 0.01) and was about 16% lower than in healthy subjects. The oxidative base lesions in DNA (tail moment, FPG) of HD patients at the beginning of the study were significantly higher (P < 0.01) than in controls, and 3 months after Se supplementation they were 2.6 times lower than in controls (P < 0.01). No change in tail moment was observed in the placebo group. In conclusion, our study shows that in CKD patients on HD, DNA damage in white blood cells is higher than in healthy controls, and that Se supplementation prevents this damage.
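    For context, the tail moment end point used here is conventionally computed as the product of tail length and the fraction of DNA in the tail,

        \text{tail moment} = L_{\text{tail}} \times f_{\text{DNA in tail}}

    though the abstract does not state which variant (e.g. the Olive tail moment) was applied.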