24 research outputs found

    ZeChipC: Time Series Interpolation Method Based on Lebesgue Sampling

    In this paper, we present an interpolation method based on Lebesgue sampling that could help to develop systems based on time series more efficiently. Our method can transmit time series, frequently used in health monitoring, with the same level of accuracy while using far less data. It relies on Lebesgue sampling, which collects information depending on the values of the signal (e.g. the signal is sampled when it crosses specific limits). Lebesgue sampling therefore carries additional information about the shape of the signal between two sampled points, and using this information allows generating an interpolated signal closer to the original one. In our contribution, we propose a novel time-series interpolation method designed explicitly for Lebesgue sampling, called ZeChipC. ZeChipC is a combination of zero-order hold and Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) interpolation, and it includes new functionality to adapt the reconstructed signal to concave/convex regions. The proposed method has been compared with state-of-the-art interpolation methods on Lebesgue-sampled data and offers higher average performance.
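
    The two building blocks named in the abstract, threshold-based (Lebesgue) sampling and PCHIP interpolation, can be sketched with standard tools. The snippet below is a minimal illustration using NumPy and SciPy's PchipInterpolator, not the authors' ZeChipC implementation: the test signal and the threshold grid are invented for the example, and the concave/convex adaptation step is omitted.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def lebesgue_sample(t, x, levels):
    """Keep a sample whenever the signal crosses one of the given amplitude levels."""
    keep = [0]  # always keep the first point
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        if any(lo <= lv <= hi for lv in levels):  # a threshold was crossed
            keep.append(i)
    if keep[-1] != len(x) - 1:
        keep.append(len(x) - 1)  # always keep the last point
    return t[keep], x[keep]

# Dense "original" signal (synthetic example, not from the paper)
t = np.linspace(0, 10, 1000)
x = np.sin(t) + 0.3 * np.sin(3 * t)

levels = np.arange(-1.5, 1.6, 0.25)        # amplitude thresholds (Lebesgue grid)
ts, xs = lebesgue_sample(t, x, levels)     # event-based samples

reconstruction = PchipInterpolator(ts, xs) # shape-preserving cubic reconstruction
print(f"kept {len(ts)} of {len(t)} samples, "
      f"max abs error = {np.max(np.abs(reconstruction(t) - x)):.3f}")
```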

    Climate change and the emergence of vector-borne diseases in Europe: Case study of dengue fever

    Background: Dengue fever is the most prevalent mosquito-borne viral disease worldwide. Dengue transmission is critically dependent on climatic factors, and there is much concern as to whether climate change would spread the disease to areas currently unaffected. The occurrence of autochthonous infections in Croatia and France in 2010 has raised concerns about a potential re-emergence of dengue in Europe. The objective of this study is to estimate dengue risk in Europe under climate change scenarios. Methods: We used a Generalized Additive Model (GAM) to estimate dengue fever risk as a function of climatic variables (maximum temperature, minimum temperature, precipitation, humidity) and socioeconomic factors (population density, urbanisation, GDP per capita and population size) under contemporary conditions (1985-2007) in Mexico. We then used our model estimates to project dengue incidence across Europe under baseline conditions (1961-1990) and three climate change scenarios: short-term (2011-2040), medium-term (2041-2070) and long-term (2071-2100). The model was used to calculate the average number of yearly dengue cases on a 10 × 10 km grid covering the entire land surface of the current 27 EU member states. To our knowledge, this is the first attempt to model dengue fever risk in Europe in terms of disease occurrence rather than mosquito presence. Results: The results were presented using a Geographical Information System (GIS) and allowed identification of high-risk areas. Dengue fever hot spots were clustered around the coastal areas of the Mediterranean and Adriatic seas and the Po Valley in northern Italy. Conclusions: This risk assessment study is likely to be a valuable tool for assisting effective and targeted adaptation responses to reduce the likely increased burden of dengue fever in a warmer world.
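
    As a rough illustration of the modelling step, the sketch below fits a Poisson regression with B-spline terms for the climate variables and linear terms for the socioeconomic factors using statsmodels, then applies it to a scenario grid. This is an unpenalised spline-GLM stand-in for the paper's GAM, not the study's actual model; the file names and column names (cases, tmax, tmin, precip, humidity, pop_density, urbanisation, gdp_per_capita, pop_size, cell_id) are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical training data: one row per grid cell and year, with climate and
# socioeconomic covariates plus observed yearly dengue case counts.
df = pd.read_csv("dengue_training_mexico.csv")  # assumed file and columns

# Poisson regression with B-spline (smooth) terms for the climate variables and
# linear terms for the socioeconomic factors: an unpenalised stand-in for a GAM.
model = smf.glm(
    "cases ~ bs(tmax, df=5) + bs(tmin, df=5) + bs(precip, df=5) + bs(humidity, df=5)"
    " + pop_density + urbanisation + gdp_per_capita + pop_size",
    data=df,
    family=sm.families.Poisson(),
)
result = model.fit()

# Projection: apply the fitted model to gridded covariates for a future
# climate scenario to obtain expected yearly cases per 10 x 10 km cell.
scenario = pd.read_csv("eu_grid_2041_2070.csv")  # assumed file
scenario["expected_cases"] = result.predict(scenario)
print(scenario[["cell_id", "expected_cases"]].head())
```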

    Measurement error adjustment in essential fatty acid intake from a food frequency questionnaire: alternative approaches and methods

    Background: We aimed to assess the degree of measurement error in essential fatty acid intakes from a food frequency questionnaire and the impact of correcting for such error on the precision and bias of odds ratios in logistic models. To assess these impacts, and for illustrative purposes, alternative approaches and methods were used with the binary outcome of cognitive decline in verbal fluency. Methods: Using the Atherosclerosis Risk in Communities (ARIC) study, we conducted a sensitivity analysis. The error-prone exposure – visit 1 fatty acid intake (1987–89) – was available for 7,814 subjects 50 years or older at baseline with complete data on cognitive decline between visits 2 (1990–92) and 4 (1996–98). Our binary outcome of interest was clinically significant decline in verbal fluency. Point estimates and 95% confidence intervals were compared between naïve and measurement-error-adjusted odds ratios of decline with every SD increase in fatty acid intake as % of energy. Two approaches were explored for adjustment: (A) external validation against biomarkers (plasma fatty acids in cholesteryl esters and phospholipids) and (B) internal repeat measurements at visits 2 and 3. The main difference between the two is that Approach B makes a stronger assumption regarding the lack of error correlations in the structural model. Additionally, we compared results from regression calibration (RCAL) to those from simulation extrapolation (SIMEX). Finally, using structural equation modeling, we estimated attenuation factors associated with each dietary exposure to assess the degree of measurement error in a bivariate scenario for regression calibration of the logistic regression model. Results and conclusion: Attenuation factors for Approach A were smaller than for Approach B, suggesting a larger amount of measurement error in the dietary exposure. Replicate measures (Approach B), unlike concentration biomarkers (Approach A), may lead to imprecise odds ratios due to larger standard errors. Using SIMEX rather than RCAL models tends to preserve the precision of odds ratios. We found in many cases that the bias in naïve odds ratios was towards the null. RCAL tended to correct for a larger amount of effect bias than SIMEX, particularly for Approach A.
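
    In its simplest univariate, classical-error form, regression calibration divides the naive log odds ratio by an attenuation factor estimated from calibration data. The sketch below demonstrates this idea on simulated data with hypothetical variable names; it is not the ARIC analysis and ignores the bivariate and SIMEX extensions discussed in the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulated "true" exposure, an error-prone FFQ measure, and a biomarker
# proxy used as the calibration instrument (all hypothetical).
true_intake = rng.normal(0, 1, n)
ffq = true_intake + rng.normal(0, 1, n)          # error-prone questionnaire
biomarker = true_intake + rng.normal(0, 0.5, n)  # concentration biomarker

# Binary outcome generated from the true exposure (true log OR per SD = 0.5).
p = 1 / (1 + np.exp(-(-1.0 + 0.5 * true_intake)))
decline = rng.binomial(1, p)

# Naive logistic model on the error-prone exposure.
naive = sm.Logit(decline, sm.add_constant(ffq)).fit(disp=0)
beta_naive = naive.params[1]

# Attenuation factor: slope from regressing the calibration measure on the FFQ.
lam = sm.OLS(biomarker, sm.add_constant(ffq)).fit().params[1]

beta_corrected = beta_naive / lam
print(f"naive OR     = {np.exp(beta_naive):.2f}")
print(f"corrected OR = {np.exp(beta_corrected):.2f} (attenuation factor {lam:.2f})")
```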

    a historic review

    No full text
    Around 25 years ago, nonparametric smoothing methods found their way into statistics in Germany and, with some delay, also into biometry. In the early 1980s there was what one might call a boom in theoretical and, soon after, also in computational statistics. The focus was on univariate nonparametric methods for density and curve estimation. For biometry, however, smoothing methods only became really interesting in their multivariate form. This 'change of dimensionality' still raises open methodological questions. It is no wonder that the simplifying paradigm of additive regression, realised in generalized additive models (GAM), initiated the success story of smoothing techniques starting in the early 1990s. In parallel, there have been new algorithms and important software developments, primarily in the statistical programming languages S and R. Recent developments of smoothing techniques can be found in survival analysis, longitudinal analysis, mixed models and functional data analysis, partly integrating Bayesian concepts. Entirely new are smoothing-related statistical methods in bioinformatics. In this article we aim not only to give a general historical overview but also to sketch activities in the German-speaking world. Moreover, the current situation is critically examined. Finally, a large number of relevant references is given.

    Concepts, Pitfalls and Practical Consequences

    No full text

    Partially Linear Models: A New Algorithm and Some Simulation Results

    No full text

    A Note on Cross-Validation for Smoothing Splines

    No full text

    Human fetoplacental arterial and venous endothelial cells are differentially programmed by gestational diabetes mellitus, resulting in cell-specific barrier function changes

    AIMS/HYPOTHESIS: An adverse intrauterine environment can result in permanent changes in the physiology of the offspring and predispose to diseases in adulthood. One such exposure, gestational diabetes mellitus (GDM), has been linked to the development of metabolic disorders and cardiovascular disease in offspring. Epigenetic variation, including DNA methylation, is recognised as a leading mechanism underpinning fetal programming, and we hypothesised that this plays a key role in fetoplacental endothelial dysfunction following exposure to GDM. Thus, we conducted a pilot epigenetic study to analyse concordant DNA methylation and gene expression changes in GDM-exposed fetoplacental endothelial cells. METHODS: Genome-wide methylation analysis of primary fetoplacental arterial endothelial cells (AEC) and venous endothelial cells (VEC) from healthy and GDM-complicated pregnancies, in parallel with transcriptome analysis, identified methylation and expression changes. The most-affected pathways and functions were identified by Ingenuity Pathway Analysis and validated using functional assays. RESULTS: Transcriptome and methylation analyses identified variation in gene expression linked to GDM-associated DNA methylation in 408 genes in AEC and 159 genes in VEC, implying a direct functional link. Pathway analysis found that genes altered by exposure to GDM clustered to functions associated with 'cell morphology' and 'cellular movement' in healthy AEC and VEC. Further functional analysis demonstrated that GDM-exposed cells had altered actin organisation and barrier function. CONCLUSIONS/INTERPRETATION: Our data indicate that exposure to GDM programs atypical morphology and barrier function in fetoplacental endothelial cells through changes in DNA methylation and gene expression. The effects differ between AEC and VEC, indicating a stringent cell-specific sensitivity to adverse exposures associated with developmental programming in utero. DATA AVAILABILITY: DNA methylation and gene expression datasets generated and analysed during the current study are available at the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) under accession numbers GSE106099 and GSE103552, respectively.