    Bayesian uncertainty assessment of flood predictions in ungauged urban basins for conceptual rainfall-runoff models

    Urbanization and the resulting land-use change strongly affect the water cycle and runoff processes in watersheds. Unfortunately, small urban watersheds, which are most affected by urban sprawl, are mostly ungauged. This makes it intrinsically difficult to assess the consequences of urbanization. In particular, it is unclear how to reliably assess the predictive uncertainty given the structural deficits of the applied models. In this study, we therefore investigate the uncertainty of flood predictions in ungauged urban basins from structurally uncertain rainfall-runoff models. To this end, we suggest a procedure to explicitly account for input uncertainty and model structure deficits using Bayesian statistics with a continuous-time autoregressive error model. In addition, we propose a concise procedure to derive prior parameter distributions from base data and successfully apply the methodology to an urban catchment in Warsaw, Poland. Based on our results, we demonstrate that the autoregressive error model greatly helps to meet the statistical assumptions and to compute reliable prediction intervals. In our study, we found that predicted peak flows were up to 7 times higher than observations; this was reduced to 5 times with Bayesian updating using only a few discharge measurements. In addition, our analysis suggests that imprecise rainfall information and model structure deficits contribute most to the total prediction uncertainty. In the future, flood predictions in ungauged basins will become more important due to ongoing urbanization as well as anthropogenic and climatic changes. Thus, providing reliable measures of uncertainty is crucial to support decision making.
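As an illustration of the kind of error model described above, here is a minimal sketch (in Python/NumPy, purely for illustration) of the log-likelihood of residuals under a discretized continuous-time AR(1) (Ornstein-Uhlenbeck) error process. The correlation time tau, the standard deviation sigma, and the Gaussian-innovation assumption are illustrative choices, not necessarily the authors' exact formulation:

```python
import numpy as np

def ar1_log_likelihood(residuals, dt, tau, sigma):
    """Log-likelihood of model residuals observed at a fixed time step dt,
    under a discretized continuous-time AR(1) error model with correlation
    time tau and asymptotic standard deviation sigma."""
    phi = np.exp(-dt / tau)                    # lag-1 autocorrelation
    innov = residuals[1:] - phi * residuals[:-1]
    innov_sd = sigma * np.sqrt(1.0 - phi**2)   # stationary innovation sd
    # first residual is drawn from the stationary distribution
    ll = -0.5 * (residuals[0] / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    # remaining residuals contribute through their innovations
    ll += np.sum(-0.5 * (innov / innov_sd) ** 2
                 - np.log(innov_sd * np.sqrt(2 * np.pi)))
    return ll
```

In a Bayesian calibration, a log-likelihood of this form would be combined with priors on the hydrological and error-model parameters and explored with MCMC.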

    Influence of hereditary haemochromatosis on left ventricular wall thickness: does iron overload exacerbate cardiac hypertrophy?

    Background: Left ventricular (LV) hypertrophy increases the risk of heart failure. Hypertension and infiltrative cardiomyopathies are well-known causes of LV hypertrophy. Growing scientific interest in this issue has extended to hereditary haemochromatosis (HH), which is characterised by excess iron deposition, mostly due to HFE gene mutations. The aim of our study was to investigate the possible influence of HH on LV parameters in patients with early-diagnosed (early HH) and long-lasting, long-treated (old HH) disease. Materials and methods: Thirty-nine early HH and 19 old HH patients were prospectively enrolled in the study; age- and sex-matched healthy volunteers constituted the respective control groups. All participants underwent echocardiography (including three-dimensional volume and mass analysis); iron turnover parameters were measured at the time of enrolment in every HH patient. Results: Echocardiographic parameters for the left atrium (LA) and for LV thickness, mass, and long-axis length were significantly higher, whereas LV ejection fraction was lower, in early HH patients than in healthy persons. In old HH patients the differences were similar, except for LV ejection fraction. The presence of hypertension in both HH groups did not influence echo parameters, nor did diabetes in old HH. The strongest correlations in the whole HH group were found between the time from HH diagnosis and the LA, LV thickness, and volume parameters, whereas no correlations were found between iron turnover and echo parameters. Conclusions: Hereditary haemochromatosis, not only long-lasting but also early-diagnosed, could lead to increased LV wall thickness and cardiac hypertrophy. This effect is not simply connected with hypertension and diabetes, which are frequent comorbidities in these patients, but with the time from HH diagnosis.

    Paraquat induces oxidative stress, neuronal loss in substantia nigra region and Parkinsonism in adult rats: Neuroprotection and amelioration of symptoms by water-soluble formulation of Coenzyme Q10

    Background: Parkinson's disease, for which there is currently no cure, develops as a result of progressive loss of dopamine neurons in the brain; thus, identification of any potential therapeutic intervention for disease management is of great importance. Results: Here we report that prophylactic application of a water-soluble formulation of coenzyme Q10 could effectively offset the effects of the environmental neurotoxin paraquat, believed to be a contributing factor in the development of familial PD. In this study we utilized a model of paraquat-induced dopaminergic neurodegeneration in adult rats that received three weekly intraperitoneal injections of the herbicide paraquat. Histological and biochemical analyses of rat brains revealed increased levels of oxidative stress markers and a loss of approximately 65% of dopamine neurons in the substantia nigra region. The paraquat-exposed rats also displayed impaired balancing skills on a slowly rotating drum (rotarod), evidenced by their reduced spontaneity in gait performance. In contrast, paraquat-exposed rats receiving a water-soluble formulation of coenzyme Q10 in their drinking water prior to and during the paraquat treatment neither developed neurodegeneration nor showed reduced rotarod performance, and were indistinguishable from the control paraquat-untreated rats. Conclusion: Our data confirm that paraquat-induced neurotoxicity represents a convenient rat model of Parkinsonian neurodegeneration suitable for mechanistic and neuroprotective studies. This is the first preclinical evaluation of a water-soluble coenzyme Q10 formulation showing evidence of prophylactic neuroprotection at clinically relevant doses.

    GWAS on your notebook: Fast semi-parallel linear and logistic regression for genome-wide association studies

    Background: Genome-wide association studies have become very popular for identifying genetic contributions to phenotypes. Millions of SNPs are tested for their association with diseases and traits using linear or logistic regression models. This conceptually simple strategy encounters the following computational issues: a large number of tests and very large genotype files (many gigabytes) that cannot be loaded into the software's memory at once. One solution applied on a grand scale is cluster computing involving large-scale resources. We show how to speed up the computations using matrix operations in pure R code. Results: We improve speed: computation time is reduced from 6 hours to 10-15 minutes. Our approach can handle an essentially unlimited number of covariates efficiently, using projections. Data files in GWAS are vast, and reading them into computer memory becomes an important issue. However, much improvement can be made if the data are structured beforehand in a way that allows easy access to blocks of SNPs. We propose several solutions based on the R packages ff and ncdf. We adapted the semi-parallel computations for logistic regression. We show that in a typical GWAS setting, where SNP effects are very small, we do not lose any precision and our computations are a few hundred times faster than standard procedures. Conclusions: We provide very fast algorithms for GWAS written in pure R code. We also show how to rearrange SNP data for fast access.
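The semi-parallel idea can be sketched in a few lines. The paper works in pure R; the sketch below is an illustrative NumPy analogue covering only the simplest case (one quantitative phenotype, intercept-only covariate, one SNP per test), with all variable names invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 1000                        # individuals, SNPs
y = rng.normal(size=n)                  # phenotype
X = rng.integers(0, 3, size=(n, m)).astype(float)  # genotype dosages (0/1/2)

# Semi-parallel simple regression of y on each SNP: center once, then
# obtain all m slopes, standard errors and t-statistics with matrix
# operations instead of m separate model fits.
yc = y - y.mean()
Xc = X - X.mean(axis=0)
sxx = (Xc ** 2).sum(axis=0)
beta = Xc.T @ yc / sxx                    # per-SNP effect estimates
rss = (yc ** 2).sum() - beta ** 2 * sxx   # per-SNP residual sum of squares
se = np.sqrt(rss / (n - 2) / sxx)         # per-SNP standard errors
tstat = beta / se                         # per-SNP t-statistics
```

The one matrix product `Xc.T @ yc` replaces the per-SNP loop, which is where the speed-up comes from; the paper's projection trick generalizes this to arbitrary covariates.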

    Genome-wide Analysis of Large-scale Longitudinal Outcomes using Penalization - GALLOP algorithm

    Genome-wide association studies (GWAS) with longitudinal phenotypes provide opportunities to identify genetic variations associated with changes in human traits over time. Mixed models are used to account for the correlated nature of longitudinal data. GWA studies are notorious for their computational challenges, which are considerable when mixed models for thousands of individuals are fitted to millions of SNPs. We present a new algorithm that speeds up a genome-wide analysis of longitudinal data by several orders of magnitude. It solves the equivalent penalized least squares problem efficiently, computing variances in an initial step. Factorizations and transformations are used to avoid inversion of large matrices. Because the system of equations is bordered, we can re-use components that can be precomputed for the mixed model without a SNP. Two SNP effects (a main effect and its interaction with time) are obtained. Our method completes the analysis a thousand times faster than the R package lme4, providing an almost identical solution for the coefficients and p-values. We provide an R implementation of our algorithm.
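The bordered-system idea can be illustrated with a small sketch. The version below (Python/NumPy, illustrative names, plain least squares rather than the paper's penalized mixed model) shows how the SNP-independent components are precomputed once, after which each SNP's two effects follow from a 2x2 Schur complement:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 200
t = rng.uniform(0, 5, size=n)                 # observation times
W = np.column_stack([np.ones(n), t])          # SNP-free design: intercept, time
y = 1.0 + 0.5 * t + rng.normal(size=n)        # synthetic phenotype
G = rng.integers(0, 3, size=(n, m)).astype(float)  # SNP dosages

# Components of the bordered normal equations that do not depend on the
# SNP are computed once and re-used for every SNP.
A = W.T @ W
Ainv = np.linalg.inv(A)        # tiny matrix here; GALLOP uses factorizations
Ainv_Wty = Ainv @ (W.T @ y)

def snp_effects(g):
    """Estimate the two SNP effects (main and time interaction) by solving
    the bordered least-squares system via a 2x2 Schur complement."""
    Z = np.column_stack([g, g * t])           # SNP main effect, SNP x time
    B = W.T @ Z
    S = Z.T @ Z - B.T @ (Ainv @ B)            # Schur complement of A
    rhs = Z.T @ y - B.T @ Ainv_Wty
    return np.linalg.solve(S, rhs)            # [main effect, interaction]
```

Only the small 2x2 system changes per SNP, which is what makes a genome-wide scan cheap once the SNP-free model has been processed.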

    Screening of antioxidant properties of the apple juice using the front-face synchronous fluorescence and chemometrics

    Fluorescence spectroscopy is gaining increasing attention in food analysis due to its higher sensitivity and selectivity compared with other spectroscopic techniques. The synchronous scanning fluorescence technique is particularly useful in studies of multi-fluorophoric food samples, providing a further improvement in selectivity by reducing spectral overlap and suppressing light-scattering interference. Here, we study the feasibility of predicting the total phenolic content, total flavonoid content, and antioxidant capacity from front-face synchronous fluorescence spectra of apple juices. Commercial apple juices from different product ranges were studied. Principal component analysis (PCA) applied to the unfolded synchronous fluorescence spectra was used to compare the fluorescence of the entire sample set. Regression analysis was performed using partial least squares (PLS1 and PLS2) methods on the unfolded total synchronous and on the single-offset synchronous fluorescence spectra. The best calibration models for all of the studied parameters were obtained using the PLS1 method on the single-offset synchronous spectra. The models for the prediction of the total flavonoid content had the best performance; the optimal model was obtained for the synchronous fluorescence spectra at Δλ = 110 nm (R² = 0.870, residual predictive deviation (RPD) = 2.7). The optimal calibration models for the prediction of the total phenolic content (Δλ = 80 nm, R² = 0.766, RPD = 2.0) and the total antioxidant capacity (Δλ = 70 nm, R² = 0.787, RPD = 2.1) had only approximate predictive ability. These results demonstrate that synchronous fluorescence could be a useful tool for fast semi-quantitative screening of the antioxidant properties of apple juices.
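As a rough sketch of the regression step, a minimal PLS1 (NIPALS) implementation on centered data looks like the following. This is a generic textbook version in Python/NumPy, not the chemometrics pipeline actually used in the study:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 via NIPALS. Returns the regression vector b and the
    centering constants, so predictions are (Xnew - xmean) @ b + ymean."""
    xmean, ymean = X.mean(axis=0), y.mean()
    Xr, yr = X - xmean, y - ymean             # deflated copies
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                         # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                            # scores
        tt = t @ t
        p = Xr.T @ t / tt                     # X loadings
        qk = yr @ t / tt                      # y loading
        Xr = Xr - np.outer(t, p)              # deflate X
        yr = yr - qk * t                      # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)       # regression coefficients
    return b, xmean, ymean

def pls1_predict(X, b, xmean, ymean):
    return (X - xmean) @ b + ymean
```

With spectra as the rows of X and, say, total flavonoid content as y, a cross-validated choice of the number of components would precede reporting R² and RPD.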

    Homomorphic Training of 30,000 Logistic Regression Models

    In this work, we demonstrate the use of the CKKS homomorphic encryption scheme to train a large number of logistic regression models simultaneously, as needed to run a genome-wide association study (GWAS) on encrypted data. Our implementation can train more than 30,000 models (each with four features) in about 20 minutes. To that end, we rely on an iterative Nesterov procedure similar to the one used by Kim, Song, Kim, Lee, and Cheon to train a single model [KSKLC18]. We adapt this method to train many models simultaneously using the SIMD capabilities of the CKKS scheme. We also performed a thorough validation of this iterative method and evaluated its suitability both as a generic method for computing logistic regression models and specifically for GWAS.
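The SIMD batching idea can be illustrated in plaintext. The sketch below (Python/NumPy, unencrypted, with invented names) trains many small logistic models in one vectorized Nesterov loop, the plaintext analogue of packing one model per block of CKKS slots; the momentum schedule is a generic Nesterov-style choice, not necessarily the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_many(X, y, iters=300, lr=0.5):
    """Train one logistic model per design matrix in the stack X
    (shape k x n x d, e.g. covariates plus one SNP per model) against a
    shared binary phenotype y, all in a single vectorized loop."""
    k, n, d = X.shape
    B = np.zeros((k, d))                   # coefficients, one row per model
    V = np.zeros_like(B)                   # Nesterov lookahead point
    for t in range(iters):
        P = sigmoid(np.einsum('knd,kd->kn', X, V))        # k x n predictions
        grad = np.einsum('knd,kn->kd', X, P - y[None, :]) / n
        B_new = V - lr * grad
        V = B_new + (t / (t + 3.0)) * (B_new - B)         # momentum step
        B = B_new
    return B
```

Every arithmetic operation in the loop acts elementwise across all k models at once, which is exactly the access pattern CKKS SIMD slots reward.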