
    The metabolomic profile of gamma-irradiated human hepatoma and muscle cells reveals metabolic changes consistent with the Warburg effect

    Two human cell lines, HepG2 (hepatoma) and HMCL-7304 (striated muscle), were γ-irradiated with doses between 0 and 4 Gy. Abundant γH2AX foci were observed at 4 Gy after 4 h of culture post-irradiation; sham-irradiated cells showed no γH2AX foci and therefore no signs of radiation-induced double-strand DNA breaks. Flow cytometry indicated that 41.5% of HepG2 cells were in G2/M, and this fraction rose statistically significantly with increasing radiation dose, reaching a plateau at ∼47%. Cell lysates from both cell lines were subjected to metabolomic analysis using gas chromatography-mass spectrometry (GC-MS). A total of 46 metabolites were identified by GC-MS in HepG2 cell lysates and 29 in HMCL-7304 lysates, most of which also occurred in HepG2 cells. Principal Components Analysis (PCA) showed a clear separation of the sham, 1, 2 and 4 Gy doses. Orthogonal Projection to Latent Structures-Discriminant Analysis (OPLS-DA) revealed elevations in intracellular lactate, alanine, glucose, glucose 6-phosphate, fructose and 5-oxoproline, which univariate statistics confirmed were highly significantly elevated at both 2 and 4 Gy compared with sham-irradiated cells. These findings suggest upregulation of cytosolic aerobic glycolysis (the Warburg effect), with potential shunting of glucose through aldose reductase in the polyol pathway and consumption of reduced glutathione (GSH) due to γ-irradiation. In HMCL-7304 myotubes, a putative Warburg effect was also observed, albeit only at 2 Gy and of lesser magnitude than in HepG2 cells. It is anticipated that these novel metabolic perturbations following γ-irradiation of cultured cells will lead to a fuller understanding of the mechanisms of tissue damage after ionizing radiation exposure.
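
    As an illustration of the unsupervised step in this kind of workflow, the minimal sketch below runs PCA on a metabolite-by-sample intensity matrix to check for separation by dose; the file and column names are hypothetical, and a dedicated package would be needed for the supervised OPLS-DA step.

        # Minimal sketch of the unsupervised step: PCA on a sample-by-metabolite
        # intensity matrix to look for separation by radiation dose.
        # The data file and column names here are hypothetical placeholders.
        import numpy as np
        import pandas as pd
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        # Rows = cell lysate samples, columns = GC-MS metabolite intensities,
        # plus a 'dose_Gy' label column (0, 1, 2 or 4).
        df = pd.read_csv("hepg2_metabolites.csv")          # hypothetical file
        X = df.drop(columns=["dose_Gy"]).to_numpy(float)
        doses = df["dose_Gy"].to_numpy()

        # Autoscale each metabolite (unit variance), a common pre-treatment
        # for metabolomic data before PCA.
        X_scaled = StandardScaler().fit_transform(X)

        pca = PCA(n_components=2)
        scores = pca.fit_transform(X_scaled)

        for d in np.unique(doses):
            grp = scores[doses == d]
            print(f"{d} Gy: mean PC1 = {grp[:, 0].mean():.2f}, "
                  f"mean PC2 = {grp[:, 1].mean():.2f}")
        print("explained variance ratio:", pca.explained_variance_ratio_)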

    A statistical model of internet traffic.

    We present a method to extract a time series, the Number of Active Requests (NAR), from web cache logs, which serves as a transport-level measurement of internet traffic. This series also reflects the performance, or Quality of Service, of a web cache. Using time series modelling, we interpret the properties of this kind of internet traffic and its effect on the performance perceived by the cache user. Our preliminary analysis of NAR concludes that this dataset is suggestive of a long-memory self-similar process but is not heavy-tailed. Having carried out more in-depth analysis, we propose a three-stage modelling process for the time series: (i) a power transformation to normalise the data, (ii) a polynomial fit to approximate the general trend and (iii) modelling of the residuals from the polynomial fit. We analyse the polynomial and show that the residual dataset may be modelled as a FARIMA(p, d, q) process. Finally, we use Canonical Variate Analysis to determine the most significant defining properties of our measurements and draw conclusions to categorise the differences in traffic properties between the various caches studied. We show that the differences between the caches are most clearly illustrated by the short-memory parameters of the FARIMA fit, and we compare these differences and draw conclusions from them. Several programs were written in the Perl and S programming languages for this analysis, including totalqd.pl for NAR calculation, fullanalysis for general statistical analysis of the data and armamodel for FARIMA modelling.
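
    The three-stage modelling process can be sketched as follows on a hypothetical NAR series; a plain ARMA fit to the residuals stands in here for the full FARIMA(p, d, q) estimation, which would additionally estimate the fractional differencing parameter d.

        # Sketch of the three-stage pipeline: (i) power transform,
        # (ii) polynomial trend, (iii) model the residuals. An ARMA fit
        # stands in for the full FARIMA(p, d, q) estimation.
        import numpy as np
        from scipy import stats
        from statsmodels.tsa.arima.model import ARIMA

        nar = np.loadtxt("nar_series.txt")        # hypothetical NAR measurements

        # (i) Box-Cox power transformation to bring the data closer to normality.
        nar_bc, lam = stats.boxcox(nar + 1.0)     # +1 guards against zero counts

        # (ii) Low-order polynomial fit to approximate the general trend.
        t = np.arange(len(nar_bc))
        coeffs = np.polyfit(t, nar_bc, deg=3)
        trend = np.polyval(coeffs, t)
        resid = nar_bc - trend

        # (iii) Fit a short-memory ARMA(p, q) to the residuals; a FARIMA fit
        # would also estimate the fractional differencing parameter d.
        model = ARIMA(resid, order=(1, 0, 1)).fit()
        print(model.summary())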

    A biomechanical analysis of the heavy sprint-style sled pull and comparison with the back squat

    This study compared the biomechanical characteristics of the heavy sprint-style sled pull and squat. Six experienced male strongman athletes performed sled pulls and squats at 70% of their 1RM squat. Significant kinematic and kinetic differences were observed between the sled pull start and squat at the start of the concentric phase and at maximum knee extension. The first stride of the heavy sled pull demonstrated significantly (

    A toolkit for measurement error correction, with a focus on nutritional epidemiology.

    Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, with the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset.
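
    As a minimal sketch of the simplest of these methods, the code below applies regression calibration under a purely classical error model with two replicate exposure measurements and a binary outcome. Variable and file names are hypothetical, and in practice the standard errors should account for the estimated calibration (for example by bootstrapping).

        # Regression calibration under a classical error model, using two
        # replicate exposure measurements (w1, w2) and a binary outcome y.
        # Variable and file names are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        df = pd.read_csv("fibre_study.csv")       # hypothetical data set
        w1, w2, y = df["w1"].to_numpy(), df["w2"].to_numpy(), df["y"].to_numpy()

        # Classical error model: W = X + U. Estimate the error variance from
        # the replicates and the true-exposure variance by subtraction.
        sigma_u2 = 0.5 * np.var(w1 - w2, ddof=1)
        sigma_x2 = np.var(w1, ddof=1) - sigma_u2
        lam = sigma_x2 / (sigma_x2 + sigma_u2)    # reliability (attenuation) ratio

        # Calibrated exposure: E[X | W1] under the classical model.
        x_cal = w1.mean() + lam * (w1 - w1.mean())

        # Fit the outcome model on the calibrated exposure. Standard errors
        # should really account for the estimated calibration (e.g. bootstrap).
        fit = sm.Logit(y, sm.add_constant(x_cal)).fit()
        print(fit.summary())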

    Using full-cohort data in nested case-control and case-cohort studies by multiple imputation.

    In many large prospective cohorts, expensive exposure measurements cannot be obtained for all individuals. Exposure-disease association studies are therefore often based on nested case-control or case-cohort studies in which complete information is obtained only for sampled individuals. However, in the full cohort, there may be a large amount of information on cheaply available covariates and possibly a surrogate of the main exposure(s), which typically goes unused. We view the nested case-control or case-cohort study plus the remainder of the cohort as a full-cohort study with missing data. Hence, we propose using multiple imputation (MI) to utilise information in the full cohort when data from the sub-studies are analysed. We use the fully observed data to fit the imputation models. We consider using approximate imputation models and also using rejection sampling to draw imputed values from the true distribution of the missing values given the observed data. Simulation studies show that using MI to utilise full-cohort information in the analysis of nested case-control and case-cohort studies can result in important gains in efficiency, particularly when a surrogate of the main exposure is available in the full cohort. In simulations, this method outperforms counter-matching in nested case-control studies and a weighted analysis for case-cohort studies, both of which use some full-cohort information. Approximate imputation models perform well except when there are interactions or non-linear terms in the outcome model, where imputation using rejection sampling works well.
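
    The general idea can be sketched as follows for a single expensive exposure x imputed from a full-cohort surrogate s: fit an approximate imputation model on the sub-study members, draw M completed datasets, analyse each and pool with Rubin's rules. This is an illustrative simplification (a plain logistic outcome model rather than a time-to-event analysis, no rejection sampling, and the imputation-model parameters are not redrawn), with hypothetical names throughout.

        # Impute an expensive exposure x from a cheap full-cohort surrogate s,
        # using the sub-study members (where x is observed) to fit the
        # imputation model, then pool with Rubin's rules.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        df = pd.read_csv("cohort.csv")            # columns: y, s, x (x mostly NaN)
        obs = df["x"].notna()

        # Approximate imputation model fitted on the fully observed records.
        # (A fully 'proper' MI would also redraw the imputation-model parameters.)
        imp_fit = sm.OLS(df.loc[obs, "x"], sm.add_constant(df.loc[obs, "s"])).fit()
        sigma = np.sqrt(imp_fit.scale)

        M, ests, variances = 20, [], []
        for _ in range(M):
            d = df.copy()
            mu = imp_fit.predict(sm.add_constant(d.loc[~obs, "s"]))
            d.loc[~obs, "x"] = mu + rng.normal(0, sigma, size=(~obs).sum())
            fit = sm.Logit(d["y"], sm.add_constant(d["x"])).fit(disp=0)
            ests.append(fit.params["x"])
            variances.append(fit.bse["x"] ** 2)

        # Rubin's rules: total variance = within + (1 + 1/M) * between.
        qbar, ubar, b = np.mean(ests), np.mean(variances), np.var(ests, ddof=1)
        se = np.sqrt(ubar + (1 + 1 / M) * b)
        print(f"pooled log-OR = {qbar:.3f}, SE = {se:.3f}")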

    Prevalence and risk factors of sarcopenia among adults living in nursing homes

    Objectives: Sarcopenia is a progressive loss of skeletal muscle and muscle function, with significant health and disability consequences for older adults. We aimed to evaluate the prevalence and risk factors of sarcopenia among older residential aged care adults using the European Working Group on Sarcopenia in Older People (EWGSOP) criteria. Study design: A cross-sectional study design that assessed older people (n = 102, mean age 84.5 ± 8.2 years) residing in 11 long-term nursing homes in Australia. Main outcome measurements: Sarcopenia was diagnosed from assessments of skeletal mass index by bioelectrical impedance analysis, muscle strength by handheld dynamometer, and physical performance by the 2.4 m habitual walking speed test. Secondary variables were collected to inform a risk factor analysis. Results: Forty-one (40.2%) participants were diagnosed as sarcopenic, 38 (95%) of whom were categorized as having severe sarcopenia. Univariate logistic regression found that body mass index (BMI) (odds ratio (OR) = 0.86; 95% confidence interval (CI) 0.78–0.94), low physical performance (OR = 0.83; 95% CI 0.69–1.00), nutritional status (OR = 0.19; 95% CI 0.05–0.68) and sitting time (OR = 1.18; 95% CI 1.00–1.39) were predictive of sarcopenia. With multivariate logistic regression, only low BMI (OR = 0.80; 95% CI 0.65–0.97) remained predictive. Conclusions: The prevalence of sarcopenia among older residential aged care adults is very high. In addition, low BMI is predictive of sarcopenia.

    Handling missing data in matched case-control studies using multiple imputation.

    Analysis of matched case-control studies is often complicated by missing data on covariates. Analysis can be restricted to individuals with complete data, but this is inefficient and may be biased. Multiple imputation (MI) is an efficient and flexible alternative. We describe two MI approaches. The first uses a model for the data on an individual and includes matching variables; the second uses a model for the data on a whole matched set and avoids the need to model the matching variables. Within each approach, we consider three methods: full-conditional specification (FCS), joint model MI using a normal model, and joint model MI using a latent normal model. We show that FCS MI is asymptotically equivalent to joint model MI using a restricted general location model that is compatible with the conditional logistic regression analysis model. The normal and latent normal imputation models are not compatible with this analysis model. All methods allow for multiple partially observed covariates, non-monotone missingness, and multiple controls per case. They can be easily applied in standard statistical software and valid variance estimates obtained using Rubin's Rules. We compare the methods in a simulation study. The approach of including the matching variables is most efficient. Within each approach, the FCS MI method generally yields the least-biased odds ratio estimates, but normal or latent normal joint model MI is sometimes more efficient. All methods have good confidence interval coverage. Data on colorectal cancer and fibre intake from the EPIC-Norfolk study are used to illustrate the methods, in particular showing how efficiency is gained relative to just using individuals with complete data.
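
    A sketch of the first approach for a matched case-control study might look like the following: chained-equations (FCS-style) imputation that includes the matching variable and the case indicator, conditional logistic regression within matched sets on each completed dataset, and Rubin's-rules pooling. scikit-learn's IterativeImputer is used here as a generic stand-in for FCS MI, and all column names are hypothetical.

        # FCS-style MI for a matched case-control study: include the matching
        # variable (age) and the case indicator in the imputation model, then
        # analyse with conditional logistic regression within matched sets.
        import numpy as np
        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from statsmodels.discrete.conditional_models import ConditionalLogit

        df = pd.read_csv("matched_study.csv")   # columns: set_id, case, x1, x2, age
        impute_cols = ["x1", "x2", "age", "case"]   # matching variable and outcome included
        M, ests, variances = 10, [], []

        for m in range(M):
            imp = IterativeImputer(sample_posterior=True, random_state=m)
            filled = pd.DataFrame(imp.fit_transform(df[impute_cols]),
                                  columns=impute_cols, index=df.index)
            # Matching variable is absorbed by the matched sets, so it is not
            # included in the analysis model's covariates.
            exog = filled[["x1", "x2"]]
            fit = ConditionalLogit(df["case"], exog, groups=df["set_id"]).fit()
            ests.append(fit.params["x1"])
            variances.append(fit.bse["x1"] ** 2)

        # Pool with Rubin's rules.
        qbar, ubar, b = np.mean(ests), np.mean(variances), np.var(ests, ddof=1)
        se = np.sqrt(ubar + (1 + 1 / M) * b)
        print(f"pooled log-OR for x1: {qbar:.3f} (SE {se:.3f})")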

    Simulating data from marginal structural models for a survival time outcome

    Marginal structural models (MSMs) are often used to estimate causal effects of treatments on survival time outcomes from observational data when time-dependent confounding may be present. They can be fitted using, e.g., inverse probability of treatment weighting (IPTW). It is important to evaluate the performance of statistical methods in different scenarios, and simulation studies are a key tool for such evaluations. In such simulation studies, it is common to generate data in such a way that the model of interest is correctly specified, but this is not always straightforward when the model of interest is for potential outcomes, as an MSM is. Methods have been proposed for simulating from MSMs for a survival outcome, but these methods impose restrictions on the data-generating mechanism. Here we propose a method that overcomes these restrictions. The MSM can be a marginal structural logistic model for a discrete survival time or a Cox or additive hazards MSM for a continuous survival time. The hazard of the potential survival time can be conditional on baseline covariates, and the treatment variable can be discrete or continuous. We illustrate the use of the proposed simulation algorithm by carrying out a brief simulation study, which compares the coverage of confidence intervals calculated in two different ways for causal effect estimates obtained by fitting an MSM via IPTW.
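
    For context, fitting such an MSM via IPTW for a discrete survival time might look like the sketch below (this illustrates the estimation step only, not the paper's simulation algorithm): stabilised weights from pooled logistic treatment models, followed by a weighted pooled logistic regression for the discrete-time hazard. The long-format columns (id, time, L, A, lag_A, event) are hypothetical.

        # IPTW fit of a marginal structural logistic model for a discrete
        # survival time, using long-format person-period data with a
        # time-dependent confounder L, treatment A, lagged treatment lag_A
        # (0 at the first visit) and an event indicator.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        d = pd.read_csv("person_period.csv")      # hypothetical long-format data
        d = d.sort_values(["id", "time"])

        # Denominator model: treatment given past treatment and the confounder L.
        # Numerator model: past treatment and time only (for stabilisation).
        den = smf.glm("A ~ lag_A + L + time", d, family=sm.families.Binomial()).fit()
        num = smf.glm("A ~ lag_A + time", d, family=sm.families.Binomial()).fit()

        p_den = np.where(d["A"] == 1, den.fittedvalues, 1 - den.fittedvalues)
        p_num = np.where(d["A"] == 1, num.fittedvalues, 1 - num.fittedvalues)
        d["sw"] = p_num / p_den
        d["sw"] = d.groupby("id")["sw"].cumprod()   # stabilised weight up to time t

        # Weighted pooled logistic regression: the discrete-time hazard MSM.
        # Cluster-robust SEs by individual account for the weighting.
        msm = smf.glm("event ~ A + time", d, family=sm.families.Binomial(),
                      freq_weights=d["sw"]).fit(cov_type="cluster",
                                                cov_kwds={"groups": d["id"]})
        print(msm.summary())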

    Albumin Affinity Biomaterial Surfaces

    Recently, considerable progress has been made in designing biomaterial surfaces which possess enhanced albumin affinity. Two derivatization methods for producing albumin-binding biomaterial surfaces, based on the albumin affinity dye Cibacron Blue, have been developed. Both surface derivatization methods were found to enhance the binding of albumin to an implant-grade polyetherurethane. Evaluations of the enhanced albumin affinity demonstrated the binding to be both selective and reversible. Surfaces having such enhanced albumin affinity were found to be minimally thrombogenic and to discourage the adhesion of bacteria which might otherwise cause device-centered infections. We conclude that albumin affinity surfaces, such as these, may be useful in the design of non-thrombogenic and infection-resistant biomaterials.