
    Genes to predict VO2 max trainability: A systematic review

    Cardiorespiratory fitness (VO2max) is an excellent predictor of chronic disease morbidity and mortality risk. Guidelines recommend individuals undertake exercise training to improve VO2max for chronic disease reduction. However, there are large inter-individual differences in exercise training responses. This systematic review aimed to identify genetic variants associated with VO2max trainability.
    Peer-reviewed research papers published up to October 2016 in four databases were examined. Articles were included if they examined genetic variants, incorporated a supervised aerobic exercise intervention, and measured VO2max/VO2peak pre- and post-intervention.
    Thirty-five articles describing 15 cohorts met the inclusion criteria. The majority of studies used a cross-sectional retrospective design. Thirty-two studies researched candidate genes, two used genome-wide association studies (GWAS), and one examined mRNA gene expression data in addition to a GWAS. Across these studies, 97 genes were identified as predictors of VO2max trainability. Studies found the phenotype to be dependent on several of these genotypes/variants, with higher responders to exercise training carrying more positive response alleles than lower responders (a greater gene predictor score). Only 13 genetic variants were reproduced by more than two authors. Several other limitations were noted throughout these studies, including the robustness of significance for identified variants, small sample sizes, cohorts focused primarily on Caucasian populations, and minimal baseline data. These factors, along with differences in exercise training programs, diet, and other environmental mediators of gene expression, likely influence the ideal traits for VO2max trainability.
    Ninety-seven genes have been identified as possible predictors of VO2max trainability. To verify the strength of these findings, and to identify whether there are additional genetic variants and/or mediators, further tightly controlled studies measuring a range of biomarkers across ethnicities are required.
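    The gene predictor score mentioned above is, in essence, a count of favourable alleles carried across a panel of variants. A minimal sketch, with entirely hypothetical variant IDs and alleles (not the variants identified by the review):

```python
# Hypothetical sketch of a gene predictor score: count "positive response"
# alleles an individual carries across a panel of variants.
def gene_predictor_score(genotype, positive_alleles):
    """Sum the favourable alleles an individual carries.

    genotype: dict mapping variant ID -> tuple of the two alleles carried
    positive_alleles: dict mapping variant ID -> the allele associated
                      with a higher VO2max training response
    """
    score = 0
    for variant, favourable in positive_alleles.items():
        # A variant missing from the genotype contributes zero.
        score += genotype.get(variant, ()).count(favourable)
    return score

# Example individual genotyped at three illustrative variants.
genotype = {"rs1": ("A", "G"), "rs2": ("C", "C"), "rs3": ("T", "G")}
positive = {"rs1": "G", "rs2": "C", "rs3": "T"}
print(gene_predictor_score(genotype, positive))  # 1 + 2 + 1 = 4
```

    Higher responders would sit at the upper end of this score's distribution; the review's point is that which variants belong in `positive_alleles` is still poorly replicated.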

    A review of RCTs in four medical journals to assess the use of imputation to overcome missing data in quality of life outcomes

    Background: Randomised controlled trials (RCTs) are perceived as the gold-standard method for evaluating healthcare interventions, and increasingly include quality of life (QoL) measures. The observed results are susceptible to bias if a substantial proportion of outcome data are missing. The review aimed to determine whether imputation was used to deal with missing QoL outcomes.
    Methods: A random selection of 285 RCTs published during 2005/6 in the British Medical Journal, the Lancet, the New England Journal of Medicine and the Journal of the American Medical Association was identified.
    Results: QoL outcomes were reported in 61 (21%) trials. Six (10%) reported having no missing data, 20 (33%) reported ≤10% missing, eleven (18%) reported 11%–20% missing, and eleven (18%) reported >20% missing. Missingness was unclear in 13 (21%). Missing data were imputed in 19 (31%) of the 61 trials. Imputation was part of the primary analysis in 13 trials, but a sensitivity analysis in six. Last value carried forward was used in 12 trials and multiple imputation in two. Following imputation, the most common analysis method was analysis of covariance (10 trials).
    Conclusion: The majority of studies did not impute missing data and carried out a complete-case analysis. Those studies that did impute missing data tended to prefer simpler methods of imputation, despite more sophisticated methods being available.
    The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Government Health Directorate. Shona Fielding is also currently funded by the Chief Scientist Office on a Research Training Fellowship (CZF/1/31).
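    Last value carried forward, the imputation method used most often in the reviewed trials, can be sketched in a few lines. The scores below are hypothetical QoL assessments for one patient, ordered in time:

```python
# Sketch of last value carried forward (LOCF): each missing assessment is
# replaced by the patient's most recent observed value. None = missing.
def locf(scores):
    """Fill missing entries with the last observed value, left to right."""
    filled, last = [], None
    for v in scores:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Patient observed at baseline and month 3, missing at month 6:
print(locf([50.0, 55.0, None]))  # [50.0, 55.0, 55.0]
# Patient observed only at baseline:
print(locf([62.0, None, None]))  # [62.0, 62.0, 62.0]
```

    The simplicity is the appeal, but it assumes the outcome stays flat after dropout, which is exactly why the review contrasts it with the more sophisticated multiple-imputation methods that were rarely used.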

    The BpTRU automatic blood pressure monitor compared to 24 hour ambulatory blood pressure monitoring in the assessment of blood pressure in patients with hypertension

    BACKGROUND: Increasing evidence suggests that ambulatory blood pressure monitoring (ABPM) more closely predicts target organ damage than does clinic measurement. Future guidelines may recommend ABPM as routine in the diagnosis and monitoring of hypertension. This would create difficulties, as the test is expensive and often difficult to obtain. The purpose of this study was to determine the degree to which the BpTRU automatic blood pressure monitor predicts results on 24 hour ABPM.
    METHODS: A quantitative analysis comparing blood pressure measured by the BpTRU device with the mean daytime blood pressure on 24 hour ABPM. The study was conducted by the Centre for Studies in Primary Care, Queen's University, Kingston, Ontario, Canada on adult primary care patients enrolled in two randomized controlled trials on hypertension. The main outcomes were the mean of the blood pressures measured at the three most recent office visits, the initial measurement on the BpTRU-100, the mean of the five measurements on the BpTRU monitor, and the daytime average on 24 hour ABPM.
    RESULTS: The group mean of the three charted clinic blood pressures (150.8 (SD 10.26) / 82.9 (SD 8.44)) was not statistically different from the group mean of the initial reading on the BpTRU (150.0 (SD 21.33) / 83.3 (SD 12.00)). The group mean of the average of five BpTRU readings (140.0 (SD 17.71) / 79.8 (SD 10.46)) was not statistically different from the 24 hour daytime mean on ABPM (141.5 (SD 13.25) / 79.7 (SD 7.79)). Within patients, the BpTRU average correlated significantly better with daytime ambulatory pressure than did the clinic average (BpTRU r = 0.571, clinic r = 0.145). Based on assessment of sensitivity and specificity at different cut-points, it is suggested that the initial treatment target using the BpTRU be set at <135/85 mmHg, but achievement of target should be confirmed using 24 hour ABPM.
    CONCLUSION: The BpTRU average better predicts ABPM than does the average of the blood pressures recorded on the patient chart from the three most recent visits. The BpTRU automatic clinic blood pressure monitor should be used as an adjunct to ABPM to effectively diagnose and monitor hypertension.
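    The within-patient comparison reported above amounts to computing Pearson correlations between each office method and the ABPM daytime mean. A sketch with hypothetical systolic readings (not the study's data):

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical systolic readings for five patients.
abpm = [138, 145, 150, 132, 141]    # daytime ABPM mean
bptru = [136, 147, 148, 130, 143]   # mean of five BpTRU readings
clinic = [152, 149, 158, 147, 150]  # mean of three charted clinic visits

print(round(pearson(abpm, bptru), 3), round(pearson(abpm, clinic), 3))
# 0.961 0.763 — in these toy data, as in the study, the BpTRU average
# tracks the ABPM daytime mean more closely than the clinic average does.
```
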

    Sensitivity Analysis for Not-at-Random Missing Data in Trial-Based Cost-Effectiveness Analysis: A Tutorial

    Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
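    The "multiply impute, then modify" approach described above is often called a delta adjustment: imputed values are shifted by a sensitivity parameter delta to represent the MNAR scenario that non-responders are in poorer health. A toy single-variable stand-in (hypothetical QoL utilities and delta values, not the tutorial's code):

```python
import random

# Hypothetical QoL utilities for five patients; None = questionnaire not returned.
observed = [0.71, 0.65, None, 0.80, None]

def impute_then_shift(values, delta, n_imputations=5, seed=0):
    """Crude stand-in for multiple imputation: draw imputations from the
    observed values, then subtract delta from the imputed entries only.
    The seed is fixed so that scenarios differ only in delta."""
    random.seed(seed)
    pool = [v for v in values if v is not None]
    completed = []
    for _ in range(n_imputations):
        filled = [v if v is not None else random.choice(pool) - delta
                  for v in values]
        completed.append(filled)
    return completed

def pooled_mean(delta):
    """Mean QoL in each completed dataset, pooled across imputations."""
    datasets = impute_then_shift(observed, delta)
    means = [sum(d) / len(d) for d in datasets]
    return sum(means) / len(means)

for delta in (0.0, 0.1, 0.2):  # MAR benchmark, then two MNAR scenarios
    print(f"delta={delta:.1f}: pooled mean QoL = {pooled_mean(delta):.3f}")
```

    If conclusions (here, the pooled mean; in a CEA, the incremental net benefit) hold across plausible delta values, they are robust to that MNAR departure.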

    Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Background: A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives.
    Methods: A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area under the curve (AUC) corrected for verification bias, varying both the rate and mechanism of verification.
    Results: In a single simulated data set, varying false negatives from 0 to 4 led to verification-bias-corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th–97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations.
    Conclusion: Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.
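    The standard correction evaluated here can be sketched as inverse-probability weighting of the verified patients, in the style of Begg and Greenes. All counts and verification rates below are hypothetical; the point is how far one extra false negative moves the corrected estimate when negatives are rarely verified:

```python
# Verified 2x2 counts, keyed by (screening test result, true disease status).
cells = {
    ("pos", 1): 90,  # verified true positives
    ("pos", 0): 45,  # verified false positives
    ("neg", 1): 2,   # verified false negatives (few, as in screening)
    ("neg", 0): 48,  # verified true negatives
}
# Verification depends on the test result: positives are almost always
# worked up, negatives rarely are.
p_verify = {"pos": 0.90, "neg": 0.05}

def corrected_sensitivity(cells):
    # Weight each verified cell back up to the full screened population.
    w = {k: n / p_verify[k[0]] for k, n in cells.items()}
    return w[("pos", 1)] / (w[("pos", 1)] + w[("neg", 1)])

sens = corrected_sensitivity(cells)

# The instability the paper highlights: each false negative carries a
# weight of 1/0.05 = 20, so one extra false negative shifts the estimate.
cells[("neg", 1)] = 3
sens_one_more_fn = corrected_sensitivity(cells)
print(f"{sens:.3f} -> {sens_one_more_fn:.3f}")  # 0.714 -> 0.625
```
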

    A Kernel to Exploit Informative Missingness in Multivariate Time Series from EHRs

    A large fraction of electronic health records (EHRs) consists of clinical measurements collected over time, such as lab tests and vital signs, which provide important information about a patient's health status. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and large amounts of missing data, which complicate the analysis. In this work, we propose a novel kernel capable of exploiting both the information from the observed values and the information hidden in the missing patterns of multivariate time series (MTS) originating, e.g., from EHRs. The kernel, called TCK_IM, is designed using an ensemble learning strategy in which the base models are novel mixed-mode Bayesian mixture models that can effectively exploit informative missingness without having to resort to imputation methods. Moreover, the ensemble approach ensures robustness to hyperparameters, and TCK_IM is therefore particularly well suited if there is a lack of labels, a known challenge in medical applications. Experiments on three real-world clinical datasets demonstrate the effectiveness of the proposed kernel.
    Comment: 2020 International Workshop on Health Intelligence, AAAI-20.
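    TCK_IM itself builds Bayesian mixture models over the data; the sketch below only illustrates the input representation commonly used for informative missingness, namely the observed-value matrix paired with a binary mask recording what was actually measured (values are hypothetical, not from the paper):

```python
# A multivariate time series from an EHR: rows are variables, columns are
# time steps; None marks an entry that was never measured. The missing
# pattern itself can be informative (e.g. a lab test is only ordered when
# a clinician is worried), so it is kept as an explicit binary mask rather
# than being erased by imputation.
series = [
    [5.1, None, 4.8, None],    # variable 1, e.g. a lab test
    [None, 98.6, None, 99.1],  # variable 2, e.g. a vital sign
]

mask = [[0 if v is None else 1 for v in row] for row in series]
values = [[0.0 if v is None else v for v in row] for row in series]

print(mask)  # [[1, 0, 1, 0], [0, 1, 0, 1]]
```

    A model that sees both `values` and `mask` can learn from when measurements were taken as well as from what they showed.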

    Camels and Climate Resilience: Adaptation in Northern Kenya

    In the drylands of Africa, pastoralists have been facing new challenges, including those related to environmental shocks and stresses. In northern Kenya, under conditions of reduced rainfall and more frequent droughts, one response has been for pastoralists to focus increasingly on camel herding. Camels have started to be kept at higher altitudes and by people who rarely kept camels before. The development has been understood as a climate change adaptation strategy and as a means to improve climate resilience. Since 2003, development organizations have started to further the trend by distributing camels in the region. Up to now, little has been known about the nature of, reasons for, or ramifications of the increased reliance on camels. The paper addresses these questions and concludes that camels improve resilience in this dryland region, but only under certain climate change scenarios, and only for some groups.
    This study was funded by The Royal Geographical Society with Institute of British Geographers Thesiger-Oman Fellowship.