
    Systematic review and metaanalysis comparing the bias and accuracy of the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration equations in community-based populations

    BACKGROUND The majority of patients with chronic kidney disease are diagnosed and monitored in primary care. Glomerular filtration rate (GFR) is a key marker of renal function, but direct measurement is invasive; in routine practice, equations are used to estimate GFR (eGFR) from serum creatinine. We systematically assessed bias and accuracy of commonly used eGFR equations in populations relevant to primary care. CONTENT MEDLINE, EMBASE, and the Cochrane Library were searched for studies comparing measured GFR (mGFR) with eGFR in adult populations comparable to primary care and reporting both the Modification of Diet in Renal Disease (MDRD) and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations based on standardized creatinine measurements. We pooled data on mean bias (difference between eGFR and mGFR) and on mean accuracy (proportion of eGFR within 30% of mGFR) using a random-effects inverse-variance weighted metaanalysis. We included 48 studies of 26,875 patients that reported data on bias and/or accuracy. Metaanalysis of within-study comparisons, in which both formulae were tested on the same patient cohorts using isotope dilution mass spectrometry-traceable creatinine, showed a lower mean bias in eGFR using CKD-EPI of 2.2 mL/min/1.73 m2 (95% CI, 1.1–3.2; 30 studies; I2 = 74.4%) and a higher mean accuracy of CKD-EPI of 2.7% (1.6–3.8; 47 studies; I2 = 55.5%). Metaregression showed that both bias and accuracy increasingly favored the CKD-EPI equation at higher mGFR values. SUMMARY Both equations underestimated mGFR, but CKD-EPI gave more accurate estimates of GFR.
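    As a rough illustration of the pooling step described above, the sketch below implements a DerSimonian-Laird random-effects inverse-variance metaanalysis in Python; the per-study bias differences and standard errors are invented for illustration and are not the study's data.

```python
import numpy as np

def random_effects_pool(estimates, std_errors):
    """DerSimonian-Laird random-effects inverse-variance pooling.

    Returns the pooled estimate, its 95% CI, and the I2 heterogeneity statistic.
    """
    y = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                                  # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    k = len(y)
    Q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)               # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = 100.0 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical per-study differences in bias (CKD-EPI minus MDRD), mL/min/1.73 m2
pooled, ci95, i2 = random_effects_pool([1.8, 2.5, 3.1, 1.2], [0.6, 0.9, 1.1, 0.7])
print(f"pooled difference {pooled:.1f} (95% CI {ci95[0]:.1f} to {ci95[1]:.1f}), I2 = {i2:.0f}%")
```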

    Trends in kidney function testing in UK primary care since the introduction of the Quality and Outcomes Framework: a retrospective cohort study using CPRD

    Objectives: To characterise serum creatinine and urinary protein testing in UK general practices from 2005 to 2013 and to examine how the frequency of testing varies across demographic factors, with the presence of chronic conditions and with the prescribing of drugs for which kidney function monitoring is recommended. Design: Retrospective open cohort study. Setting: Routinely collected data from 630 UK general practices contributing to the Clinical Practice Research Datalink. Participants: 4 573 275 patients aged over 18 years registered at up-to-standard practices between 1 April 2005 and 31 March 2013. At study entry, no patients were kidney transplant donors or recipients, pregnant or on dialysis. Primary outcome measures: The rate of serum creatinine and urinary protein testing per year and the percentage of patients with isolated and repeated testing per year. Results: The rate of serum creatinine testing increased linearly across all age groups. The rate of proteinuria testing increased sharply in the 2009–2010 financial year, but only for patients aged 60 years or over. For patients with established chronic kidney disease (CKD), creatinine testing increased rapidly in 2006–2007 and 2007–2008, and proteinuria testing in 2009–2010, reflecting the introduction of Quality and Outcomes Framework indicators. In adjusted analyses, CKD Read codes were associated with up to a twofold increase in the rate of serum creatinine testing, while other chronic conditions and potentially nephrotoxic drugs were associated with up to a sixfold increase. Regional variation in serum creatinine testing reflected country boundaries. Conclusions: Over a nine-year period, there have been increases in the numbers of patients having kidney function tests annually and in the frequency of testing. Changes in the recommended management of CKD in primary care were the primary determinant, and the increases persisted even after controlling for demographic and patient-level factors. Future studies should address whether increased testing has led to better outcomes.
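    A minimal sketch of how testing rates and rate ratios of this kind can be estimated, assuming a simple Poisson model with person-years as the exposure; the patient records below are invented and the analysis is not the study's actual code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical patient-level records (not CPRD data): number of serum creatinine
# tests, follow-up time in person-years, and presence of a CKD Read code.
df = pd.DataFrame({
    "tests":        [2, 0, 5, 1, 8, 3],
    "person_years": [1.0, 0.5, 2.0, 1.0, 2.5, 1.5],
    "ckd_code":     [0, 0, 1, 0, 1, 1],
})

# Crude testing rate per person-year
print(f"crude rate: {df['tests'].sum() / df['person_years'].sum():.2f} tests per person-year")

# Poisson regression with person-years as exposure; exp(coef) for ckd_code is the
# rate ratio for patients with a CKD code versus those without.
model = smf.glm("tests ~ ckd_code", data=df,
                family=sm.families.Poisson(),
                exposure=df["person_years"]).fit()
print(model.summary())
```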

    Social inequalities in self-rated health by age: Cross-sectional study of 22 457 middle-aged men and women

    Abstract Background We investigate the association between occupational social class and self-rated health (SRH) at different ages in men and women. Methods Cross-sectional population study of 22 457 men and women aged 39–79 years living in the general community in Norfolk, United Kingdom, recruited using general practice age-sex registers in 1993–1997. The relationship between self-rated health and social class was examined using logistic regression, with a poor or moderate rating as the outcome. Results The prevalence of poor or moderate (lower) self-rated health increased with increasing age in both men and women. There was a strong social class gradient: in manual classes, men and women under 50 years of age had a prevalence of lower self-rated health similar to that seen in men and women in non-manual social classes over 70 years old. Even after adjustment for age, educational status, and lifestyle factors (body mass index (BMI), smoking, physical activity and alcohol consumption) there was still strong evidence of a social gradient in self-rated health, with unskilled men and women approximately twice as likely to report lower self-rated health as professionals (OR for men = 2.44 (95% CI 1.69, 3.50); OR for women = 1.97 (95% CI 1.45, 2.68)). Conclusion There was a strong gradient of decreased SRH with age in both men and women. We found a strong cross-sectional association between SRH and social class, which was independent of education and major health-related behaviors. The social class differential in SRH was similar across age groups. Prospective studies to confirm this association should explore social and emotional as well as physical pathways to inequalities in self-reported health.
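    For readers unfamiliar with the modelling step, the following is a minimal sketch of a logistic regression of lower self-rated health on social class with adjustment for age, fitted to simulated data; the variable names, effect sizes and data are assumptions for illustration only, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated individuals: 1 = poor/moderate ("lower") self-rated health, 0 = good.
rng = np.random.default_rng(0)
n = 500
social_class = rng.choice(["professional", "manual", "unskilled"], size=n)
age = rng.integers(39, 80, size=n)
# Assumed underlying model: odds of lower SRH rise with age and with manual/unskilled class.
lin = -4 + 0.05 * age + np.where(social_class == "unskilled", 0.7,
                                 np.where(social_class == "manual", 0.4, 0.0))
lower_srh = rng.binomial(1, 1 / (1 + np.exp(-lin)))
df = pd.DataFrame({"lower_srh": lower_srh, "social_class": social_class, "age": age})

# Logistic regression with professionals as the reference category;
# exp(coef) gives the odds ratio of lower self-rated health versus professionals.
fit = smf.logit("lower_srh ~ C(social_class, Treatment('professional')) + age",
                data=df).fit(disp=0)
print(np.exp(fit.params))
```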

    Influence of seasonality and vegetation type on suburban microclimates

    Urbanization is responsible for some of the fastest rates of land-use change around the world, with important consequences for local, regional, and global climate. Vegetation, which represents a significant proportion of many urban and suburban landscapes, can modify climate by altering local exchanges of heat, water vapor, and CO2. To determine how distinct urban forest communities vary in their microclimate effects over time, we measured stand-level leaf area index, soil temperature, infrared surface temperature, and soil water content over a complete growing season at 29 sites representing the five most common vegetation types in a suburban neighborhood of Minneapolis–Saint Paul, Minnesota. We found that seasonal patterns of soil and surface temperatures were controlled more by differences in stand-level leaf area index and tree cover than by plant functional type. Across the growing season, sites with high leaf area index had soil temperatures that were 7°C lower and surface temperatures that were 6°C lower than sites with low leaf area index. Site differences in mid-season soil temperature and turfgrass ground cover were best explained by leaf area index, whereas differences in mid-season surface temperature were best explained by percent tree cover. The significant cooling effects of urban tree canopies on soil temperature imply that seasonal changes in leaf area index may also modulate CO2 efflux from urban soils, a highly temperature-dependent process, and that this should be considered in calculations of total CO2 efflux for urban carbon budgets. Field-based estimates of percent tree cover were found to better predict mid-season leaf area index than satellite-derived estimates and consequently offer an approach to scale up urban biophysical properties.
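    To make the model-comparison step concrete, here is a minimal sketch that regresses mid-season soil temperature on leaf area index and on percent tree cover separately and compares the variance explained; all site values are simulated for illustration and are not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated site-level values for 29 sites (illustrative only): mid-season soil
# temperature (deg C), stand-level leaf area index, and percent tree cover.
rng = np.random.default_rng(1)
n_sites = 29
lai = rng.uniform(0.5, 6.0, n_sites)
tree_cover = np.clip(15 * lai + rng.normal(0, 10, n_sites), 0, 100)
soil_temp = 24 - 1.2 * lai + rng.normal(0, 0.8, n_sites)
df = pd.DataFrame({"soil_temp": soil_temp, "lai": lai, "tree_cover": tree_cover})

# Which predictor best explains site differences in mid-season soil temperature?
for predictor in ["lai", "tree_cover"]:
    r2 = smf.ols(f"soil_temp ~ {predictor}", data=df).fit().rsquared
    print(f"{predictor}: R^2 = {r2:.2f}")
```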

    TRY plant trait database – enhanced coverage and open access

    Plant traits - the morphological, anatomical, physiological, biochemical and phenological characteristics of plants - determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait‐based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits - almost complete coverage for ‘plant growth form’. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait–environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.
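    As a sketch of what a coverage analysis of this kind involves, the snippet below computes, for a toy long-format trait table, the fraction of species with at least one record per trait; the table layout and species are hypothetical and do not reproduce the TRY schema.

```python
import pandas as pd

# Hypothetical long-format trait records (species, trait, value); not TRY's schema.
records = pd.DataFrame({
    "species": ["Quercus robur", "Quercus robur", "Fagus sylvatica", "Pinus sylvestris"],
    "trait":   ["plant growth form", "leaf area", "plant growth form", "leaf area"],
    "value":   ["tree", 45.2, "tree", 12.8],
})

# Coverage per trait = share of species with at least one record for that trait.
n_species = records["species"].nunique()
coverage = (records.groupby("trait")["species"].nunique() / n_species).sort_values(ascending=False)
print(coverage)
```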

    Proceedings of the Thirteenth International Society of Sports Nutrition (ISSN) Conference and Expo

    Meeting Abstracts: Proceedings of the Thirteenth International Society of Sports Nutrition (ISSN) Conference and Expo, Clearwater Beach, FL, USA, 9-11 June 2016.

    Standard and competing risk analysis of the effect of albuminuria on cardiovascular and cancer mortality in patients with type 2 diabetes mellitus

    Abstract Background Competing risks occur when populations may experience outcomes that either preclude or alter the probability of experiencing the main study outcome(s). Many standard survival analysis methods do not account for competing risks. We used mortality risk in people with diabetes with and without albuminuria as a case study to investigate the impact of competing risks on measures of absolute and relative risk. Methods A population with type 2 diabetes was identified in the Clinical Practice Research Datalink as part of a historical cohort study. Patients were followed for up to 9 years. To quantify differences in absolute risk estimates of cardiovascular and cancer mortality, standard (Kaplan-Meier) estimates were compared to competing-risks-adjusted (cumulative incidence competing risk, CICR) estimates. To quantify differences in measures of association, regression coefficients for the effect of albuminuria on the relative hazard of each outcome were compared between standard cause-specific hazard (CSH) models (Cox proportional hazards regression) and two competing risk models: the unstratified Lunn-McNeil model, which estimates CSH, and the Fine-Gray model, which estimates subdistribution hazard (SDH). Results In patients with normoalbuminuria, standard and competing-risks-adjusted estimates for cardiovascular mortality were 11.1% (95% confidence interval (CI) 10.8–11.5%) and 10.2% (95% CI 9.9–10.5%), respectively. For cancer mortality, these figures were 8.0% (95% CI 7.7–8.3%) and 7.2% (95% CI 6.9–7.5%). In patients with albuminuria, standard and competing-risks-adjusted estimates for cardiovascular mortality were 21.8% (95% CI 20.9–22.7%) and 18.5% (95% CI 17.8–19.3%), respectively. For cancer mortality, these figures were 10.7% (95% CI 10.0–11.5%) and 8.6% (95% CI 8.1–9.2%). For the effect of albuminuria on cardiovascular mortality, regression coefficient values from multivariable standard CSH, competing risks CSH, and competing risks SDH models were 0.557 (95% CI 0.491–0.623), 0.561 (95% CI 0.494–0.628), and 0.456 (95% CI 0.389–0.523), respectively. For the effect of albuminuria on cancer mortality, these values were 0.237 (95% CI 0.148–0.326), 0.244 (95% CI 0.154–0.333), and 0.102 (95% CI 0.012–0.192), respectively. Conclusions Studies of absolute risk should use methods that adjust for competing risks, such as the CICR estimator, to avoid overstating risk. Studies of relative risk should consider carefully which measure of association is most appropriate for the research question.
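    To illustrate why the competing-risks-adjusted estimates sit below the Kaplan-Meier ones, the sketch below compares 1 minus Kaplan-Meier (treating competing deaths as censored) with the Aalen-Johansen cumulative incidence on simulated data, using the lifelines package; the event rates and 9-year censoring are assumptions rather than the study's data, and the Fine-Gray model is not shown here.

```python
import numpy as np
from lifelines import KaplanMeierFitter, AalenJohansenFitter

# Simulated follow-up (illustration only): event 1 = cardiovascular death,
# event 2 = cancer death (competing risk), 0 = censored at 9 years.
rng = np.random.default_rng(42)
n = 2000
t_cv = rng.exponential(15, n)                 # latent time to cardiovascular death
t_ca = rng.exponential(20, n)                 # latent time to cancer death
t_cens = np.full(n, 9.0)                      # administrative censoring
time = np.minimum.reduce([t_cv, t_ca, t_cens])
event = np.select([t_cv == time, t_ca == time], [1, 2], default=0)

# Standard approach: 1 - Kaplan-Meier, censoring the competing deaths (over-states risk).
kmf = KaplanMeierFitter().fit(time, event == 1)
naive = kmf.cumulative_density_.iloc[-1, 0]

# Competing-risks approach: Aalen-Johansen cumulative incidence for event 1.
ajf = AalenJohansenFitter().fit(time, event, event_of_interest=1)
cif = ajf.cumulative_density_.iloc[-1, 0]

print(f"1 - Kaplan-Meier at 9 years:      {naive:.3f}")
print(f"Cumulative incidence (CICR/AJ):   {cif:.3f}")
```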

    Blood eosinophils to guide inhaled maintenance therapy in a primary care COPD population

    Blood eosinophils are a potentially useful biomarker for guiding inhaled corticosteroid (ICS) treatment decisions in COPD. We investigated whether existing blood eosinophil counts predict benefit from initiation of ICS compared to bronchodilator therapy. We used routinely collected data from UK primary care in the Clinical Practice Research Datalink. Participants were aged ≄40 years with COPD, were ICS-naĂŻve and starting a new inhaled maintenance medication (intervention group: ICS; comparator group: long-acting bronchodilator, non-ICS). Primary outcome was time to first exacerbation, compared between ICS and non-ICS groups, stratified by blood eosinophils (“high” ≄150 cells·”L⁻Âč and “low” <150 cells·”L⁻Âč). Out of 9475 eligible patients, 53.9% initiated ICS and 46.1% non-ICS treatment, with no difference in eosinophils between treatment groups (p=0.71). Exacerbation risk was higher in patients prescribed ICS than in those prescribed non-ICS treatment, but with a lower risk in those with high eosinophils (hazard ratio (HR) 1.04, 95% CI 0.98–1.10) than low eosinophils (HR 1.19, 95% CI 1.09–1.31) (p-value for interaction 0.01). Risk of pneumonia hospitalisation with ICS was greatest in those with low eosinophils (HR 1.26, 95% CI 1.05–1.50; p-value for interaction 0.04). Results were similar whether the most recent blood eosinophil count or the mean of blood eosinophil counts was used. In a primary care population, the most recent blood eosinophil count could be used to guide initiation of ICS in COPD patients. We suggest that ICS should be considered in those with higher eosinophils and avoided in those with lower eosinophils (<150 cells·”L⁻Âč).
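    A minimal sketch of the kind of interaction test described above, using a Cox proportional hazards model from the lifelines package on simulated data; the cohort, effect sizes and column names are assumptions for illustration and do not reproduce the study's analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated cohort (not CPRD data): time to first exacerbation, treatment group
# (1 = ICS, 0 = non-ICS bronchodilator) and a high-eosinophil flag (>=150 cells/uL).
rng = np.random.default_rng(7)
n = 4000
ics = rng.integers(0, 2, n)
high_eos = rng.integers(0, 2, n)
# Assumed hazard structure: ICS raises exacerbation risk mainly in the low-eosinophil group.
log_hr = 0.18 * ics - 0.14 * ics * high_eos
time = rng.exponential(2.0 * np.exp(-log_hr))
event = (time < 3.0).astype(int)              # administrative censoring at 3 years
time = np.minimum(time, 3.0)

df = pd.DataFrame({"time": time, "event": event, "ics": ics,
                   "high_eos": high_eos, "ics_x_eos": ics * high_eos})

# Cox model with an ICS-by-eosinophil interaction term; its p-value tests whether the
# hazard ratio for ICS initiation differs between high- and low-eosinophil strata.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```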