The epidemiology of chronic kidney disease in sub-Saharan Africa: a systematic review and meta-analysis
Background Amid rapid urbanisation, the HIV epidemic, and increasing rates of non-communicable diseases, people
in sub-Saharan Africa are especially vulnerable to kidney disease. Little is known about the epidemiology of chronic
kidney disease (CKD) in sub-Saharan Africa, so we did a systematic review and meta-analysis examining the
epidemiology of the disease.
Methods We searched Medline, Embase, and WHO Global Health Library databases for all articles published through
March 29, 2012, and searched the reference lists of retrieved articles. We independently reviewed each study for
quality. We used the inverse-variance random-effects method for meta-analyses of the medium-quality and
high-quality data and explored heterogeneity by comparing CKD burdens across countries, settings (urban or rural),
comorbid disorders (hypertension, diabetes, HIV), CKD definitions, and time.
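As a rough illustration of this pooling step, the sketch below applies inverse-variance random-effects weighting with the DerSimonian-Laird between-study variance estimator; the input numbers are invented and the review's actual software and settings are not specified here.

```python
# Sketch: inverse-variance random-effects pooling (DerSimonian-Laird).
# Study estimates and variances are illustrative, not the review's data.
import numpy as np

est = np.array([0.12, 0.16, 0.14, 0.11])          # study prevalence estimates
var = np.array([0.0004, 0.0009, 0.0005, 0.0003])  # within-study variances

w_fixed = 1.0 / var
theta_fixed = np.sum(w_fixed * est) / np.sum(w_fixed)
q = np.sum(w_fixed * (est - theta_fixed) ** 2)    # Cochran's Q
df = len(est) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                     # DL between-study variance

w = 1.0 / (var + tau2)                            # random-effects weights
pooled = np.sum(w * est) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
i2 = max(0.0, (q - df) / q) * 100                 # I^2 heterogeneity statistic
print(f"pooled prevalence {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f}-{pooled + 1.96*se:.3f}), I2 = {i2:.1f}%")
```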
Findings Overall, we included 90 studies from 96 sites in the review. Study quality was low, with only 18 (20%)
medium-quality studies and three (3%) high-quality studies. We noted moderate heterogeneity between the medium-quality
and high-quality studies (n=21; I²=47·11%, p<0·0009). Measurement of urine protein was the most common
method of determining the presence of kidney disease (62 [69%] studies), but the Cockcroft-Gault formula (22 [24%]
studies) and Modification of Diet in Renal Disease formula (17 [19%] studies) were also used. Most of the studies
were done in urban settings (83 [93%] studies) and after the year 2000 (57 [63%] studies), and we detected no
significant difference in the prevalence of CKD between urban (12·4%, 95% CI 11–14) and rural (16·5%, 13·8–19·6)
settings (p=0·474). The overall prevalence of CKD from the 21 medium-quality and high-quality studies was 13·9%
(95% CI 12·2–15·7).
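For reference, the two estimating equations named above have the following standard published forms (the Cockcroft-Gault creatinine clearance and the 4-variable MDRD study equation); individual studies in the review may have used variant calibrations:

```latex
% Cockcroft-Gault creatinine clearance (mL/min)
\mathrm{CrCl} = \frac{(140 - \mathrm{age}) \times \mathrm{weight\,(kg)}}
                     {72 \times S_\mathrm{Cr}\,(\mathrm{mg/dL})}
                \times 0.85 \ \text{(if female)}

% 4-variable MDRD study equation, eGFR in mL/min/1.73 m^2
\mathrm{eGFR} = 186 \times S_\mathrm{Cr}^{-1.154} \times \mathrm{age}^{-0.203}
                \times 0.742 \ \text{(if female)} \times 1.212 \ \text{(if Black)}
```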
Interpretation In sub-Saharan Africa, CKD is a substantial health burden with risk factors that include communicable
and non-communicable diseases. However, poor data quality limits inferences and draws attention to the need for
more information and validated measures of kidney function, especially in the context of the growing burden of
non-communicable diseases.
Absence of topological Hall effect in FeRh epitaxial films: revisiting their phase diagram
A series of FeRh films with varying Fe-content were epitaxially
grown by magnetron sputtering and systematically studied by
magnetization, electrical-resistivity, and Hall-resistivity measurements.
After optimizing the growth conditions, phase-pure FeRh films
were obtained, and their magnetic phase diagram was revisited. In bulk
FeRh alloys, the ferromagnetic (FM) to antiferromagnetic (AFM) transition
is limited to a narrow range of Fe-contents. By contrast, the FM-AFM
transition in the FeRh films extends over a much wider range, covering
Fe-contents between 33% and 53%, with a critical temperature that decreases
slightly with increasing Fe-content. The resistivity jump and magnetization
drop at the FM-AFM transition are much more pronounced in the FeRh films
with 50% Fe-content than in the Fe-deficient films, the latter containing a
large amount of paramagnetic phase. The magnetoresistivity (MR) is rather
weak and positive in the AFM state, becomes negative once the FM phase
emerges, and a giant MR appears in the
mixed FM- and AFM states. The Hall resistivity is dominated by the ordinary
Hall effect in the AFM state, while in the mixed state or high-temperature FM
state, the anomalous Hall effect takes over. The absence of topological Hall
resistivity in FeRh films with various Fe-contents implies that
the previously observed topological Hall effect is most likely extrinsic. We
propose that the anomalous Hall effect caused by the FM iron moments at the
interfaces nicely explains the hump-like anomaly in the Hall resistivity. Our
systematic investigations may offer valuable insights into the spintronics
based on iron-rhodium alloys.
Comment: 9 pages, 10 figures; accepted by Phys. Rev.
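For context, such analyses typically decompose the measured Hall resistivity into ordinary, anomalous, and (putative) topological contributions; in standard notation (not necessarily the authors' exact convention):

```latex
% Standard Hall resistivity decomposition
\rho_{xy} = \underbrace{R_0 \mu_0 H}_{\text{ordinary}}
          + \underbrace{R_s M}_{\text{anomalous}}
          + \underbrace{\rho_{xy}^{\mathrm{THE}}}_{\text{topological}}
```

The reported absence of a topological term means the measured signal is accounted for by the first two contributions, with the hump-like anomaly attributed to an interfacial anomalous Hall signal rather than to the topological term.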
Can markers of disease severity improve the predictive power of claims-based multimorbidity indices?
BACKGROUND: Claims-based measures of multimorbidity, which evaluate the presence of a defined list of diseases, are limited in their ability to predict future outcomes. We evaluated whether claims-based markers of disease severity could improve assessments of multimorbid burden. METHODS: We developed 7 dichotomous markers of disease severity that could be applied to a range of diseases using claims data. These markers were based on the number of disease-associated outpatient visits, emergency department visits, and hospitalizations made by an individual over a defined interval; whether an individual with a given disease had outpatient visits to a specialist who typically treats that disease; and ICD-9 codes that connote more versus less advanced or symptomatic manifestations of a disease. Using Medicare claims linked with Health and Retirement Study data, we tested whether including these markers improved the ability to predict decline in activities of daily living (ADL), decline in instrumental activities of daily living (IADL), hospitalization, and death compared to equivalent models that included only the presence or absence of diseases. RESULTS: Of 5012 subjects, the median age was 76 years and 58% were female. For a majority of diseases tested individually, adding each of the 7 severity markers yielded minimal increases in the c-statistic (≤0.002) for the outcomes of ADL decline and mortality compared to models considering only the presence versus absence of disease. Gains in predictive power were more substantial for a small number of individual diseases. Inclusion of the most promising marker in multi-disease multimorbidity indices yielded minimal gains in c-statistics (<0.001-0.007) for predicting ADL decline, IADL decline, hospitalization, and death compared to indices without these markers. CONCLUSIONS: Claims-based markers of disease severity did not contribute meaningfully to the ability of multimorbidity indices to predict ADL decline, mortality, and other important outcomes.
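As a rough sketch of how such dichotomous markers might be derived from claims data (the field names and thresholds are hypothetical, and only five of the study's seven markers are illustrated):

```python
# Sketch of claims-derived dichotomous severity markers for one disease.
# Field names and thresholds are illustrative, not the study's specification.
from dataclasses import dataclass

@dataclass
class DiseaseClaims:
    outpatient_visits: int   # disease-associated outpatient visits in interval
    ed_visits: int           # disease-associated emergency department visits
    hospitalizations: int    # disease-associated hospitalizations
    saw_specialist: bool     # visit to a specialist who treats this disease
    advanced_icd9: bool      # ICD-9 code connoting advanced/symptomatic disease

def severity_markers(c: DiseaseClaims, visit_threshold: int = 2) -> dict:
    """Return dichotomous (0/1) severity markers for one disease."""
    return {
        "high_outpatient_use": int(c.outpatient_visits >= visit_threshold),
        "any_ed_visit": int(c.ed_visits >= 1),
        "any_hospitalization": int(c.hospitalizations >= 1),
        "specialist_care": int(c.saw_specialist),
        "advanced_manifestation": int(c.advanced_icd9),
    }

print(severity_markers(DiseaseClaims(3, 0, 1, True, False)))
```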
Fingerstick glucose monitoring by cognitive impairment status in Veterans Affairs nursing home residents with diabetes.
BACKGROUND: Guidelines recommend that nursing home (NH) residents with cognitive impairment receive less intensive glycemic treatment and less frequent fingerstick monitoring. Our objective was to determine whether current practice aligns with guideline recommendations by examining fingerstick frequency in Veterans Affairs (VA) NH residents with diabetes across cognitive impairment levels. METHODS: We identified VA NH residents with diabetes aged ≥65 residing in VA NHs for >30 days between 2016 and 2019. Residents were grouped by cognitive impairment status based on the Cognitive Function Scale: cognitively intact, mild impairment, moderate impairment, and severe impairment. We also categorized residents into mutually exclusive glucose-lowering medication (GLM) categories: (1) no GLMs, (2) metformin only, (3) sulfonylureas/other GLMs (+/- metformin but no insulin), (4) long-acting insulin (+/- oral/other GLMs but no short-acting insulin), and (5) any short-acting insulin. Our outcome was the mean number of daily fingersticks on day 31 of NH admission. RESULTS: Among 13,637 NH residents, the mean age was 75 years and the mean hemoglobin A1c was 7.0%. The percentage of NH residents on short-acting insulin varied by cognitive status, from 22.7% among residents with severe cognitive impairment to 33.9% among residents who were cognitively intact. The overall mean number of daily fingersticks on day 31 was 1.50 (standard deviation = 1.73). Mean fingersticks varied far more across GLM categories than across cognitive status: they ranged widely across GLM categories, from 0.39 per day (no GLMs) to 3.08 (short-acting insulin), but only slightly across levels of cognitive impairment, from 1.11 (severe cognitive impairment) to 1.59 (cognitively intact). CONCLUSION: NH residents receive frequent fingersticks regardless of level of cognitive impairment, suggesting that cognitive status is a minor consideration in monitoring decisions. Future studies should determine whether decreasing fingersticks in NH residents with moderate/severe cognitive impairment can reduce burdens without compromising safety.
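As a compact illustration of the mutually exclusive GLM hierarchy described above, the sketch below assigns a resident to exactly one category; the boolean medication flags are hypothetical inputs derived from pharmacy data.

```python
# Sketch: assign a resident to exactly one GLM category (1-5). The
# if-order enforces the mutual exclusivity described in the abstract.
def glm_category(metformin: bool, sulfonylurea_or_other: bool,
                 long_acting_insulin: bool, short_acting_insulin: bool) -> int:
    if short_acting_insulin:
        return 5  # any short-acting insulin
    if long_acting_insulin:
        return 4  # long-acting insulin (+/- oral/other GLMs, no short-acting)
    if sulfonylurea_or_other:
        return 3  # sulfonylureas/other GLMs (+/- metformin, no insulin)
    if metformin:
        return 2  # metformin only
    return 1      # no GLMs

assert glm_category(True, False, True, False) == 4
```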
Glycemic treatment deintensification practices in nursing home residents with type 2 diabetes.
Background: Older nursing home (NH) residents with glycemic overtreatment are at significant risk of hypoglycemia and other harms and may benefit from deintensification. However, little is known about deintensification practices in this setting. Methods: We conducted a cohort study from January 1, 2013 to December 31, 2019 among Veterans Affairs (VA) NH residents. Participants were VA NH residents aged ≥65 with type 2 diabetes, a NH length of stay (LOS) ≥30 days, and an HbA1c result during their NH stay. We defined overtreatment as HbA1c <6.5% with any insulin use, and potential overtreatment as HbA1c <7.5% with any insulin use or HbA1c <6.5% on any glucose-lowering medication (GLM) other than metformin alone. Our primary outcome was continued glycemic overtreatment without deintensification 14 days after the index HbA1c. Results: Of the 7422 included residents, 17% met criteria for overtreatment and an additional 23% met criteria for potential overtreatment. Among residents overtreated and potentially overtreated at baseline, 27% and 19%, respectively, had their medication regimens deintensified (73% and 81%, respectively, continued to be overtreated). Long-acting insulin use and hyperglycemia ≥300 mg/dL before the index HbA1c were associated with increased odds of continued overtreatment (odds ratio [OR] 1.37, 95% confidence interval [CI] 1.14-1.65 and OR 1.35, 95% CI 1.10-1.66, respectively). Severe functional impairment (MDS-ADL score ≥19) was associated with decreased odds of continued overtreatment (OR 0.72, 95% CI 0.56-0.95). Hypoglycemia was not associated with decreased odds of overtreatment. Conclusions: Overtreatment of diabetes in NH residents is common, and only a minority of residents have their medication regimens appropriately deintensified. Deprescribing initiatives targeting residents at high risk of harm and with low likelihood of benefit, such as those with a history of hypoglycemia or high levels of cognitive or functional impairment, are most likely to identify the NH residents who would benefit most from deintensification.
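A minimal sketch of the overtreatment and potential-overtreatment definitions above, assuming HbA1c in percent and hypothetical medication flags:

```python
# Sketch of the study's definitions; checks run from most to least specific,
# so each resident falls into exactly one class.
def classify_treatment(hba1c: float, any_insulin: bool,
                       any_glm: bool, metformin_only: bool) -> str:
    if hba1c < 6.5 and any_insulin:
        return "overtreatment"
    if hba1c < 7.5 and any_insulin:
        return "potential overtreatment"
    if hba1c < 6.5 and any_glm and not metformin_only:
        return "potential overtreatment"
    return "not overtreated"

assert classify_treatment(6.2, True, True, False) == "overtreatment"
assert classify_treatment(6.2, False, True, False) == "potential overtreatment"
```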
Predicting Life Expectancy to Target Cancer Screening Using Electronic Health Record Clinical Data
Background: Guidelines recommend breast and colorectal cancer screening for older adults with a life expectancy >10 years. Most mortality indexes require clinician data entry, presenting a barrier for routine use in care. Electronic health records (EHR) are a rich clinical data source that could be used to create individualized life expectancy predictions to identify patients for cancer screening without data entry. Objective: To develop and internally validate a life expectancy calculator from structured EHR data. Design: Retrospective cohort study using national Veterans Affairs (VA) EHR databases. Patients: Veterans aged 50+ with a primary care visit during 2005. Main measures: We assessed demographics, diseases, medications, laboratory results, healthcare utilization, and vital signs 1 year prior to the index visit. Mortality follow-up was complete through 2017. Using the development cohort (80% sample), we used LASSO Cox regression to select ~100 predictors from 913 EHR data elements. In the validation cohort (the remaining 20% sample), we calculated the integrated area under the curve (iAUC) and evaluated calibration. Key results: Among 3,705,122 patients, the mean age was 68 years and the majority were male (97%) and white (85%); nearly half (49%) died. The life expectancy calculator included 93 predictors; age and gender contributed most strongly to discrimination; diseases also contributed significantly, while vital signs were negligible. The iAUC was 0.816 (95% confidence interval, 0.815, 0.817) with good calibration. Conclusions: We developed a life expectancy calculator using VA EHR data with excellent discrimination and calibration. Automated life expectancy prediction using EHR data may improve guideline-concordant breast and colorectal cancer screening by identifying patients with a life expectancy >10 years.
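A minimal sketch of LASSO-penalized Cox regression for predictor selection, using synthetic data and the lifelines library (the study's actual software, data, and tuning are not specified here):

```python
# Sketch: LASSO-penalized Cox regression for predictor selection. Synthetic
# stand-in for the real task of screening 913 EHR elements in ~3.7M patients.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 1000, 20
df = pd.DataFrame(rng.normal(size=(n, p)),
                  columns=[f"ehr_{i}" for i in range(p)])
df["duration"] = rng.exponential(10.0, size=n)  # follow-up time, years
df["event"] = rng.integers(0, 2, size=n)        # 1 = died during follow-up

# l1_ratio=1.0 makes the elastic-net penalty a pure L1 (LASSO) penalty,
# shrinking uninformative coefficients toward zero.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="duration", event_col="event")
selected = cph.params_[cph.params_.abs() > 1e-4].index.tolist()
print(f"{len(selected)} of {p} predictors retained")
```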
Development and validation of novel multimorbidity indices for older adults.
BACKGROUND: Measuring multimorbidity in claims data is used for risk adjustment and for identifying populations at high risk for adverse events. Multimorbidity indices such as the Charlson and Elixhauser scores have important limitations. We sought to create a better method of measuring multimorbidity using claims data by incorporating geriatric conditions, markers of disease severity, and disease-disease interactions, and by tailoring measures to different outcomes. METHODS: Health conditions were assessed using Medicare inpatient and outpatient claims from subjects aged 67 and older in the Health and Retirement Study. Separate indices were developed for ADL decline, IADL decline, hospitalization, and death, each over 2 years of follow-up. We validated these indices using data from Medicare claims linked to the National Health and Aging Trends Study. RESULTS: The development cohort included 5012 subjects with a median age of 76 years; 58% were female. Claims-based markers of disease severity and disease-disease interactions yielded minimal gains in predictive power and were not included in the final indices. In the validation cohort, after adjusting for age and sex, c-statistics for the new multimorbidity indices were 0.72 for ADL decline, 0.69 for IADL decline, 0.72 for hospitalization, and 0.77 for death. These c-statistics were 0.02-0.03 higher than c-statistics from the Charlson and Elixhauser indices for predicting ADL decline, IADL decline, and hospitalization, and <0.01 higher for death (p < 0.05 for each outcome except death), and were similar to those from the CMS-HCC model. On decision curve analysis, the new indices provided minimal benefit compared with legacy approaches. C-statistics for both new and legacy indices varied substantially across derivation and validation cohorts. CONCLUSIONS: A new series of claims-based multimorbidity measures was modestly better at predicting hospitalization and functional decline than several legacy indices, and no better at predicting death. There may be limited opportunity in claims data to measure multimorbidity better than older methods.
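For a binary outcome such as 2-year ADL decline, the c-statistic reported above is equivalent to the area under the ROC curve; a minimal sketch with synthetic data (scikit-learn assumed available):

```python
# Sketch: for a binary outcome, the c-statistic equals the ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
index_score = rng.normal(size=500)  # multimorbidity index values
# Synthetic outcome loosely correlated with the index (1 = ADL decline)
outcome = (index_score + rng.normal(size=500) > 0.5).astype(int)

c_statistic = roc_auc_score(outcome, index_score)
print(f"c-statistic: {c_statistic:.2f}")
```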