4 research outputs found

    Racial differences in prediabetes prevalence by test type for the US pediatric and adult population: NHANES 1999‐2016

    Full text link
    Background: Previous studies have shown that US estimates of prediabetes or diabetes differ depending on test type, fasting plasma glucose (FPG) vs hemoglobin A1c (HbA1c). Given the age, race, and test differences reported in the literature, we sought to further examine these differences in prediabetes detection using a nationally representative sample.

    Methods: Using the National Health and Nutrition Examination Survey (NHANES) 1999‐2016, individuals were identified as having prediabetes with an HbA1c of 5.7% to 6.4% or an FPG of 100 to 125 mg/dL. We excluded individuals with measurements in the diabetic range. We ran generalized estimating equation logistic regressions to examine the relationship between age, race, and test type with interactions, controlling for sex and body mass index. We compared the difference in predicted prediabetes prevalence detected by impaired fasting glycemia (IFG) vs HbA1c by race/ethnicity among children and adults separately, using adjusted Wald tests.

    Results: The absolute difference in predicted prediabetes detected by IFG vs HbA1c was 19.9% for white adolescents, 0% for black adolescents, and 20.1% for Hispanic adolescents; 21.4% for white adults, −1.2% for black adults, and 19.2% for Hispanic adults. Using adjusted Wald tests, we found the absolute differences between black vs white and black vs Hispanic individuals to be significant, but not those between Hispanic and white individuals, among children and adults separately.

    Conclusions: These observations highlight differences in test performance among racial/ethnic groups. Our findings corroborate the need for further studies to determine appropriate HbA1c cutoff levels for diagnosis of prediabetes by age group and race.

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/163459/2/pedi13083_am.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/163459/1/pedi13083.pd
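    The diagnostic cutoffs stated in the Methods can be expressed as a small decision rule. This is an illustrative sketch, not the study's analysis code; the function name `classify` and the returned labels are hypothetical, and the study excluded (rather than labeled) diabetic-range measurements.

```python
def classify(hba1c_pct, fpg_mg_dl):
    """Glycemic status from the abstract's cutoffs (hypothetical helper).

    Prediabetes: HbA1c 5.7%-6.4% or FPG 100-125 mg/dL.
    Diabetic range (excluded from the study): HbA1c >= 6.5% or FPG >= 126 mg/dL.
    """
    if hba1c_pct >= 6.5 or fpg_mg_dl >= 126:
        return "diabetic range (excluded)"
    if 5.7 <= hba1c_pct <= 6.4 or 100 <= fpg_mg_dl <= 125:
        return "prediabetes"
    return "normoglycemic"
```

    Note that the two criteria are combined with "or": a person can meet the FPG criterion while having a normal HbA1c (or vice versa), which is exactly the test-discordance the study quantifies by race/ethnicity.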

    Contributions of high- and low-quality patches to a metapopulation with stochastic disturbance

    Get PDF
    © The Author(s), 2010. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Theoretical Ecology 5 (2012): 167-179, doi:10.1007/s12080-010-0106-9.

    Studies of time-invariant matrix metapopulation models indicate that metapopulation growth rate is usually more sensitive to the vital rates of individuals in high-quality (i.e., good) patches than in low-quality (i.e., bad) patches. This suggests that, given a choice, management efforts should focus on good rather than bad patches. Here, we examined the sensitivity of metapopulation growth rate for a two-patch matrix metapopulation model with and without stochastic disturbance and found cases where managers can more efficiently increase metapopulation growth rate by focusing efforts on the bad patch. In our model, net reproductive rate differs between the two patches so that, in the absence of dispersal, one patch is high quality and the other low quality. Disturbance, when present, reduces net reproductive rate with equal frequency and intensity in both patches. The stochastic disturbance model gives qualitatively similar results to the deterministic model. In most cases, metapopulation growth rate was more elastic to changes in the net reproductive rate of individuals in the good patch than in the bad patch. However, when the majority of individuals are located in the bad patch, metapopulation growth rate can be most elastic to net reproductive rate in the bad patch. We expand the model to include two stages and parameterize the patches using data for the softshell clam, Mya arenaria. With a two-stage demographic model, the elasticities of metapopulation growth rate to parameters in the bad patch increase, while elasticities to the same parameters in the good patch decrease. Metapopulation growth rate is most elastic to adult survival in the population of the good patch for all scenarios we examine. If the majority of the metapopulation is located in the bad patch, the elasticities to parameters of that population increase but do not surpass the elasticities to parameters in the good patch. This model can be expanded to include additional patches, multiple stages, stochastic dispersal, and complex demography.

    Financial support was provided by the Woods Hole Oceanographic Institution Academic Programs Office; National Science Foundation grants OCE-0326734, OCE-0215905, OCE-0349177, DEB-0235692, DEB-0816514, DMS-0532378, OCE-1031256, and ATM-0428122; and by National Oceanic and Atmospheric Administration National Sea Grant College Program Office, Department of Commerce, under Grant No. NA86RG0075 (Woods Hole Oceanographic Institution Sea Grant Project No. R/0-32) and Grant No. NA16RG2273 (Woods Hole Oceanographic Institution Sea Grant Project No. R/0-35)
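    The elasticity analysis described above can be illustrated with a minimal numerical sketch. The projection matrix below is a hypothetical two-patch model (not the paper's parameterization): diagonal entries are local net reproductive rates coupled by a dispersal fraction d, the metapopulation growth rate is the dominant eigenvalue, and each elasticity is the proportional sensitivity of that eigenvalue to one matrix entry, estimated here by finite differences.

```python
import numpy as np

def growth_rate(A):
    # Metapopulation growth rate = dominant eigenvalue (spectral radius)
    # of the nonnegative projection matrix A.
    return max(abs(np.linalg.eigvals(A)))

def elasticity(A, i, j, h=1e-6):
    # e_ij = (a_ij / lam) * d(lam)/d(a_ij), via a forward finite difference.
    lam = growth_rate(A)
    Ah = A.copy()
    Ah[i, j] += h
    return A[i, j] / lam * (growth_rate(Ah) - lam) / h

# Hypothetical parameters: good-patch rate R1 > bad-patch rate R2,
# with a fraction d of each patch's output dispersing to the other.
R1, R2, d = 1.4, 0.6, 0.1
A = np.array([[(1 - d) * R1, d * R2],
              [d * R1, (1 - d) * R2]])

e_good = elasticity(A, 0, 0)  # elasticity to the good-patch local rate
e_bad = elasticity(A, 1, 1)   # elasticity to the bad-patch local rate
```

    With these illustrative values the growth rate is more elastic to the good patch, matching the usual result; the paper's point is that skewed patch occupancy can reverse that ordering. A standard check is that the elasticities of the dominant eigenvalue to all matrix entries sum to one.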

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    Get PDF
    IMPORTANCE: Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.

    OBJECTIVE: To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.

    DESIGN, SETTING, AND PARTICIPANTS: In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).

    INTERVENTIONS: Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.

    MAIN OUTCOMES AND MEASURES: The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.

    RESULTS: On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).

    CONCLUSIONS AND RELEVANCE: In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.

    TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT0273570

    Effect of Antiplatelet Therapy on Survival and Organ Support–Free Days in Critically Ill Patients With COVID-19

    No full text
    International audience