
    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset of point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available for download from GitHub, through an online API, and through R packages.
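    The Hub's standardized submission format makes the forecasts straightforward to work with programmatically. The Python sketch below shows one way a simple median ensemble of point forecasts could be built from submission files; the file names are hypothetical, and the column names (target, location, type, value) are assumed to follow the Hub's documented format.

        # Minimal sketch: combine point forecasts from several Forecast Hub-style
        # submission CSVs into a per-location median ensemble. File paths are
        # illustrative; column names assume the Hub's standardized format.
        import pandas as pd

        def load_point_forecasts(csv_path: str, target: str) -> pd.DataFrame:
            """Return point-forecast rows for one target, e.g. '1 wk ahead inc death'."""
            df = pd.read_csv(csv_path, dtype={"location": str})
            return df[(df["type"] == "point") & (df["target"] == target)]

        def median_ensemble(frames: list) -> pd.Series:
            """Take the per-location median across several models' point forecasts."""
            combined = pd.concat(frames, ignore_index=True)
            return combined.groupby("location")["value"].median()

        # Usage with two hypothetical submission files:
        # ensemble = median_ensemble([
        #     load_point_forecasts("TeamA-ModelX.csv", "1 wk ahead inc death"),
        #     load_point_forecasts("TeamB-ModelY.csv", "1 wk ahead inc death"),
        # ])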

    Agreement between dried blood spots and HemoCue in Tamil Nadu, India

    India retains the world’s largest burden of anemia despite decades of economic growth and anemia prevention programming. Accurate screening and estimates of anemia prevalence are critical for successful anemia control. Evidence is mixed on the performance of HemoCue, the point-of-care testing device most widely used for large-scale surveys. The use of dried blood spots (DBS) to assess hemoglobin (Hb) concentration is a potential alternative, particularly in field settings. The objective of this study was to assess agreement in Hb measurement between capillary HemoCue and DBS in two age groups: children 6–59 months and females aged 12–40 years. We analyzed data from the baseline round of a cluster-randomized rice fortification intervention in Cuddalore district of Tamil Nadu, India. Capillary blood was collected from a subset of participants for Hb assessment by HemoCue 301 and DBS methods. We calculated Lin’s concordance correlation coefficient and tested bias by conducting paired t-tests of Hb concentration. Independence of the bias and Hb magnitude was examined visually using Bland–Altman plots and statistically tested by Pearson’s correlation. We assessed differences in anemia classification using McNemar’s test of marginal homogeneity. Concordance between HemoCue and DBS Hb measures was moderate for both children 6–59 months (ρc = 0.67; 95% CI 0.65, 0.71) and females 12–40 years (ρc = 0.67; 95% CI 0.64, 0.69). HemoCue measures were on average 0.06 g/dL higher than DBS for children (95% CI 0.002, 0.12; p = 0.043) and 0.29 g/dL lower than DBS for females (95% CI −0.34, −0.23; p < 0.0001). Anemia was classified in 50% and 56% of children by HemoCue and DBS, respectively (p < 0.0001), and in 55% and 47% of females, respectively (p < 0.0001). There is moderate statistical agreement of Hb concentration between HemoCue and DBS for both age groups. The choice of Hb assessment method has important implications for individual anemia diagnosis and population prevalence estimates. Further research is required to understand factors that influence the accuracy and reliability of DBS as a methodology for Hb assessment.
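    The agreement statistics described above are straightforward to reproduce on paired measurements. The following Python sketch, using synthetic Hb values rather than the study data, illustrates Lin's concordance correlation coefficient, the paired t-test for bias, and Bland–Altman limits of agreement.

        # Illustrative agreement analysis on synthetic paired Hb values (g/dL);
        # these numbers are not study data.
        import numpy as np
        from scipy import stats

        def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
            """Lin's concordance correlation coefficient for two paired series."""
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()            # population variances
            sxy = np.mean((x - mx) * (y - my))   # population covariance
            return 2 * sxy / (vx + vy + (mx - my) ** 2)

        def bland_altman(x: np.ndarray, y: np.ndarray):
            """Mean difference (bias) and 95% limits of agreement."""
            diff = x - y
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        hemocue = np.array([11.2, 10.8, 12.5, 9.9, 13.1, 10.4, 11.7, 12.0])
        dbs = np.array([11.0, 11.1, 12.2, 10.3, 12.8, 10.9, 11.5, 12.4])

        ccc = lins_ccc(hemocue, dbs)
        t_stat, p_value = stats.ttest_rel(hemocue, dbs)   # paired t-test for bias
        bias, lower_loa, upper_loa = bland_altman(hemocue, dbs)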

    Immunocompromised patients with acute respiratory distress syndrome: Secondary analysis of the LUNG SAFE database

    Background: The aim of this study was to describe data on epidemiology, ventilatory management, and outcome of acute respiratory distress syndrome (ARDS) in immunocompromised patients. Methods: We performed a post hoc analysis of the cohort of immunocompromised patients enrolled in the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE). LUNG SAFE was an international, prospective study including hypoxemic patients in 459 ICUs from 50 countries across 5 continents. Results: Of 2813 patients with ARDS, 584 (20.8%) were immunocompromised, 38.9% of whom had an unspecified cause. Pneumonia, nonpulmonary sepsis, and noncardiogenic shock were their most common risk factors for ARDS. Hospital mortality was higher in immunocompromised than in immunocompetent patients (52.4% vs 36.2%; p < 0.0001), despite similar severity of ARDS. Decisions to limit life-sustaining measures were significantly more frequent in immunocompromised patients (27.1% vs 18.6%; p < 0.0001). Use of noninvasive ventilation (NIV) as first-line treatment was higher in immunocompromised patients (20.9% vs 15.9%; p = 0.0048), and immunodeficiency remained independently associated with the use of NIV after adjustment for confounders. Forty-eight percent of the patients treated with NIV were intubated, and their mortality was not different from that of the patients invasively ventilated ab initio. Conclusions: Immunosuppression is frequent in patients with ARDS, and infections are the main risk factors for ARDS in these immunocompromised patients. Their management differs from that of immunocompetent patients, particularly in the greater use of NIV as a first-line ventilation strategy. Compared with immunocompetent subjects, they have higher mortality regardless of ARDS severity, as well as a higher frequency of limitation of life-sustaining measures. Nonetheless, nearly half of these patients survive to hospital discharge. Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013.
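    The finding that immunodeficiency remained independently associated with first-line NIV use after adjustment for confounders corresponds to a standard multivariable logistic regression. The Python sketch below illustrates that kind of model on synthetic data; the variable names and covariates are assumptions, not the LUNG SAFE analysis dataset.

        # Sketch of an adjusted logistic regression; synthetic data and assumed
        # variable names (not the LUNG SAFE dataset or its covariate set).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "immunocompromised": rng.integers(0, 2, n),
            "age": rng.normal(62, 15, n),
            "pao2_fio2": rng.normal(160, 50, n),
        })
        # Simulate first-line NIV use with a weak dependence on the covariates.
        linpred = -1.0 + 0.4 * df["immunocompromised"] - 0.004 * df["pao2_fio2"]
        df["niv_first_line"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

        model = smf.logit("niv_first_line ~ immunocompromised + age + pao2_fio2",
                          data=df).fit(disp=0)
        adjusted_or = np.exp(model.params)        # adjusted odds ratios
        or_ci = np.exp(model.conf_int())          # 95% CIs on the OR scale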

    Contributory presentations/posters


    Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis

    Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS. Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables, and the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and by assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort). In the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable. Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90–0·95) in EARLI and 0·88 (0·84–0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81–0·94] vs 0·92 [0·88–0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory subphenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction between PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group among patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group). Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated. Funding: US National Institutes of Health and European Society of Intensive Care Medicine.
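    The validation metrics named in the Methods (AUC, then sensitivity, specificity, and accuracy at a 0·5 probability cutoff against the LCA-derived labels) can be computed in a few lines. The Python sketch below uses synthetic probabilities and labels for illustration; it is not the study's pipeline.

        # Sketch: evaluate a subphenotype classifier against LCA-derived labels
        # (1 = hyperinflammatory, 0 = hypoinflammatory). Inputs are synthetic.
        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        def validate_classifier(y_lca: np.ndarray, p_hyper: np.ndarray, cutoff: float = 0.5):
            """AUC plus sensitivity, specificity, and accuracy at a probability cutoff."""
            auc = roc_auc_score(y_lca, p_hyper)
            y_hat = (p_hyper >= cutoff).astype(int)
            tn, fp, fn, tp = confusion_matrix(y_lca, y_hat).ravel()
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            accuracy = (tp + tn) / (tp + tn + fp + fn)
            return auc, sensitivity, specificity, accuracy

        y_lca = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 0])
        p_hyper = np.array([0.81, 0.22, 0.35, 0.66, 0.58, 0.12, 0.49, 0.74, 0.31, 0.55])
        auc, sens, spec, acc = validate_classifier(y_lca, p_hyper)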