Epidemiology, patterns of care, and mortality for patients with acute respiratory distress syndrome in intensive care units in 50 countries
IMPORTANCE: Limited information exists about the epidemiology, recognition, management, and outcomes of patients with the acute respiratory distress syndrome (ARDS).
OBJECTIVES: To evaluate intensive care unit (ICU) incidence and outcome of ARDS and to assess clinician recognition, ventilation management, and use of adjuncts (for example, prone positioning) in routine clinical practice for patients fulfilling the ARDS Berlin Definition.
DESIGN, SETTING, AND PARTICIPANTS: The Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) was an international, multicenter, prospective cohort study of patients undergoing invasive or noninvasive ventilation, conducted during 4 consecutive weeks in the winter of 2014 in a convenience sample of 459 ICUs from 50 countries across 5 continents.
EXPOSURES: Acute respiratory distress syndrome.
MAIN OUTCOMES AND MEASURES: The primary outcome was ICU incidence of ARDS. Secondary outcomes included assessment of clinician recognition of ARDS, the application of ventilatory management, the use of adjunctive interventions in routine clinical practice, and clinical outcomes from ARDS.
RESULTS: Of 29,144 patients admitted to participating ICUs, 3022 (10.4%) fulfilled ARDS criteria. Of these, 2377 developed ARDS in the first 48 hours and had their respiratory failure managed with invasive mechanical ventilation. The period prevalence of mild ARDS was 30.0% (95% CI, 28.2%-31.9%); of moderate ARDS, 46.6% (95% CI, 44.5%-48.6%); and of severe ARDS, 23.4% (95% CI, 21.7%-25.2%). ARDS represented 0.42 cases per ICU bed over 4 weeks, 10.4% (95% CI, 10.0%-10.7%) of ICU admissions, and 23.4% of patients requiring mechanical ventilation. Clinical recognition of ARDS ranged from 51.3% (95% CI, 47.5%-55.0%) in mild to 78.5% (95% CI, 74.8%-81.8%) in severe ARDS. Less than two-thirds of patients with ARDS received a tidal volume of 8 mL/kg or less of predicted body weight. Plateau pressure was measured in 40.1% (95% CI, 38.2%-42.1%), whereas 82.6% (95% CI, 81.0%-84.1%) received a positive end-expiratory pressure (PEEP) of less than 12 cm H2O. Prone positioning was used in 16.3% (95% CI, 13.7%-19.2%) of patients with severe ARDS. Clinician recognition of ARDS was associated with higher PEEP, greater use of neuromuscular blockade, and prone positioning. Hospital mortality was 34.9% (95% CI, 31.4%-38.5%) for those with mild, 40.3% (95% CI, 37.4%-43.3%) for those with moderate, and 46.1% (95% CI, 41.9%-50.4%) for those with severe ARDS.
CONCLUSIONS AND RELEVANCE: Among ICUs in 50 countries, the period prevalence of ARDS was 10.4% of ICU admissions. The syndrome appeared to be underrecognized and undertreated, and was associated with a high mortality rate. These findings indicate the potential for improvement in the management of patients with ARDS.
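As a point of reference, the mild/moderate/severe strata reported above can be sketched as a simple classifier. The thresholds (PaO2/FIO2 in mmHg, assessed at PEEP or CPAP of at least 5 cm H2O) come from the Berlin definition itself, not from this study's analysis code, and the function name is illustrative:

```python
def berlin_severity(pf_ratio_mmhg: float, peep_cmh2o: float) -> str:
    """Classify ARDS severity per the Berlin definition (sketch).

    pf_ratio_mmhg: PaO2/FIO2 ratio in mmHg.
    peep_cmh2o: applied PEEP (or CPAP); Berlin requires >= 5 cm H2O.
    """
    if peep_cmh2o < 5:
        raise ValueError("Berlin definition requires PEEP/CPAP >= 5 cm H2O")
    if pf_ratio_mmhg > 300:
        return "not ARDS by oxygenation criterion"
    if pf_ratio_mmhg > 200:
        return "mild"
    if pf_ratio_mmhg > 100:
        return "moderate"
    return "severe"
```

For example, a patient with a PaO2/FIO2 of 150 mmHg on 8 cm H2O of PEEP would fall into the moderate stratum.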
Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome: Insights from the LUNG SAFE study
Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study. Methods: In this secondary analysis of the LUNG SAFE study, we determined the prevalence of, and the outcomes associated with, hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled ARDS criteria on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained (i.e., present on day 1 and day 2) hyperoxemia, or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia). Results: Of the 2005 patients who met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) of the 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared to 39% in a propensity-matched sample of normoxemic (PaO2 55-100 mmHg) patients (P = 0.47). Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often non-sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort. Trial registration: LUNG SAFE is registered with ClinicalTrials.gov, NCT02010073.
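The day-1 and day-2 categorization rules stated above (hypoxemia PaO2 &lt; 55 mmHg, hyperoxemia PaO2 > 100 mmHg, sustained hyperoxemia present on both days, excess oxygen use FIO2 ≥ 0.60 during hyperoxemia) can be sketched as follows; the function and field names are illustrative, not taken from the study's analysis code:

```python
def categorize_oxygenation(pao2_day1: float, pao2_day2: float,
                           fio2_day1: float) -> dict:
    """Apply the LUNG SAFE secondary-analysis oxygenation categories (sketch).

    Hyperoxemia: PaO2 > 100 mmHg; hypoxemia: PaO2 < 55 mmHg;
    sustained hyperoxemia: hyperoxemic on both day 1 and day 2;
    excess oxygen use: FIO2 >= 0.60 while hyperoxemic.
    """
    hyperoxemia_d1 = pao2_day1 > 100
    return {
        "hypoxemia_day1": pao2_day1 < 55,
        "hyperoxemia_day1": hyperoxemia_d1,
        "sustained_hyperoxemia": hyperoxemia_d1 and pao2_day2 > 100,
        "excess_oxygen_use": hyperoxemia_d1 and fio2_day1 >= 0.60,
    }
```

A patient with a day-1 PaO2 of 120 mmHg on an FIO2 of 0.70 would count as both hyperoxemic and receiving excess oxygen under these definitions.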
Immunocompromised patients with acute respiratory distress syndrome : Secondary analysis of the LUNG SAFE database
Background: The aim of this study was to describe data on epidemiology, ventilatory management, and outcome of acute respiratory distress syndrome (ARDS) in immunocompromised patients. Methods: We performed a post hoc analysis on the cohort of immunocompromised patients enrolled in the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) study. The LUNG SAFE study was an international, prospective study including hypoxemic patients in 459 ICUs from 50 countries across 5 continents. Results: Of 2813 patients with ARDS, 584 (20.8%) were immunocompromised, 38.9% of whom had an unspecified cause. Pneumonia, nonpulmonary sepsis, and noncardiogenic shock were their most common risk factors for ARDS. Hospital mortality was higher in immunocompromised than in immunocompetent patients (52.4% vs 36.2%; p < 0.0001), despite similar severity of ARDS. Decisions regarding limiting life-sustaining measures were significantly more frequent in immunocompromised patients (27.1% vs 18.6%; p < 0.0001). Use of noninvasive ventilation (NIV) as first-line treatment was higher in immunocompromised patients (20.9% vs 15.9%; p = 0.0048), and immunodeficiency remained independently associated with the use of NIV after adjustment for confounders. Forty-eight percent of the patients treated with NIV were intubated, and their mortality was not different from that of the patients invasively ventilated ab initio. Conclusions: Immunosuppression is frequent in patients with ARDS, and infections are the main risk factors for ARDS in these immunocompromised patients. Their management differs from that of immunocompetent patients, particularly in the greater use of NIV as first-line ventilation strategy. Compared with immunocompetent subjects, they have higher mortality regardless of ARDS severity as well as a higher frequency of limitation of life-sustaining measures. Nonetheless, nearly half of these patients survive to hospital discharge.
Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013
Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis
Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS. Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables, and the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort).
In the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable. Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90-0·95) in EARLI and 0·88 (0·84-0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81-0·94] vs 0·92 [0·88-0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory subphenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction between PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group of patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group). Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated. Funding: US National Institutes of Health and European Society of Intensive Care Medicine.
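The subphenotype-assignment step described above reduces, at prediction time, to thresholding the classifier's predicted probability at the 0·5 cutoff the study used. A minimal sketch, assuming the model outputs the probability of the hyperinflammatory class (the model's actual predictors and coefficients are not reproduced here):

```python
def assign_subphenotype(prob_hyperinflammatory: float,
                        cutoff: float = 0.5) -> str:
    """Assign an ARDS subphenotype from a fitted classifier's predicted
    probability of the hyperinflammatory class, using the 0.5 cutoff the
    study applied when computing sensitivity, specificity, and accuracy."""
    if not 0.0 <= prob_hyperinflammatory <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return ("hyperinflammatory" if prob_hyperinflammatory >= cutoff
            else "hypoinflammatory")
```

Against the LCA-derived labels, sensitivity and specificity are then computed from agreement between these threshold-based assignments and the gold-standard classes.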
Epidemiology of intra-abdominal infection and sepsis in critically ill patients: "AbSeS", a multinational observational cohort study and ESICM Trials Group Project
Purpose
To describe the epidemiology of intra-abdominal infection in an international cohort of ICU patients according to a new system that classifies cases according to setting of infection acquisition (community-acquired, early onset hospital-acquired, and late-onset hospital-acquired), anatomical disruption (absent or present with localized or diffuse peritonitis), and severity of disease expression (infection, sepsis, and septic shock).
Methods
We performed a multicenter (n = 309), observational, epidemiological study including adult ICU patients diagnosed with intra-abdominal infection. Risk factors for mortality were assessed by logistic regression analysis.
Results
The cohort included 2621 patients. Setting of infection acquisition was community-acquired in 31.6%, early onset hospital-acquired in 25%, and late-onset hospital-acquired in 43.4% of patients. Overall prevalence of antimicrobial resistance was 26.3% and difficult-to-treat resistant Gram-negative bacteria 4.3%, with great variation according to geographic region. No difference in prevalence of antimicrobial resistance was observed according to setting of infection acquisition. Overall mortality was 29.1%. Independent risk factors for mortality included late-onset hospital-acquired infection, diffuse peritonitis, sepsis, septic shock, older age, malnutrition, liver failure, congestive heart failure, antimicrobial resistance (either methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, extended-spectrum beta-lactamase-producing Gram-negative bacteria, or carbapenem-resistant Gram-negative bacteria) and source control failure evidenced by either the need for surgical revision or persistent inflammation.
Conclusion
This multinational, heterogeneous cohort of ICU patients with intra-abdominal infection revealed that setting of infection acquisition, anatomical disruption, and severity of disease expression are disease-specific phenotypic characteristics associated with outcome, irrespective of the type of infection. Antimicrobial resistance is as common in community-acquired as in hospital-acquired infection.
Antimicrobial Lessons From a Large Observational Cohort on Intra-abdominal Infections in Intensive Care Units
Severe intra-abdominal infection commonly requires intensive care. Mortality is high and is mainly determined by disease-specific characteristics, i.e. setting of infection onset, anatomical barrier disruption, and severity of disease expression. Recent observations revealed that antimicrobial resistance appears equally common in community-acquired and late-onset hospital-acquired infection. This challenges basic principles in anti-infective therapy guidelines, including the paradigm that pathogens involved in community-acquired infection are covered by standard empiric antimicrobial regimens, and second, the concept of nosocomial acquisition as the main driver for resistance involvement. In this study, we report on resistance profiles of Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Enterococcus faecalis and Enterococcus faecium in distinct European geographic regions based on an observational cohort study on intra-abdominal infections in intensive care unit (ICU) patients. Resistance against aminopenicillins, fluoroquinolones, and third-generation cephalosporins in E. coli, K. pneumoniae and P. aeruginosa is problematic, as is carbapenem-resistance in the latter pathogen. For E. coli and K. pneumoniae, resistance is mainly an issue in Central Europe, Eastern and South-East Europe, and Southern Europe, while resistance in P. aeruginosa is additionally problematic in Western Europe. Vancomycin-resistance in E. faecalis is of lesser concern but requires vigilance in E. faecium in Central and Eastern and South-East Europe. In the subcohort of patients with secondary peritonitis presenting with either sepsis or septic shock, the appropriateness of empiric antimicrobial therapy was not associated with mortality. In contrast, failure of source control was strongly associated with mortality.
The relevance of these new insights for future recommendations regarding empiric antimicrobial therapy in intra-abdominal infections is discussed.
Poor timing and failure of source control are risk factors for mortality in critically ill patients with secondary peritonitis
Purpose: To describe data on the epidemiology, microbiology, clinical characteristics, and outcome of adult patients admitted to the intensive care unit (ICU) with secondary peritonitis, with special emphasis on antimicrobial therapy and source control.
Methods: Post hoc analysis of a multicenter observational study (Abdominal Sepsis Study, AbSeS) including 2621 adult ICU patients with intra-abdominal infection in 306 ICUs from 42 countries. Time till source-control intervention was calculated from the time of diagnosis and classified as 'emergency' (< 2 h), 'urgent' (2-6 h), or 'delayed' (> 6 h). Relationships were assessed by logistic regression analysis and reported as odds ratios (OR) with 95% confidence intervals (CI).
Results: The cohort included 1077 cases of microbiologically confirmed secondary peritonitis. Mortality was 29.7%. The rate of appropriate empiric therapy showed no difference between survivors and non-survivors (66.4% vs. 61.3%, p = 0.1). A stepwise increase in mortality was observed with increasing Sequential Organ Failure Assessment (SOFA) scores (19.6% for a value ≤ 4-55.4% for a value > 12, p < 0.001). The highest odds of death were associated with septic shock (OR 3.08 [1.42-7.00]), late-onset hospital-acquired peritonitis (OR 1.71 [1.16-2.52]) and failed source control evidenced by persistent inflammation at day 7 (OR 5.71 [3.99-8.18]). Compared with 'emergency' source control intervention (< 2 h of diagnosis), 'urgent' source control was the only modifiable covariate associated with lower odds of mortality (OR 0.50 [0.34-0.73]).
Conclusion: 'Urgent' and successful source control was associated with improved odds of survival. Appropriateness of empiric antimicrobial treatment did not significantly affect survival, suggesting that source control is more determinative of outcome.
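The time-till-source-control bins defined in the Methods above ('emergency' < 2 h, 'urgent' 2-6 h, 'delayed' > 6 h from diagnosis) can be sketched as a small helper; the function name is illustrative:

```python
def source_control_timing(hours_from_diagnosis: float) -> str:
    """Bin time from diagnosis to source-control intervention as in AbSeS:
    'emergency' (< 2 h), 'urgent' (2-6 h), 'delayed' (> 6 h)."""
    if hours_from_diagnosis < 0:
        raise ValueError("time from diagnosis cannot be negative")
    if hours_from_diagnosis < 2:
        return "emergency"
    if hours_from_diagnosis <= 6:
        return "urgent"
    return "delayed"
```

Under this binning, an intervention 4 hours after diagnosis falls into the 'urgent' category, the one modifiable covariate the analysis associated with lower odds of mortality relative to 'emergency' intervention.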
Outcomes of Patients Presenting with Mild Acute Respiratory Distress Syndrome: Insights from the LUNG SAFE Study
BACKGROUND: Patients with initial mild acute respiratory distress syndrome are often underrecognized and mistakenly considered to have low disease severity and favorable outcomes. They represent a relatively poorly characterized population that was only classified as having acute respiratory distress syndrome in the most recent definition. Our primary objective was to describe the natural course and the factors associated with worsening and mortality in this population. METHODS: This study analyzed patients from the international prospective Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) who had initial mild acute respiratory distress syndrome on the first day of inclusion. Three groups were defined based on the evolution of severity in the first week: "worsening" if moderate or severe acute respiratory distress syndrome criteria were met, "persisting" if mild acute respiratory distress syndrome criteria were the most severe category, and "improving" if patients no longer fulfilled acute respiratory distress syndrome criteria from day 2 onward. RESULTS: Among 580 patients with initial mild acute respiratory distress syndrome, 18% (103 of 580) continuously improved, 36% (210 of 580) had persisting mild acute respiratory distress syndrome, and 46% (267 of 580) worsened in the first week after acute respiratory distress syndrome onset. Global in-hospital mortality was 30% (172 of 576; specifically 10% [10 of 101], 30% [63 of 210], and 37% [99 of 265] for patients with improving, persisting, and worsening acute respiratory distress syndrome, respectively), and the median (interquartile range) duration of mechanical ventilation was 7 (4, 14) days (specifically 3 [2, 5], 7 [4, 14], and 11 [6, 18] days for patients with improving, persisting, and worsening acute respiratory distress syndrome, respectively).
Admissions for trauma or pneumonia, higher nonpulmonary sequential organ failure assessment score, lower partial pressure of arterial oxygen/fraction of inspired oxygen ratio, and higher peak inspiratory pressure were independently associated with worsening. CONCLUSIONS: Most patients with initial mild acute respiratory distress syndrome continue to fulfill acute respiratory distress syndrome criteria in the first week, and nearly half worsen in severity. Their mortality is high, particularly in patients with worsening acute respiratory distress syndrome, emphasizing the need for close attention to this patient population.