Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome : Insights from the LUNG SAFE study
Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study.
Methods: In this secondary analysis of the LUNG SAFE study, we determined the prevalence of, and the outcomes associated with, hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled ARDS criteria on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained (i.e., present on day 1 and day 2) hyperoxemia, or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia).
Results: Of 2005 patients who met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) of the 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared to 39% in a propensity-matched sample of normoxemic (PaO2 55-100 mmHg) patients (P = 0.47).
Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often non-sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort.
Trial registration: LUNG SAFE is registered with ClinicalTrials.gov, NCT02010073.
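The propensity-matched mortality comparison above (42% vs 39%) depends on pairing each excess-FIO2 patient with a normoxemic patient of similar baseline risk. A minimal sketch of 1:1 nearest-neighbour matching without replacement on a precomputed propensity score (the scores, IDs, and caliper below are illustrative; the study's actual matching model is not described here):

```python
def match_nearest(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching without replacement.

    treated, controls: lists of (patient_id, propensity_score) pairs,
    where scores would come from a logistic model of exposure
    (e.g. excess FIO2 use) on baseline covariates.
    Returns a list of (treated_id, control_id) matched pairs.
    """
    available = dict(controls)          # id -> score, still unmatched
    pairs = []
    # match the hardest-to-match (highest-score) treated patients first
    for tid, score in sorted(treated, key=lambda x: -x[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[cid] - score) <= caliper:   # enforce caliper
            pairs.append((tid, cid))
            del available[cid]
    return pairs

# toy example with illustrative scores
treated = [("t1", 0.80), ("t2", 0.55)]
controls = [("c1", 0.78), ("c2", 0.54), ("c3", 0.20)]
print(match_nearest(treated, controls))   # [('t1', 'c1'), ('t2', 'c2')]
```

The caliper discards treated patients with no sufficiently close control, which trades sample size for better covariate balance; outcome rates are then compared within the matched pairs.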
Effects of norepinephrine on tissue perfusion in a sheep model of intra-abdominal hypertension
The aim of the study was to describe the effects of intra-abdominal hypertension (IAH) on regional and microcirculatory intestinal blood flow, renal blood flow, and urine output, as well as their response to increases in blood pressure induced by norepinephrine. This was a pilot, controlled study performed in an animal research laboratory. Twenty-four anesthetized and mechanically ventilated sheep were studied. We measured systemic hemodynamics, superior mesenteric and renal blood flow, villi microcirculation, intramucosal-arterial PCO2, urine output, and intra-abdominal pressure. IAH (20 mm Hg) was generated by intraperitoneal instillation of warmed saline. After 1 h of IAH, sheep were randomized to IAH control (n = 8) or IAH norepinephrine (n = 8) groups for 1 h. In the latter group, mean arterial pressure was increased by about 20 mm Hg with norepinephrine. A sham group (n = 8) was also studied. Fluids were administered to prevent decreases in cardiac output. Differences between groups were analyzed with two-way repeated-measures analysis of variance (ANOVA). After 2 h of IAH, abdominal perfusion pressure decreased in the IAH control group compared to the IAH norepinephrine and sham groups (49 ± 11, 73 ± 11, and 86 ± 15 mm Hg, P < 0.0001). There were no differences in superior mesenteric artery blood flow, intramucosal-arterial PCO2, or villi microcirculation among groups. Renal blood flow (49 ± 30, 32 ± 24, and 102 ± 45 mL.min(-1).kg(-1), P < 0.0001) and urine output (0.3 ± 0.1, 0.2 ± 0.2, and 1.0 ± 0.6 mL.h(-1).kg(-1), P < 0.0001) were decreased in the IAH control and IAH norepinephrine groups compared to the sham group. In this experimental model of IAH, the gut and the kidney had contrasting responses: while intestinal blood flow and villi microcirculation remained unchanged, renal perfusion and urine output were severely compromised.
Immunocompromised patients with acute respiratory distress syndrome: Secondary analysis of the LUNG SAFE database
Background: The aim of this study was to describe data on epidemiology, ventilatory management, and outcome of acute respiratory distress syndrome (ARDS) in immunocompromised patients.
Methods: We performed a post hoc analysis on the cohort of immunocompromised patients enrolled in the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) study. The LUNG SAFE study was an international, prospective study including hypoxemic patients in 459 ICUs from 50 countries across 5 continents.
Results: Of 2813 patients with ARDS, 584 (20.8%) were immunocompromised, 38.9% of whom had an unspecified cause. Pneumonia, nonpulmonary sepsis, and noncardiogenic shock were their most common risk factors for ARDS. Hospital mortality was higher in immunocompromised than in immunocompetent patients (52.4% vs 36.2%; p < 0.0001), despite similar severity of ARDS. Decisions regarding limiting life-sustaining measures were significantly more frequent in immunocompromised patients (27.1% vs 18.6%; p < 0.0001). Use of noninvasive ventilation (NIV) as first-line treatment was higher in immunocompromised patients (20.9% vs 15.9%; p = 0.0048), and immunodeficiency remained independently associated with the use of NIV after adjustment for confounders. Forty-eight percent of the patients treated with NIV were intubated, and their mortality was not different from that of the patients invasively ventilated ab initio.
Conclusions: Immunosuppression is frequent in patients with ARDS, and infections are the main risk factors for ARDS in these immunocompromised patients. Their management differs from that of immunocompetent patients, particularly in the greater use of NIV as first-line ventilation strategy. Compared with immunocompetent subjects, they have higher mortality regardless of ARDS severity, as well as a higher frequency of limitation of life-sustaining measures. Nonetheless, nearly half of these patients survive to hospital discharge.
Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013
Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis
Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS.
Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables; the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and by assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort). In the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable.
Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90-0·95) in EARLI and 0·88 (0·84-0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81-0·94] vs 0·92 [0·88-0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory subphenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction between PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group of patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group).
Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated.
Funding: US National Institutes of Health and European Society of Intensive Care Medicine
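The reported sensitivity, specificity, and accuracy follow directly from the 0·5 probability cutoff described in the Methods, with AUC summarising discrimination across all cutoffs. A minimal sketch of how classifier probabilities would be turned into subphenotype assignments and scored against LCA gold-standard labels (the probabilities and labels below are illustrative, not study data):

```python
def assign_and_score(probs, lca_labels, cutoff=0.5):
    """Assign hyperinflammatory (1) vs hypoinflammatory (0) at a probability
    cutoff and compute sensitivity, specificity, and accuracy against
    LCA-derived gold-standard labels."""
    preds = [1 if p >= cutoff else 0 for p in probs]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, lca_labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, lca_labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, lca_labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, lca_labels))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(probs),
    }

def auc(probs, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive scores above a randomly chosen negative,
    counting ties as half."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# illustrative probabilities for six patients
probs = [0.9, 0.7, 0.4, 0.3, 0.2, 0.6]
lca   = [1,   1,   0,   0,   0,   1]
print(assign_and_score(probs, lca))
print(auc(probs, lca))   # 1.0 here: every positive outranks every negative
```

Note that AUC is cutoff-independent, whereas the sensitivity/specificity trade-off reported for the 0·5 cutoff would shift if a different threshold were chosen.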
Evolution over Time of Ventilatory Management and Outcome of Patients with Neurologic Disease
OBJECTIVES: To describe the changes in ventilator management over time in patients with neurologic disease at ICU admission and to estimate factors associated with 28-day hospital mortality. DESIGN: Secondary analysis of three prospective, observational, multicenter studies. SETTING: Cohort studies conducted in 2004, 2010, and 2016. PATIENTS: Adult patients who received mechanical ventilation for more than 12 hours. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Among the 20,929 patients enrolled, we included 4,152 (20%) patients mechanically ventilated for neurologic disease. Hemorrhagic stroke and brain trauma were the most common pathologies associated with the need for mechanical ventilation. Although volume-cycled ventilation remained the preferred ventilation mode, there was a significant (p < 0.001) increase in the use of pressure support ventilation. The proportion of patients receiving a protective lung ventilation strategy increased over time: 47% in 2004, 63% in 2010, and 65% in 2016 (p < 0.001), as did the duration of protective ventilation strategies: 406 days per 1,000 mechanical ventilation days in 2004, 523 per 1,000 in 2010, and 585 per 1,000 in 2016 (p < 0.001). There were no differences in ICU length of stay, ICU mortality, or hospital mortality from 2004 to 2016. Independent risk factors for 28-day mortality were age greater than 75 years, Simplified Acute Physiology Score II greater than 50, the occurrence of organ dysfunction within the first 48 hours after brain injury, and specific neurologic diseases such as hemorrhagic stroke, ischemic stroke, and brain trauma. CONCLUSIONS: More lung-protective ventilatory strategies have been implemented over the years in neurologic patients, with no effect on pulmonary complications or on survival. Prognostic factors for mortality included advanced age, severity of disease, organ dysfunction, and the etiology of the neurologic disease.