
    Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome: Insights from the LUNG SAFE study

    Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study. Methods: In this secondary analysis of the LUNG SAFE study, we aimed to determine the prevalence and the outcomes associated with hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled criteria for ARDS on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained (i.e., present on day 1 and day 2) hyperoxemia, or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia). Results: Of 2005 patients that met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) of 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared to 39% in a propensity-matched sample of normoxemic (PaO2 55–100 mmHg) patients (P = 0.47). Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often non-sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort. Trial registration: LUNG SAFE is registered with ClinicalTrials.gov, NCT02010073.
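
    The day-1 categories above follow explicit numeric thresholds (PaO2 and FIO2 cutoffs), so a short sketch can make the classification rule concrete. The Python helper below is purely illustrative; the argument names (pao2_mmhg, fio2) are assumptions, not the study's analysis code.

```python
def classify_oxygenation(pao2_mmhg: float, fio2: float) -> dict:
    """Illustrative classification using the thresholds quoted in the abstract.

    Hypothetical helper, not the study's analysis code:
      - hypoxemia:   PaO2 < 55 mmHg
      - normoxemia:  55 <= PaO2 <= 100 mmHg
      - hyperoxemia: PaO2 > 100 mmHg
      - excess oxygen use: FIO2 >= 0.60 while hyperoxemic
    """
    if pao2_mmhg < 55:
        category = "hypoxemia"
    elif pao2_mmhg <= 100:
        category = "normoxemia"
    else:
        category = "hyperoxemia"
    return {
        "category": category,
        "excess_fio2_use": category == "hyperoxemia" and fio2 >= 0.60,
    }

# Example: a hyperoxemic patient still receiving a high FIO2
print(classify_oxygenation(pao2_mmhg=120, fio2=0.7))
# {'category': 'hyperoxemia', 'excess_fio2_use': True}
```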

    Immunocompromised patients with acute respiratory distress syndrome: Secondary analysis of the LUNG SAFE database

    Background: The aim of this study was to describe data on epidemiology, ventilatory management, and outcome of acute respiratory distress syndrome (ARDS) in immunocompromised patients. Methods: We performed a post hoc analysis on the cohort of immunocompromised patients enrolled in the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) study. The LUNG SAFE study was an international, prospective study including hypoxemic patients in 459 ICUs from 50 countries across 5 continents. Results: Of 2813 patients with ARDS, 584 (20.8%) were immunocompromised, 38.9% of whom had an unspecified cause. Pneumonia, nonpulmonary sepsis, and noncardiogenic shock were their most common risk factors for ARDS. Hospital mortality was higher in immunocompromised than in immunocompetent patients (52.4% vs 36.2%; p < 0.0001), despite similar severity of ARDS. Decisions regarding limiting life-sustaining measures were significantly more frequent in immunocompromised patients (27.1% vs 18.6%; p < 0.0001). Use of noninvasive ventilation (NIV) as first-line treatment was higher in immunocompromised patients (20.9% vs 15.9%; p = 0.0048), and immunodeficiency remained independently associated with the use of NIV after adjustment for confounders. Forty-eight percent of the patients treated with NIV were intubated, and their mortality was not different from that of the patients invasively ventilated ab initio. Conclusions: Immunosuppression is frequent in patients with ARDS, and infections are the main risk factors for ARDS in these immunocompromised patients. Their management differs from that of immunocompetent patients, particularly in the greater use of NIV as the first-line ventilation strategy. Compared with immunocompetent subjects, they have higher mortality regardless of ARDS severity, as well as a higher frequency of limitation of life-sustaining measures. Nonetheless, nearly half of these patients survive to hospital discharge. Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013.
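
    The adjusted association reported above (immunodeficiency and first-line NIV use after controlling for confounders) is the kind of estimate a multivariable logistic regression would produce. The sketch below is a generic illustration on synthetic data; the variable names, confounders, and effect sizes are invented and do not come from the LUNG SAFE analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic cohort; all column names and effect sizes are invented for illustration.
df = pd.DataFrame({
    "immunocompromised": rng.binomial(1, 0.2, n),
    "age": rng.normal(62, 12, n),
    "sofa": rng.integers(2, 15, n),
})
logit_p = -2.0 + 0.4 * df["immunocompromised"] + 0.01 * (df["age"] - 62) + 0.05 * df["sofa"]
df["niv_first_line"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Adjusted model: association between immune status and first-line NIV,
# controlling for age and illness severity (stand-ins for the study's confounders).
model = smf.logit("niv_first_line ~ immunocompromised + age + sofa", data=df).fit(disp=0)
print(np.exp(model.params["immunocompromised"]))  # adjusted odds ratio for immunodeficiency
```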

    An immune-based biomarker signature is associated with mortality in COVID-19 patients

    Immune and inflammatory responses to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) contribute to disease severity of coronavirus disease 2019 (COVID-19). However, the utility of specific immune-based biomarkers to predict clinical outcome remains elusive. Here, we analyzed levels of 66 soluble biomarkers in 175 Italian patients with COVID-19 ranging in severity from mild/moderate to critical and assessed type I IFN–, type II IFN–, and NF-κB–dependent whole-blood transcriptional signatures. A broad inflammatory signature was observed, implicating activation of various immune and nonhematopoietic cell subsets. Discordance between IFN-α2a protein and IFNA2 transcript levels in blood suggests that type I IFNs during COVID-19 may be primarily produced by tissue-resident cells. Multivariable analysis of patients’ first samples revealed 12 biomarkers (CCL2, IL-15, soluble ST2 [sST2], NGAL, sTNFRSF1A, ferritin, IL-6, S100A9, MMP-9, IL-2, sVEGFR1, IL-10) that when increased were independently associated with mortality. Multivariate analyses of longitudinal biomarker trajectories identified 8 of the aforementioned biomarkers (IL-15, IL-2, NGAL, CCL2, MMP-9, sTNFRSF1A, sST2, IL-10) and 2 additional biomarkers (lactoferrin, CXCL9) that were substantially associated with mortality when increased, while IL-1α was associated with mortality when decreased. Among these, sST2, sTNFRSF1A, IL-10, and IL-15 were consistently higher throughout the hospitalization in patients who died versus those who recovered, suggesting that these biomarkers may provide an early warning of eventual disease outcome.
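
    As a rough illustration of how a multivariable biomarker-mortality screen of this kind could be set up, the sketch below fits a penalized logistic model on log-transformed marker levels. It is not the authors' pipeline; the data are synthetic, and the marker subset and modelling choices are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
markers = ["IL-6", "IL-10", "sST2", "sTNFRSF1A", "ferritin", "CCL2"]  # illustrative subset

# Synthetic first-sample measurements (pg/mL-like scale) and mortality labels.
X = np.exp(rng.normal(5, 1, size=(200, len(markers))))
risk = 0.8 * np.log(X[:, 2]) + 0.5 * np.log(X[:, 1]) - 8  # here sST2 and IL-10 drive risk
y = rng.binomial(1, 1 / (1 + np.exp(-risk)))

# Log-transform + standardize, then an L2-penalized logistic model as a simple
# stand-in for a multivariable mortality analysis.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(np.log(X), y)

for name, coef in zip(markers, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name:>10}: {coef:+.2f}")  # positive coefficient = higher level, higher odds of death
```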

    II. Algae: Non-planktonic


    The AlpArray Seismic Network: A Large-Scale European Experiment to Image the Alpine Orogen

    The AlpArray programme is a multinational, European consortium to advance our understanding of orogenesis and its relationship to mantle dynamics, plate reorganizations, surface processes and seismic hazard in the Alps-Apennines-Carpathians-Dinarides orogenic system. The AlpArray Seismic Network has been deployed with contributions from 36 institutions from 11 countries to map physical properties of the lithosphere and asthenosphere in 3D and thus to obtain new, high-resolution geophysical images of structures from the surface down to the base of the mantle transition zone. With over 600 broadband stations operated for 2 years, this seismic experiment is one of the largest simultaneously operated seismological networks in the academic domain, employing hexagonal coverage with a station spacing of less than 52 km. This dense and regularly spaced experiment is made possible by the coordinated coeval deployment of temporary stations from numerous national pools, including ocean-bottom seismometers, which were funded by different national agencies. These temporary stations combine with permanent networks, whose operation likewise required the cooperation of many different operators, and together they fill coverage gaps. Following a short overview of previous large-scale seismological experiments in the Alpine region, we here present the goals, construction, deployment, characteristics and data management of the AlpArray Seismic Network, which is expected to provide data of unprecedented quality for imaging the complex Alpine mountain belt at depth.
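
    To get a feel for the quoted figures (around 600 stations at under 52 km hexagonal spacing), the back-of-the-envelope sketch below estimates the area such a grid can tile. It is a geometric illustration only, not part of the AlpArray deployment planning.

```python
import math

def hex_cell_area(spacing_km: float) -> float:
    """Area served by one station in a hexagonal (triangular-lattice) layout.

    For nearest-neighbour spacing d, each station's Voronoi cell is a regular
    hexagon of area (sqrt(3)/2) * d**2.
    """
    return math.sqrt(3) / 2 * spacing_km**2

stations = 600
spacing = 52.0  # km, the upper bound quoted in the abstract

coverage = stations * hex_cell_area(spacing)
print(f"~{coverage:,.0f} km^2 covered by {stations} stations at {spacing:.0f} km spacing")
# On the order of 1.4 million km^2, i.e. the greater Alpine region and its surroundings.
```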

    Using research to prepare for outbreaks of severe acute respiratory infection

    Severe acute respiratory infections (SARI) remain one of the leading causes of mortality around the world in all age groups. There is large global variation in epidemiology, clinical management and outcomes, including mortality. We performed a short-period observational data collection in critical care units distributed globally during regional peak SARI seasons from 1 January 2016 until 31 August 2017, using standardised data collection tools. Data were collected for 1 week on all admitted patients who met the inclusion criteria for SARI, with follow-up to hospital discharge. Proportions of patients across regions were compared for microbiology, management strategies and outcomes. Regions were divided geographically and economically according to World Bank definitions. Data were collected for 682 patients from 95 hospitals and 23 countries. The overall mortality was 9.5%. Of the patients, 21.7% were children, with a case fatality proportion of 1% in those under 5 years of age. The highest mortality was in those above 60 years, at 18.6%. Case fatality varied by region: East Asia and Pacific 10.2% (21 of 206), Sub-Saharan Africa 4.3% (8 of 188), South Asia 0% (0 of 35), North America 13.6% (25 of 184), and Europe and Central Asia 14.3% (9 of 63). Mortality in low-income and low-middle-income countries combined was 4%, compared with 14% in high-income countries. In the 560 patients for whom full data were available, Sequential Organ Failure Assessment (SOFA) scores calculated on presentation were significantly associated with mortality and hospital length of stay. The proportion of patients with SOFA scores >12 was highest in East Asia and Pacific (48%) and North America (24%). Multivariable analysis demonstrated that initial SOFA score and age were independent predictors of hospital survival. There was variability across regions and income groupings in the critical care management and outcomes of SARI. Intensive care unit-specific factors, geography and management features were less reliable than baseline severity for predicting ultimate outcome. These findings may help in planning future outbreak severity assessments, but more globally representative data are required.
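
    The regional case-fatality percentages above follow directly from the numerators and denominators quoted alongside them; the snippet below simply recomputes those proportions (the dictionary holds the abstract's numbers, not the study dataset).

```python
# Case-fatality proportions by region, recomputed from the counts quoted in the abstract.
regions = {
    "East Asia and Pacific":   (21, 206),
    "Sub-Saharan Africa":      (8, 188),
    "South Asia":              (0, 35),
    "North America":           (25, 184),
    "Europe and Central Asia": (9, 63),
}

for region, (deaths, patients) in regions.items():
    print(f"{region:<25} {deaths:>3}/{patients:<3} = {100 * deaths / patients:4.1f}%")
```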

    Weaning from mechanical ventilation in intensive care units across 50 countries (WEAN SAFE): a multicentre, prospective, observational cohort study

    Background: Current management practices and outcomes in weaning from invasive mechanical ventilation are poorly understood. We aimed to describe the epidemiology, management, timings, risk for failure, and outcomes of weaning in patients requiring at least 2 days of invasive mechanical ventilation. Methods: WEAN SAFE was an international, multicentre, prospective, observational cohort study done in 481 intensive care units in 50 countries. Eligible participants were older than 16 years, admitted to a participating intensive care unit, and receiving mechanical ventilation for 2 calendar days or longer. We defined weaning initiation as the first attempt to separate a patient from the ventilator, successful weaning as no reintubation or death within 7 days of extubation, and weaning eligibility criteria based on positive end-expiratory pressure, fractional concentration of oxygen in inspired air, and vasopressors. The primary outcome was the proportion of patients successfully weaned at 90 days. Key secondary outcomes included weaning duration, timing of weaning events, factors associated with weaning delay and weaning failure, and hospital outcomes. This study is registered with ClinicalTrials.gov, NCT03255109. Findings: Between Oct 4, 2017, and June 25, 2018, 10 232 patients were screened for eligibility, of whom 5869 were enrolled. 4523 (77·1%) patients underwent at least one separation attempt and 3817 (65·0%) patients were successfully weaned from ventilation at day 90. 237 (4·0%) patients were transferred before any separation attempt, 153 (2·6%) were transferred after at least one separation attempt and not successfully weaned, and 1662 (28·3%) died while invasively ventilated. The median time from fulfilling weaning eligibility criteria to first separation attempt was 1 day (IQR 0–4), and 1013 (22·4%) patients had a delay in initiating first separation of 5 or more days. Of the 4523 (77·1%) patients with separation attempts, 2927 (64·7%) had a short wean (≤1 day), 457 (10·1%) had intermediate weaning (2–6 days), 433 (9·6%) required prolonged weaning (≥7 days), and 706 (15·6%) had weaning failure. Higher sedation scores were independently associated with delayed initiation of weaning. Delayed initiation of weaning and higher sedation scores were independently associated with weaning failure. 1742 (31·8%) of 5479 patients died in the intensive care unit and 2095 (38·3%) of 5465 patients died in hospital. Interpretation: In critically ill patients receiving at least 2 days of invasive mechanical ventilation, only 65% were weaned at 90 days. A better understanding of factors that delay the weaning process, such as delays in weaning initiation or excessive sedation levels, might improve weaning success rates. Funding: European Society of Intensive Care Medicine, European Respiratory Society.
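
    The weaning-duration categories quoted above (short ≤1 day, intermediate 2–6 days, prolonged ≥7 days, failure) lend themselves to a simple classification rule. The helper below is one plausible reading of those definitions with assumed inputs (dates of the first separation attempt and of successful separation); it simplifies the study's full criteria, which also account for death and reintubation.

```python
from datetime import date
from typing import Optional

def weaning_duration_category(first_attempt: date, separation: Optional[date]) -> str:
    """Illustrative categorisation of weaning duration, loosely following the
    WEAN SAFE categories quoted in the abstract (a simplification, not the
    study's operational definition):
      short        <= 1 day from first attempt to successful separation
      intermediate 2-6 days
      prolonged    >= 7 days
      failure      never successfully separated
    """
    if separation is None:
        return "failure"
    days = (separation - first_attempt).days
    if days <= 1:
        return "short"
    if days <= 6:
        return "intermediate"
    return "prolonged"

print(weaning_duration_category(date(2018, 3, 1), date(2018, 3, 1)))  # short
print(weaning_duration_category(date(2018, 3, 1), date(2018, 3, 5)))  # intermediate
print(weaning_duration_category(date(2018, 3, 1), None))              # failure
```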

    Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis

    Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS. Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables, and the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort). In the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable. Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90–0·95) in EARLI and 0·88 (0·84–0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81–0·94] vs 0·92 [0·88–0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory subphenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction with PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group of patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group). Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated. Funding: US National Institutes of Health and European Society of Intensive Care Medicine.
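
    To make the validation workflow concrete (a clinical-variable classifier scored by AUC, with subphenotypes assigned at a 0·5 probability cutoff), the sketch below trains a generic scikit-learn model on synthetic data. It does not reproduce the published models, variable lists, or cohorts.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Synthetic "clinical variables" (stand-ins for vitals and labs) and LCA-style class
# labels; everything here is invented for illustration.
X = rng.normal(size=(n, 8))
latent = X[:, 0] + 0.8 * X[:, 3] - 0.5 * X[:, 5] + rng.normal(scale=0.5, size=n)
y = (latent > 0.8).astype(int)  # 1 = "hyperinflammatory-like" class

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
prob = clf.predict_proba(X_test)[:, 1]

# AUC measures discrimination; the 0.5 probability cutoff then yields the hard
# assignments from which sensitivity, specificity and accuracy are derived.
print("AUC:", round(roc_auc_score(y_test, prob), 3))
assigned = (prob >= 0.5).astype(int)
print("Accuracy at 0.5 cutoff:", round(accuracy_score(y_test, assigned), 3))
```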