
    Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome: Insights from the LUNG SAFE study

    Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study.

    Methods: In this secondary analysis of the LUNG SAFE study, we determined the prevalence of, and the outcomes associated with, hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled ARDS criteria on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained hyperoxemia (i.e., present on day 1 and day 2), or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia).

    Results: Of 2005 patients who met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) of the 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared with 39% in a propensity-matched sample of normoxemic (PaO2 55-100 mmHg) patients (P = 0.47).

    Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often not sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort.

    Trial registration: LUNG SAFE is registered with ClinicalTrials.gov, NCT02010073.
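    The propensity-matched mortality comparison reported above (42% vs 39% in matched normoxemic patients) can be illustrated with a minimal propensity-score matching sketch. This is not the LUNG SAFE analysis code: the cohort is synthetic, the covariates and column names (`age`, `pf_ratio`, `excess_fio2`, `died`) are hypothetical, and a real analysis would assess covariate balance and apply a matching caliper.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic cohort; all column names are hypothetical.
n = 2000
df = pd.DataFrame({
    "age": rng.normal(62, 15, n),            # baseline covariate
    "pf_ratio": rng.normal(150, 50, n),      # severity marker (PaO2/FIO2)
    "excess_fio2": rng.integers(0, 2, n),    # exposure: excess oxygen use
    "died": rng.binomial(1, 0.4, n),         # outcome: hospital mortality
})

# 1. Propensity score: probability of exposure given covariates.
covs = ["age", "pf_ratio"]
ps = LogisticRegression().fit(df[covs], df["excess_fio2"])
df["ps"] = ps.predict_proba(df[covs])[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score
#    (with replacement and no caliper, for brevity).
exposed = df[df["excess_fio2"] == 1]
controls = df[df["excess_fio2"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
_, idx = nn.kneighbors(exposed[["ps"]])
matched = controls.iloc[idx.ravel()]

# 3. Compare mortality between exposed patients and matched controls.
print(f"mortality, excess FIO2:      {exposed['died'].mean():.1%}")
print(f"mortality, matched controls: {matched['died'].mean():.1%}")
```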

    Geoeconomic variations in epidemiology, ventilation management, and outcomes in invasively ventilated intensive care unit patients without acute respiratory distress syndrome: a pooled analysis of four observational studies

    Background: Geoeconomic variations in epidemiology, the practice of ventilation, and outcome in invasively ventilated intensive care unit (ICU) patients without acute respiratory distress syndrome (ARDS) remain unexplored. In this analysis we aim to address these gaps using individual patient data from four large observational studies.

    Methods: In this pooled analysis we harmonised individual patient data from the ERICC, LUNG SAFE, PRoVENT, and PRoVENT-iMiC prospective observational studies, which were conducted from June, 2011, to December, 2018, in 534 ICUs in 54 countries. We used the 2016 World Bank classification to define two geoeconomic regions: middle-income countries (MICs) and high-income countries (HICs). ARDS was defined according to the Berlin criteria. Descriptive statistics were used to compare patients in MICs versus HICs. The primary outcome was the use of low tidal volume ventilation (LTVV) for the first 3 days of mechanical ventilation. Secondary outcomes were key ventilation parameters (tidal volume size, positive end-expiratory pressure, fraction of inspired oxygen, peak pressure, plateau pressure, driving pressure, and respiratory rate), patient characteristics, the risk for and actual development of acute respiratory distress syndrome after the first day of ventilation, duration of ventilation, ICU length of stay, and ICU mortality.

    Findings: Of the 7608 patients included in the original studies, this analysis included 3852 patients without ARDS, of whom 2345 were from MICs and 1507 were from HICs. Patients in MICs were younger, shorter, and had a slightly lower body-mass index; they more often had diabetes and active cancer, but less often chronic obstructive pulmonary disease and heart failure, than patients from HICs. Sequential organ failure assessment scores were similar in MICs and HICs. Use of LTVV in MICs and HICs was comparable (42·4% vs 44·2%; absolute difference –1·69 [–9·58 to 6·11]; p=0·67; data available in 3174 [82%] of 3852 patients). The median applied positive end-expiratory pressure was lower in MICs than in HICs (5 [IQR 5–8] vs 6 [5–8] cm H2O; p=0·0011). ICU mortality was higher in MICs than in HICs (30·5% vs 19·9%; p=0·0004; adjusted effect 16·41% [95% CI 9·52–23·52]; p<0·0001) and was inversely associated with gross domestic product (adjusted odds ratio for a US$10 000 increase per capita 0·80 [95% CI 0·75–0·86]; p<0·0001).

    Interpretation: Despite similar disease severity and ventilation management, ICU mortality in patients without ARDS is higher in MICs than in HICs, with a strong association with country-level economic status.

    Funding: No funding.
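    The adjusted odds ratio quoted above (0·80 per US$10 000 increase in per-capita GDP) is the kind of estimate a multivariable logistic model yields. Below is a minimal sketch with statsmodels on synthetic data; the covariates (`gdp_10k`, `age`, `sofa`) are hypothetical stand-ins, not the pooled analysis's actual adjustment set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic patient-level data; GDP is expressed in US$10,000 units so
# the odds ratio reads "per US$10,000 increase per capita".
n = 3852
df = pd.DataFrame({
    "gdp_10k": rng.uniform(0.5, 6.0, n),
    "age": rng.normal(60, 16, n),
    "sofa": rng.integers(0, 15, n),
})
# Simulate mortality that falls as GDP rises (illustration only).
p = 1 / (1 + np.exp(-(0.5 - 0.22 * df["gdp_10k"] + 0.05 * df["sofa"])))
df["icu_death"] = rng.binomial(1, p)

X = sm.add_constant(df[["gdp_10k", "age", "sofa"]])
fit = sm.Logit(df["icu_death"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals.
table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
table.columns = ["OR", "2.5%", "97.5%"]
print(table.loc[["gdp_10k"]])
```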

    Risk of surgical site infection after laparoscopic cholecystectomy compared with the risk after laparotomic cholecystectomy

    Background: Few comparative studies with concurrent controls are available in the literature assessing the risk of surgical site infection (SSI) associated with the laparoscopic approach in patients undergoing cholecystectomy.

    Objectives: To assess the impact of the laparoscopic approach and the contribution of the NNIS (National Nosocomial Infections Surveillance) system's surgical component variables on the risk of overall SSI, incisional infection, and organ/space infection in patients undergoing cholecystectomy.

    Methods: A historical cohort study was conducted using data collected from January 1993 through May 2006 in five healthcare facilities (hereafter, hospitals) in Belo Horizonte, Nova Lima, and Contagem. Participating hospitals were private, medium- to high-complexity, non-university centers. The outcome (i.e., dependent) variable was the development of an SSI within 30 days of the operation. The 1992 CDC (Centers for Disease Control and Prevention) criteria for SSI were adopted as the case definition throughout the study. SSIs were prospectively identified, both during the hospital stay and after discharge. The exposure variable was the surgical approach used for cholecystectomy [i.e., laparoscopic (LC) vs. laparotomic (CC)]. Independent variables were the patient's age and gender, wound class, American Society of Anesthesiologists physical status (ASA-PS) classification, length of operation, type of surgery (elective vs. urgent), main surgeon, additional procedures through the same incision, and the hospital and year ( 2000) of the operation. Binary logistic regression models were fit to assess the net effect of each independent variable on the odds of SSI.

    Results: 6,162 patients met the eligibility criteria, and complete data were available for 5,848 (94.9%). Mean age ± SD was 48.7 ± 14.7 years, and the female-to-male ratio was 2.2:1; 59% of cholecystectomies were laparoscopic. Compared with CC, patients undergoing LC were younger and less likely to have an ASA-PS score > 3, urgent procedures, contaminated or dirty procedures, or additional procedures through the same incision. LCs were shorter in duration. In patients undergoing LC, overall SSI incidence was 3.7% (95% CI = 2.9-4.7%) [3.4% (95% CI = 2.6-4.3%) for incisional infections and 0.3% (95% CI = 0.1-0.7%) for organ/space infections]. For both LC and CC, most infections (> 80%) occurred at the incision. The performance of the NNIS system's surgical component variables as predictors of SSI varied according to the depth of the infection. After controlling for other significant factors, the odds of overall SSI (OR = 0.62; 95% CI = 0.46-0.84) and of incisional infection (OR = 0.56; 95% CI = 0.41-0.79) were lower in patients undergoing LC than in patients undergoing CC. Conversely, no significant reduction was demonstrated for organ/space infection.

    Conclusions: Compared with CC, LC is associated with a lower overall risk of SSI and of incisional infection, but not of organ/space infection. The NNIS system's surgical component variables performed variably as predictors of SSI.
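    The incidence figures above (e.g., overall SSI 3.7%, 95% CI 2.9-4.7%) are binomial proportions with confidence intervals. A short sketch reproduces that style of estimate; the counts below are back-calculated approximations from the published percentages, not the study data.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical counts back-calculated from the published rate (3.7% of
# roughly 3,450 laparoscopic cholecystectomies); illustration only.
events, n = 128, 3450

rate = events / n
lo, hi = proportion_confint(events, n, alpha=0.05, method="beta")  # Clopper-Pearson exact
print(f"SSI incidence: {rate:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```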

    Virologic and immunologic effectiveness of darunavir-based salvage therapy in HIV-1-infected adults in a Brazilian clinical practice setting: results of a multicenter and retrospective cohort study

    Background: Darunavir has been proven efficacious for antiretroviral-experienced HIV-1-infected patients in randomized trials. However, the effectiveness of darunavir-based salvage therapy is understudied in routine care in Brazil.

    Methods: Retrospective cohort study of HIV-1-infected patients from three public referral centers in Belo Horizonte who received darunavir-based therapy between 2008 and 2010, after virologic failure. The primary endpoint was the proportion of patients with a viral load < 50 copies/mL at week 48. Change in CD4 cell count was also evaluated. Outcome measures were analyzed on an intent-to-treat basis applied to observational studies. A sensitivity analysis was conducted to evaluate the impact of missing data at week 48. Predictors of virologic failure were examined using rare-event, finite-sample, bias-corrected logistic regression.

    Results: Among 108 patients, the median age was 44.2 years, and 72.2% were male. They had long-standing HIV-1 infection (median 11.6 years) and advanced disease (76.9% had an AIDS-defining event). All patients had previously received protease inhibitors and nucleoside reverse transcriptase inhibitors, 75% nonnucleoside reverse transcriptase inhibitors, and 4.6% enfuvirtide. The median length of protease inhibitor use was 8.9 years, and 90.8% of patients had prior exposure to unboosted protease inhibitors. The genotypic resistance profile showed a median of three primary protease inhibitor mutations, and 10.2% of patients had three or more darunavir resistance-associated mutations. Virologic success at week 48 was achieved by 78.7% (95% CI = 69.7–86%) of patients, and the mean CD4 cell count increase from baseline was 131.5 cells/μL (95% CI = 103.4–159.6). In multiple logistic regression analysis, higher baseline viral load (RR = 1.04 per 10,000 copies/mL increase; 95% CI = 1.01–1.09) and a higher number of darunavir resistance-associated mutations (RR = 1.23 per mutation; 95% CI = 0.95–1.48) were independently associated with virologic failure.

    Conclusion: Virologic suppression is a realistic endpoint for most treatment-experienced patients who begin darunavir-based therapy outside the controlled conditions of a randomized trial, in routine care settings.
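    The "rare-event, finite-sample, bias-corrected logistic regression" named in the methods is commonly implemented as Firth's penalized-likelihood logistic regression. The sketch below implements Firth's correction in NumPy under that assumption, with synthetic predictors mirroring the abstract (baseline viral load per 10,000 copies/mL and number of darunavir resistance-associated mutations); it illustrates the technique, not the authors' code.

```python
import numpy as np

def firth_logit(X, y, n_iter=100, tol=1e-8):
    """Firth bias-corrected logistic regression via Newton-Raphson.

    Maximizes l(beta) + 0.5*log|I(beta)|, which shrinks the small-sample /
    rare-event bias of ordinary maximum likelihood (Firth, 1993).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        info = X.T @ (X * W[:, None])            # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        # Hat-matrix diagonals: h_i = w_i * x_i' (X'WX)^{-1} x_i
        h = W * np.einsum("ij,jk,ik->i", X, info_inv, X)
        # Firth-adjusted score: (y - p) becomes (y - p + h*(0.5 - p)).
        step = info_inv @ (X.T @ (y - p + h * (0.5 - p)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(2)
n = 108                                 # cohort size from the abstract
vl_per_10k = rng.exponential(5.0, n)    # baseline viral load / 10,000 copies
drv_rams = rng.poisson(1.0, n)          # darunavir resistance-associated mutations
X = np.column_stack([np.ones(n), vl_per_10k, drv_rams])
y = rng.binomial(1, 0.2, n)             # ~20% virologic failure (synthetic)

beta = firth_logit(X, y)
print("odds ratios (viral load, DRV RAMs):", np.exp(beta[1:]))
```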

    Immunocompromised patients with acute respiratory distress syndrome: Secondary analysis of the LUNG SAFE database

    Background: The aim of this study was to describe data on the epidemiology, ventilatory management, and outcome of acute respiratory distress syndrome (ARDS) in immunocompromised patients.

    Methods: We performed a post hoc analysis on the cohort of immunocompromised patients enrolled in the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE) study. The LUNG SAFE study was an international, prospective study including hypoxemic patients in 459 ICUs from 50 countries across 5 continents.

    Results: Of 2813 patients with ARDS, 584 (20.8%) were immunocompromised, 38.9% of whom had an unspecified cause. Pneumonia, nonpulmonary sepsis, and noncardiogenic shock were their most common risk factors for ARDS. Hospital mortality was higher in immunocompromised than in immunocompetent patients (52.4% vs 36.2%; p < 0.0001), despite similar severity of ARDS. Decisions regarding limiting life-sustaining measures were significantly more frequent in immunocompromised patients (27.1% vs 18.6%; p < 0.0001). Use of noninvasive ventilation (NIV) as first-line treatment was higher in immunocompromised patients (20.9% vs 15.9%; p = 0.0048), and immunodeficiency remained independently associated with the use of NIV after adjustment for confounders. Forty-eight percent of the patients treated with NIV were intubated, and their mortality was not different from that of the patients invasively ventilated ab initio.

    Conclusions: Immunosuppression is frequent in patients with ARDS, and infections are the main risk factors for ARDS in these immunocompromised patients. Their management differs from that of immunocompetent patients, particularly in the greater use of NIV as a first-line ventilation strategy. Compared with immunocompetent subjects, they have higher mortality regardless of ARDS severity, as well as a higher frequency of limitation of life-sustaining measures. Nonetheless, nearly half of these patients survive to hospital discharge.

    Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013.
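    The unadjusted mortality contrast above (52.4% vs 36.2%) is a two-proportion comparison. A minimal sketch using the group sizes implied by the abstract (584 immunocompromised of 2813 ARDS patients); the death counts are rounded back-calculations, so the p-value only approximates the published one.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Group sizes from the abstract; death counts are back-calculated from the
# reported percentages and rounded, so the result is approximate.
deaths = np.array([306, 807])    # immunocompromised, immunocompetent
totals = np.array([584, 2229])

stat, pval = proportions_ztest(deaths, totals)
print(f"{deaths[0]/totals[0]:.1%} vs {deaths[1]/totals[1]:.1%}, p = {pval:.1e}")
```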

    Correction to: Comparative effectiveness and safety of non-vitamin K antagonists for atrial fibrillation in clinical practice: GLORIA-AF Registry

    In this article, the name of the GLORIA-AF investigator Anastasios Kollias was given incorrectly as Athanasios Kollias in the Acknowledgements. The original article has been corrected.

    Patterns of oral anticoagulant use and outcomes in Asian patients with atrial fibrillation: a post-hoc analysis from the GLORIA-AF Registry

    Background: Previous studies suggested potential ethnic differences in the management and outcomes of atrial fibrillation (AF). We aimed to analyse oral anticoagulant (OAC) prescription, discontinuation, and the risk of adverse outcomes in Asian patients with AF, using data from a global prospective cohort study.

    Methods: From the GLORIA-AF Registry Phase II-III (November 2011-December 2014 for Phase II, and January 2014-December 2016 for Phase III), we analysed patients according to their self-reported ethnicity (Asian vs. non-Asian), as well as according to Asian subgroups (Chinese, Japanese, Korean, and other Asian). Logistic regression was used to analyse OAC prescription, while the risk of OAC discontinuation and adverse outcomes was analysed with Cox regression models. Our primary outcome was the composite of all-cause death and major adverse cardiovascular events (MACE). The original studies were registered with ClinicalTrials.gov, NCT01468701, NCT01671007, and NCT01937377.

    Findings: 34,421 patients were included (70.0 ± 10.5 years, 45.1% females, 6900 (20.0%) Asian: 3829 (55.5%) Chinese, 814 (11.8%) Japanese, 1964 (28.5%) Korean, and 293 (4.2%) other Asian). Most of the Asian patients were recruited in Asia (n = 6701, 97.1%), while non-Asian patients were mainly recruited in Europe (n = 15,449, 56.1%) and North America (n = 8378, 30.4%). Compared to non-Asian individuals, prescription of OAC and non-vitamin K antagonist oral anticoagulants (NOAC) was lower in Asian patients (odds ratio [OR] and 95% confidence interval [CI]: 0.23 [0.22-0.25] and 0.66 [0.61-0.71], respectively), but higher in the Japanese subgroup. Asian ethnicity was also associated with a higher risk of OAC discontinuation (hazard ratio [HR] and [95% CI]: 1.79 [1.67-1.92]) and a lower risk of the primary composite outcome (HR [95% CI]: 0.86 [0.76-0.96]). Among the exploratory secondary outcomes, Asian ethnicity was associated with higher risks of thromboembolism and intracranial haemorrhage, and a lower risk of major bleeding.

    Interpretation: Asian patients with AF received suboptimal thromboembolic risk management and showed a distinct profile of adverse outcomes; these differences may also reflect country-specific factors. Ensuring integrated and appropriate treatment of these patients is crucial to improve their prognosis.

    Funding: The GLORIA-AF Registry was funded by Boehringer Ingelheim GmbH.
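    The discontinuation and composite-outcome analyses above rely on Cox regression. Below is a minimal sketch with lifelines on synthetic survival data, using a hypothetical `asian` indicator and a crude independent-censoring scheme; the registry's actual covariates and follow-up structure are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)

# Synthetic time-to-discontinuation data; the 'asian' indicator and the
# effect size (log HR 0.58, i.e. HR ~1.79) are chosen to echo the abstract.
n = 5000
asian = rng.integers(0, 2, n)
age = rng.normal(70, 10, n)
baseline = rng.exponential(36.0, n)                          # months to discontinuation
time = baseline / np.exp(0.58 * asian + 0.01 * (age - 70))   # proportional hazards
event = rng.binomial(1, 0.7, n)                              # 1 = discontinued, 0 = censored

df = pd.DataFrame({"time": time, "event": event, "asian": asian, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)   # HR for 'asian' should land near exp(0.58) ~ 1.79
```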