14 research outputs found

    MRD dynamics during maintenance for improved prognostication of 1280 patients with myeloma in the TOURMALINE-MM3 and -MM4 trials

    No full text
    Measurable residual disease (MRD) evaluation may help to guide treatment duration in multiple myeloma (MM). Paradoxically, limited longitudinal data exist on MRD during maintenance. We investigated the prognostic value of MRD dynamics in 1280 transplant-eligible and -ineligible patients from the TOURMALINE-MM3 and -MM4 randomized placebo-controlled phase 3 studies of 2-year ixazomib maintenance. MRD status at randomization showed independent prognostic value (median progression-free survival [PFS], 38.6 vs 15.6 months in MRD− vs MRD+ patients; HR, 0.47). However, MRD dynamics during maintenance provided more detailed risk stratification. A 14-month landmark analysis showed prolonged PFS in patients converting from MRD+ to MRD− status vs those with persistent MRD+ status (76.8% vs 27.6% 2-year PFS rates). Prolonged PFS was observed in patients with sustained MRD− status vs those converting from MRD− to MRD+ status (75.0% vs 34.2% 2-year PFS rates). Similar results were observed at a 28-month landmark analysis. Ixazomib maintenance vs placebo improved PFS in patients who were MRD+ at randomization (median, 18.8 vs 11.6 months; HR, 0.65) or at the 14-month landmark (median, 16.8 vs 10.6 months; HR, 0.65); no difference was observed in patients who were MRD−. This is the largest MM population undergoing yearly MRD evaluation during maintenance reported to date. We demonstrate the limited prognostic value of a single–time point MRD evaluation, because MRD dynamics over time substantially impact PFS risk. These findings support MRD− status as a relevant end point during maintenance and confirm the increased progression risk in patients converting to MRD+ from MRD− status. These trials were registered at www.clinicaltrials.gov as #NCT02181413 and #NCT02312258.
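The landmark approach described above fixes an analysis time point, keeps only patients still progression-free at that point, and then compares outcomes by the status measured at the landmark. A minimal sketch of that filtering step, with hypothetical field names and toy data (this is an illustration of the concept, not the trial's analysis code):

```python
# Minimal sketch of a landmark analysis filter: patients with an event
# (or censoring) before the landmark are excluded, and the remaining
# patients are grouped by their MRD status at the landmark visit.
# All field names and data below are hypothetical.

def landmark_cohort(patients, landmark_months):
    """Keep only patients still progression-free at the landmark."""
    return [p for p in patients if p["pfs_months"] > landmark_months]

def group_by_mrd(patients, landmark_months):
    """Split a landmark cohort by MRD status measured at the landmark."""
    cohort = landmark_cohort(patients, landmark_months)
    groups = {"MRD-": [], "MRD+": []}
    for p in cohort:
        groups[p["mrd_at_landmark"]].append(p)
    return groups

# Toy data: PFS in months, MRD status at the 14-month visit
patients = [
    {"pfs_months": 40.0, "mrd_at_landmark": "MRD-"},
    {"pfs_months": 20.0, "mrd_at_landmark": "MRD+"},
    {"pfs_months": 10.0, "mrd_at_landmark": "MRD+"},  # excluded: event before landmark
]

groups = group_by_mrd(patients, landmark_months=14)
print({k: len(v) for k, v in groups.items()})  # {'MRD-': 1, 'MRD+': 1}
```

The exclusion step is what makes the comparison unbiased: patients who progress before the landmark cannot, by construction, contribute to either landmark group.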

    Different outcomes for transplant-eligible newly diagnosed multiple myeloma patients in Latin America according to the public versus private management: a GELAMM study

    No full text
    The aim of this study was to describe clinical and survival characteristics of transplant-eligible multiple myeloma (MM) patients in Latin America (LA), with a special focus on differences between public and private healthcare facilities. We included 1293 patients diagnosed between 2010 and 2018. A great disparity in outcomes and survival between both groups was observed. Late diagnosis and low access to adequate frontline therapy and ASCT in public institutions probably explain these differences. Patients treated with novel drug induction protocols, followed by autologous stem cell transplantation (ASCT) and maintenance, have similar overall survival compared to that published internationally.

    Mepolizumab for chronic rhinosinusitis with nasal polyps (SYNAPSE): a randomised, double-blind, placebo-controlled, phase 3 trial

    No full text

    Antimicrobial Lessons From a Large Observational Cohort on Intra-abdominal Infections in Intensive Care Units

    No full text
    Severe intra-abdominal infection commonly requires intensive care. Mortality is high and is mainly determined by disease-specific characteristics, i.e. setting of infection onset, anatomical barrier disruption, and severity of disease expression. Recent observations revealed that antimicrobial resistance appears equally common in community-acquired and late-onset hospital-acquired infection. This challenges basic principles in anti-infective therapy guidelines, including the paradigm that pathogens involved in community-acquired infection are covered by standard empiric antimicrobial regimens, and second, the concept of nosocomial acquisition as the main driver for resistance involvement. In this study, we report on resistance profiles of Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Enterococcus faecalis and Enterococcus faecium in distinct European geographic regions based on an observational cohort study on intra-abdominal infections in intensive care unit (ICU) patients. Resistance against aminopenicillins, fluoroquinolones, and third-generation cephalosporins in E. coli, K. pneumoniae and P. aeruginosa is problematic, as is carbapenem-resistance in the latter pathogen. For E. coli and K. pneumoniae, resistance is mainly an issue in Central Europe, Eastern and South-East Europe, and Southern Europe, while resistance in P. aeruginosa is additionally problematic in Western Europe. Vancomycin-resistance in E. faecalis is of lesser concern but requires vigilance in E. faecium in Central and Eastern and South-East Europe. In the subcohort of patients with secondary peritonitis presenting with either sepsis or septic shock, the appropriateness of empiric antimicrobial therapy was not associated with mortality. In contrast, failure of source control was strongly associated with mortality.
The relevance of these new insights for future recommendations regarding empiric antimicrobial therapy in intra-abdominal infections is discussed.

    Poor timing and failure of source control are risk factors for mortality in critically ill patients with secondary peritonitis

    No full text
    Purpose: To describe data on epidemiology, microbiology, clinical characteristics and outcome of adult patients admitted to the intensive care unit (ICU) with secondary peritonitis, with special emphasis on antimicrobial therapy and source control. Methods: Post hoc analysis of a multicenter observational study (Abdominal Sepsis Study, AbSeS) including 2621 adult ICU patients with intra-abdominal infection in 306 ICUs from 42 countries. Time-till-source-control intervention was calculated from the time of diagnosis and classified into 'emergency' (< 2 h), 'urgent' (2-6 h), and 'delayed' (> 6 h). Relationships were assessed by logistic regression analysis and reported as odds ratios (OR) and 95% confidence interval (CI). Results: The cohort included 1077 cases of microbiologically confirmed secondary peritonitis. Mortality was 29.7%. The rate of appropriate empiric therapy showed no difference between survivors and non-survivors (66.4% vs. 61.3%, p = 0.1). A stepwise increase in mortality was observed with increasing Sequential Organ Failure Assessment (SOFA) scores (19.6% for a value ≤ 4 to 55.4% for a value > 12, p < 0.001). The highest odds of death were associated with septic shock (OR 3.08 [1.42-7.00]), late-onset hospital-acquired peritonitis (OR 1.71 [1.16-2.52]) and failed source control evidenced by persistent inflammation at day 7 (OR 5.71 [3.99-8.18]). Compared with 'emergency' source control intervention (< 2 h of diagnosis), 'urgent' source control was the only modifiable covariate associated with lower odds of mortality (OR 0.50 [0.34-0.73]). Conclusion: 'Urgent' and successful source control was associated with improved odds of survival. Appropriateness of empirical antimicrobial treatment did not significantly affect survival, suggesting that source control is more determinative for outcome.
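The three-tier delay classification used in the study ('emergency' < 2 h, 'urgent' 2-6 h, 'delayed' > 6 h from diagnosis) is easy to sketch. The function name and exact boundary handling are assumptions for illustration; the abstract does not specify how boundary values were assigned:

```python
# Sketch of the time-till-source-control classification from the study:
# 'emergency' (< 2 h), 'urgent' (2-6 h), 'delayed' (> 6 h) after diagnosis.
# Function name and inclusive/exclusive boundary handling are assumptions.

def classify_source_control_delay(hours_since_diagnosis):
    if hours_since_diagnosis < 2:
        return "emergency"
    if hours_since_diagnosis <= 6:
        return "urgent"
    return "delayed"

print(classify_source_control_delay(1.5))  # emergency
print(classify_source_control_delay(4))    # urgent
print(classify_source_control_delay(8))    # delayed
```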

    Epidemiology of intra-abdominal infection and sepsis in critically ill patients: "AbSeS", a multinational observational cohort study and ESICM Trials Group Project

    Get PDF
    Purpose To describe the epidemiology of intra-abdominal infection in an international cohort of ICU patients according to a new system that classifies cases according to setting of infection acquisition (community-acquired, early onset hospital-acquired, and late-onset hospital-acquired), anatomical disruption (absent or present with localized or diffuse peritonitis), and severity of disease expression (infection, sepsis, and septic shock). Methods We performed a multicenter (n = 309), observational, epidemiological study including adult ICU patients diagnosed with intra-abdominal infection. Risk factors for mortality were assessed by logistic regression analysis. Results The cohort included 2621 patients. Setting of infection acquisition was community-acquired in 31.6%, early onset hospital-acquired in 25%, and late-onset hospital-acquired in 43.4% of patients. Overall prevalence of antimicrobial resistance was 26.3% and difficult-to-treat resistant Gram-negative bacteria 4.3%, with great variation according to geographic region. No difference in prevalence of antimicrobial resistance was observed according to setting of infection acquisition. Overall mortality was 29.1%. Independent risk factors for mortality included late-onset hospital-acquired infection, diffuse peritonitis, sepsis, septic shock, older age, malnutrition, liver failure, congestive heart failure, antimicrobial resistance (either methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, extended-spectrum beta-lactamase-producing Gram-negative bacteria, or carbapenem-resistant Gram-negative bacteria) and source control failure evidenced by either the need for surgical revision or persistent inflammation. 
Conclusion This multinational, heterogeneous cohort of ICU patients with intra-abdominal infection revealed that setting of infection acquisition, anatomical disruption, and severity of disease expression are disease-specific phenotypic characteristics associated with outcome, irrespective of the type of infection. Antimicrobial resistance is equally common in community-acquired as in hospital-acquired infection.

    Geoeconomic variations in epidemiology, ventilation management, and outcomes in invasively ventilated intensive care unit patients without acute respiratory distress syndrome: a pooled analysis of four observational studies

    No full text
    Background: Geoeconomic variations in epidemiology, the practice of ventilation, and outcome in invasively ventilated intensive care unit (ICU) patients without acute respiratory distress syndrome (ARDS) remain unexplored. In this analysis we aim to address these gaps using individual patient data of four large observational studies. Methods: In this pooled analysis we harmonised individual patient data from the ERICC, LUNG SAFE, PRoVENT, and PRoVENT-iMiC prospective observational studies, which were conducted from June, 2011, to December, 2018, in 534 ICUs in 54 countries. We used the 2016 World Bank classification to define two geoeconomic regions: middle-income countries (MICs) and high-income countries (HICs). ARDS was defined according to the Berlin criteria. Descriptive statistics were used to compare patients in MICs versus HICs. The primary outcome was the use of low tidal volume ventilation (LTVV) for the first 3 days of mechanical ventilation. Secondary outcomes were key ventilation parameters (tidal volume size, positive end-expiratory pressure, fraction of inspired oxygen, peak pressure, plateau pressure, driving pressure, and respiratory rate), patient characteristics, the risk for and actual development of acute respiratory distress syndrome after the first day of ventilation, duration of ventilation, ICU length of stay, and ICU mortality. Findings: Of the 7608 patients included in the original studies, this analysis included 3852 patients without ARDS, of whom 2345 were from MICs and 1507 were from HICs. Patients in MICs were younger, shorter and with a slightly lower body-mass index, more often had diabetes and active cancer, but less often chronic obstructive pulmonary disease and heart failure than patients from HICs. Sequential organ failure assessment scores were similar in MICs and HICs. 
Use of LTVV in MICs and HICs was comparable (42·4% vs 44·2%; absolute difference -1·69 [-9·58 to 6·11] p=0·67; data available in 3174 [82%] of 3852 patients). The median applied positive end expiratory pressure was lower in MICs than in HICs (5 [IQR 5-8] vs 6 [5-8] cm H2O; p=0·0011). ICU mortality was higher in MICs than in HICs (30·5% vs 19·9%; p=0·0004; adjusted effect 16·41% [95% CI 9·52-23·52]; p<0·0001) and was inversely associated with gross domestic product (adjusted odds ratio for a US$10 000 increase per capita 0·80 [95% CI 0·75-0·86]; p<0·0001). Interpretation: Despite similar disease severity and ventilation management, ICU mortality in patients without ARDS is higher in MICs than in HICs, with a strong association with country-level economic status.
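The reported adjusted odds ratio of 0·80 per US$10 000 increase in per-capita GDP multiplies across increments on the odds scale, so the implied OR for a larger GDP difference can be computed directly. A worked sketch; the extrapolation below is illustrative arithmetic, not a result from the study:

```python
# The study reports an adjusted OR of 0.80 for ICU mortality per US$10,000
# increase in per-capita GDP. ORs multiply across increments, so the implied
# OR for a difference of d dollars is 0.80 ** (d / 10_000).
# Extrapolating beyond the observed GDP range is illustrative only.

def implied_or(or_per_10k, delta_usd):
    return or_per_10k ** (delta_usd / 10_000)

print(round(implied_or(0.80, 10_000), 2))  # 0.8
print(round(implied_or(0.80, 20_000), 2))  # 0.64
```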

    Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis

    No full text
    Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS. Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables, and the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort).
In Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable. Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90–0·95) in EARLI and 0·88 (0·84–0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81–0·94] vs 0·92 [0·88–0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory phenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction with PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group of patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group). Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated. Funding: US National Institutes of Health and European Society of Intensive Care Medicine.
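The evaluation step described in the Methods (assigning subphenotypes from predicted probabilities with a 0·5 cutoff, then computing sensitivity, specificity, and accuracy against the LCA-derived labels) can be sketched in pure Python. The 1/0 label coding and all values below are toy assumptions, not study data:

```python
# Sketch of cutoff-based subphenotype assignment and its evaluation against
# gold-standard labels, as described in the Methods. Coding assumption:
# 1 = hyperinflammatory, 0 = hypoinflammatory. Toy values throughout.

def assign(probs, cutoff=0.5):
    """Assign subphenotype 1 when predicted probability >= cutoff."""
    return [1 if p >= cutoff else 0 for p in probs]

def metrics(truth, pred):
    """Sensitivity, specificity, and accuracy versus gold-standard labels."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(truth)
    return sensitivity, specificity, accuracy

truth = [1, 1, 0, 0, 1, 0]            # toy LCA-derived labels
probs = [0.9, 0.4, 0.2, 0.6, 0.8, 0.1]  # toy classifier probabilities
pred = assign(probs)
print(metrics(truth, pred))  # sensitivity, specificity, accuracy (each 2/3 here)
```

The AUC reported in the Findings is computed from the probabilities before any cutoff is applied; the cutoff only enters for the sensitivity/specificity/accuracy figures.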