
    Geoeconomic variations in epidemiology, ventilation management, and outcomes in invasively ventilated intensive care unit patients without acute respiratory distress syndrome: a pooled analysis of four observational studies

    Background: Geoeconomic variations in epidemiology, the practice of ventilation, and outcome in invasively ventilated intensive care unit (ICU) patients without acute respiratory distress syndrome (ARDS) remain unexplored. In this analysis we aim to address these gaps using individual patient data of four large observational studies. Methods: In this pooled analysis we harmonised individual patient data from the ERICC, LUNG SAFE, PRoVENT, and PRoVENT-iMiC prospective observational studies, which were conducted from June, 2011, to December, 2018, in 534 ICUs in 54 countries. We used the 2016 World Bank classification to define two geoeconomic regions: middle-income countries (MICs) and high-income countries (HICs). ARDS was defined according to the Berlin criteria. Descriptive statistics were used to compare patients in MICs versus HICs. The primary outcome was the use of low tidal volume ventilation (LTVV) for the first 3 days of mechanical ventilation. Secondary outcomes were key ventilation parameters (tidal volume size, positive end-expiratory pressure, fraction of inspired oxygen, peak pressure, plateau pressure, driving pressure, and respiratory rate), patient characteristics, the risk for and actual development of ARDS after the first day of ventilation, duration of ventilation, ICU length of stay, and ICU mortality. Findings: Of the 7608 patients included in the original studies, this analysis included 3852 patients without ARDS, of whom 2345 were from MICs and 1507 were from HICs. Patients in MICs were younger, shorter, and had a slightly lower body-mass index; they more often had diabetes and active cancer, but less often had chronic obstructive pulmonary disease and heart failure, than patients from HICs. Sequential organ failure assessment scores were similar in MICs and HICs. Use of LTVV in MICs and HICs was comparable (42·4% vs 44·2%; absolute difference –1·69 [–9·58 to 6·11]; p=0·67; data available in 3174 [82%] of 3852 patients). The median applied positive end-expiratory pressure was lower in MICs than in HICs (5 [IQR 5–8] vs 6 [5–8] cm H2O; p=0·0011). ICU mortality was higher in MICs than in HICs (30·5% vs 19·9%; p=0·0004; adjusted effect 16·41% [95% CI 9·52–23·52]; p<0·0001) and was inversely associated with gross domestic product (adjusted odds ratio for a US$10 000 increase per capita 0·80 [95% CI 0·75–0·86]; p<0·0001). Interpretation: Despite similar disease severity and ventilation management, ICU mortality in patients without ARDS is higher in MICs than in HICs, with a strong association with country-level economic status. Funding: No funding
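    The abstract does not state how LTVV was operationalised. A minimal sketch of one common convention, assuming a cutoff of 8 mL/kg predicted body weight (PBW) or less and the widely used ARDSNet PBW formula; both the cutoff and the formula are assumptions here, not details taken from the study.

```python
def predicted_body_weight(height_cm: float, male: bool) -> float:
    """ARDSNet-style predicted body weight in kg (assumed reference formula)."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def is_ltvv(tidal_volume_ml: float, height_cm: float, male: bool,
            threshold_ml_per_kg: float = 8.0) -> bool:
    """True if the tidal volume is at or below the assumed LTVV cutoff per kg PBW."""
    return tidal_volume_ml / predicted_body_weight(height_cm, male) <= threshold_ml_per_kg

# Example: 400 mL in a 170 cm male is ~6.1 mL/kg PBW, i.e. low tidal volume.
print(is_ltvv(400, 170, male=True))  # True
```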

    Adopting a high-polyphenolic diet is associated with an improved glucose profile: prospective analysis within the PREDIMED-plus trial

    Previous studies have suggested that dietary polyphenols could reduce the incidence and complications of type-2 diabetes (T2D), although the evidence is still limited and inconsistent. This work analyzes whether changing to a diet with a higher polyphenolic content is associated with an improved glucose profile. At baseline and at the 1-year follow-up visit, 5921 participants (mean age 65.0 ± 4.9 years, 48.2% women) with overweight/obesity and metabolic syndrome completed a validated 143-item semi-quantitative food frequency questionnaire (FFQ), from which polyphenol intakes were calculated. Energy-adjusted total polyphenols and subclasses were categorized into tertiles of change. Linear mixed-effects models with random intercepts (the recruitment centers) were used to assess associations between changes in polyphenol subclass intake and 1-year plasma glucose or glycosylated hemoglobin (HbA1c) levels. Increases in total polyphenol intake and in some polyphenol classes were inversely associated with plasma glucose and HbA1c levels after one year of follow-up. These associations were modified when the analyses were run considering diabetes status separately. To our knowledge, this is the first study to assess the relationship between changes in the intake of all polyphenolic groups and T2D-related parameters in a senior population with T2D or at high risk of developing T2D
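    As a rough illustration of the modelling approach named above (a linear mixed-effects model with a random intercept per recruitment center), here is a minimal sketch in Python using statsmodels on synthetic data; all variable names, values, and effect sizes are invented for illustration and are not the trial's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Synthetic stand-in for the trial data; column names are illustrative only.
df = pd.DataFrame({
    "center": rng.integers(0, 10, n),       # recruitment center (random intercept)
    "poly_tertile": rng.integers(1, 4, n),  # tertile of 1-year change in polyphenol intake
    "age": rng.normal(65, 5, n),
})
df["hba1c"] = 6.0 - 0.05 * df["poly_tertile"] + rng.normal(0, 0.3, n)

# Fixed effects for intake-change tertile and age; random intercept per center.
model = smf.mixedlm("hba1c ~ C(poly_tertile) + age", data=df, groups=df["center"])
print(model.fit().summary())
```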

    Barcode tagging of human oocytes and embryos to prevent mix-ups in assisted reproduction technologies

    STUDY QUESTION: Is the attachment of biofunctionalized polysilicon barcodes to the outer surface of the zona pellucida an effective approach for the direct tagging and identification of human oocytes and embryos during assisted reproduction technologies (ARTs)? SUMMARY ANSWER: The direct tagging system based on lectin-biofunctionalized polysilicon barcodes of micrometric dimensions is simple, safe and highly efficient, allowing the identification of human oocytes and embryos during the various procedures typically conducted during an assisted reproduction cycle. WHAT IS KNOWN ALREADY: Measures to prevent mismatching errors (mix-ups) of reproductive samples are currently in place in fertility clinics, but none of them is totally effective, and several mix-up cases have been reported worldwide. Using a mouse model, our group previously developed an effective direct embryo tagging system that does not interfere with the in vitro and in vivo development of the tagged embryos. This system has now been tested in human oocytes and embryos. STUDY DESIGN, SIZE, DURATION: Fresh immature and mature fertilization-failed oocytes (n = 21) and cryopreserved day-1 embryos produced by in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) (n = 205) were donated by patients (n = 76) undergoing ARTs. In vitro development rates, embryo quality and post-vitrification survival were compared between tagged (n = 106) and non-tagged (control) embryos (n = 99). Barcode retention and identification rates were also calculated, both for embryos and for oocytes subjected to a simulated ICSI and parthenogenetic activation. Experiments were conducted from January 2012 to January 2013. PARTICIPANTS/MATERIALS, SETTING, METHODS: Barcodes were fabricated in polysilicon and biofunctionalized with wheat germ agglutinin lectin. Embryos were tagged with 10 barcodes and cultured in vitro until the blastocyst stage, when they were either differentially stained with propidium iodide and Hoechst or vitrified using the Cryotop method. Embryo quality was also analyzed by embryo grading and time-lapse monitoring. Injected oocytes were parthenogenetically activated using ionomycin and 6-dimethylaminopurine. MAIN RESULTS AND THE ROLE OF CHANCE: Blastocyst development rates of tagged (27/58) and non-tagged embryos (24/51) were equivalent, and no significant differences in the timing of key morphokinetic parameters or in the number of inner cell mass cells were detected between the two groups (tagged: 24.7 ± 2.5; non-tagged: 22.3 ± 1.9), indicating that preimplantation embryo potential and quality are not affected by the barcodes. Similarly, re-expansion rates of vitrified-warmed tagged (19/21) and non-tagged (16/19) blastocysts were similar. Global identification rates of 96.9% and 89.5% were obtained in fresh (mean barcode retention: 9.22 ± 0.13) and vitrified-warmed (mean barcode retention: 7.79 ± 0.35) tagged embryos, respectively, when simulating an automatic barcode reading process, though these rates increased to 100% simply by rotating the embryos during barcode reading. Only one of the oocytes lost one barcode during intracytoplasmic injection (100% identification rate), and all oocytes retained all the barcodes after parthenogenetic activation. LIMITATIONS, REASONS FOR CAUTION: Although the direct embryo tagging system developed is effective, it only allows the identification and traceability of embryos and of oocytes destined for ICSI. Thus, the traceability of all reproductive samples (oocytes destined for IVF and sperm) is not yet ensured. WIDER IMPLICATIONS OF THE FINDINGS: The direct embryo tagging system developed here provides fertility clinics with a novel tool to reduce the risk of mix-ups in human ARTs. The system can also be useful in research studies that require the individual identification and tracking of oocytes or embryos. STUDY FUNDING/COMPETING INTEREST(S): This study was supported by the Sociedad Española de Fertilidad, the Spanish Ministry of Education and Science (TEC2011-29140-C03) and the Generalitat de Catalunya (2009SGR-00282 and 2009SGR-00158). The authors do not have any competing interests.

    Global surveillance of cancer survival 1995–2009: analysis of individual data for 25 676 887 patients from 279 population-based registries in 67 countries (CONCORD-2)

    Background: Worldwide data for cancer survival are scarce. We aimed to initiate worldwide surveillance of cancer survival by central analysis of population-based registry data, as a metric of the effectiveness of health systems, and to inform global policy on cancer control. Methods: Individual tumour records were submitted by 279 population-based cancer registries in 67 countries for 25·7 million adults (age 15–99 years) and 75 000 children (age 0–14 years) diagnosed with cancer during 1995–2009 and followed up to Dec 31, 2009, or later. We looked at cancers of the stomach, colon, rectum, liver, lung, breast (women), cervix, ovary, and prostate in adults, and adult and childhood leukaemia. Standardised quality control procedures were applied; errors were corrected by the registry concerned. We estimated 5-year net survival, adjusted for background mortality in every country or region by age (single year), sex, and calendar year, and by race or ethnic origin in some countries. Estimates were age-standardised with the International Cancer Survival Standard weights. Findings: 5-year survival from colon, rectal, and breast cancers has increased steadily in most developed countries. For patients diagnosed during 2005–09, survival for colon and rectal cancer reached 60% or more in 22 countries around the world; for breast cancer, 5-year survival rose to 85% or higher in 17 countries worldwide. Liver and lung cancer remain lethal in all nations: for both cancers, 5-year survival is below 20% everywhere in Europe, in the range 15–19% in North America, and as low as 7–9% in Mongolia and Thailand. Striking rises in 5-year survival from prostate cancer have occurred in many countries: survival rose by 10–20% between 1995–99 and 2005–09 in 22 countries in South America, Asia, and Europe, but survival still varies widely around the world, from less than 60% in Bulgaria and Thailand to 95% or more in Brazil, Puerto Rico, and the USA. For cervical cancer, national estimates of 5-year survival range from less than 50% to more than 70%; regional variations are much wider, and improvements between 1995–99 and 2005–09 have generally been slight. For women diagnosed with ovarian cancer in 2005–09, 5-year survival was 40% or higher only in Ecuador, the USA, and 17 countries in Asia and Europe. 5-year survival for stomach cancer in 2005–09 was high (54–58%) in Japan and South Korea, compared with less than 40% in other countries. By contrast, 5-year survival from adult leukaemia in Japan and South Korea (18–23%) is lower than in most other countries. 5-year survival from childhood acute lymphoblastic leukaemia is less than 60% in several countries, but as high as 90% in Canada and four European countries, which suggests major deficiencies in the management of a largely curable disease. Interpretation: International comparison of survival trends reveals very wide differences that are likely to be attributable to differences in access to early diagnosis and optimum treatment. Continuous worldwide surveillance of cancer survival should become an indispensable source of information for cancer patients and researchers and a stimulus for politicians to improve health policy and health-care systems
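    For context on the age-standardisation step: with International Cancer Survival Standard (ICSS) weights, the age-standardised figure reduces to a weighted average of the age-specific net survival estimates. A minimal sketch; the five weights follow the commonly cited ICSS pattern for adult cancers, but both they and the survival values below are illustrative, not the study's inputs.

```python
# Age-standardised net survival = weighted average of age-specific estimates.
# Adult age bands: 15-44, 45-54, 55-64, 65-74, 75-99 years.
age_specific_survival = [0.72, 0.68, 0.63, 0.55, 0.41]  # illustrative values
icss_weights = [0.07, 0.12, 0.23, 0.29, 0.29]           # illustrative ICSS-style weights

assert abs(sum(icss_weights) - 1.0) < 1e-9  # weights must sum to 1
standardised = sum(w * s for w, s in zip(icss_weights, age_specific_survival))
print(f"Age-standardised 5-year net survival: {standardised:.1%}")  # ~55.5%
```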

    Development and validation of a score to predict postoperative respiratory failure in a multicentre European cohort: a prospective, observational study

    BACKGROUND Postoperative respiratory failure (PRF) is the most frequent respiratory complication following surgery. OBJECTIVE The objective of this study was to build a clinically useful predictive model for the development of PRF. DESIGN A prospective observational study of a multicentre cohort. SETTING Sixty-three hospitals across Europe. PATIENTS Patients undergoing any surgical procedure under general or regional anaesthesia during 7-day recruitment periods. MAIN OUTCOME MEASURES Development of PRF within 5 days of surgery. PRF was defined by a partial pressure of oxygen in arterial blood (PaO2) less than 8 kPa or new-onset oxyhaemoglobin saturation measured by pulse oximetry (SpO2) less than 90% whilst breathing room air that required conventional oxygen therapy, noninvasive or invasive mechanical ventilation. RESULTS PRF developed in 224 patients (4.2% of the 5384 patients studied). In-hospital mortality [95% confidence interval (95% CI)] was higher in patients who developed PRF [10.3% (6.3 to 14.3) vs. 0.4% (0.2 to 0.6)]. Regression modelling identified a predictive PRF score that includes seven independent risk factors: low preoperative SpO2; at least one preoperative respiratory symptom; preoperative chronic liver disease; history of congestive heart failure; open intrathoracic or upper abdominal surgery; surgical procedure lasting at least 2 h; and emergency surgery. The area under the receiver operating characteristic curve (c-statistic) was 0.82 (95% CI 0.79 to 0.85) and the Hosmer-Lemeshow goodness-of-fit statistic was 7.08 (P = 0.253). CONCLUSION A risk score based on seven objective, easily assessed factors was able to predict which patients would develop PRF. The score could potentially facilitate preoperative risk assessment and management and provide a basis for testing interventions to improve outcomes. The study was registered at ClinicalTrials.gov (identifier NCT01346709)
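    The abstract names the seven risk factors but not the point weights of the published score. The sketch below simply tallies the factors as listed; the SpO2 cutoff for "low preoperative SpO2" is an assumption, and a real implementation would use the published weighted points rather than this unweighted count.

```python
from dataclasses import dataclass

@dataclass
class PreopProfile:
    preop_spo2: float                   # room-air SpO2, %
    respiratory_symptom: bool           # at least one preoperative respiratory symptom
    chronic_liver_disease: bool
    congestive_heart_failure: bool
    thoracic_or_upper_abdominal: bool   # open intrathoracic or upper abdominal surgery
    duration_hours: float               # surgical duration
    emergency: bool

def prf_risk_factors(p: PreopProfile, low_spo2_cutoff: float = 96.0) -> int:
    """Count the seven PRF risk factors named in the abstract.

    The SpO2 cutoff is an illustrative assumption; the published score
    assigns weighted points that are not reproduced here.
    """
    return sum([
        p.preop_spo2 < low_spo2_cutoff,
        p.respiratory_symptom,
        p.chronic_liver_disease,
        p.congestive_heart_failure,
        p.thoracic_or_upper_abdominal,
        p.duration_hours >= 2,
        p.emergency,
    ])

profile = PreopProfile(preop_spo2=94.0, respiratory_symptom=True,
                       chronic_liver_disease=False, congestive_heart_failure=False,
                       thoracic_or_upper_abdominal=True, duration_hours=3.0,
                       emergency=False)
print(prf_risk_factors(profile))  # 4
```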

    Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis

    Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS. Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables, and the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort). In Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable. Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90–0·95) in EARLI and 0·88 (0·84–0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81–0·94] vs 0·92 [0·88–0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory subphenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction with PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group of patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group). Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated. Funding: US National Institutes of Health and European Society of Intensive Care Medicine
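    To make the evaluation pipeline described above concrete (a probability cutoff of 0·5 against LCA-derived labels, with AUC, sensitivity, specificity, and accuracy), here is a minimal sketch on synthetic data. It stands in for, and is not, the study's classifier models; the predictors and labels are randomly generated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
# Synthetic stand-in: vital-sign/laboratory predictors and LCA-derived labels
# (1 = hyperinflammatory, 0 = hypoinflammatory); not the study's data.
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=400) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
prob = clf.predict_proba(X)[:, 1]
assigned = (prob >= 0.5).astype(int)  # probability cutoff of 0.5, as in the study

tn, fp, fn, tp = confusion_matrix(y, assigned).ravel()
print(f"AUC={roc_auc_score(y, prob):.2f} "
      f"sens={tp / (tp + fn):.2f} spec={tn / (tn + fp):.2f} "
      f"acc={(tp + tn) / len(y):.2f}")
```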

    Hyperoxemia and excess oxygen use in early acute respiratory distress syndrome: Insights from the LUNG SAFE study

    Background: Concerns exist regarding the prevalence and impact of unnecessary oxygen use in patients with acute respiratory distress syndrome (ARDS). We examined this issue in patients with ARDS enrolled in the Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study. Methods: In this secondary analysis of the LUNG SAFE study, we wished to determine the prevalence and the outcomes associated with hyperoxemia on day 1, sustained hyperoxemia, and excessive oxygen use in patients with early ARDS. Patients who fulfilled criteria of ARDS on day 1 and day 2 of acute hypoxemic respiratory failure were categorized based on the presence of hyperoxemia (PaO2 > 100 mmHg) on day 1, sustained (i.e., present on day 1 and day 2) hyperoxemia, or excessive oxygen use (FIO2 ≥ 0.60 during hyperoxemia). Results: Of the 2005 patients who met the inclusion criteria, 131 (6.5%) were hypoxemic (PaO2 < 55 mmHg), 607 (30%) had hyperoxemia on day 1, and 250 (12%) had sustained hyperoxemia. Excess FIO2 use occurred in 400 (66%) of 607 patients with hyperoxemia. Excess FIO2 use decreased from day 1 to day 2 of ARDS, with most hyperoxemic patients on day 2 receiving relatively low FIO2. Multivariate analyses found no independent relationship between day 1 hyperoxemia, sustained hyperoxemia, or excess FIO2 use and adverse clinical outcomes. Mortality was 42% in patients with excess FIO2 use, compared to 39% in a propensity-matched sample of normoxemic (PaO2 55-100 mmHg) patients (P = 0.47). Conclusions: Hyperoxemia and excess oxygen use are both prevalent in early ARDS but are most often non-sustained. No relationship was found between hyperoxemia or excessive oxygen use and patient outcome in this cohort. Trial registration: LUNG SAFE is registered with ClinicalTrials.gov, NCT02010073
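    The oxygenation categories in this analysis are simple threshold rules and can be expressed directly. The sketch below uses only the cutoffs stated in the abstract (PaO2 < 55 mmHg for hypoxemia, PaO2 > 100 mmHg for hyperoxemia, and FIO2 ≥ 0.60 during hyperoxemia for excess oxygen use); the function name and signature are illustrative.

```python
def classify_oxygenation(pao2_day1: float, pao2_day2: float,
                         fio2_day1: float) -> dict:
    """Apply the day-1/day-2 oxygenation categories defined in the abstract.

    PaO2 values are in mmHg; FIO2 is a fraction (0-1).
    """
    hyperoxemia_d1 = pao2_day1 > 100
    return {
        "hypoxemia_day1": pao2_day1 < 55,
        "hyperoxemia_day1": hyperoxemia_d1,
        "sustained_hyperoxemia": hyperoxemia_d1 and pao2_day2 > 100,
        "excess_oxygen_use": hyperoxemia_d1 and fio2_day1 >= 0.60,
    }

print(classify_oxygenation(pao2_day1=120, pao2_day2=95, fio2_day1=0.7))
# {'hypoxemia_day1': False, 'hyperoxemia_day1': True,
#  'sustained_hyperoxemia': False, 'excess_oxygen_use': True}
```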

    Death in hospital following ICU discharge: insights from the LUNG SAFE study

    Background: To determine the frequency of, and factors associated with, death in hospital following ICU discharge to the ward. Methods: The Large observational study to UNderstand the Global impact of Severe Acute respiratory FailurE (LUNG SAFE) study was an international, multicenter, prospective cohort study of patients with severe respiratory failure, conducted across 459 ICUs from 50 countries globally. This analysis aimed to understand the frequency of, and factors associated with, death in hospital in patients who survived their ICU stay. We examined outcomes in the subpopulation discharged with no limitations of life-sustaining treatments ('treatment limitations') and the subpopulation with treatment limitations. Results: Of patients with no treatment limitations discharged from ICU, 2186 (94%) survived and 142 (6%) died in hospital; of patients with treatment limitations, 118 (61%) survived and 77 (39%) died in hospital. Patients without treatment limitations who died in hospital after ICU discharge were older, more likely to have COPD, immunocompromise or chronic renal failure, and less likely to have trauma as a risk factor for ARDS. Patients who died after ICU discharge were less likely to have received neuromuscular blockade or any adjunctive measure, and had a higher pre-ICU-discharge non-pulmonary SOFA score. A similar pattern was seen in patients with treatment limitations who died in hospital following ICU discharge. Conclusions: A significant proportion of patients die in hospital following discharge from ICU, with higher mortality in patients with limitations of life-sustaining treatments in place. Non-survivors had higher systemic illness severity scores at ICU discharge than survivors. Trial Registration: ClinicalTrials.gov NCT02010073