33 research outputs found

    Defining sepsis on the wards: results of a multi-centre point-prevalence study comparing two sepsis definitions

    Our aim was to prospectively determine the predictive capabilities of the SEPSIS-1 and SEPSIS-3 definitions in emergency departments and general wards. Patients with a National Early Warning Score (NEWS) of 3 or above and suspected or proven infection were enrolled over a 24-h period in 13 Welsh hospitals. The primary outcome measure was mortality within 30 days. Of the 5422 patients screened, 431 fulfilled the inclusion criteria and 380 (88%) were recruited. Using the SEPSIS-1 definition, 212 patients had sepsis. Using the SEPSIS-3 definition with a Sequential Organ Failure Assessment (SOFA) score ≥ 2, there were 272 septic patients, whereas the quickSOFA (qSOFA) score ≥ 2 identified 50 patients. For the prediction of the primary outcome, the SEPSIS-1 criteria had a sensitivity (95%CI) of 65% (54–75%) and a specificity of 47% (41–53%); the SEPSIS-3 criteria had a sensitivity of 86% (76–92%) and a specificity of 32% (27–38%). The SEPSIS-3 and SEPSIS-1 definitions were associated with hazard ratios (95%CI) of 2.7 (1.5–5.6) and 1.6 (1.3–2.5), respectively. Discrimination, evaluated by receiver operating characteristic curves, was highest for the SOFA score (0.69 (95%CI 0.63–0.76)), followed by NEWS (0.58 (0.51–0.66)) (p < 0.001). The systemic inflammatory response syndrome criteria (0.55 (0.49–0.61)) and the qSOFA score (0.56 (0.49–0.64)) could not predict the outcome. The SEPSIS-3 definition identified the patients at highest risk. The SOFA score and NEWS were better predictors of poor outcome, and the SOFA score appeared to be the best tool for identifying patients at high risk of death and sepsis-induced organ dysfunction.
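
    The sensitivity and specificity quoted above follow the standard 2×2 definitions (sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)). As a minimal illustrative sketch, not code from the study, the Python below shows how a qSOFA score and these two metrics could be computed; the function names are hypothetical, while the qSOFA thresholds (respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg, Glasgow Coma Scale < 15) follow the SEPSIS-3 publication.

    def qsofa(resp_rate: float, systolic_bp: float, gcs: int) -> int:
        # qSOFA awards one point per criterion met (0-3); a score >= 2 flags sepsis.
        return int(resp_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

    def sensitivity_specificity(flagged, died):
        # flagged: criterion positive (e.g. qSOFA >= 2); died: 30-day mortality.
        tp = sum(f and d for f, d in zip(flagged, died))
        fn = sum(not f and d for f, d in zip(flagged, died))
        tn = sum(not f and not d for f, d in zip(flagged, died))
        fp = sum(f and not d for f, d in zip(flagged, died))
        return tp / (tp + fn), tn / (tn + fp)

    # A patient with RR 24/min, SBP 95 mmHg and GCS 14 scores qSOFA = 3.
    assert qsofa(24, 95, 14) == 3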

    Use of anticoagulants and antiplatelet agents in stable outpatients with coronary artery disease and atrial fibrillation. International CLARIFY registry

    Comparative performances of machine learning methods for classifying Crohn Disease patients using genome-wide genotyping data

    Abstract: Crohn Disease (CD) is a complex genetic disorder for which more than 140 genes have been identified using genome-wide association studies (GWAS). However, the genetic architecture of the trait remains largely unknown. The recent development of machine learning (ML) approaches prompted us to apply them to classify healthy and diseased people according to their genomic information. The Immunochip dataset, containing 18,227 CD patients and 34,050 healthy controls enrolled and genotyped by the International Inflammatory Bowel Disease Genetics Consortium (IIBDGC), was re-analyzed using a set of ML methods: penalized logistic regression (LR), gradient boosted trees (GBT) and artificial neural networks (NN). The main score used to compare the methods was the Area Under the ROC Curve (AUC) statistic. Analysis of the impact of quality control (QC), imputation and coding methods on the LR results showed that QC methods and imputation of missing genotypes may artificially increase the scores. Conversely, neither the patient/control ratio nor marker preselection or coding strategies significantly affected the results. LR methods, including Lasso, Ridge and ElasticNet, provided similar results, with a maximum AUC of 0.80. GBT methods such as XGBoost, LightGBM and CatBoost, together with dense NN with one or more hidden layers, provided similar AUC values, suggesting limited epistatic effects in the genetic architecture of the trait. The ML methods detected nearly all the genetic variants previously identified by GWAS among the best predictors, plus additional predictors with smaller effects. The robustness and complementarity of the different methods were also studied. Compared to LR, non-linear models such as GBT or NN may provide robust complementary approaches to identify and classify genetic markers.
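
    As an illustrative sketch only (not the consortium's pipeline: the synthetic 0/1/2-coded genotype matrix, the hyperparameters and the library choices below are assumptions), the following Python compares a Lasso-penalized logistic regression with an XGBoost classifier by held-out AUC, the same statistic used in the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(2000, 500)).astype(float)  # genotypes coded 0/1/2
    beta = np.zeros(500)
    beta[:20] = 0.3                                         # a handful of causal markers
    # Simulate case/control status from a logistic model on the genotypes.
    y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-(X @ beta - 6.0)))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "Lasso LR": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
        "XGBoost": XGBClassifier(n_estimators=300, max_depth=3,
                                 learning_rate=0.1, eval_metric="logloss"),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")

    Ridge or ElasticNet penalties, or LightGBM and CatBoost classifiers, slot into the same loop, which is essentially the comparison the abstract summarizes.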