
    Ventilatory ratio : a simple bedside index to monitor ventilatory efficiency

    A lack of a simple index to monitor ventilatory efficiency at the bedside has meant that oxygenation has been the predominant variable used to monitor the adequacy of ventilatory strategies and disease severity in mechanically ventilated patients. Owing to complexities in its measurement, deadspace ventilation, the traditional method of tracking ventilatory failure, has failed to become integral to the management of mechanically ventilated patients. Ventilatory ratio (VR) is an easy-to-calculate index that uses variables measured at the bedside: VR = (V̇E measured × PaCO2 measured) / (V̇E predicted × PaCO2 ideal), where V̇E predicted is taken to be 100 ml.kg-1.min-1 based on predicted body weight and PaCO2 ideal is taken to be 5 kPa. Physiological analysis dictates that VR is influenced by deadspace fraction and CO2 production. This analysis was validated in a benchside lung model and in a high-fidelity computational cardiopulmonary physiology model. The impact of CO2 production on VR was investigated in patients undergoing laparoscopic surgery who received exogenous intraperitoneal CO2; the delta values of the two variables were linearly related. The variability of CO2 production was examined in ICU patients and was found to be small. In an ICU population, VR correlated more strongly with deadspace than with CO2 production, and of the two variables deadspace had the greater effect on VR. The clinical uses of VR were examined in four databases of ICU patients. VR was significantly higher in non-survivors than in survivors. Higher values of VR were associated with increased mortality and more ventilator days, and rising values of VR over time were also associated with worse outcome. VR is a simple bedside index that provides clinicians with useful information regarding ventilatory efficiency and is associated with outcome.
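The VR formula reduces to bedside arithmetic. A minimal sketch in Python (the function name, units, and the example patient are illustrative, not from the paper):

```python
def ventilatory_ratio(ve_measured_ml_min, paco2_measured_kpa, pbw_kg):
    """Ventilatory ratio from bedside variables.

    Predicted minute ventilation is taken as 100 ml/kg/min of predicted
    body weight (PBW) and ideal PaCO2 as 5 kPa, as stated in the text.
    """
    ve_predicted_ml_min = 100.0 * pbw_kg
    paco2_ideal_kpa = 5.0
    return (ve_measured_ml_min * paco2_measured_kpa) / (
        ve_predicted_ml_min * paco2_ideal_kpa
    )

# Hypothetical patient: PBW 70 kg, minute ventilation 9 L/min, PaCO2 6 kPa.
vr = ventilatory_ratio(9000, 6.0, 70)  # ≈ 1.54, i.e. reduced efficiency
```

A patient ventilating exactly at the predicted values yields VR = 1, so values above 1 flag inefficient CO2 clearance.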

    Integrated host-microbe plasma metagenomics for sepsis diagnosis in a prospective cohort of critically ill adults

    We carried out integrated host and pathogen metagenomic RNA and DNA next-generation sequencing (mNGS) of whole blood (n = 221) and plasma (n = 138) from critically ill patients following hospital admission. We assigned patients to sepsis groups on the basis of clinical and microbiological criteria. From whole-blood gene expression data, we distinguished patients with sepsis from patients with non-infectious systemic inflammatory conditions using a trained bagged support vector machine (bSVM) classifier (area under the receiver operating characteristic curve (AUC) = 0.81 in the training set; AUC = 0.82 in a held-out validation set). Plasma RNA also yielded a transcriptional signature of sepsis, with several genes previously reported as sepsis biomarkers, and a bSVM sepsis diagnostic classifier (AUC = 0.97 training set; AUC = 0.77 validation set). Pathogen detection performance of plasma mNGS varied on the basis of pathogen and site of infection. To improve detection of viruses, we developed a secondary transcriptomic classifier (AUC = 0.94 training set; AUC = 0.96 validation set). We combined host and microbial features to develop an integrated sepsis diagnostic model that identified 99% of microbiologically confirmed sepsis cases and predicted sepsis in 74% of suspected and 89% of indeterminate sepsis cases. In summary, integrating host transcriptional profiling with broad-range metagenomic pathogen detection from nucleic acid is a promising tool for sepsis diagnosis.
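The reported AUCs have a simple rank-based interpretation: the probability that a randomly chosen sepsis case scores higher than a randomly chosen non-sepsis case. A small illustrative implementation of that definition (not the authors' code; the toy scores are hypothetical):

```python
def roc_auc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly, counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 1 = sepsis, 0 = non-infectious inflammation.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```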

    Plasma SARS-CoV-2 nucleocapsid antigen levels are associated with progression to severe disease in hospitalized COVID-19

    BACKGROUND: Studies quantifying SARS-CoV-2 have focused on upper respiratory tract or plasma viral RNA with inconsistent association with clinical outcomes. The association between plasma viral antigen levels and clinical outcomes has not been previously studied. Our aim was to investigate the relationship between plasma SARS-CoV-2 nucleocapsid antigen (N-antigen) concentration and both markers of host response and clinical outcomes. METHODS: SARS-CoV-2 N-antigen concentrations were measured in the first study plasma sample (D0), collected within 72 h of hospital admission, from 256 subjects admitted between March 2020 and August 2021 in a prospective observational cohort of hospitalized patients with COVID-19. The rank correlations between plasma N-antigen and plasma biomarkers of tissue damage, coagulation, and inflammation were assessed. Multiple ordinal regression was used to test the association between enrollment N-antigen plasma concentration and the primary outcome of clinical deterioration at one week as measured by a modified World Health Organization (WHO) ordinal scale. Multiple logistic regression was used to test the association between enrollment plasma N-antigen concentration and the secondary outcomes of ICU admission, mechanical ventilation at 28 days, and death at 28 days. The prognostic discrimination of an externally derived high antigen cutoff of N-antigen ≥ 1000 pg/mL was also tested. RESULTS: N-antigen on D0 was detectable in 84% of study participants. Plasma N-antigen levels significantly correlated with RAGE (r = 0.61), IL-10 (r = 0.59), and IP-10 (r = 0.59, adjusted p = 0.01 for all correlations). For the primary outcome of clinical status at one week, each 500 pg/mL increase in plasma N-antigen level was associated with an adjusted OR of 1.05 (95% CI 1.03-1.08) for worse WHO ordinal status. 
D0 plasma N-antigen ≥ 1000 pg/mL was 77% sensitive and 59% specific (AUROC 0.68), with a positive predictive value of 23% and a negative predictive value of 93%, for a worse WHO ordinal scale at day 7 compared to baseline. D0 N-antigen concentration was independently associated with ICU admission and with mechanical ventilation at 28 days, but not with death at 28 days. CONCLUSIONS: Plasma N-antigen levels are readily measured and provide important insight into the pathogenesis and prognosis of COVID-19. Measuring N-antigen levels early in the hospital course may improve risk stratification, especially for identifying patients who are unlikely to progress to severe disease.
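The cutoff statistics quoted above all derive from a 2x2 contingency table; a short sketch with hypothetical counts (not the study's data) shows the computation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Test-performance metrics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts. Note that PPV and NPV, unlike sensitivity and
# specificity, depend on how common the outcome is in the cohort, which
# is why a 77%-sensitive test can still have a low PPV.
m = diagnostic_metrics(tp=8, fp=3, fn=2, tn=27)
```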

    Is Grad-CAM Explainable in Medical Images?

    Explainable Deep Learning has gained significant attention in the field of artificial intelligence (AI), particularly in domains such as medical imaging, where accurate and interpretable machine learning models are crucial for effective diagnosis and treatment planning. Grad-CAM is a baseline technique that highlights the regions of an image most critical to a deep learning model's decision-making process, increasing interpretability and trust in the results. It is applied in many computer vision (CV) tasks, such as classification and explanation. This study explores the principles of Explainable Deep Learning and its relevance to medical imaging, discusses various explainability techniques and their limitations, and examines medical imaging applications of Grad-CAM. The findings highlight the potential of Explainable Deep Learning and Grad-CAM in improving the accuracy and interpretability of deep learning models in medical imaging. The code is available at (will be available)
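At its core, Grad-CAM weights each feature map of a convolutional layer by its globally average-pooled gradient and passes the weighted sum through a ReLU. A framework-free sketch of just that weighting step (the tiny nested-list feature maps stand in for a real network's tensors):

```python
def grad_cam_heatmap(activations, gradients):
    """Grad-CAM weighting step.

    activations, gradients: K feature maps, each an HxW list of lists,
    as would be extracted from a network's last convolutional layer.
    """
    k_maps = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weights: global average pooling of the gradients.
    alphas = [sum(sum(row) for row in gradients[k]) / (h * w)
              for k in range(k_maps)]
    # Weighted sum of activation maps, then ReLU so that only regions
    # with a positive influence on the target class remain.
    return [[max(0.0, sum(alphas[k] * activations[k][i][j]
                          for k in range(k_maps)))
             for j in range(w)] for i in range(h)]
```

In practice the resulting heatmap is upsampled to the input image size and overlaid on the image to visualise which regions drove the prediction.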

    Comparison of machine learning clustering algorithms for detecting heterogeneity of treatment effect in acute respiratory distress syndrome: A secondary analysis of three randomised controlled trials

    BACKGROUND: Heterogeneity in acute respiratory distress syndrome (ARDS), a consequence of its non-specific definition, has led to a multitude of negative randomised controlled trials (RCTs). Investigators have sought to identify heterogeneity of treatment effect (HTE) in RCTs using clustering algorithms. We evaluated the proficiency of several commonly used machine-learning algorithms at identifying clusters in which HTE may be detected. METHODS: Five unsupervised algorithms (latent class analysis (LCA), K-means, partition around medoids, hierarchical clustering, and spectral clustering) and four supervised algorithms (model-based recursive partitioning, Causal Forest (CF), and X-learner with Random Forest (XL-RF) or Bayesian Additive Regression Trees) were individually applied to three prior ARDS RCTs. Clinical data and research protein biomarkers were used as partitioning variables, with the latter excluded in secondary analyses. For each clustering schema, HTE was evaluated from the interaction term between treatment group and cluster, with day-90 mortality as the dependent variable. FINDINGS: No single algorithm identified clusters with significant HTE in all three trials. LCA, XL-RF, and CF identified HTE most frequently (2/3 RCTs). Important partitioning variables in the unsupervised approaches were consistent across algorithms and RCTs. In the supervised models, important partitioning variables varied between algorithms and across RCTs. In algorithms whose clusters demonstrated HTE in the same trial, patients frequently moved between treatment-benefit and treatment-harm clusters across algorithms. LCA aside, results from all other algorithms were subject to significant alteration in cluster composition and HTE when the random seed was changed. Removing research biomarkers as partitioning variables greatly reduced the chances of detecting HTE across all algorithms.
INTERPRETATION: Machine-learning algorithms were inconsistent in their abilities to identify clusters with significant HTE. Protein biomarkers were essential in identifying clusters with HTE. Investigations using machine-learning approaches to identify clusters to seek HTE require cautious interpretation. FUNDING: NIGMS R35 GM142992 (PS), NHLBI R35 HL140026 (CSC); NIGMS R01 GM123193, Department of Defense W81XWH-21-1-0009, NIA R21 AG068720, NIDA R01 DA051464 (MMC)
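The seed sensitivity noted in the findings is easy to reproduce with even the simplest partitioning algorithm. A minimal standard-library K-means sketch (illustrative only, not the study's implementation), in which the initial centres, and hence potentially the final clusters, depend on the random seed:

```python
import random

def kmeans(points, k, seed=0, iters=50):
    """Plain K-means on tuples; initial centres are drawn at random,
    which is the source of the run-to-run instability described above."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centre (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute centres as cluster means; keep empty clusters' centres.
        new_centers = [
            tuple(sum(m[d] for m in members) / len(members)
                  for d in range(len(members[0]))) if members else centers[c]
            for c, members in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters

# Two well-separated hypothetical patient "clusters" in 2-D feature space.
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers, clusters = kmeans(points, k=2, seed=0)
```

On well-separated data any seed converges to the same partition; on overlapping clinical data different seeds can yield different cluster memberships, which is the instability the study reports.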

    The clinical course of coronavirus disease 2019 in a US hospital system: A multistate analysis

    There are limited data on longitudinal outcomes for coronavirus disease 2019 (COVID-19) hospitalizations that account for transitions between clinical states over time. Using electronic health record data from a hospital network in the St. Louis, Missouri, region, we performed multistate analyses to examine longitudinal transitions and outcomes among hospitalized adults with laboratory-confirmed COVID-19 with respect to 15 mutually exclusive clinical states. Between March 15 and July 25, 2020, a total of 1,577 patients in the network were hospitalized with COVID-19 (49.9% male; median age, 63 years (interquartile range, 50-75); 58.8% Black). Overall, 34.1% (95% confidence interval (CI): 26.4, 41.8) had an intensive care unit admission and 12.3% (95% CI: 8.5, 16.1) received invasive mechanical ventilation (IMV). The risk of decompensation peaked immediately after admission; discharges peaked around days 3-5, and deaths plateaued between days 7 and 16. At 28 days, 12.6% (95% CI: 9.6, 15.6) of patients had died (4.2% (95% CI: 3.2, 5.2) had received IMV) and 80.8% (95% CI: 75.4, 86.1) had been discharged. Among those receiving IMV, 35.1% (95% CI: 28.2, 42.0) remained intubated after 14 days; after 28 days, 37.6% (95% CI: 30.4, 44.7) had died and only 37.7% (95% CI: 30.6, 44.7) had been discharged. Multistate methods offer granular characterizations of the clinical course of COVID-19 and provide essential information for guiding both clinical decision-making and public health planning
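A multistate analysis tracks how the probability mass of a cohort moves between mutually exclusive clinical states over time. A toy discrete-time Markov sketch with four states and entirely hypothetical daily transition probabilities (the paper models 15 states estimated from data, not these numbers):

```python
def step(dist, transitions):
    """Advance a state-occupancy distribution by one day.

    dist: {state: probability}; transitions: {state: {state: prob}},
    with each row summing to 1. Returns the next day's distribution.
    """
    nxt = {s: 0.0 for s in dist}
    for s, p in dist.items():
        for t, q in transitions[s].items():
            nxt[t] += p * q
    return nxt

# Hypothetical daily transition probabilities; "discharged" and "dead"
# are absorbing states, mirroring the structure described in the text.
T = {
    "ward":       {"ward": 0.80, "icu": 0.05, "discharged": 0.14, "dead": 0.01},
    "icu":        {"ward": 0.10, "icu": 0.83, "discharged": 0.02, "dead": 0.05},
    "discharged": {"discharged": 1.0},
    "dead":       {"dead": 1.0},
}

# Everyone starts on the ward at admission; follow the cohort for 28 days.
dist = {"ward": 1.0, "icu": 0.0, "discharged": 0.0, "dead": 0.0}
for _ in range(28):
    dist = step(dist, T)
```

Because every patient-day is reallocated across states, the occupancy curves sum to 1 at every time point, which is what lets a multistate model report deaths, discharges, and ongoing ICU stays simultaneously rather than as competing endpoints.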