
    Mortality Prediction Analysis among COVID-19 Inpatients Using Clinical Variables and Deep Learning Chest Radiography Imaging Features

    The emergence of the COVID-19 pandemic over a relatively brief interval illustrates the need for rapid data-driven approaches to facilitate clinical decision making. We examined a machine learning process to predict inpatient mortality among COVID-19 patients using clinical and chest radiographic data. Modeling was performed with a de-identified dataset of encounters prior to widespread vaccine availability. Non-imaging predictors included demographics, pre-admission clinical history, and past medical history variables. Imaging features were extracted from chest radiographs by applying a deep convolutional neural network with transfer learning. A multi-layer perceptron combining 64 deep learning features from chest radiographs with 98 patient clinical features was trained to predict mortality. The Local Interpretable Model-Agnostic Explanations (LIME) method was used to explain model predictions. Non-imaging data alone predicted mortality with an ROC-AUC of 0.87 ± 0.03 (mean ± SD), while the addition of imaging data improved prediction slightly (ROC-AUC: 0.91 ± 0.02). The application of LIME to the combined imaging and clinical model found HbA1c values to contribute the most to model prediction (17.1 ± 1.7%), while imaging contributed 8.8 ± 2.8%. Age, gender, and BMI contributed 8.7%, 8.2%, and 7.1%, respectively. Our findings demonstrate a viable explainable AI approach to quantify the contributions of imaging and clinical data to COVID-19 mortality predictions.
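    Read as a recipe, the pipeline above amounts to: concatenate 64 CNN-derived imaging features with 98 clinical features, train a multi-layer perceptron on the fused vector, and attribute predictions to individual features with LIME. The sketch below illustrates that flow only; the synthetic data, hidden-layer sizes, and feature names are assumptions, not the authors' configuration.

        # Illustrative sketch of the fused-feature pipeline described above.
        # Hidden-layer sizes, feature names, and the data itself are assumptions,
        # not the published model's configuration.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score
        from lime.lime_tabular import LimeTabularExplainer

        rng = np.random.default_rng(0)
        n = 500
        X_imaging = rng.normal(size=(n, 64))      # 64 deep features from the CXR CNN
        X_clinical = rng.normal(size=(n, 98))     # 98 clinical/demographic features
        y = rng.integers(0, 2, size=n)            # in-hospital mortality label

        X = np.hstack([X_imaging, X_clinical])    # fuse imaging and clinical features
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

        # LIME attributes each prediction to individual input features, which is how
        # per-feature contributions (e.g., HbA1c, age, BMI) can be quantified.
        feature_names = [f"img_{i}" for i in range(64)] + [f"clin_{i}" for i in range(98)]
        explainer = LimeTabularExplainer(X_tr, feature_names=feature_names,
                                         class_names=["survived", "died"],
                                         mode="classification")
        exp = explainer.explain_instance(X_te[0], clf.predict_proba, num_features=10)
        print(exp.as_list())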

    Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients

    The paper's main contributions are twofold: to demonstrate how to apply the European Union High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI in practice in the healthcare domain, and to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic. The AI system aims to help radiologists estimate and communicate the severity of damage to a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.

    Influenza and pneumonia vaccination rates in patients hospitalized with acute respiratory failure

    Background and method: Despite their proven benefit, vaccination rates remain low. The aim of this study was to determine the rates of influenza and pneumonia vaccination, and the factors associated with them, in patients hospitalized for acute respiratory failure. Patients hospitalized because of acute hypoxemic or hypercapnic respiratory failure were recruited for this retrospective study. A survey was conducted with 97 patients. Primary diagnoses, age, reason for hospitalization, education status, vaccination rates, sources of information, and opinions about vaccination were recorded. Results: In total, 45 (46%) of the patients were female and 52 (54%) were male. The mean age was 67 ± 12 years. The primary diagnoses were lung disorders (n = 77, 79%), cardiac disorders (n = 16, 17%), and neuromuscular disorders (n = 5, 4%). In total, 72 (74%) patients had chronic obstructive pulmonary disease (COPD) as their primary lung disorder. All patients were hospitalized due to acute respiratory failure, and the main cause was infection (40 patients, 42%). The overall influenza and pneumococcal vaccination rates were 26% and 15%, respectively; among patients with COPD they were 30% and 17%. The main source of information was doctors (42%). Vaccination status was not associated with infection or other causes of hospitalization, age, sex, educational status, or the number of hospital admissions in the previous year. A total of 51 patients (52%) did not believe in the benefits of vaccination. Conclusion: Vaccination rates were low in these frequently hospitalized patients, and vaccination status was not related to hospitalization for infection or to hospitalization history; awareness of vaccination should be improved among both doctors and patients.

    KL Based Data Fusion for Target Tracking

    Visual object tracking in video can be formulated as a time-varying, appearance-based binary classification problem. Tracking algorithms need to adapt to changes both in the foreground object's appearance and in the scene background. Fusing information from multimodal features (views or representations) typically enhances classification performance without the increase in classifier complexity that comes from concatenating image features into a single high-dimensional vector. Combining these representative views to effectively exploit multimodal information for classification therefore becomes a key issue. We show that the Kullback-Leibler (KL) divergence provides a framework leading to a family of techniques for fusing representations, including the Chernoff distance and the variance ratio, which is equivalent to linear discriminant analysis. We provide experimental results that corroborate our theoretical analysis.
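    As a minimal illustration of the idea that KL divergence can score how well a feature representation separates foreground from background, the sketch below fits univariate Gaussians to foreground and background responses of each modality and turns the symmetric KL divergence into fusion weights. The Gaussian model, the toy data, and the normalization into weights are assumptions for illustration, not the paper's derivation.

        # Minimal sketch: symmetric KL divergence between Gaussian fits of foreground
        # and background feature responses, used as a discriminability score for fusion.
        # Gaussian modeling and the weighting scheme are illustrative assumptions.
        import numpy as np

        def gaussian_kl(mu1, var1, mu2, var2):
            # KL( N(mu1, var1) || N(mu2, var2) ) for univariate Gaussians
            return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

        def symmetric_kl(fg, bg):
            # symmetric KL between foreground and background samples of one feature
            m1, v1 = fg.mean(), fg.var() + 1e-12
            m2, v2 = bg.mean(), bg.var() + 1e-12
            return gaussian_kl(m1, v1, m2, v2) + gaussian_kl(m2, v2, m1, v1)

        rng = np.random.default_rng(0)
        # Toy responses of three feature modalities (e.g., color, texture, edges)
        # on foreground vs. background pixels of the tracked object.
        modalities = {
            "color":   (rng.normal(2.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000)),
            "texture": (rng.normal(0.5, 1.0, 1000), rng.normal(0.0, 1.0, 1000)),
            "edges":   (rng.normal(0.1, 1.0, 1000), rng.normal(0.0, 1.0, 1000)),
        }
        scores = {k: symmetric_kl(fg, bg) for k, (fg, bg) in modalities.items()}
        weights = {k: s / sum(scores.values()) for k, s in scores.items()}
        print(weights)   # more separable modalities receive larger fusion weights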

    Visualization and Interpretation of Convolutional Neural Network Predictions in Detecting Pneumonia in Pediatric Chest Radiographs

    Pneumonia affects 7% of the global population, resulting in 2 million pediatric deaths every year. Chest X-ray (CXR) analysis is routinely performed to diagnose the disease. Computer-aided diagnostic (CADx) tools aim to supplement decision-making by processing handcrafted and/or convolutional neural network (CNN)-extracted image features for visual recognition. However, CNNs are perceived as black boxes because their predictions lack explanation. This is a serious bottleneck in applications involving medical screening and diagnosis, since poorly interpreted model behavior could adversely affect clinical decisions. In this study, we evaluate, visualize, and explain the performance of customized CNNs that detect pneumonia and further differentiate between bacterial and viral types in pediatric CXRs. We present a novel visualization strategy to localize the region of interest (ROI) considered relevant for model predictions across all inputs belonging to an expected class. We statistically validate the models' performance on the underlying tasks. We observe that the customized VGG16 model achieves 96.2% accuracy in detecting the disease and 93.6% accuracy in distinguishing between bacterial and viral pneumonia. The model outperforms the state of the art on all performance metrics and demonstrates reduced bias and improved generalization.
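    For readers unfamiliar with ROI localization in CNN classifiers, the sketch below shows a generic gradient-weighted class-activation (Grad-CAM-style) heatmap on a stock Keras VGG16. It is not the paper's novel class-wise averaging strategy; the ImageNet weights, layer name, and random input are placeholder assumptions.

        # Generic Grad-CAM-style ROI visualization for a VGG16-based classifier.
        # Illustrates how class-discriminative regions can be localized in a CXR;
        # NOT the paper's specific class-wise averaging visualization strategy.
        import numpy as np
        import tensorflow as tf

        model = tf.keras.applications.VGG16(weights="imagenet")   # stand-in backbone
        grad_model = tf.keras.Model(
            inputs=model.input,
            outputs=[model.get_layer("block5_conv3").output, model.output],
        )

        def grad_cam(image_batch, class_idx):
            # Return a heatmap of the regions driving the score of class_idx.
            with tf.GradientTape() as tape:
                conv_out, preds = grad_model(image_batch)
                class_score = preds[:, class_idx]
            grads = tape.gradient(class_score, conv_out)          # d score / d feature map
            weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool grads
            cam = tf.einsum("bijk,bk->bij", conv_out, weights)    # weighted sum of maps
            cam = tf.nn.relu(cam)                                 # keep positive evidence
            cam = cam / (tf.reduce_max(cam) + 1e-8)               # normalize to [0, 1]
            return cam.numpy()

        x = np.random.rand(1, 224, 224, 3).astype("float32")      # placeholder CXR input
        heatmap = grad_cam(x, class_idx=0)
        print(heatmap.shape)                                      # (1, 14, 14) for VGG16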

    Atlas-Based Rib-Bone Detection In Chest X-Rays

    This paper investigates the use of rib-bone atlases for automatic detection of rib bones in chest X-rays (CXRs). We built a system that takes a patient X-ray and model atlases as input and automatically computes the posterior rib borders with high accuracy and efficiency. In addition to a conventional atlas, we propose two alternative atlases: (i) rib-bone models computed automatically from Computed Tomography (CT) scans, and (ii) dual-energy CXRs. We test the proposed approach with each model on 25 CXRs from the Japanese Society of Radiological Technology (JSRT) dataset and another 25 CXRs from the National Library of Medicine (Montgomery) CXR dataset. We achieve an area under the ROC curve (AUC) of about 95% for the Montgomery dataset and 91% for the JSRT dataset. Using the optimal operating point of the ROC curve, we achieve a segmentation accuracy of 88.91 ± 1.8% for Montgomery and 85.48 ± 3.3% for JSRT. Our method produces results comparable to state-of-the-art algorithms. Its performance is also excellent on challenging X-rays, as it successfully handles the variation in rib shape between patients and in the number of visible rib bones due to respiration.
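    The segmentation accuracies quoted above are taken at the ROC curve's optimal operating point, but the abstract does not state the selection rule. A common choice is the threshold maximizing Youden's J (sensitivity + specificity - 1), sketched below on toy detector scores as an assumed illustration.

        # Choosing an operating point on the ROC curve.
        # Youden's J (tpr - fpr) is one common criterion; the paper's exact rule is
        # not specified in the abstract, so this is an assumed illustration.
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(0)
        labels = rng.integers(0, 2, 200)                       # 1 = rib-border pixel
        scores = labels * 0.5 + rng.normal(0.3, 0.25, 200)     # toy detector scores

        fpr, tpr, thresholds = roc_curve(labels, scores)
        print("AUC:", roc_auc_score(labels, scores))

        j = tpr - fpr                                          # Youden's J statistic
        best = np.argmax(j)
        print("optimal threshold:", thresholds[best],
              "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])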
