18 research outputs found

    MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images

    Multi-modal fusion approaches aim to integrate information from different data sources. Unlike natural datasets, such as those in audio-visual applications, where samples consist of "paired" modalities, data in healthcare is often collected asynchronously. Hence, requiring the presence of all modalities for a given sample is not realistic for clinical tasks and significantly limits the size of the dataset during training. In this paper, we propose MedFuse, a conceptually simple yet promising LSTM-based fusion module that can accommodate uni-modal as well as multi-modal input. We evaluate the fusion method and introduce new benchmark results for in-hospital mortality prediction and phenotype classification, using clinical time-series data in the MIMIC-IV dataset and corresponding chest X-ray images in MIMIC-CXR. Compared to more complex multi-modal fusion strategies, MedFuse improves performance by a large margin on the fully paired test set. It also remains robust on the partially paired test set, which contains samples with missing chest X-ray images. We release our code for reproducibility and to enable the evaluation of competing models in the future.
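
    As a rough illustration of the idea (a hypothetical sketch, not the authors' released implementation), per-modality embeddings can be stacked into a short sequence and fed to an LSTM, so that a missing modality simply shortens the sequence; all names and dimensions below are illustrative:

    import torch
    import torch.nn as nn

    class LSTMFusion(nn.Module):
        """Fuse a variable number of modality embeddings with an LSTM."""

        def __init__(self, embed_dim=256, hidden_dim=256, num_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, modality_embeddings):
            # modality_embeddings: list of (batch, embed_dim) tensors, one per
            # available modality; absent modalities are simply left out.
            seq = torch.stack(modality_embeddings, dim=1)  # (batch, n_mods, dim)
            _, (h_n, _) = self.lstm(seq)
            return self.classifier(h_n[-1])  # logits from the final hidden state

    fusion = LSTMFusion()
    ehr = torch.randn(4, 256)  # stand-in for a clinical time-series encoder output
    cxr = torch.randn(4, 256)  # stand-in for a chest X-ray encoder output
    print(fusion([ehr, cxr]).shape)  # paired input    -> torch.Size([4, 2])
    print(fusion([ehr]).shape)       # uni-modal input -> torch.Size([4, 2])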

    Privacy-preserving machine learning for healthcare: open challenges and future perspectives

    Machine Learning (ML) has recently shown tremendous success in modeling various healthcare prediction tasks, ranging from disease diagnosis and prognosis to patient treatment. Due to the sensitive nature of medical data, privacy must be considered along the entire ML pipeline, from model training to inference. In this paper, we conduct a review of recent literature concerning Privacy-Preserving Machine Learning (PPML) for healthcare. We primarily focus on privacy-preserving training and inference-as-a-service, and perform a comprehensive review of existing trends, identify challenges, and discuss opportunities for future research directions. The aim of this review is to guide the development of private and efficient ML models in healthcare, with the prospect of translating research efforts into real-world settings. Comment: ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare (TML4H).
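
    The review spans several technique families; as one concrete, self-contained example of privacy-preserving training (our illustration, not taken from the paper), differentially private SGD clips each per-sample gradient and adds calibrated Gaussian noise before the parameter update. All hyperparameters below are placeholders:

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()
    clip_norm, noise_mult, lr = 1.0, 1.1, 0.1
    x, y = torch.randn(8, 10), torch.randn(8, 1)  # toy batch

    grads = [torch.zeros_like(p) for p in model.parameters()]
    for xi, yi in zip(x, y):  # compute and clip per-sample gradients
        model.zero_grad()
        loss_fn(model(xi), yi).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = min(1.0, clip_norm / (norm.item() + 1e-6))  # clip to clip_norm
        for g, p in zip(grads, model.parameters()):
            g.add_(p.grad, alpha=scale)

    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            g += torch.randn_like(g) * noise_mult * clip_norm  # calibrated noise
            p -= lr * g / len(x)                               # averaged noisy step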

    Leveraging Transformers to Improve Breast Cancer Classification and Risk Assessment with Multi-modal and Longitudinal Data

    Breast cancer screening, primarily conducted through mammography, is often supplemented with ultrasound for women with dense breast tissue. However, existing deep learning models analyze each modality independently, missing opportunities to integrate information across imaging modalities and time. In this study, we present Multi-modal Transformer (MMT), a neural network that utilizes mammography and ultrasound synergistically to identify patients who currently have cancer and to estimate the risk of future cancer for patients who are currently cancer-free. MMT aggregates multi-modal data through self-attention and tracks temporal tissue changes by comparing current exams to prior imaging. Trained on 1.3 million exams, MMT achieves an AUROC of 0.943 in detecting existing cancers, surpassing strong uni-modal baselines. For 5-year risk prediction, MMT attains an AUROC of 0.826, outperforming prior mammography-based risk models. Our research highlights the value of multi-modal and longitudinal imaging in cancer diagnosis and risk stratification. Comment: ML4H 2023 Findings Track.
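
    As a hypothetical simplification of the general pattern described above (not the released MMT model), per-exam, per-modality feature vectors can be treated as tokens and aggregated with self-attention, with a learned summary token used for prediction; names and sizes are illustrative:

    import torch
    import torch.nn as nn

    class MultiModalTransformer(nn.Module):
        def __init__(self, dim=256, heads=8, layers=4, num_outputs=1):
            super().__init__()
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
            self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learned summary token
            self.head = nn.Linear(dim, num_outputs)          # e.g., cancer or risk

        def forward(self, tokens):
            # tokens: (batch, n_tokens, dim), mixing modalities (mammogram,
            # ultrasound) and time points (current and prior exams).
            cls = self.cls.expand(tokens.size(0), -1, -1)
            out = self.encoder(torch.cat([cls, tokens], dim=1))
            return self.head(out[:, 0])  # predict from the summary token

    mmt = MultiModalTransformer()
    exams = torch.randn(2, 6, 256)  # e.g., 3 time points x 2 modalities per patient
    print(mmt(exams).shape)         # torch.Size([2, 1])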

    DeepAMR for predicting co-occurrent resistance of Mycobacterium tuberculosis.

    MOTIVATION: Resistance co-occurrence within first-line anti-tuberculosis (TB) drugs is a common phenomenon. Existing methods based on genetic data analysis of Mycobacterium tuberculosis (MTB) have been able to predict resistance of MTB to individual drugs, but have not considered resistance co-occurrence and cannot capture the latent structure of genomic data that corresponds to lineages. RESULTS: We used a large cohort of TB patients from 16 countries across six continents, for which whole-genome sequences and the associated phenotypes to anti-TB drugs, obtained via drug susceptibility testing recommended by the World Health Organization, were available for each isolate. We then proposed an end-to-end multi-task model with a deep denoising auto-encoder (DeepAMR) for multiple drug classification, and developed DeepAMR_cluster, a clustering variant based on DeepAMR, for learning clusters in the latent space of the data. The results showed that DeepAMR outperformed the baseline model and four machine learning models, with mean AUROC from 94.4% to 98.7%, for predicting resistance to four first-line drugs [i.e. isoniazid (INH), ethambutol (EMB), rifampicin (RIF), pyrazinamide (PZA)], multi-drug-resistant TB (MDR-TB) and pan-susceptible TB (PANS-TB: MTB that is susceptible to all four first-line anti-TB drugs). For INH, EMB, PZA and MDR-TB, DeepAMR achieved its best mean sensitivity of 94.3%, 91.5%, 87.3% and 96.3%, respectively. For RIF and PANS-TB, it generated 94.2% and 92.2% sensitivity, lower than the baseline model by 0.7% and 1.9%, respectively. t-SNE visualization shows that DeepAMR_cluster captures lineage-related clusters in the latent space. AVAILABILITY AND IMPLEMENTATION: The details of the source code are provided at http://www.robots.ox.ac.uk/~davidc/code.php. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
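
    As a rough, hypothetical sketch of the architecture the abstract describes (not the authors' released code, which is linked above), a multi-task denoising auto-encoder reconstructs a corrupted genomic input while per-drug classification heads share the latent code; all dimensions and names are illustrative:

    import torch
    import torch.nn as nn

    class MultiTaskDAE(nn.Module):
        def __init__(self, in_dim=1000, latent_dim=64,
                     drugs=("INH", "EMB", "RIF", "PZA")):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                         nn.Linear(256, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                         nn.Linear(256, in_dim))
            self.heads = nn.ModuleDict({d: nn.Linear(latent_dim, 1) for d in drugs})

        def forward(self, x, noise_std=0.1):
            z = self.encoder(x + noise_std * torch.randn_like(x))  # encode corrupted input
            recon = self.decoder(z)                                # reconstruction task
            logits = {d: head(z).squeeze(-1) for d, head in self.heads.items()}
            return recon, logits  # train with recon loss + per-drug BCE losses

    model = MultiTaskDAE()
    recon, logits = model(torch.randn(4, 1000))
    print(recon.shape, logits["INH"].shape)  # torch.Size([4, 1000]) torch.Size([4])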

    Early warning score adjusted for age to predict the composite outcome of mortality, cardiac arrest or unplanned intensive care unit admission using observational vital-sign data: a multicentre development and validation

    Objectives Early warning scores (EWS) alerting for in-hospital deterioration are commonly developed using routinely collected vital-sign data from the whole in-hospital population. As these in-hospital populations are dominated by those over the age of 45 years, the resulting scores may perform less well in younger age groups. We developed and validated an age-specific early warning score (ASEWS) derived from statistical distributions of vital signs. Design Observational cohort study. Setting Oxford University Hospitals (OUH), July 2013 to March 2018, and Portsmouth Hospitals (PH) NHS Trust, January 2010 to March 2017, within the Hospital Alerting Via Electronic Noticeboard database. Participants Hospitalised patients with electronically documented vital-sign observations. Outcome Composite outcome of unplanned intensive care unit admission, mortality and cardiac arrest. Methods and results Statistical distributions of vital signs were used to develop an ASEWS to predict the composite outcome within 24 hours. The OUH development set consisted of 2 538 099 vital-sign observation sets from 142 806 admissions (mean age (SD): 59.8 (20.3)). We compared the performance of ASEWS to the National Early Warning Score (NEWS) and our previous EWS (MCEWS) on an OUH validation set consisting of 581 571 observation sets from 25 407 emergency admissions (mean age (SD): 63.0 (21.4)) and a PH validation set consisting of 5 865 997 observation sets from 233 632 emergency admissions (mean age (SD): 64.3 (21.1)). ASEWS performed better in the 16–45 years age group in the OUH validation set (AUROC 0.820 (95% CI 0.815 to 0.824)) and the PH validation set (AUROC 0.840 (95% CI 0.839 to 0.841)) than NEWS (AUROC 0.763 (95% CI 0.758 to 0.768) and AUROC 0.836 (95% CI 0.835 to 0.838), respectively) and MCEWS (AUROC 0.808 (95% CI 0.803 to 0.812) and AUROC 0.833 (95% CI 0.831 to 0.834), respectively). Differences in performance were not consistent in the older age group. Conclusions Accounting for age-related vital-sign changes can more accurately detect deterioration in younger patients.
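
    As a toy illustration of the underlying idea (not the published ASEWS parameters or scoring tables), a vital sign can be scored by how far it sits in the tails of an age-conditional reference distribution, so the same observation can trigger a different score in different age bands; the reference means and SDs below are made-up placeholders:

    import math

    # hypothetical age-band reference distributions for heart rate (bpm)
    HR_REFERENCE = {
        (16, 45): (75.0, 12.0),   # (mean, sd) for 16-45 year olds
        (45, 120): (80.0, 15.0),  # (mean, sd) for over-45s
    }

    def age_adjusted_score(heart_rate, age):
        for (lo, hi), (mu, sd) in HR_REFERENCE.items():
            if lo <= age < hi:
                z = abs(heart_rate - mu) / sd  # distance from the age-band norm
                return min(3, int(z))          # cap at 3, as EWS components do
        raise ValueError("age out of range")

    print(age_adjusted_score(115, 25))  # 3: alerts strongly for a young adult...
    print(age_adjusted_score(115, 70))  # 2: ...less so for an older patient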

    Deep learning for deterioration prediction of COVID-19 patients based on time-series of three vital signs

    Unrecognized deterioration of COVID-19 patients can lead to high morbidity and mortality. Most existing deterioration prediction models require a large amount of clinical information, typically collected in hospital settings, such as medical images or comprehensive laboratory tests. This is infeasible for telehealth solutions and highlights a gap in deterioration prediction models based on minimal data, which can be recorded at a large scale in any clinic, nursing home, or even at the patient’s home. In this study, we develop and compare two prognostic models that predict whether a patient will experience deterioration in the forthcoming 3 to 24 hours. The models sequentially process routine triadic vital signs: (a) oxygen saturation, (b) heart rate, and (c) temperature. The models are also provided with basic patient information, including sex, age, vaccination status, vaccination date, and status of obesity, hypertension, or diabetes. The two models differ in how they process the temporal dynamics of the vital signs: Model #1 utilizes a temporally dilated version of the Long Short-Term Memory (LSTM) model, and Model #2 utilizes a residual temporal convolutional network (TCN). We train and evaluate the models using data collected from 37,006 COVID-19 patients at NYU Langone Health in New York, USA. The convolution-based model outperforms the LSTM-based model, achieving a high AUROC of 0.8844–0.9336 for 3 to 24 h deterioration prediction on a held-out test set. We also conduct occlusion experiments to evaluate the importance of each input feature, which reveal the significance of continuously monitoring the variation of the vital signs. Our results show the prospect of accurate deterioration forecasting using a minimal feature set that can be relatively easily obtained using wearable devices and self-reported patient information.
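
    As a minimal sketch of the second of these two designs (our illustration in the spirit of Model #2, not the authors' code), a residual temporal convolutional block applies a causally padded dilated convolution and adds the input back; stacking blocks with growing dilation widens the receptive field over the vital-sign sequence. Channel sizes are illustrative:

    import torch
    import torch.nn as nn

    class ResidualTCNBlock(nn.Module):
        def __init__(self, channels, kernel_size=3, dilation=1):
            super().__init__()
            pad = (kernel_size - 1) * dilation  # causal left-padding amount
            self.pad = nn.ConstantPad1d((pad, 0), 0.0)
            self.conv = nn.Conv1d(channels, channels, kernel_size,
                                  dilation=dilation)
            self.relu = nn.ReLU()

        def forward(self, x):                             # x: (batch, channels, time)
            return self.relu(x + self.conv(self.pad(x)))  # residual connection

    # stack blocks with growing dilation to widen the receptive field
    net = nn.Sequential(
        nn.Conv1d(3, 32, 1),  # 3 input vitals: oxygen saturation, heart rate, temp
        ResidualTCNBlock(32, dilation=1),
        ResidualTCNBlock(32, dilation=2),
        ResidualTCNBlock(32, dilation=4),
    )
    vitals = torch.randn(4, 3, 48)  # 48 time steps per patient
    print(net(vitals).shape)        # torch.Size([4, 32, 48])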