MuST: Multimodal Spatiotemporal Graph-Transformer for Hospital Readmission Prediction
Hospital readmission prediction is considered an essential approach to
decreasing readmission rates, a key factor in assessing the quality and
efficacy of a healthcare system. Previous studies have extensively utilized
three primary modalities, namely electronic health records (EHR), medical
images, and clinical notes, to predict hospital readmissions. However, the
majority of these studies did not integrate information from all three
modalities or utilize the spatiotemporal relationships present in the dataset.
This study introduces a novel model called the Multimodal Spatiotemporal
Graph-Transformer (MuST) for predicting hospital readmissions. By employing
Graph Convolutional Networks and temporal transformers, we can effectively
capture spatial and temporal dependencies in EHR and chest radiographs. We then
propose a fusion transformer to combine the spatiotemporal features from the
two modalities mentioned above with the features from clinical notes extracted
by a pre-trained, domain-specific transformer. We assess the effectiveness of
our methods using the latest publicly available dataset, MIMIC-IV. The
experimental results indicate that the inclusion of multimodal features in MuST
improves its performance in comparison to unimodal methods. Furthermore, our
proposed pipeline outperforms the current leading methods in the prediction of
hospital readmissions.
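The abstract outlines a three-stage architecture: graph convolutions for spatial structure, a temporal transformer over time, and a fusion transformer across modalities. Below is a minimal PyTorch sketch of that pipeline shape; all module names, dimensions, and pooling choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (nodes, in_dim); adj: normalized adjacency, (nodes, nodes)
        return torch.relu(self.linear(adj @ x))

class SpatiotemporalEncoder(nn.Module):
    """GCN per timestep, then a transformer across timesteps."""
    def __init__(self, in_dim, hid_dim, n_heads=4):
        super().__init__()
        self.gcn = SimpleGCNLayer(in_dim, hid_dim)
        layer = nn.TransformerEncoderLayer(hid_dim, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x_seq, adj):
        # x_seq: (time, nodes, in_dim) -> pooled sequence (1, time, hid_dim)
        h = torch.stack([self.gcn(x, adj).mean(dim=0) for x in x_seq])
        return self.temporal(h.unsqueeze(0)).mean(dim=1)  # (1, hid_dim)

class FusionHead(nn.Module):
    """Fusion transformer over [EHR, image, notes] feature tokens."""
    def __init__(self, hid_dim, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(hid_dim, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=1)
        self.classifier = nn.Linear(hid_dim, 1)

    def forward(self, ehr_feat, img_feat, note_feat):
        tokens = torch.stack([ehr_feat, img_feat, note_feat], dim=1)
        return self.classifier(self.fusion(tokens).mean(dim=1))

# Toy usage (hid_dim must be divisible by n_heads):
enc, head = SpatiotemporalEncoder(32, 64), FusionHead(64)
x_seq, adj = torch.randn(10, 6, 32), torch.eye(6)  # 10 visits, 6-node graph
ehr = enc(x_seq, adj)                              # spatiotemporal EHR feature
img = torch.randn(1, 64)                           # stand-in radiograph feature
note = torch.randn(1, 64)                          # stand-in note embedding
logit = head(ehr, img, note)                       # readmission logit
```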
Relabeling Minimal Training Subset to Flip a Prediction
When facing an unsatisfactory prediction from a machine learning model, it is
crucial to investigate the underlying reasons and explore the potential for
reversing the outcome. We ask: can we flip a test prediction by relabeling the
smallest possible subset of the training data before the model is trained? We
propose an efficient procedure to
identify and relabel such a subset via an extended influence function. We find
that relabeling fewer than 1% of the training points can often flip the model's
prediction. This mechanism can serve multiple purposes: (1) providing an
approach to challenge a model prediction by recovering influential training
subsets; (2) evaluating model robustness via the cardinality of the relabeled
subset (i.e., $|\mathcal{S}|$); we show that $|\mathcal{S}|$ is highly related
to the noise ratio in the training set and is correlated with, but
complementary to, predicted probabilities; (3) revealing training points that
lead to group attribution bias. To the best of our knowledge, we are the first
to investigate identifying and relabeling the minimal training subset required
to flip a given prediction.
Comment: Under review.
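To make the procedure concrete, here is a small self-contained sketch of the underlying idea using a plain logistic-regression influence approximation: rank training points by the estimated effect of flipping their label on the test logit, then relabel greedily and retrain until the prediction flips. The model, data, and damping term are assumptions for illustration; the paper's extended influence function is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def flip_influence(model, X, y, x_test):
    """First-order estimate of the change in the test logit from flipping
    each training label, via theta' ~= theta - H^{-1} * grad_change / n."""
    p = model.predict_proba(X)[:, 1]
    n, d = X.shape
    H = (X.T * (p * (1 - p))) @ X / n + 1e-3 * np.eye(d)  # damped Hessian
    # Flipping y_i changes its loss gradient (p - y_i) x_i by (2 y_i - 1) x_i.
    grad_change = (2 * y - 1)[:, None] * X
    return -(grad_change @ np.linalg.inv(H) @ x_test) / n

# Synthetic demo: relabel as few points as possible to flip one prediction.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X @ rng.normal(size=5) + 0.5 * rng.normal(size=500) > 0).astype(int)
x_test = rng.normal(size=5)

model = fit(X, y)
pred = model.predict(x_test[None])[0]
scores = flip_influence(model, X, y, x_test)
# Flip first the points that push the logit hardest against the prediction.
order = np.argsort(scores) if pred == 1 else np.argsort(-scores)
y_new = y.copy()
for k, i in enumerate(order, start=1):
    y_new[i] = 1 - y_new[i]
    if fit(X, y_new).predict(x_test[None])[0] != pred:
        print(f"prediction flipped after relabeling {k} points ({k/len(y):.1%})")
        break
```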
Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening
Contemporary methods have shown promising results on cardiac image
segmentation, but only in a static learning setting, i.e., optimizing the
network once and for all and ignoring the potential need for model updating.
In real-world scenarios,
new data continues to be gathered from multiple institutions over time and new
demands keep growing to pursue more satisfying performance. The desired model
should incrementally learn from each incoming dataset and progressively update
with improved functionality as time goes by. As datasets sequentially
delivered from multiple sites are normally heterogeneous, with domain
discrepancies among them, each updated model should not catastrophically
forget previously learned domains while generalizing well to newly arrived or
even unseen domains. In medical scenarios, this is particularly challenging as
accessing or storing past data is commonly not allowed due to data privacy. To
this end, we propose a novel domain-incremental learning framework to recover
past domain inputs first and then regularly replay them during model
optimization. Particularly, we first present a style-oriented replay module to
enable structure-realistic and memory-efficient reproduction of past data, and
then incorporate the replayed past data to jointly optimize the model with
current data to alleviate catastrophic forgetting. During optimization, we
additionally perform domain-sensitive feature whitening to suppress the model's
dependency on features that are sensitive to domain changes (e.g.,
domain-distinctive style features) to assist domain-invariant feature
exploration and gradually improve the generalization performance of the
network. We have extensively evaluated our approach on the M&Ms Dataset in
single-domain and compound-domain incremental learning settings, achieving
improved performance over comparison approaches.
Comment: Accepted to IEEE Transactions on Medical Imaging.
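As a rough illustration of the style-oriented replay idea, the sketch below stores only channel-wise feature statistics per past domain and re-stylizes current features with them (AdaIN-style), so past-domain "styles" can be replayed without storing any past images. The StyleBank class and all interfaces are hypothetical, not the paper's module.

```python
import torch

class StyleBank:
    """Memory-efficient store of per-domain channel statistics."""
    def __init__(self):
        self.stats = []  # one (mean, std) pair per past domain

    @torch.no_grad()
    def record(self, feats):
        # feats: (batch, channels, H, W); keep channel-wise mean/std only.
        self.stats.append((feats.mean(dim=(0, 2, 3)),
                           feats.std(dim=(0, 2, 3)) + 1e-5))

    def replay(self, feats):
        """Re-stylize current features with a random past domain's style."""
        mean, std = self.stats[torch.randint(len(self.stats), (1,)).item()]
        cur_mean = feats.mean(dim=(2, 3), keepdim=True)
        cur_std = feats.std(dim=(2, 3), keepdim=True) + 1e-5
        normed = (feats - cur_mean) / cur_std          # whiten current style
        return normed * std.view(1, -1, 1, 1) + mean.view(1, -1, 1, 1)

# Usage: after finishing a domain, record its statistics; while training on a
# new domain, mix in re-stylized batches to alleviate forgetting.
bank = StyleBank()
past_feats = torch.randn(8, 16, 32, 32)   # stand-in for past-domain features
bank.record(past_feats)
current = torch.randn(8, 16, 32, 32)
replayed = bank.replay(current)           # current content, past style
```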