Simultaneous Measurement Imputation and Outcome Prediction for Achilles Tendon Rupture Rehabilitation
Achilles Tendon Rupture (ATR) is a common soft tissue injury. Rehabilitation
after such a musculoskeletal injury remains a prolonged process with highly
variable outcomes. Accurately predicting the rehabilitation outcome is crucial
for treatment decision support. However, training an automatic method to
predict ATR rehabilitation outcomes from treatment data is challenging, due to
the large number of missing entries in the data recorded from ATR patients and
the complex nonlinear relations between measurements and outcomes. In this
work, we design an end-to-end probabilistic framework to impute missing data
entries and predict rehabilitation outcomes simultaneously. We evaluate our
model on a real-life ATR clinical cohort, comparing it with various baselines.
The proposed method demonstrates clear superiority over traditional methods,
which typically perform imputation and prediction in two separate stages.
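The abstract does not spell out the framework's architecture, but the contrast with two-stage pipelines can be made concrete. Below is a minimal PyTorch sketch of joint imputation and outcome prediction trained under one shared loss; the MLP imputer and predictor, the zero-filling convention for missing values, and the loss weight alpha are all illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class JointImputePredict(nn.Module):
    """Sketch: impute missing entries and predict an outcome with a
    single, jointly trained model (hypothetical architecture)."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        # Imputer maps (observed values, missingness mask) -> reconstructed features.
        self.imputer = nn.Sequential(
            nn.Linear(2 * n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )
        # Predictor maps the completed feature vector -> outcome score.
        self.predictor = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, mask):
        # x: measurements with missing entries zero-filled; mask: 1 = observed.
        x_hat = self.imputer(torch.cat([x * mask, mask], dim=-1))
        # Keep observed values; use imputations only where data are missing.
        x_full = mask * x + (1 - mask) * x_hat
        return x_hat, self.predictor(x_full).squeeze(-1)

def joint_loss(x, mask, y, x_hat, y_hat, alpha=1.0):
    # Reconstruction loss on observed entries plus prediction loss,
    # optimized together so imputation is informed by the prediction task,
    # unlike a two-stage impute-then-predict pipeline.
    recon = ((x_hat - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    pred = nn.functional.mse_loss(y_hat, y)
    return recon + alpha * pred
```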
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, many
recent approaches offer individualized solutions based on specialized
task-specific architectures or require refinement through non-end-to-end
training. In this paper, we propose a new framework, named MedGAN, for medical
image-to-image translation which operates on the image level in an end-to-end
manner. MedGAN builds upon recent advances in the field of generative
adversarial networks (GANs) by merging the adversarial framework with a new
combination of non-adversarial losses. We utilize the discriminator network as
a trainable feature extractor that penalizes the discrepancy between the
translated medical images and the desired modalities. Moreover, style-transfer
losses are used to match the textures and fine structures of the translated
images to those of the desired target images. Additionally, we present a new
generator architecture, called CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN to
three different tasks: PET-CT translation, correction of MR motion artefacts,
and PET image denoising. Perceptual analysis by radiologists and quantitative
evaluations show that MedGAN outperforms existing translation approaches.
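As a rough illustration of the loss combination the abstract describes (an adversarial term merged with a discriminator-feature perceptual loss and style-transfer losses), a hedged PyTorch sketch of the generator objective follows. The function names, the L1 distances, and the loss weights are assumptions for illustration; the paper's exact formulation and the CasNet generator are not reproduced here.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map; the Gram matrix captures texture
    # statistics, as in classical neural style transfer.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(disc_feats_fake, disc_feats_real, logits_fake,
                   lambda_percep=1.0, lambda_style=1.0):
    """Hypothetical MedGAN-style generator objective. `disc_feats_fake`
    and `disc_feats_real` are lists of intermediate discriminator
    activations for the translated and target images, so the
    discriminator doubles as a trainable feature extractor."""
    # Adversarial term: non-saturating GAN loss on the fake logits.
    adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    percep, style = 0.0, 0.0
    for ff, fr in zip(disc_feats_fake, disc_feats_real):
        # Perceptual loss: match discriminator features of output and target.
        percep = percep + F.l1_loss(ff, fr)
        # Style loss: match second-order texture statistics (Gram matrices).
        style = style + F.l1_loss(gram_matrix(ff), gram_matrix(fr))
    return adv + lambda_percep * percep + lambda_style * style
```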
Quantifying the Performance of Explainability Algorithms
Given their complexity, deep neural networks (DNNs) have long been criticized for the lack of interpretability in their decision-making process. This 'black box' nature has prevented the adoption of DNNs in life-critical tasks. In recent years, there has been a surge of interest in explainable artificial intelligence (XAI), where the goal is to produce an interpretation for a decision made by a DNN. While many explainability algorithms have been proposed for peeking into the decision-making process of DNNs, the assessment of their performance has received limited attention, with most evaluations centred on subjective human visual perception of the produced interpretations. In this study, we explore a more objective strategy for quantifying the performance of explainability algorithms on DNNs. More specifically, we propose two quantitative performance metrics: i) Impact Score and ii) Impact Coverage. Impact Score assesses the percentage of critical factors with either strong confidence-reduction impact or decision-shifting impact. Impact Coverage assesses the percentage overlap with adversarially impacted factors in the input. Furthermore, we conduct a comprehensive analysis of several explainability methods (LIME, SHAP, and Expected Gradients) across different task domains, including visual perception, speech recognition, and natural language processing (NLP). The empirical evidence suggests that there is significant room for improvement for all evaluated explainability methods. At the same time, the evidence also suggests that even the latest explainability methods cannot produce consistently better results across different task domains and test scenarios.
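To make the Impact Score idea concrete, here is a hedged NumPy sketch that counts, among the input features an explainability method flags as critical, those whose ablation either flips the model's decision or strongly reduces its confidence. The zero-ablation, the per-feature loop, and the 50% confidence-drop threshold are illustrative assumptions rather than the metric's exact definition in the paper.

```python
import numpy as np

def impact_score(predict_proba, x, critical_idx, drop_thresh=0.5):
    """Sketch of an Impact Score: the fraction of features flagged as
    critical whose removal sharply reduces the model's confidence or
    changes its decision. `predict_proba` is assumed to follow the
    sklearn convention of mapping a (1, n_features) array to class
    probabilities; `critical_idx` comes from an explainability method
    such as LIME or SHAP."""
    base = predict_proba(x[None])[0]        # class probabilities for x
    base_cls = int(np.argmax(base))
    impactful = 0
    for i in critical_idx:
        x_abl = x.copy()
        x_abl[i] = 0.0                      # ablate one critical feature
        p = predict_proba(x_abl[None])[0]
        decision_shift = int(np.argmax(p)) != base_cls
        confidence_drop = p[base_cls] < drop_thresh * base[base_cls]
        if decision_shift or confidence_drop:
            impactful += 1
    return impactful / max(len(critical_idx), 1)
```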