Machine learning approaches in medical image analysis: From detection to diagnosis
Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and the interpretation and evaluation of results.
On the dice loss gradient and the ways to mimic it
In the past few years, in the context of fully-supervised semantic segmentation, several losses -- such as cross-entropy and Dice -- have emerged as de facto standards to supervise neural networks. The Dice loss is an interesting case, as it comes from the relaxation of the popular Dice coefficient, one of the main evaluation metrics in medical imaging applications. In this paper, we first study theoretically the gradient of the Dice loss, showing that concretely it is a weighted negative of the ground truth, with a very small dynamic range. This enables us, in the second part of this paper, to mimic the supervision of the Dice loss through a simple element-wise multiplication of the network output with a negative of the ground truth. This rather surprising result sheds light on the practical supervision performed by the Dice loss during gradient descent, and can help practitioners understand and interpret results while guiding researchers in designing new losses.
Comment: Currently under review
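The gradient structure described in this abstract can be checked numerically. The following is a minimal NumPy sketch (not the paper's code) of the soft Dice loss and its analytic gradient; the variable names and the toy data are illustrative assumptions:

```python
import numpy as np

def soft_dice_loss(p, g, eps=1e-7):
    # Soft Dice loss: 1 - (2*|P.G| + eps) / (|P| + |G| + eps),
    # with predictions p in [0, 1] and binary ground truth g.
    inter = np.sum(p * g)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(g) + eps)

def soft_dice_grad(p, g, eps=1e-7):
    # Analytic gradient: dL/dp_i = ((2I + eps) - 2*g_i*(S + eps)) / (S + eps)^2
    # with I = sum(p*g) and S = sum(p) + sum(g).
    I = np.sum(p * g)
    S = np.sum(p) + np.sum(g)
    return ((2.0 * I + eps) - 2.0 * g * (S + eps)) / (S + eps) ** 2

rng = np.random.default_rng(0)
g = (rng.random(1000) > 0.7).astype(float)  # binary ground truth
p = rng.random(1000)                        # softmax-like predictions

grad = soft_dice_grad(p, g)
# Foreground voxels (g_i = 1) receive a negative gradient and background
# voxels a small positive one, so up to a near-constant offset the gradient
# behaves like a weighted negative of g with a small dynamic range --
# the observation the abstract builds on.
```

This is only meant to make the "weighted negative of the ground truth" claim concrete; the paper's actual derivation and experiments are in the article itself.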
Extracting Tree-structures in CT data by Tracking Multiple Statistically Ranked Hypotheses
In this work, we adapt a method based on multiple hypothesis tracking (MHT), which has been shown to give state-of-the-art vessel segmentation results in interactive settings, for the purpose of extracting trees. Regularly spaced tubular templates are fit to image data, forming local hypotheses. These local hypotheses are used to construct the MHT tree, which is then traversed to make segmentation decisions. However, some critical parameters in this method are scale-dependent and have an adverse effect when tracking structures of varying dimensions. We propose to use statistical ranking of local hypotheses in constructing the MHT tree, which yields a probabilistic interpretation of scores across scales and helps alleviate the scale-dependence of MHT parameters. This enables our method to track trees starting from a single seed point. Our method is evaluated on chest CT data to extract airway trees and coronary arteries. In both cases, we show that our method performs significantly better than the original MHT method.
Comment: Accepted for publication at the International Journal of Medical Physics and Practice
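The core idea of statistical ranking, as summarized in the abstract above, is that raw template-fit scores live on scale-dependent ranges, while percentile ranks against a per-scale reference distribution are directly comparable. A minimal illustrative sketch, in which the reference distributions and all numbers are assumptions rather than the paper's actual data:

```python
import numpy as np

def rank_scores(scores, reference):
    # Empirical percentile rank P(reference <= score) of each score within
    # a per-scale reference distribution (a hypothetical stand-in for the
    # distribution of fit scores observed at that scale).
    ref = np.sort(reference)
    return np.searchsorted(ref, scores, side="right") / len(ref)

rng = np.random.default_rng(1)
# Raw fit scores on two scales occupy very different ranges (illustrative):
small_scale_ref = rng.normal(10.0, 2.0, 5000)
large_scale_ref = rng.normal(200.0, 40.0, 5000)

# A single raw-score threshold cannot serve both scales, but both of these
# hypotheses sit one standard deviation above their scale's mean, so their
# percentile ranks nearly coincide:
r_small = rank_scores(np.array([12.0]), small_scale_ref)
r_large = rank_scores(np.array([240.0]), large_scale_ref)
```

The paper's actual ranking is tied to the MHT tree construction; this sketch only shows why rank-based scores remove the scale dependence of a fixed threshold.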
Learning Cross-Modality Representations from Multi-Modal Images
Machine learning algorithms can have difficulties adapting to data from different sources, for example from different imaging modalities. We present and analyze three techniques for unsupervised cross-modality feature learning, using a shared autoencoder-like convolutional network that learns a common representation from multi-modal data. We investigate a form of feature normalization, a learning objective that minimizes cross-modality differences, and modality dropout, in which the network is trained with varying subsets of modalities. We measure the same-modality and cross-modality classification accuracies and explore whether the models learn modality-specific or shared features. This paper presents experiments on two public datasets, with knee images from two MRI modalities, provided by the Osteoarthritis Initiative, and brain tumor segmentation on four MRI modalities from the BRATS challenge. All three approaches improved the cross-modality classification accuracy, with modality dropout and per-feature normalization giving the largest improvement. We observed that the networks tend to learn a combination of cross-modality and modality-specific features. Overall, a combination of all three methods produced the most cross-modality features and the highest cross-modality classification accuracy, while maintaining most of the same-modality accuracy.
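Modality dropout, as described in the abstract above, amounts to zeroing out entire modality channels per sample during training. A minimal NumPy sketch under assumed shapes (batch, modalities, features); this is an illustration, not the paper's implementation:

```python
import numpy as np

def modality_dropout(x, p_drop=0.5, rng=None):
    # x: array of shape (batch, modalities, features). Independently drop
    # whole modality channels per sample with probability p_drop, while
    # always keeping at least one modality so every sample stays usable.
    rng = rng or np.random.default_rng()
    keep = rng.random((x.shape[0], x.shape[1])) >= p_drop
    # For samples where every modality was dropped, re-enable one at random:
    empty = ~keep.any(axis=1)
    keep[empty, rng.integers(0, x.shape[1], empty.sum())] = True
    return x * keep[:, :, None]

x = np.ones((4, 3, 8))  # 4 samples, 3 modalities, 8 features each
y = modality_dropout(x, p_drop=0.5, rng=np.random.default_rng(0))
```

Training the shared network on such randomly masked inputs forces it to encode each sample using whichever modalities happen to be present, which is what pushes it toward a common cross-modality representation.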
Why Does Synthesized Data Improve Multi-sequence Classification?
The classification and registration of incomplete multi-modal medical images, such as multi-sequence MRI with missing sequences, can sometimes be improved by replacing the missing modalities with synthetic data. This may seem counter-intuitive: synthetic data is derived from data that is already available, so it does not add new information. Why can it still improve performance? In this paper we discuss possible explanations. If the synthesis model is more flexible than the classifier, the synthesis model can provide features that the classifier could not have extracted from the original data. In addition, using synthetic information to complete incomplete samples increases the size of the training set.
We present experiments with two classifiers, linear support vector machines (SVMs) and random forests, together with two synthesis methods that can replace missing data in an image classification problem: neural networks and restricted Boltzmann machines (RBMs). We used data from the BRATS 2013 brain tumor segmentation challenge, which includes multi-modal MRI scans with T1, T1 post-contrast, T2 and FLAIR sequences. The linear SVMs appear to benefit from the complex transformations offered by the synthesis models, whereas the random forests mostly benefit from having more training data. Training on the hidden representation from the RBM brought the accuracy of the linear SVMs close to that of random forests.
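The mechanism argued for in this abstract -- a flexible synthesis model can hand a simple classifier features it could not extract itself -- can be sketched on toy data. The following uses scikit-learn with entirely synthetic, illustrative data (not BRATS), a small neural network as the synthesis model, and a linear SVM as the weak classifier:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import LinearSVC

# Toy stand-in for the setup above: the "missing" feature is a nonlinear
# function of the observed ones, and the class label depends on it.
rng = np.random.default_rng(0)
X_obs = rng.normal(size=(600, 2))                   # observed "sequences"
x_missing = np.sin(3 * X_obs[:, 0]) * X_obs[:, 1]   # unobserved "sequence"
y = (x_missing > 0).astype(int)

# A synthesis model more flexible than the classifier learns the nonlinear
# map from observed to missing features on the training half...
synth = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_obs[:400], x_missing[:400])

# ...so a linear SVM given the synthesized feature can separate classes
# that are not linearly separable in the observed features alone.
clf_raw = LinearSVC(random_state=0).fit(X_obs[:400], y[:400])
X_aug = np.column_stack([X_obs, synth.predict(X_obs)])
clf_aug = LinearSVC(random_state=0).fit(X_aug[:400], y[:400])

acc_raw = clf_raw.score(X_obs[400:], y[400:])
acc_aug = clf_aug.score(X_aug[400:], y[400:])
```

The synthesized column adds no information that was not already a function of the observed data, yet the linear model improves, matching the paper's first explanation; the second explanation (more complete training samples) is a separate effect not shown here.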
Preface
Information Processing in Medical Imaging: 28th International Conference, IPMI 2023, San Carlos de Bariloche, Argentina, June 18–23, 2023, Proceedings